Does single blind peer review hinder newcomers? Marco Seeber & Alberto Bacchelli. Scientometrics, volume 113, pages 567–585 (2017).

Several fields of research are characterized by the coexistence of two different peer review modes to select quality contributions for scientific venues, namely double blind (DBR) and single blind (SBR) peer review. In the former, the identities of both authors and reviewers are not known to each other, whereas in the latter the authors' identities are visible from the start of the review process. The choice between these modes has been the object of scholarly debate, which has mostly focused on issues of fairness. Past work reported that SBR is potentially associated with biases related to the gender, nationality, and language of the authors, as well as the prestige and type of their institutions. Nevertheless, evidence is lacking on whether revealing the identities of the authors favors reputed authors and hinders newcomers, a bias with potentially important consequences in terms of knowledge production. Accordingly, we investigate whether and to what extent SBR, compared to DBR, relates to a higher ratio of reputed scholars, at the expense of newcomers. This relation is pivotal for science, as past research provided evidence that newcomers support renovation and advances in a research field by introducing new and heterodox ideas and approaches, whereas inbreeding has serious detrimental effects on innovation and creativity. Our study explores these issues in the field of computer science, by exploiting a database that encompasses 21,535 research papers authored by 47,201 individuals and published in 71 of the 80 most impactful computer science conferences in 2014 and 2015. We found evidence that, other characteristics of the conferences taken into consideration, SBR indeed relates to a lower ratio of contributions from newcomers to the venue, and particularly newcomers that are otherwise experienced in publishing in other computer science conferences, suggesting the possible existence of ingroup–outgroup behaviors that may harm knowledge advancement in the long run.

Peer review is the evaluation process employed by the vast majority of scientific outlets to select quality contributions (Bedeian 2004). It is a highly institutionalized practice that confers legitimacy (DiMaggio and Powell 1983). However, in recent decades, researchers provided empirical evidence on the limitations of peer review, related, among others, to reviewers' biases (Armstrong 1997), low inter-reviewer agreement (Bornmann and Daniel 2009), and a weak capability to identify breakthrough and impactful ideas (Campanario 2009; Chen and Konstan 2010; Siler et al. 2015). Thus, understanding how the organization of the peer review process can affect its outcomes is of crucial importance. In recent years, scholars have begun exploring how different modes of organizing peer review can affect the quality of review and its outcome, for instance by testing the effects of incorporating monetary rewards for reviewers (Squazzoni et al. 2013) and of variations in the number of reviewers (Roebber and Schultz 2011; Bianchi and Squazzoni 2015; Snell 2015).
Scholars have also investigated the consequences of adopting peer review modes with different visibility criteria concerning the authors' identity, particularly double blind peer review (DBR), where authors' identities are disclosed only after acceptance of a paper, and single blind peer review (SBR), where authors' identities are visible throughout the entire review process. These studies, predominantly motivated by considerations of fairness, found that when authors' identities are revealed to reviewers, evaluation is less objective, and biases due to gender, nationality, and language, as well as to the prestige and type of the institution of affiliation, play a role (Snodgrass 2006). However, supporters of SBR argue that the identity of authors is useful to judge the reliability of scientific claims, which is beneficial to the advancement of knowledge (Pontille and Torny 2014). Thus, the debate between supporters of DBR and SBR has been to some extent a dialogue of the deaf, the former stressing issues of fairness and the latter focusing on functionalistic arguments, which implicitly justify un-blinding in the superior interest of the advancement of knowledge. However, a review of studies on innovation suggests that anonymity of authors can be beneficial for scientific advancement as well. In fact, studies of innovation highlight the importance of certain characteristics of the social context for both collective and individual propensity to innovate. In teams, newcomers are essential to raise new questions and provide new ideas, perspectives, and methods (Perretti and Negro 2007). Research on networks of innovation shows that, at the individual level, the propensity of entrepreneurs towards innovation or reproduction of old ideas is influenced by the diversity of the social relationships in which they are embedded (Marsden 1987; Ruef 2002). In a similar vein, studies of research activity have shown the detrimental effects of academic inbreeding (i.e., the tendency of academic institutions to recruit personnel that have studied in the same institution) on individual and institutional creativity and performance (Pelz and Andrews 1966; Soler 2001; Horta et al. 2010; Franzoni et al. 2014). Scientific outlets are a pivotal source of new inputs and ideas and, in the case of conferences, they are also social contexts where a community of scholars meets to develop relationships and future collaborations. It is thus of crucial importance to explore whether and to what extent a specific peer review mode introduces a bias against the people who can bring new ideas and perspectives, namely early-career researchers or researchers who are new to that specific venue: newcomers. In this article, we explore the hypothesis that when the identity of the authors is revealed, referees' evaluations tend to be affected by the authors' past productivity. We investigate whether scientific outlets adopting SBR display fewer contributions than DBR outlets from researchers who have fewer publications in general and in the same outlet. We also explore whether SBR outlets display a larger share of contributions from researchers that are relatively new to the outlet but otherwise productive.
In particular, we test two competing hypotheses about contributions from these researchers under SBR: (1) they are more frequent, because overall productivity positively affects reviewers' evaluation, or (2) they are less frequent, because reviewers might be skeptical of contributions from researchers who come from other venues, or even perceive them as potential competitors threatening their academic tribe (Becher and Trowler 1989/2001). We test these hypotheses on a sample of 21,535 research papers published in 71 of the 80 most impactful conferences in the field of computer science research. This empirical context is particularly suitable, as SBR and DBR are both widely adopted by computer science conferences. The main contribution of the article is thus twofold: (1) we stress the implications for knowledge production of a newcomers' bias in peer review, and (2) we explore bias towards two types of newcomers and their interaction. Empirically, we consider a large sample of articles and venues, thus providing stronger evidence on individual reputation bias in SBR (Snodgrass 2006). The article is organized as follows: In the following section, we review the scholarly debate on anonymity and related biases in peer review and selected studies on innovation and inbreeding, and we formulate the hypotheses. Subsequently, we introduce the data and method of analysis, and in the fourth section we present the analysis and the results. We conclude by discussing the findings and directions for future research.

Peer review process anonymity and biases

Pontille and Torny recently described how peer review practice and the debate on anonymity in peer review have evolved over time (Pontille and Torny 2014). At its outset, in the 18th century, peer review was organized in the form of an editorial committee that collegially examined and selected manuscripts, while the editor took the main responsibility for the final decision (Crane 1967; Bazerman 1988). Only in the last century, due to the increasing specialization of science and the growth of research production, did the use of external reviewers spread to complement the competences of the editorial boards (Burnham 1990). From the 1950s, a debate emerged regarding the anonymity of authors to reviewers. Sociologists were the first to argue that an article's assessment should regard its content and not be affected by the reputation and prestige of its authors or their institutions of affiliation. Calls for anonymity were backed by the Mertonian norms of science (Footnote 1) and, in particular, by the norm of universalism, stating that scientific claims should be evaluated according to the same impersonal criteria, regardless of personal or social attributes of the author (Merton 1973).
From the mid-1970s, anonymization of authors spread to journals in management, economics, and psychology as well, due to studies examining reviewers' "bias" (Zuckerman and Merton 1971; Mahoney 1977), as well as to pressures from women within American learned societies, who highlighted the low acceptance rates of articles by female scholars (Benedek 1976; Weller 2001) (Footnote 2). Opposed to the view that the evaluation of scientific writings must be based only on the content of the article, other scholars argued that to validate a scientific claim reviewers need to link writings to writers, because the credit that reviewers give to experiments and results is also backed by past studies and the use of specific equipment, so that anonymization would weaken evaluation (Ward and Goudsmit 1967). Opponents of anonymization were also skeptical about the effectiveness of anonymity as such, since authors' self-citations would disclose their identities, or reviewers would still try to attribute a text to an author (BMJ 1974). These arguments were particularly popular in the experimental sciences, so that anonymization of authors did not diffuse in fields like physics, medicine, biology, and biomedicine as much as in the social sciences (Weller 2001). Moreover, in recent years access to search engines has arguably made it easier to guess authors' identities, so that some journals in the field of economics decided to return to SBR (AER 2011). Parallel to the debate on the anonymity of authors is the discussion on the opportunity to disclose reviewers' identities. In turn, four main categories of peer review can be identified based on the (non-)anonymity of reviewers and authors: both unknown (double blind), authors known and reviewers unknown (single blind), authors unknown and reviewers known (blind review), and both known (open peer review). A survey of 553 journals from eighteen disciplines found that DBR is the most diffused peer review mode (58%) and of growing diffusion, followed by SBR (37%) and open review (5%) (Bachand and Sawallis 2003). So far, empirical studies related to the anonymity of reviewers have focused on three main issues, namely the efficacy of blinding, the quality of reviews, and potential biases. As to the efficacy of blinding, research across a wide range of disciplines found that blinding is effective in most of the cases (53–79%) (Snodgrass 2006). Evidence on the quality of reviews in the two modes is mixed (Snodgrass 2006). Studies on bias in peer review have focused on four main topics, namely: (i) error in assessing true quality, (ii) social characteristics of the reviewer, (iii) content of the submission, and (iv) social characteristics of the author (Lee et al. 2013). Since SBR and DBR differ in whether the authors' identity is revealed, research comparing SBR and DBR has mostly focused on the latter type of bias, namely when an author's submission is not judged solely on the merit of the work, but in relation to her/his academic rank, sex, place of work, publication record, etc. (Peters and Ceci 1982). Budden et al. provided evidence of a gender bias by showing that a journal that switched to DBR experienced an increased representation of female researchers (Budden et al. 2008). However, their findings were contested (Webb et al. 2008), and most research on the subject did not find evidence of a gender bias when authors' identity is revealed (Lee et al. 2013; Blank 1991; Borsuk et al. 2009).
Instead, there is consistent evidence of bias related to the prestige of the institution employing the authors (Peters and Ceci 1982), to language, namely in favor of authors from English-speaking countries (Ross et al. 2006), and to nationality, with journals favoring authors from the journal's own country (Daniel 1993; Ernst and Kienbacher 1991), whereas there is mixed evidence on whether American reviewers tend to favor or be more critical of compatriots (Link 1998; Marsh et al. 2008). An affiliation bias has been detected when reviewers and authors/applicants enjoy formal and informal relationships (Wenneras and Wold 1997; Sandström and Hällsten 2008), although not always leading to more positive evaluations (Oswald 2008). Only two studies analyzed the influence of individual productivity, and they did not reach a consensus (Snodgrass 2006). In particular, a study of two conferences found no impact on prolific authors (Madden and DeWitt 2006), while a further analysis of the same data using medians rather than means reached the opposite conclusion (Tung 2006).

The importance of newcomers

The capability of given social structures to hinder or ease access for newcomers has important implications for innovation, research, and knowledge advancement. The importance of newcomers for innovation has been highlighted by several studies. Katz argued that newcomers represent a novelty-enhancing condition in teams, as they challenge and broaden the scope of existing methods and knowledge, whereas when the members of a group remain stable, over time they tend to reduce external communication and to ignore and isolate themselves from critical sources of feedback and information (Katz 1982). Since agents search for solutions within a limited range of all possible alternatives, homogeneous groups will search within a similar range (Perretti and Negro 2007). On the contrary, newcomers contribute to innovation by bringing new knowledge and also by seeking opportunities and feedback in new directions (McKelvey 1997), so that a higher incidence of newcomers is predictive of team innovativeness (Perretti and Negro 2007). The literature on organizational learning (Levitt and March 1988; March 1991) shows that mixing newcomers and established members affects organizational learning and innovation. According to March, experienced members know more on average, but their knowledge is redundant with that already in the organization (March 1991). New recruits, instead, are less knowledgeable than the individuals they replace, but what they know is less redundant and they are more likely to deviate from established knowledge. Newcomers enhance exploration, innovation, and the chances of finding creative solutions to team problems, whereas old-timers increase exploitation, inertial behavior, and resistance to new solutions. Overall, renewing members keeps social communities innovative, by easing access to information, improving the ability to consider alternatives, and generating novel and creative solutions (Bantel and Jackson 1989; Jackson 1996; Watson et al. 1993; Guzzo and Dickson 1996). In the case of research activity, novelty and creativity are crucial. Research on 'academic inbreeding', i.e.,
the tendency of academic institutions to recruit personnel who have studied in the same institution, has shown several drawbacks, for individuals as well as for research institutions, related to the parochialism of inbred faculty, who are much less likely than non-inbred colleagues to exchange scholarly information outside their group (Berelson 1960; Pelz and Andrews 1966; Horta et al. 2010). Similarly to academic institutions, scientific outlets are social spaces committed to the production of knowledge. They represent crucial sources of new ideas and, in the case of conferences, also social contexts where a community of scholars meets and establishes new collaborations. It is thus important to understand whether different peer review modes ease or hinder the access of newcomers. We explore the conjecture that when the identity of the authors is revealed to referees, their evaluation will be affected by the authors' previous productivity, in that specific venue and/or overall, hindering publications from newcomers. Accordingly, our first expectation is that, compared to DBR outlets, SBR outlets will display relatively fewer contributions from researchers who have less experience in publishing in that outlet and overall.

Hp1 (outlet newcomers) A scientific outlet's share of articles from researchers with few or no publications in the outlet is smaller when contributions are selected via SBR rather than DBR, other outlets' characteristics being the same.

Hp2 (overall newcomers) A scientific outlet's share of articles from researchers with few or no publications overall is smaller when contributions are selected via SBR rather than DBR, other outlets' characteristics being the same.

While revealing the identity is expected to hinder publications from newcomers to the venue and from newcomers overall, the effect of un-blinding is uncertain for a particular category of newcomers, namely 'experienced newcomers': authors who are newcomers to the outlet but have published elsewhere. Two different expectations can be formulated. First, while experienced newcomers can be disadvantaged because they might not be sufficiently acquainted with the theories, methods, and approaches in the outlet's area of study, referees in SBR might take into account their origin and, when newcomers are particularly experienced, their reputation may support the validity of their claims. According to this line of reasoning we can formulate the following hypothesis.

Hp3a (experienced newcomers welcomed) A scientific outlet's share of articles from researchers new to the outlet but with experience in publishing in other outlets will be larger when contributions are selected via SBR rather than DBR, other outlets' characteristics being the same.

A competing hypothesis is that reviewers might be prejudiced towards contributions coming from other areas of research and/or might perceive experienced researchers coming from other venues as a potential threat to their 'academic tribe' (Becher and Trowler 2001); accordingly:

Hp3b (experienced newcomers not welcomed) A scientific outlet's share of articles from researchers new to the venue but with experience in publishing in other venues will be smaller when contributions are selected via SBR rather than DBR, other outlets' characteristics being the same.

Data and methods

The field of computer science research is particularly suitable to address the questions and hypotheses of this article, since DBR and SBR are both widely adopted.
Computer science research is mostly oriented to propose new models, algorithms, or software, so that reviewers typically focus on a paper's novelty, on whether it addresses a useful problem, and on whether the solution is applicable in practice and supported by sufficient theoretical and empirical validation (Ragone et al. 2013). Differently from most research fields, in computer science conferences are considered at least as important as journals as a publication venue (Meyer et al. 2009; Chen and Konstan 2010; Freyne et al. 2010). Due to this importance of conferences for academic research in computer science, peer review for computer science conferences is done on submitted full papers, as opposed to other academic fields where the selection of contributions is often done on (extended) abstracts. The peer review is often done by a committee of known reviewers (the program committee). The assignment of the submitted papers to reviewers is facilitated by a bidding process: The reviewers bid on articles they would prefer to review; reviewers are expected to review articles for which they feel competent and for which they have no conflict of interest; this process is applied both with DBR and with SBR. Based on the bidding information, the submissions are assigned to reviewers, who remain anonymous to the authors. Online or physical program committee meetings take place to discuss the inclusion of each submitted contribution in the conference program and proceedings. As the subjects of our study, we consider the 21,535 research papers (and their 47,201 authors) published in 2014 or 2015 in the proceedings of 71 of the 80 (Footnote 3) largest computer science conferences in terms of the cumulative number of citations received (source: Microsoft Academic Search (Footnote 4)). We retrieved information on conferences' size as well as reputation from Microsoft Academic Search (a free public search engine for academic papers and literature, developed by Microsoft (Footnote 5)), and we extracted information on the peer review mode from the conferences' websites. As our subjects, we only considered research papers, thus excluding conference contributions such as tool demonstrations, tutorials, short papers, posters, and keynote speeches. To collect historical information on authors in computer science, we used DBLP (Footnote 6), the computer science bibliography. DBLP is the largest database of academic publications in computer science research: it indexes more than 32,000 journal volumes, 31,000 conference or workshop proceedings, and 23,000 monographs, for a total of 3.3 million publications published by more than 1.7 million authors. The full dataset was retrieved on 23 March 2016 from the publicly available full DBLP data dump (Footnote 7). For each author of the aforementioned 21,535 research papers, we used these data to build a profile based on past productivity in the venues and in the field of computer science overall (Table 2 provides additional details).

Tests and variables

We aim to explore whether a conference's share of contributions from different types of newcomers is predicted by articles being reviewed under an SBR or a DBR mode (an article-level characteristic) and by selected conference characteristics. To define whether an article was written by newcomers, we considered the productivity of the most prolific co-author before 2014 (or 2015).
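To make this construction concrete, the following Python sketch illustrates how per-author productivity profiles and the newcomer flag for a paper could be computed (the percentile thresholds are described next); the record format and function names are illustrative assumptions, not the authors' actual pipeline, and the parsing of the DBLP dump is omitted.

```python
from collections import Counter

def author_pub_counts(pubs, cutoff_year, venue=None):
    """Count past publications per author before `cutoff_year`.

    `pubs` is assumed to be an iterable of (authors, year, venue) records
    parsed from the DBLP XML dump; passing `venue` restricts the count to
    one conference, yielding per-venue productivity.
    """
    counts = Counter()
    for authors, year, v in pubs:
        if year < cutoff_year and (venue is None or v == venue):
            for author in authors:
                counts[author] += 1
    return counts

def max_coauthor_productivity(paper_authors, counts):
    """Past productivity of the most prolific co-author of a paper."""
    return max(counts.get(a, 0) for a in paper_authors)

def is_newcomer_paper(paper_authors, counts, threshold):
    """True if even the most prolific co-author falls at or below a
    percentile-based productivity threshold (e.g. the 25th percentile)."""
    return max_coauthor_productivity(paper_authors, counts) <= threshold
```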
Accordingly, we first computed percentiles of past productivity at the conference level and in the field of computer science, considering publications in conferences and journals included in DBLP (Table 1). Next, we defined as conference newcomers the authors with a past productivity in the conference up to the 25th percentile, i.e., at most two publications. In a similar vein, we considered as field newcomers those authors with a past productivity in the field of computer science research up to the 25th percentile, i.e., at most 41 publications. Further, we defined as experienced newcomers the authors who are newcomers to the conference (at most two publications) and experienced in publishing in other venues, namely having a productivity in the field above the median of the sample (above 85 publications). Table 1 Number of articles before 2014 of the most productive co-author, in the 71 most impactful conferences (Source: DBLP and Microsoft Academic Search). By employing values at the article level, we could compute the dependent variables at the conference level, namely proportions given by the ratio between the number of articles from newcomers and the total number of articles accepted (Table 2). The dependent variables are then given by the average of \(n_j\) binary variables \(y_i\), assuming a value of 1 if the article is authored by newcomer(s) and 0 if not, where \(n_j\) is the total number of articles accepted in conference j, so that the proportion \(\pi _j\) results from \(n_j\) independent events of peer review, and the \(y_i\) are binary variables that can be modelled through a logistic regression. $$\begin{aligned} \pi _j = \frac{1}{n_j}\sum _{i=1}^{n_j}y_i \end{aligned}$$ Table 2 describes the dependent variables. Table 2 Dependent variables (Source: authors' elaboration on DBLP data). The independent variables include: (i) the peer review mode, (ii) the age, and (iii) the reputation of the conference (Table 3). In the hypotheses section we have already discussed the expected effects of the peer review mode. Moreover, older conferences are expected to display a smaller share of contributions from newcomers, as the community around the conference is expected to stabilize over time. The reputation and quality of the conference may also have a negative impact on the share of newcomers, since newcomers can be discouraged from submitting to a highly reputed conference, and high-quality conferences tend to be more selective, thus hindering less experienced researchers. We consider the conference size as a control variable (Footnote 8). Table 3 describes the characteristics of the predicting variables. Table 3 Independent and control variables (Source: authors' elaboration on Microsoft Academic Search data and conference website information). We run logistic regressions of proportions, where \(Logit(\pi _j)\) represents the logit of the predicted proportion of articles from newcomers in conference j, \(\beta _0\) represents the log odds of an article being authored by newcomers for a conference adopting single blind peer review and at the grand mean of age, reputation, and size (the reference categories), while the terms \(\beta _1 \cdot DBR_j\), \(\beta _2 \cdot Age_j\), \(\beta _3 \cdot Rep_j\), and \(\beta _4 \cdot Size_j\) represent the differentials in the log odds of a paper being from newcomers for a paper reviewed in double blind peer review and presented in a conference whose age, reputation, and size are expressed as deviations from the grand mean.
$$\begin{aligned} Logit(\pi _j) = \beta _0 + \beta _1 \cdot DBR_{j-gm} + \beta _2 \cdot Age_{j-gm} + \beta _3 \cdot Rep_{j-gm} + \beta _4 \cdot Size_{j-gm} \end{aligned}$$ We estimate the model through Bayesian Markov chain Monte Carlo methods (Snijders and Bosker 2012), which produce chains of model estimates and sample the distribution of the model parameters. As a diagnostic for model comparison we employ the Deviance Information Criterion (DIC), which penalizes model complexity, similarly to the Akaike Information Criterion (AIC) (Footnote 9), and is a measure particularly valuable for testing improved goodness of fit in logit models (Jones and Subramanian 2012).

Descriptive statistics

Descriptive statistics can be provided both at the conference level and at the article level. Table 4 shows that, on average, the share of contributions from newcomers to a conference is 32% and from field newcomers 23%, whereas contributions from experienced newcomers represent 10%. There is a considerable level of variation between conferences, as shown by the standard deviations and the minimum and maximum values. In our set, 34 conferences adopt DBR and 37 SBR; SBR conferences tend to be larger, so that 66% of the published articles were reviewed in SBR. Considerable variability exists regarding conferences' age, reputation, and size. Table 4 Conference characteristics, descriptive statistics (n = 71). Some significant correlations emerge between conference characteristics (Table 5). Most notably, highly reputed conferences have fewer contributions from newcomers, and larger conferences have fewer contributions from experienced newcomers. Considering conference averages, there is no significant correlation between the peer review mode and the shares of newcomers. However, simple correlations do not take into account other conference characteristics. Moreover, a macro-macro association (between the share of newcomers and the peer review mode) is inappropriate for drawing meaningful implications about a micro-micro relationship (i.e., that an article from newcomers has a higher chance of being accepted under DBR), because it would incur an ecological fallacy: the relationship between individual variables cannot be inferred from the correlation of the variables collected for the group to which those individuals belong (Robinson 2009). Table 5 Pearson's correlations between conference characteristics (n = 71). Article-level correlations (Table 6) indeed show a significant association between DBR and an article being coauthored by newcomers, although the correlation is positive, as expected, only for conference newcomers and experienced newcomers, whereas it is negative for field newcomers. Table 6 Correlations between article characteristics (n = 21,535).

Hypotheses 1 and 2

Logistic regressions of proportions are the appropriate technique to explore whether conferences adopting DBR are more likely to display a larger proportion of contributions from newcomers, while taking into consideration other conference characteristics. Tables 7 and 8 present the results of the regressions exploring hypotheses 1 and 2. For each hypothesis, the results of three regression models are displayed: (i) an empty model, i.e., a model with no predicting variables; (ii) a DBR model, i.e., a model with only the peer review mode as predicting variable; and (iii) a full model, including all the predicting variables.
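The model above is estimated with Bayesian MCMC and compared via DIC; as a minimal frequentist sketch of the same logistic regression of proportions, a binomial GLM in Python could look as follows, where the input file and column names are illustrative assumptions rather than the authors' actual code:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per conference; the columns are assumed for illustration:
#   prop_newcomers - share of accepted papers authored by newcomers (pi_j)
#   n_papers       - number of accepted papers (n_j)
#   dbr            - 1 if double blind, 0 if single blind
#   age, rep, size - conference age, reputation, and size
df = pd.read_csv("conferences.csv")  # hypothetical input file

# Grand-mean centering, matching the "j-gm" terms in the model above.
for col in ["dbr", "age", "rep", "size"]:
    df[col + "_gm"] = df[col] - df[col].mean()

X = sm.add_constant(df[["dbr_gm", "age_gm", "rep_gm", "size_gm"]])

# Logistic regression of proportions: a binomial GLM whose response is a
# proportion, with var_weights carrying the number of trials (papers).
res = sm.GLM(df["prop_newcomers"], X,
             family=sm.families.Binomial(),
             var_weights=df["n_papers"]).fit()
print(res.summary())

# Exponentiated coefficients give odds ratios; e.g. exp(0.40) ~ 1.5
# corresponds to roughly 50% higher odds of a newcomer paper under DBR.
print(np.exp(res.params))
```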
Table 7 Regression, share of newcomers to the conference. Table 8 Regression, share of newcomers to computer science. The results confirm hypothesis 1: DBR is a significant predictor of a higher share of articles from newcomers to the conference. To calculate the odds of an article being from newcomers to the conference under DBR compared to the baseline SBR, we exponentiate the differential logit: \(\exp(0.40)=1.50\), which means 50% higher odds under DBR than under SBR. The age of the conference does not have a significant impact, whereas more reputed and larger conferences display relatively fewer contributions from newcomers to the conference. DIC values highlight the better fit of the full model with respect to the DBR model and the empty model. Hypothesis 2 is not confirmed: DBR is not a significant predictor of a higher share of articles from newcomers to the overall field of computer science research, i.e., authors with relatively fewer publications on DBLP. The age of the conference does not have a significant impact, whereas more reputed and larger conferences display relatively fewer contributions from newcomers to the field.

Hypotheses 3a and 3b

The results of the regression predicting the share of contributions from experienced newcomers support hypothesis 3b, 'experienced newcomers not welcomed' (Table 9). DBR is in fact a significant and positive predictor of a higher share of articles from this category of newcomers. The odds of an article being from experienced newcomers under DBR compared to the baseline SBR are \(\exp (0.49)=1.64\), which means 64% higher odds under DBR than under SBR. The age and size of the conference have a significant and negative impact, whereas reputation is not significant. DIC values highlight the better fit of the full model with respect to the DBR model and the empty model. Table 9 Regression, share of experienced newcomers. To further explore the relationship between the peer review mode and contributions from experienced newcomers, we provide descriptive statistics of the share of contributions from four categories of authors: (i) newcomers in computer science and in the conference, (ii) newcomers in computer science and experienced in the conference, (iii) experienced in computer science and in the conference, (iv) experienced in computer science and newcomers in the conference. The threshold for the conference is at the 25th percentile of productivity (below 3 publications), whereas we considered three different thresholds of productivity for defining an author as experienced in computer science research, namely: (1) above the 25th percentile (41 publications), (2) above the median (85 publications), and (3) above the 75th percentile (169 publications). Table 10 confirms that, compared to SBR conferences, DBR conferences display a larger share of contributions from newcomers to the conference (categories i and iv), in particular those that are experienced in computer science research and especially the highly experienced ones: the share of contributions from experienced newcomers is 19% in DBR versus 12% in SBR for experience above the 25th percentile, 11% versus 6% above the median, and 5% versus 2% above the 75th percentile of productivity. In turn, DBR conferences display almost twice as many contributions from highly experienced newcomers as SBR conferences. Table 10 Number and share of contributions from categories of coauthors.

Alternative specifications of newcomers

As a final test, we explore whether the results are confirmed with more stringent definitions of newcomers.
We test different thresholds, namely: newcomers to the conference as researchers with (i) no previous publications or (ii) at most one publication; newcomers to the field as researchers with at most 18 publications (i.e., the 10th percentile of productivity); and newcomers to the conference who are experienced in the field, considering as newcomers those researchers with at most one publication in the conference and as experienced in the field those with a productivity above the median value. The alternative specifications are highly correlated with the previous ones. Conferences' share of newcomers to the conference (below 3 publications) correlates at 0.812** with the no-previous-publications definition and at 0.964** with the at-most-one-publication definition; the shares of newcomers to the field below the 25th and the 10th percentile of productivity correlate at 0.930**; and the two measures of conferences' share of newcomers to the conference who are experienced in the field correlate at 0.956**. The results of the regressions confirm the findings (Table 11). Table 11 Full model binary logistic regressions with alternative specifications.

This article investigated whether revealing the identity of authors to referees is related to the shares of publications from newcomers, as referees' evaluation may be affected by the authors' track record of publications. Understanding the effects of peer review modes on accessibility to newcomers is important, as the literature on innovation and on inbreeding in research shows that newcomers are important in providing new perspectives and novel and creative ideas and solutions, thus playing a crucial role in advancing knowledge in a given field of study. We explored the assumption of a reputation bias in computer science research, where two modes of peer review are adopted, namely single blind and double blind peer review, with the identity of authors revealed to referees in the former but not in the latter mode. We considered 71 of the 80 most impactful computer science conferences, and retrieved data on 21,535 articles and on conference characteristics from the DBLP database and the conferences' websites. We tested the hypotheses that three categories of newcomers are related to fewer publications in SBR with respect to DBR, namely newcomers to the conference, newcomers to computer science research, and newcomers to the conference who are otherwise experienced in publishing in computer science. We found that, after taking into consideration the size, age, and reputation of the conference, contributions from newcomers to the conference are underrepresented when articles are reviewed in SBR mode. We did not find confirmation that contributions from newcomers to computer science research are hindered in SBR conferences, which can possibly be related to the fact that, compared to DBR conferences, in SBR conferences experienced researchers are underrepresented when they are newcomers to the conference. In fact, the regression results and the descriptive analysis show that DBR conferences display almost twice as many contributions from this category of authors. Overall, the results suggest that, by knowing the identity of the authors, reviewers may be biased against authors who are not sufficiently embedded in their research community.
In recent years some journals decided to switch the peer review mode from DBR to SBR, under the argument that search engines have made it easier to guess authors' identities (AER 2011); our results suggest that identities are not made fully evident in all fields, so that reintroducing SBR may have non-negligible consequences in terms of access for newcomers and bias in peer review. Arguably, in order to consider whether our findings can be generalized to academic journals and other fields of research, one has to consider how easy it is to guess or retrieve the authors' identity; namely, it can be expected that (i) reviewers of academic journals focused on niche research topics are more likely to know who the authors are than reviewers of academic journals focused on broad research topics, and that (ii) in fields where it is common practice to publish preprints (for instance, economics, on websites like repec.org), reviewers can retrieve authors' identities more easily than in fields where this is not a common practice. We identify some promising directions for future research. First, to provide further evidence on the consequences of revealing authors' identity on the outcome of peer review, future studies can consider longitudinal data and conferences that have switched peer review mode in the considered period. This will allow testing our or similar hypotheses with a multilevel design and exploring random effects as well (Subramanian et al. 2009). The availability of data on both submitted and accepted papers would provide additional evidence in this regard. Second, future studies may explore the extent and the way in which different degrees of a conference's openness to newcomers affect knowledge evolution and advancement in a research community.

Footnote 1: The norms are communalism, universalism, disinterestedness, and organized skepticism, the so-called 'CUDOS' (Merton 1973).
Footnote 2: Pontille and Torny built a detailed depiction of the evolution of the anonymity debate (Pontille and Torny 2014).
Footnote 3: Nine conferences were excluded for: (i) adopting a review process different from the bidding process typically employed in computer science (e.g., VLDB); (ii) missing or unclear information (HICSS, ISCAS, ISMB); (iii) not being found on DBLP (BIOMED, Storage and Retrieval) or having multiple pages (ECCV); (iv) one case of a merger (ICGA into GECCO).
Footnote 4: Top conferences in computer science by cumulative number of citations, Microsoft Academic Search: http://academic.research.microsoft.com/RankList?entitytype=3&topdomainid=2&subdomainid=0&last=0&orderby=1.
Footnote 5: http://academic.research.microsoft.com.
Footnote 6: http://dblp.uni-trier.de/.
Footnote 7: A dump of the data is available at http://dblp.uni-trier.de/xml/; information and statistics on DBLP can be retrieved at http://dblp.uni-trier.de/faq/What+is+dblp.html and http://dblp.uni-trier.de/statistics/.
Footnote 8: We also controlled whether conferences indexed in Scopus or the Web of Science (WoS) display different peer review modes or can predict the share of newcomers. However, we found that the large majority of conferences are indexed and that there is no significant difference in peer review mode between indexed and non-indexed conferences. In fact, only seven conferences are not indexed in Scopus, of which four are DBR and three SBR; 64 are indexed, of which 30 are DBR and 34 SBR. Four conferences are not indexed in the WoS (one DBR and three SBR), 11 have been covered but are no longer updated (9 DBR and 2 SBR), and 56 are indexed (24 DBR and 32 SBR).
Footnote 9: The Akaike Information Criterion, AIC (Akaike 1974), compares models by considering both goodness of fit and model complexity, estimating the loss of information due to using a given model to represent the true model, i.e., a hypothetical model that would perfectly describe the data. Accordingly, the model with the smaller AIC is the one implying the smaller loss of information, and thus the one with more chances of being the best model. In particular, given models 1 to n and \(model_{min}\) being the one with the smallest AIC, \(\exp ((AIC_{min}-AIC_{j})/2)\) indicates the relative likelihood that model j, rather than \(model_{min}\), minimizes the loss of information.

AER. (2011). Special announcement to authors. American Economic Review, 3(2).
Armstrong, J. S. (1997). Peer review for journals: Evidence on quality control, fairness, and innovation. Science and Engineering Ethics, 3(1), 63–84.
Bachand, R. G., & Sawallis, P. P. (2003). Accuracy in the identification of scholarly and peer-reviewed journals and the peer-review process across disciplines. The Serials Librarian, 45(2), 39–59.
Bantel, K. A., & Jackson, S. E. (1989). Top management and innovations in banking: Does the composition of the top team make a difference? Strategic Management Journal, 10(S1), 107–124.
Bazerman, C. (1988). Shaping written knowledge: The genre and activity of the experimental article in science (Vol. 356). Madison: University of Wisconsin Press.
Becher, T., & Trowler, P. R. (1989/2001). Academic tribes and territories: Intellectual enquiry and the culture of disciplines. Buckingham: Open University Press.
Bedeian, A. G. (2004). Peer review and the social construction of knowledge in the management discipline. Academy of Management Learning & Education, 3(2), 198–216.
Benedek, E. P. (1976). Editorial practices of psychiatric and related journals: Implications for women. American Journal of Psychiatry, 133(1), 89–92.
Berelson, B. (1960). Graduate education in the United States. Washington, DC: ERIC.
Bianchi, F., & Squazzoni, F. (2015). Is three better than one? Simulating the effect of reviewer selection and behavior on the quality and efficiency of peer review. In 2015 Winter Simulation Conference (WSC) (pp. 4081–4089). IEEE.
Blank, R. M. (1991). The effects of double-blind versus single-blind reviewing: Experimental evidence from the American Economic Review. The American Economic Review, 81, 1041–1067.
BMJ. (1974). Editorial: Both sides of the fence. British Medical Journal, 2(5912), 185–186.
Bornmann, L., & Daniel, H.-D. (2009). The luck of the referee draw: The effect of exchanging reviews. Learned Publishing, 22(2), 117–125.
Borsuk, R. M., Aarssen, L. W., Budden, A. E., Koricheva, J., Leimu, R., Tregenza, T., et al. (2009). To name or not to name: The effect of changing author gender on peer review. BioScience, 59(11), 985–989.
Budden, A. E., Tregenza, T., Aarssen, L. W., Koricheva, J., Leimu, R., & Lortie, C. J. (2008). Double-blind review favours increased representation of female authors. Trends in Ecology & Evolution, 23(1), 4–6.
Burnham, J. C. (1990). The evolution of editorial peer review. JAMA, 263(10), 1323–1329.
Campanario, J. (2009). Rejecting and resisting Nobel class discoveries: Accounts by Nobel laureates. Scientometrics, 81(2), 549–565.
Chen, J., & Konstan, J. A. (2010). Conference paper selectivity and impact. Communications of the ACM, 53(6), 79–83.
Crane, D. (1967). The gatekeepers of science: Some factors affecting the selection of articles for scientific journals. The American Sociologist, 32, 195–201.
Daniel, H.-D. (1993). Guardians of science: Fairness and reliability of peer review. New York: VCH.
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147–160.
Ernst, E., & Kienbacher, T. (1991). Chauvinism. Nature, 352, 560.
Franzoni, C., Scellato, G., & Stephan, P. (2014). The mover's advantage: The superior performance of migrant scientists. Economics Letters, 122(1), 89–93.
Freyne, J., Coyle, L., Smyth, B., & Cunningham, P. (2010). Relative status of journal and conference publications in computer science. Communications of the ACM, 53(11), 124–132.
Guzzo, R. A., & Dickson, M. W. (1996). Teams in organizations: Recent research on performance and effectiveness. Annual Review of Psychology, 47(1), 307–338.
Horta, H., Veloso, F. M., & Grediaga, R. (2010). Navel gazing: Academic inbreeding and scientific productivity. Management Science, 56(3), 414–429.
Jackson, S. E. (1996). The consequences of diversity in multidisciplinary work teams. In Handbook of work group psychology (pp. 53–75).
Jones, K., & Subramanian, S. (2012). Developing multilevel models for analysing contextuality, heterogeneity and change. Bristol: Centre for Multilevel Modelling.
Katz, R. (1982). The effects of group longevity on project communication and performance. Administrative Science Quarterly, 27, 81–104.
Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64(1), 2–17.
Levitt, B., & March, J. G. (1988). Organizational learning. Annual Review of Sociology, 14, 319–340.
Link, A. M. (1998). US and non-US submissions: An analysis of reviewer bias. JAMA, 280(3), 246–247.
Madden, S., & DeWitt, D. (2006). Impact of double-blind reviewing on SIGMOD publication rates. ACM SIGMOD Record, 35(2), 29–32.
Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1(2), 161–175.
March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87.
Marsden, P. V. (1987). Core discussion networks of Americans. American Sociological Review, 52, 122–131.
Marsh, H. W., Jayasinghe, U. W., & Bond, N. W. (2008). Improving the peer-review process for grant applications: Reliability, validity, bias, and generalizability. American Psychologist, 63(3), 160.
McKelvey, M. (1997). Using evolutionary theory to define systems of innovation. In Systems of innovation: Technologies, institutions and organizations (pp. 200–222).
Merton, R. K. (1973). The sociology of science: Theoretical and empirical investigations. Chicago: University of Chicago Press.
Meyer, B., Choppy, C., Staunstrup, J., & van Leeuwen, J. (2009). Viewpoint: Research evaluation for computer science. Communications of the ACM, 52(4), 31–34.
Oswald, A. J. (2008). Can we test for bias in scientific peer-review? IZA Discussion Paper 3665. Bonn: Institute for the Study of Labor.
Pelz, D. C., & Andrews, F. M. (1966). Scientists in organizations: Productive climates for research and development. New York: Wiley.
Perretti, F., & Negro, G. (2007). Mixing genres and matching people: A study in innovation and team composition in Hollywood. Journal of Organizational Behavior, 28(5), 563–586.
Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187–195.
Pontille, D., & Torny, D. (2014). The blind shall see! The question of anonymity in journal peer review. Ada: A Journal of Gender, New Media, and Technology. doi:10.7264/N3542KVW.
Ragone, A., Mirylenka, K., Casati, F., & Marchese, M. (2013). On peer review in computer science: Analysis of its effectiveness and suggestions for improvement. Scientometrics, 97(2), 317–356.
Robinson, W. S. (2009). Ecological correlations and the behavior of individuals. International Journal of Epidemiology, 38(2), 337–341.
Roebber, P. J., & Schultz, D. M. (2011). Peer review, program officers and science funding. PLoS ONE, 6(4), e18680.
Ross, J. S., Gross, C. P., Desai, M. M., Hong, Y., Grant, A. O., Daniels, S. R., et al. (2006). Effect of blinded peer review on abstract acceptance. JAMA, 295(14), 1675–1680.
Ruef, M. (2002). Strong ties, weak ties and islands: Structural and cultural predictors of organizational innovation. Industrial and Corporate Change, 11(3), 427–449.
Sandström, U., & Hällsten, M. (2008). Persistent nepotism in peer-review. Scientometrics, 74(2), 175–189.
Siler, K., Lee, K., & Bero, L. (2015). Measuring the effectiveness of scientific gatekeeping. Proceedings of the National Academy of Sciences, 112(2), 360–365.
Snell, R. R. (2015). Ménage à quoi? Optimal number of peer reviewers. PLoS ONE, 10(4), e0120838.
Snijders, T. A., & Bosker, R. J. (2012). Multilevel analysis: An introduction to basic and advanced multilevel modeling. London: SAGE.
Snodgrass, R. (2006). Single- versus double-blind reviewing: An analysis of the literature. ACM SIGMOD Record, 35(3), 8–21.
Soler, M. (2001). How inbreeding affects productivity in Europe. Nature, 411(6834), 132.
Squazzoni, F., Bravo, G., & Takács, K. (2013). Does incentive provision increase the quality of peer review? An experimental study. Research Policy, 42(1), 287–294.
Subramanian, S., Jones, K., Kaddour, A., & Krieger, N. (2009). Revisiting Robinson: The perils of individualistic and ecologic fallacy. International Journal of Epidemiology, 38(2), 342–360.
Tung, A. K. (2006). Impact of double blind reviewing on SIGMOD publication: A more detailed analysis. ACM SIGMOD Record, 35(3), 6–7.
Ward, W. D., & Goudsmit, S. (1967). Reviewer and author anonymity. Physics Today, 20, 12.
Watson, W. E., Kumar, K., & Michaelsen, L. K. (1993). Cultural diversity's impact on interaction process and performance: Comparing homogeneous and diverse task groups. Academy of Management Journal, 36(3), 590–602.
Webb, T. J., O'Hara, B., & Freckleton, R. P. (2008). Does double-blind review benefit female authors? Heredity, 77, 282–291.
Weller, A. C. (2001). Editorial peer review: Its strengths and weaknesses. Medford: Information Today.
Wenneras, C., & Wold, A. (1997). Nepotism and sexism in peer-review. Nature, 387(6631), 341.
Zuckerman, H., & Merton, R. K. (1971). Patterns of evaluation in science: Institutionalisation, structure and functions of the referee system. Minerva, 9(1), 66–100.

The authors thank the two anonymous reviewers for their useful comments and Kelvin Jones for discussing the statistical method. This article is based upon work from COST Action TD1306 "New Frontiers of Peer Review", supported by COST (European Cooperation in Science and Technology).
This work was supported by Fonds voor Wetenschappelijk Onderzoek Vlaanderen.
Correspondence to Marco Seeber.
Seeber, M., & Bacchelli, A. (2017). Does single blind peer review hinder newcomers? Scientometrics, 113, 567–585. https://doi.org/10.1007/s11192-017-2264-7. Issue date: October 2017.
Keywords: Double blind review; Single blind review; In-group out-group.
Oscillation for second-order half-linear delay damped dynamic equations on time scales. Jimeng Li & Jiashan Yang.

We investigate oscillation of second-order half-linear variable delay damped dynamic equations $$ \bigl[a(t) \bigl\vert x^{\Delta }(t) \bigr\vert ^{\lambda -1}x^{\Delta }(t) \bigr]^{\Delta }+b(t) \bigl\vert x ^{\Delta }(t) \bigr\vert ^{\lambda -1}x^{\Delta }(t)+p(t) \bigl\vert x \bigl(\delta (t) \bigr) \bigr\vert ^{\lambda -1}x \bigl(\delta (t) \bigr)=0 $$ on a time scale \(\mathbb{T}\). By using the generalized Riccati transformation and the inequality technique, we establish some new oscillation criteria for the equations under the condition $$ \int ^{\infty }_{t_{0}} \bigl[a^{-1}(s)e_{{-b/a}}(s,t_{0}) \bigr]^{1/\lambda } \Delta s< \infty. $$ These results deal with some cases not covered by existing results in the literature.

In this paper, we are concerned with the oscillatory behavior of a second-order half-linear damped dynamic equation $$ \bigl[a(t) \bigl\vert x^{\Delta }(t) \bigr\vert ^{\lambda -1}x^{\Delta }(t) \bigr]^{\Delta }+b(t) \bigl\vert x ^{\Delta }(t) \bigr\vert ^{\lambda -1}x^{\Delta }(t)+p(t) \bigl\vert x \bigl(\delta (t) \bigr) \bigr\vert ^{\lambda -1}x \bigl(\delta (t) \bigr)=0,\quad t\in \mathbb{T}, \qquad (1.1) $$ where \(t\geqslant t_{0}\) with \(t_{0}>0\), and \(\lambda >0\) is a constant. For completeness, we recall the following concepts related to the notion of time scales. A time scale \(\mathbb{T}\) is an arbitrary nonempty closed subset of the real numbers \(\mathbb{R}\). On the time scale \(\mathbb{T}\) we define the forward and the backward jump operators by $$\sigma (t)=\inf \{s\in \mathbb{T}:s>t\} \quad\text{and}\quad \rho (t)=\sup \{s\in \mathbb{T}:s< t\}. $$ A point \(t\in \mathbb{T}\) is said to be left-dense if \(\rho (t)=t\), right-dense if \(\sigma (t)=t\), left-scattered if \(\rho (t)< t\), and right-scattered if \(\sigma (t)>t\). The graininess \(\mu\) of the time scale is defined by \(\mu (t)=\sigma (t)-t\). For a function \(f: \mathbb{T}\rightarrow \mathbb{R}\), the (delta) derivative is defined by \(f^{\Delta }(t)=\frac{f(\sigma (t))-f(t)}{\sigma (t)-t}\) if f is continuous at t and t is right-scattered. If t is right-dense, then the derivative is defined by \(f^{\Delta }(t)=\lim\limits_{s\rightarrow t}\frac{f(t)-f(s)}{t-s}\), provided this limit exists. A function \(f:\mathbb{T}\rightarrow \mathbb{R}\) is said to be rd-continuous if it is continuous at each right-dense point and if there exists a finite left limit at all left-dense points. The set of rd-continuous functions \(f:\mathbb{T}\rightarrow \mathbb{R}\) is denoted by \(C_{rd}(\mathbb{T},\mathbb{R})\). The derivative \(f^{\Delta }\) of f and the shift \(f^{\sigma }\) of f are related by the formula \(f^{\sigma }=f+\mu f^{\Delta }\), where \(f^{\sigma }=f\circ \sigma \). Throughout, we assume the following hypotheses: (H1): \(\mathbb{T}\) is an arbitrary time scale (i.e., a nonempty closed subset of the real numbers \(\mathbb{R}\)) which is unbounded above (i.e., \(\sup \mathbb{T}=\infty \)), and \(t_{0}\in \mathbb{T}\) with \(t_{0}>0\); we define the time scale interval \([t_{0},\infty )_{\mathbb{T}}\) by \([t_{0},\infty )_{\mathbb{T}}=[t_{0},\infty ) \cap {\mathbb{T}}\). (H2): the delay function \(\delta:\mathbb{T}\rightarrow \mathbb{T}\) is strictly increasing and differentiable, and \(\delta (t) \leqslant t\), \(\lim\limits_{t\rightarrow \infty }\delta (t)=\infty\), \(\delta (\mathbb{T})=\mathbb{T}\).
(H3): \(a,b,p\in C_{rd}(\mathbb{T},(0,\infty ))\) (i.e., \(a,b,p\) are positive rd-continuous functions) and \(-b/a\in \Re ^{+}\). By a solution of (1.1) we mean a nontrivial real-valued function \(x\in C_{rd}([T_{x},\infty )_{\mathbb{T}},\mathbb{R})\), where \(T_{x}\in [t_{0},\infty )_{\mathbb{T}}\), which has the property that \(a(t)|x^{\Delta }(t)|^{\lambda -1}x^{\Delta }(t)\in C^{1}_{rd}([T_{x}, \infty )_{\mathbb{T}},\mathbb{R})\) and satisfies (1.1) for \(t\in [T _{x},\infty )_{\mathbb{T}}\). Solutions vanishing in some neighborhood of infinity are excluded from our consideration. A solution x of (1.1) is said to be oscillatory if it is neither eventually positive nor eventually negative; otherwise, it is nonoscillatory. Equation (1.1) is called oscillatory if all its solutions oscillate. Recently, there has been increasing interest in studying the oscillatory behavior of solutions to various classes of dynamic equations on time scales; we refer the reader to [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23] and the references cited therein. In particular, oscillation of dynamic equations with damping has attracted significant attention of researchers due to the fact that such equations arise in many real life problems, see [1,2,3,4,5, 9, 12, 13, 16, 17, 20, 22]. For instance, Zhang et al. [1,2,3, 5] considered the second-order damped dynamic equation (1.1) under the conditions $$ \int ^{\infty }_{t_{0}} \bigl[a^{-1}(s)e_{{-b/a}}(s,t_{0}) \bigr]^{1/\lambda } \Delta s=\infty \qquad (1.2) $$ and $$ \int ^{\infty }_{t_{0}} \bigl[a^{-1}(s)e_{{-b/a}}(s,t_{0}) \bigr]^{1/\lambda } \Delta s< \infty, \qquad (1.3) $$ respectively, and established many sufficient conditions for oscillation of (1.1). The main result is as follows.

Theorem 1.1 (see [1], Theorem 4.1) Assume (H1)–(H3) and (1.2). If there exists a positive and differentiable function \(\varphi:\mathbb{T}\rightarrow \mathbb{R}\) such that $$ \limsup_{t\rightarrow \infty } \int ^{t}_{t_{0}}\varphi (s) \biggl[p(s)- \frac{a( \delta (s))}{(\lambda +1)^{\lambda +1}(\delta ^{\Delta }(s))^{\lambda }} \biggl\vert \frac{\varphi ^{\Delta }(s)}{\varphi (s)}-\frac{b(s)}{a(s)} \biggr\vert ^{\lambda +1} \biggr]\Delta s=\infty, \qquad (1.4) $$ then Eq. (1.1) is oscillatory on \([t_{0},\infty )_{\mathbb{T}}\).

From Theorem 1.1 and its proof, one can obtain various types of Kamenev-type oscillation criteria (such as Theorem 4.2 in [1]) and different classes of Philos-type oscillation criteria for Eq. (1.1) under condition (1.2) (see [2, 3, 5]).

Theorem 1.2 Assume (H1)–(H3), (1.3), and (1.4). If $$ \int ^{\infty }_{t_{0}} \biggl[\frac{1}{a(t)} \int ^{t}_{t_{0}}e_{{-b/a}} \bigl(t, \sigma (s) \bigr)p(s)\Delta s \biggr]^{1/\lambda }\Delta t=\infty, \qquad (1.5) $$ then every solution \(x(t)\) of Eq. (1.1) is either oscillatory or satisfies \(\lim\limits_{t\rightarrow \infty }x(t)=0\).

Obviously, under condition (1.3), Theorem 1.2 (and likewise other results, for example, Theorems 4.3 and 4.4 in [2] and Theorem 3.5 in [9]) cannot ensure that a solution \(x(t)\) of Eq. (1.1) is oscillatory. In applications, the conclusion that every solution of Eq. (1.1) either oscillates or converges to zero is inconvenient, since we do not know under which conditions, separately, the solutions oscillate or converge to zero. On this basis, in [3] and [5], the authors discussed oscillation criteria for Eq. (1.1), obtained results ensuring that every solution of Eq. (1.1) oscillates, and improved the results in [1, 2, 9]. One of the results they provided is as follows.

Theorem 1.3 Assume (H1)–(H3), (1.3), and (1.4).
If for every \(t_{1}\in [t_{0},\infty )_{ \mathbb{T}}\), $$ \int ^{\infty }_{t_{1}} \biggl[\frac{1}{a(t)} \int ^{t}_{t_{1}}e_{{-b/a}} \bigl(t, \sigma (s) \bigr)\theta ^{\lambda }(s)p(s)\Delta s \biggr]^{1/\lambda } \Delta t = \infty, $$ where \(\theta (t)=\int ^{\infty }_{t}a^{-1/\lambda }(s)\Delta s\), then Eq. (1.1) is oscillatory on \([t_{0},\infty )_{\mathbb{T}}\). Using the same method, one can deduce from Theorem 1.3 a great number of oscillation criteria for Eq. (1.1) (see [3, 5]). On the other hand, it is well known that the Euler differential equation $$ \bigl(t^{2}x'(t) \bigr)'+p_{{0}}x(t)=0,\quad t \geqslant 1, $$ is oscillatory if \(p_{{0}}>1/4\). However, Theorem 1.3 cannot be applied in (1.7) due to \(\int ^{\infty }_{1}t^{-2}\ln t \,\mathrm{d}t<\infty \). One can easily see that the recent results (such as the results in [3, 4, 6,7,8, 12,13,14,15,16,17,18,19,20,21], etc.) cannot be applied in (1.7). The purpose of this article is to obtain new criteria for the oscillation of (1.1) under condition (1.3) which promote some existing results. Lemma 2.1 Assume that \(x(t)\) is delta-differentiable and eventually positive or eventually negative, then $$ \bigl(x^{\lambda }(t) \bigr)^{\Delta }=\lambda x^{\Delta }(t) \int ^{1}_{0} \bigl[hx \bigl( \sigma (t) \bigr) +(1-h)x(t) \bigr]^{\lambda -1}\,\mathrm{d}h. $$ (Yang's inequality [17]) Let \(A>0,B>0\), and \(p>0\) be constants, then \(AB\leqslant \frac{A^{p}}{p}+ \frac{B^{q}}{q}\) for \(\frac{1}{p}+\frac{1}{q}=1\). ([21]) Let \(\lambda \geqslant 1\) be a quotient of two odd numbers, then $$ X^{1+\frac{1}{\lambda }}-(X-Y)^{1+\frac{1}{\lambda }} \leqslant Y^{\frac{1}{ \lambda }} \biggl[ \biggl(1+\frac{1}{\lambda } \biggr)X-\frac{1}{\lambda }Y \biggr],\quad XY\geqslant 0. $$ $$ \limsup_{t\rightarrow \infty } \int ^{t}_{t_{0}} \biggl[p(s) \theta ^{\lambda } \bigl(\sigma (s) \bigr) -\frac{a(s)\theta ^{\lambda (\lambda +1)}(s)}{( \lambda +1)^{\lambda +1}\theta ^{\lambda ^{2}}(\sigma (s))} \biggl( \frac{\overline{ \theta }(s)}{\theta ^{\lambda }(s)}+ \frac{b(s)}{a(s)} \biggr)^{\lambda +1} \biggr]\Delta s= \infty, $$ $$\begin{aligned} &\theta (t)= \int ^{\infty }_{t} \bigl[a^{-1}(s)e_{{-b/a}}(s,t_{0}) \bigr] ^{1/\lambda }\Delta s,\\ & \overline{\theta }(t)= \textstyle\begin{cases} \lambda \theta ^{\lambda -1}(t) [a^{-1}(t)e_{{-b/a}}(t,t_{0}) ] ^{1/\lambda },& \lambda \geqslant 1, \\ \lambda \theta ^{\lambda -1}(\sigma (t)) [a^{-1}(t)e_{{-b/a}}(t,t _{0}) ]^{\frac{1}{\lambda }},& 0< \lambda < 1, \end{cases}\displaystyle \end{aligned}$$ Let \(x(t)\) be a nonoscillatory solution of Eq. (1.1). Without loss of generality, we may assume that there exists \(t_{1}\in [t_{0},\infty )_{\mathbb{T}}\) such that \(x(t)>0, x(\delta (t))>0(t \in [t_{1},\infty )_{\mathbb{T}})\). Then, from (1.1), we have for \(t\in [t_{1},\infty )_{\mathbb{T}}\) $$ \bigl[a(t) \bigl\vert x^{\Delta }(t) \bigr\vert ^{\lambda -1}x^{\Delta }(t) \bigr]^{\Delta }+b(t) \bigl\vert x ^{\Delta }(t) \bigr\vert ^{\lambda -1}x^{\Delta }(t)=-p(t)x^{\lambda } \bigl(\delta (t) \bigr)< 0. $$ Proceeding as in the proof of Lemma 3.5 in [1], we get that \(\frac{a(t)|x^{\Delta }(t)|^{\lambda -1}x^{\Delta }(t)}{e_{{-b/a}}(t,t _{0})}(t\in [t_{1},\infty )_{\mathbb{T}})\) is decreasing and \(x^{\Delta }(t)\) is either eventually positive or eventually negative. Therefore, we shall distinguish the following two cases: (I) \(x^{\Delta }(t)>0, t\in [t_{1},\infty )_{\mathbb{T}}\); (II) \(x^{\Delta }(t)<0, t\in [t_{1},\infty )_{\mathbb{T}}\). 
Case (I): \(x^{\Delta }(t)>0, t\in [t_{1},\infty )_{\mathbb{T}}\). As in the proof of [1], Theorem 4.1, one can obtain a contradiction to (1.4). Case (II): \(x^{\Delta }(t)<0, t\in [t_{1},\infty )_{\mathbb{T}}\). Let $$ w(t)=\frac{a(t) \vert x^{\Delta }(t) \vert ^{\lambda -1}x^{\Delta }(t)}{ \vert x(t) \vert ^{\lambda -1}x(t)} =\frac{a(t)(-x^{\Delta }(t))^{\lambda -1}x^{\Delta }(t)}{x^{\lambda }(t)},\quad t\in [t_{1},\infty )_{\mathbb{T}}; $$ (2.3) then \(w(t)<0\) \((t\in [t_{1},\infty )_{\mathbb{T}})\). Since \(\frac{a(t)|x^{\Delta }(t)|^{\lambda -1}x^{\Delta }(t)}{e_{{-b/a}}(t,t_{0})} =\frac{a(t)(-x^{\Delta }(t))^{\lambda -1}x^{\Delta }(t)}{e_{{-b/a}}(t,t_{0})}\) \((t\in [t_{1},\infty )_{\mathbb{T}})\) is decreasing, for all \(s\in [t,\infty )_{\mathbb{T}}\) we have $$ \frac{a(s)(-x^{\Delta }(s))^{\lambda -1}x^{\Delta }(s)}{e_{{-b/a}}(s,t_{0})}\leqslant \frac{a(t)(-x^{\Delta }(t))^{\lambda -1}x^{\Delta }(t)}{e_{{-b/a}}(t,t_{0})}, $$ that is, $$ x^{\Delta }(s)\leqslant \biggl(\frac{e_{{-b/a}}(s,t_{0})}{e_{{-b/a}}(t,t_{0})} \biggr)^{1/\lambda } \frac{a^{1/\lambda }(t)x^{\Delta }(t)}{a^{1/\lambda }(s)}. $$ It follows that $$ x(u)\leqslant x(t)+\frac{a^{1/\lambda }(t)x^{\Delta }(t)}{[e_{{-b/a}}(t,t_{0})]^{1/\lambda }} \int ^{u}_{t} \bigl[a^{-1}(s)e_{{-b/a}}(s,t_{0}) \bigr]^{1/\lambda }\Delta s. $$ Letting \(u\rightarrow \infty \), we find that $$ x(t)+\frac{a^{1/\lambda }(t)x^{\Delta }(t)}{[e_{{-b/a}}(t,t_{0})]^{1/\lambda }}\theta (t)\geqslant 0. $$ (2.5) In view of \(0< e_{{-b/a}}(t,t_{0})\leqslant 1\) and \(x^{\Delta }(t)<0\), we see that \(x(t)+a^{1/\lambda }(t)x^{\Delta }(t)\theta (t)\geqslant 0\); it follows that $$ -1\leqslant \frac{a^{1/\lambda }(t)x^{\Delta }(t)}{x(t)}\theta (t) \leqslant 0. $$ (2.6) By virtue of (2.3) and (2.6), we conclude that $$ -1\leqslant w(t)\theta ^{\lambda }(t)\leqslant 0. $$ (2.7) By Lemma 2.1 and \(x^{\Delta }(t)<0\), it is not difficult to find that $$ \textstyle\begin{cases} (x^{\lambda }(t))^{\Delta }\leqslant \lambda x^{\lambda -1}(\sigma (t))x^{\Delta }(t),\quad \lambda \geqslant 1, \\ (x^{\lambda }(t))^{\Delta }\leqslant \lambda x^{\lambda -1}(t)x^{\Delta }(t),\quad 0< \lambda < 1. \end{cases} $$ (2.8) If \(0<\lambda <1\), in view of (2.8), (2.2), and \(x^{\Delta }(t)<0\), from (2.3) we then get $$\begin{aligned} w^{\Delta }(t)&=\frac{[a(t) \vert x^{\Delta }(t) \vert ^{\lambda -1}x^{\Delta }(t)]^{\Delta }}{x^{\lambda }(\sigma (t))} -\frac{a(t)(-x^{\Delta }(t))^{\lambda -1}x^{\Delta }(t)(x^{\lambda }(t))^{\Delta }}{x^{\lambda }(t)x^{\lambda }(\sigma (t))} \\ &\leqslant -\frac{p(t)x^{\lambda }(\delta (t))+b(t) \vert x^{\Delta }(t) \vert ^{\lambda -1}x^{\Delta }(t)}{x^{\lambda }(\sigma (t))} -\frac{a(t)(-x^{\Delta }(t))^{\lambda -1}x^{\Delta }(t)\lambda x^{\lambda -1}(t)x^{\Delta }(t)}{x^{\lambda }(t)x^{\lambda }(\sigma (t))} \\ &\leqslant -p(t)- \frac{b(t)x^{\lambda }(t)}{a(t)x^{\lambda }(\sigma (t))}\frac{a(t) \vert x^{\Delta }(t) \vert ^{\lambda -1}x^{\Delta }(t)}{x^{\lambda }(t)} -\frac{\lambda a(t)(-x^{\Delta }(t))^{\lambda -1}(x^{\Delta }(t))^{2}}{x^{\lambda +1}(t)} \\ &=-p(t)+\frac{b(t)x^{\lambda }(t)}{a(t)x^{\lambda }(\sigma (t))} \bigl(-w(t) \bigr) -\frac{\lambda }{a^{1/\lambda }(t)} \bigl(-w(t) \bigr)^{\frac{\lambda +1}{\lambda }}. \end{aligned}$$ (2.9) If \(\lambda \geqslant 1\), in view of (2.8) and \(x^{\Delta }(t)<0\), similarly, we can obtain (2.9). Using \(\theta ^{\Delta }(t)=- [a^{-1}(t)e_{{-b/a}}(t,t_{0}) ]^{1/\lambda }\) and (2.5), we have $$\begin{aligned} \biggl(\frac{x(t)}{\theta (t)} \biggr)^{\Delta }& =\frac{x^{\Delta }(t)\theta (t)+x(t) [a^{-1}(t)e_{{-b/a}}(t,t_{0}) ]^{1/\lambda }}{\theta (t)\theta (\sigma (t))} \\ &\geqslant \frac{x^{\Delta }(t)\theta (t)-\frac{a^{1/\lambda }(t)x^{\Delta }(t)}{[e_{{-b/a}}(t,t_{0})]^{1/\lambda }}\theta (t) [a^{-1}(t)e_{{-b/a}}(t,t_{0}) ]^{1/\lambda }}{\theta (t)\theta (\sigma (t))}=0. \end{aligned}$$ Consequently, the function \(\frac{x(t)}{\theta (t)}\) is non-decreasing, and therefore $$ \frac{x(t)}{x(\sigma (t))}\leqslant \frac{\theta (t)}{\theta (\sigma (t))}. $$ (2.10) Substituting (2.10) into (2.9), we obtain $$ w^{\Delta }(t)\leqslant -p(t)+\frac{b(t)\theta ^{\lambda }(t)}{a(t)\theta ^{\lambda }(\sigma (t))} \bigl(-w(t) \bigr) - \frac{\lambda }{a^{1/\lambda }(t)} \bigl(-w(t) \bigr)^{\frac{\lambda +1}{\lambda }}. $$ (2.11) By Lemma 2.1 and \(\theta ^{\Delta }(t)<0\), it is easy to show that $$ \bigl[\theta ^{\lambda }(t) \bigr]^{\Delta }\geqslant \textstyle\begin{cases} -\lambda \theta ^{\lambda -1}(t) [a^{-1}(t)e_{{-b/a}}(t,t_{0}) ]^{1/\lambda }, & \lambda \geqslant 1, \\ -\lambda \theta ^{\lambda -1}(\sigma (t)) [a^{-1}(t)e_{{-b/a}}(t,t_{0}) ]^{\frac{1}{\lambda }},& 0< \lambda < 1, \end{cases} $$ that is, $$ \bigl[\theta ^{\lambda }(t) \bigr]^{\Delta }\geqslant -\overline{\theta }(t). $$ (2.13) Multiplying (2.11) by \(\theta ^{\lambda }(\sigma (t))\), integrating from \(t_{1}\) to t, and using the integration by parts formula on time scales, (2.13), and Lemma 2.2, we are led to $$\begin{aligned} &\int ^{t}_{t_{1}}p(s)\theta ^{\lambda } \bigl( \sigma (s) \bigr)\Delta s\\ &\quad\leqslant - \int ^{t}_{t_{1}}\theta ^{\lambda } \bigl( \sigma (s) \bigr)w^{\Delta }(s)\Delta s\\ &\qquad{} + \int ^{t}_{t_{1}}\theta ^{\lambda } \bigl( \sigma (s) \bigr) \biggl[\frac{b(s)\theta ^{\lambda }(s)(-w(s))}{a(s)\theta ^{\lambda }(\sigma (s))} -\frac{\lambda (-w(s))^{\frac{\lambda +1}{\lambda }}}{a^{1/\lambda }(s)} \biggr] \Delta s \\ &\quad=\theta ^{\lambda }(t_{1})w(t_{1})-\theta ^{\lambda }(t)w(t)+ \int ^{t}_{t_{1}} \bigl[\theta ^{\lambda }(s) \bigr]^{\Delta }w(s)\Delta s\\ &\qquad{} + \int ^{t}_{t_{1}} \theta ^{\lambda } \bigl( \sigma (s) \bigr) \biggl[\frac{b(s)\theta ^{\lambda }(s)(-w(s))}{a(s)\theta ^{\lambda }(\sigma (s))} -\frac{\lambda (-w(s))^{\frac{\lambda +1}{\lambda }}}{a^{1/\lambda }(s)} \biggr]\Delta s \\ &\quad\leqslant \theta ^{\lambda }(t_{1})w(t_{1})-\theta ^{\lambda }(t)w(t) \\ &\qquad{}+ \int ^{t}_{t_{1}} \biggl[ \biggl(\overline{\theta }(s)+ \frac{b(s)\theta ^{\lambda }(s)}{a(s)} \biggr) \bigl(-w(s) \bigr) -\frac{\lambda \theta ^{\lambda }(\sigma (s))}{a^{1/\lambda }(s)} \bigl(-w(s) \bigr)^{\frac{\lambda +1}{\lambda }} \biggr]\Delta s. \end{aligned}$$ Now choose $$ p=\frac{\lambda +1}{\lambda },\qquad q=\lambda +1 $$ and $$ A=\frac{(\lambda +1)^{\frac{\lambda }{\lambda +1}} \theta ^{\frac{\lambda ^{2}}{\lambda +1}}(\sigma (s))}{a^{\frac{1}{\lambda +1}}(s)} \bigl(-w(s) \bigr),\qquad B=\frac{a^{\frac{1}{\lambda +1}}(s)\theta ^{\lambda }(s)}{(\lambda +1)^{\frac{\lambda }{\lambda +1}} \theta ^{\frac{\lambda ^{2}}{\lambda +1}}(\sigma (s))} \biggl( \frac{\overline{\theta }(s)}{\theta ^{\lambda }(s)}+\frac{b(s)}{a(s)} \biggr). $$ Using the inequality (see Lemma 2.2) $$ AB-\frac{A^{p}}{p}\leqslant \frac{B^{q}}{q}, $$ we obtain $$\begin{aligned} &\biggl(\overline{\theta }(s)+\frac{b(s)\theta ^{\lambda }(s)}{a(s)} \biggr) \bigl(-w(s) \bigr) - \frac{\lambda \theta ^{\lambda }(\sigma (s))}{a^{1/\lambda }(s)} \bigl(-w(s) \bigr)^{\frac{\lambda +1}{\lambda }} \\ &\quad \leqslant \frac{a(s)\theta ^{\lambda (\lambda +1)}(s)}{(\lambda +1)^{\lambda +1}\theta ^{\lambda ^{2}}(\sigma (s))} \biggl(\frac{\overline{\theta }(s)}{\theta ^{\lambda }(s)}+\frac{b(s)}{a(s)} \biggr)^{\lambda +1}. \end{aligned}$$ Hence, we obtain $$\begin{aligned} &\int ^{t}_{t_{1}}p(s)\theta ^{\lambda } \bigl( \sigma (s) \bigr)\Delta s \\ &\quad \leqslant \theta ^{\lambda }(t_{1})w(t_{1})- \theta ^{\lambda }(t)w(t) + \int ^{t}_{t_{1}}\frac{a(s)\theta ^{\lambda (\lambda +1)}(s)}{(\lambda +1)^{\lambda +1}\theta ^{\lambda ^{2}}(\sigma (s))} \biggl( \frac{\overline{\theta }(s)}{\theta ^{\lambda }(s)}+\frac{b(s)}{a(s)} \biggr)^{\lambda +1}\Delta s. \end{aligned}$$ (2.14) By virtue of (2.7) and (2.14), we conclude that $$ \int ^{t}_{t_{1}} \biggl[p(s)\theta ^{\lambda } \bigl(\sigma (s) \bigr)-\frac{a(s)\theta ^{\lambda (\lambda +1)}(s)}{(\lambda +1)^{\lambda +1} \theta ^{\lambda ^{2}}(\sigma (s))} \biggl( \frac{\overline{\theta }(s)}{\theta ^{\lambda }(s)}+ \frac{b(s)}{a(s)} \biggr)^{\lambda +1} \biggr] \Delta s \leqslant \theta ^{\lambda }(t_{1})w(t_{1})+1. $$ Taking the limsup as \(t\rightarrow \infty \), we get a contradiction to condition (2.1). The proof is complete. □ Theorem 2.2 Assume (H1)–(H3), (1.3), (1.4), and that \(\lambda \geqslant 1\) is a quotient of two odd numbers. Suppose further that there exist two functions \(\psi,\xi \in C^{1}(\mathbb{T},(0,\infty ))\) with \(\xi (t)\geqslant \frac{1}{\theta ^{\lambda }(t)a(t)}\) such that $$\begin{aligned} &\limsup_{t\rightarrow \infty } \int ^{t}_{t_{0}}\theta ^{\lambda } \bigl(\sigma (s) \bigr) \biggl[\psi \bigl(\sigma (s) \bigr)\varPhi (s) -\frac{a(s)\psi ^{\lambda +1}(s) \vert \varTheta (s)-\frac{\lambda (e_{{-b/a}}(s,t_{0}))^{1/\lambda }}{\theta (\sigma (s))a^{1/\lambda }(s)} \vert ^{\lambda +1}}{(\lambda +1)^{\lambda +1}\psi ^{\lambda }(\sigma (s))} \biggr]\Delta s \\ &\quad = \infty, \end{aligned}$$ (2.15) where the function \(\theta (t)\) is defined as in Theorem 2.1 and $$\begin{aligned} &\varPhi (t)=p(t)- \bigl[\xi (t)a(t) \bigr]^{\Delta }-\frac{b(t)}{a(t)\theta ^{\lambda }(\sigma (t))}+a(t) \xi ^{\frac{\lambda +1}{\lambda }}(t), \\ &\varTheta (t)=\frac{\psi ^{\Delta }(t)}{\psi (t)}+\frac{(\lambda +1)\psi (\sigma (t))\xi ^{\frac{1}{\lambda }}(t)}{\psi (t)}, \end{aligned}$$ then Eq. (1.1) is oscillatory on \([t_{0},\infty )_{\mathbb{T}}\). Proof Let \(x(t)\) be a nonoscillatory solution of Eq. (1.1), say, \(x(t)>0\) and \(x(\delta (t))>0\) for all \(t\in [t_{1},\infty )_{\mathbb{T}}\) for some \(t_{1}\in [t_{0},\infty )_{\mathbb{T}}\). Similar to the proof of Theorem 2.1, we consider two cases. Assume first that \(x^{\Delta }(t)>0\) for \(t\in [t_{1},\infty )_{\mathbb{T}}\); by (1.4), this case cannot occur. Assume now that \(x^{\Delta }(t)<0\) for \(t\in [t_{1},\infty )_{\mathbb{T}}\); we proceed as in the proof of Theorem 2.1 to obtain (2.6) for \(t\in [t_{1},\infty )_{\mathbb{T}}\). Then, by (2.6), we are led to $$ x^{\lambda }(t)\geqslant a(t) \bigl(-x^{\Delta }(t) \bigr)^{\lambda }\theta ^{\lambda }(t), $$ which yields $$ \bigl(x^{\Delta }(t) \bigr)^{\lambda }+\frac{x^{\lambda }(t)}{a(t)\theta ^{\lambda }(t)}\geqslant 0,\quad t \in [t_{1},\infty )_{\mathbb{T}}. $$ (2.16) We introduce a generalized Riccati transformation $$\begin{aligned} v(t)&=\psi (t) \biggl[\frac{a(t) \vert x^{\Delta }(t) \vert ^{\lambda -1}x^{\Delta }(t)}{x^{\lambda }(t)}+\xi (t)a(t) \biggr] \\ &=\psi (t) \biggl[ \frac{a(t)(x^{\Delta }(t))^{\lambda }}{x^{\lambda }(t)}+\xi (t)a(t) \biggr], \quad t \in [t_{1},+\infty )_{\mathbb{T}}. \end{aligned}$$ (2.17) Then it is not hard to see that \(v(t)\geqslant 0\) \((t\in [t_{1},+\infty )_{\mathbb{T}})\) due to (2.16) and the definition of \(\xi (t)\). In view of (2.2), the first formula of (2.8), (2.16), and \(x^{\Delta }(t)<0\), respectively, it follows from (2.17) that $$\begin{aligned} v^{\Delta }(t)={}&\psi ^{\Delta }(t) \biggl[\frac{a(t)(x^{\Delta }(t))^{\lambda }}{x^{\lambda }(t)}+\xi (t)a(t) \biggr] +\psi \bigl(\sigma (t) \bigr) \biggl[\frac{a(t)(x^{\Delta }(t))^{\lambda }}{x^{\lambda }(t)}+\xi (t)a(t) \biggr]^{\Delta } \\ ={}&\frac{\psi ^{\Delta }(t)}{\psi (t)}v(t)+\psi \bigl(\sigma (t) \bigr) \biggl\{ \bigl[\xi (t)a(t) \bigr]^{\Delta }+\frac{[a(t)(x^{\Delta }(t))^{\lambda }]^{\Delta }}{x^{\lambda }(\sigma (t))} -\frac{a(t)(x^{\Delta }(t))^{\lambda }[x^{\lambda }(t)]^{\Delta }}{x^{\lambda }(t)x^{\lambda }(\sigma (t))} \biggr\} \\ \leqslant{}& \frac{\psi ^{\Delta }(t)}{\psi (t)}v(t)+\psi \bigl(\sigma (t) \bigr) \biggl\{ \bigl[\xi (t)a(t) \bigr]^{\Delta } -\frac{p(t)x^{\lambda }(\delta (t))+b(t)(x^{\Delta }(t))^{\lambda }}{x^{\lambda }(\sigma (t))} \\ &{}-\frac{\lambda a(t)(x^{\Delta }(t))^{\lambda }x^{\Delta }(t)}{x^{\lambda }(t)x(\sigma (t))} \biggr\} \\ \leqslant{}& \frac{\psi ^{\Delta }(t)}{\psi (t)}v(t)+\psi \bigl(\sigma (t) \bigr) \biggl\{ \bigl[\xi (t)a(t) \bigr]^{\Delta }-p(t) -\frac{b(t)(x^{\Delta }(t))^{\lambda }}{x^{\lambda }(\sigma (t))}-\frac{\lambda a(t)(x^{\Delta }(t))^{\lambda +1}}{x^{\lambda +1}(t)} \biggr\} \\ \leqslant{}& \frac{\psi ^{\Delta }(t)}{\psi (t)}v(t)+\psi \bigl(\sigma (t) \bigr) \biggl\{ \bigl[\xi (t)a(t) \bigr]^{\Delta }-p(t) +\frac{b(t)x^{\lambda }(t)}{x^{\lambda }(\sigma (t))a(t)\theta ^{\lambda }(t)} \\ &{}-\frac{\lambda }{a^{\frac{1}{\lambda }}(t)} \biggl(\frac{v(t)}{\psi (t)}-\xi (t)a(t) \biggr)^{\frac{\lambda +1}{\lambda }}\biggr\} . \end{aligned}$$ (2.18) By Lemma 2.3, $$ (X-Y)^{1+\frac{1}{\lambda }}\geqslant X^{1+\frac{1}{\lambda }}+\frac{1}{\lambda }Y^{1+\frac{1}{\lambda }} - \biggl(1+\frac{1}{\lambda } \biggr)XY^{\frac{1}{\lambda }}, $$ where \(\lambda \geqslant 1\) is a quotient of two odd numbers and \(XY\geqslant 0\). Let \(X=\frac{v(t)}{\psi (t)},Y=\xi (t)a(t)\); then we have $$ \biggl(\frac{v(t)}{\psi (t)}-\xi (t)a(t) \biggr)^{\frac{\lambda +1}{\lambda }}\geqslant \frac{v^{\frac{\lambda +1}{\lambda }}(t)}{\psi ^{\frac{\lambda +1}{\lambda }}(t)} +\frac{1}{\lambda } \bigl[\xi (t)a(t) \bigr]^{\frac{\lambda +1}{\lambda }}- \biggl(1+\frac{1}{\lambda } \biggr)\frac{[\xi (t)a(t)]^{\frac{1}{\lambda }}}{\psi (t)}v(t). $$ (2.19) In view of (2.10), (2.19), and the definitions of \(\varPhi (t)\) and \(\varTheta (t)\), it follows from (2.18) that $$\begin{aligned} v^{\Delta }(t)\leqslant{}& \frac{\psi ^{\Delta }(t)}{\psi (t)}v(t)+\psi \bigl( \sigma (t) \bigr) \biggl\{ \bigl[\xi (t)a(t) \bigr]^{\Delta }-p(t) + \frac{b(t)}{\theta ^{\lambda }(t)a(t)} \frac{\theta ^{\lambda }(t)}{\theta ^{\lambda }(\sigma (t))} \\ &{}-\frac{\lambda }{a^{\frac{1}{\lambda }}(t) \psi ^{\frac{\lambda +1}{\lambda }}(t)}v^{\frac{\lambda +1}{\lambda }}(t) -a(t)\xi ^{\frac{\lambda +1}{\lambda }}(t)+ \frac{(\lambda +1) \xi ^{\frac{1}{\lambda }}(t)}{\psi (t)}v(t)\biggr\} \\ ={}&{-}\psi \bigl(\sigma (t) \bigr)\varPhi (t)+\varTheta (t)v(t) - \frac{\lambda \psi (\sigma (t))}{a^{\frac{1}{\lambda }}(t)\psi ^{\frac{\lambda +1}{\lambda }}(t)}v^{\frac{\lambda +1}{\lambda }}(t). \end{aligned}$$ (2.20) Using the integration by parts formula on time scales, (2.13), and Lemma 2.2, it follows now from (2.20) that $$\begin{aligned} &\int ^{t}_{t_{1}}\theta ^{\lambda } \bigl( \sigma (s) \bigr)\psi \bigl(\sigma (s) \bigr)\varPhi (s) \Delta s \\ &\quad\leqslant - \int ^{t}_{t_{1}}\theta ^{\lambda } \bigl( \sigma (s) \bigr)v^{\Delta }(s)\Delta s + \int ^{t}_{t_{1}}\theta ^{\lambda } \bigl( \sigma (s) \bigr) \biggl[\varTheta (s)v(s)- \frac{\lambda \psi (\sigma (s))v^{\frac{\lambda +1}{\lambda }}(s)}{a^{\frac{1}{\lambda }}(s) \psi ^{\frac{\lambda +1}{\lambda }}(s)} \biggr]\Delta s \\ &\quad=-\theta ^{\lambda }(t)v(t)+\theta ^{\lambda }(t_{1})v(t_{1})+ \int ^{t}_{t_{1}} \bigl[\theta ^{\lambda }(s) \bigr]^{\Delta }v(s)\Delta s \\ &\qquad{}+ \int ^{t}_{t_{1}} \theta ^{\lambda } \bigl( \sigma (s) \bigr) \biggl[\varTheta (s)v(s)- \frac{\lambda \psi (\sigma (s))v^{\frac{\lambda +1}{\lambda }}(s)}{a^{\frac{1}{\lambda }}(s)\psi ^{\frac{\lambda +1}{\lambda }}(s)} \biggr]\Delta s \\ &\quad\leqslant \theta ^{\lambda }(t_{1})v(t_{1})+ \int ^{t}_{t_{1}} \theta ^{\lambda } \bigl( \sigma (s) \bigr) \biggl[ \biggl(\varTheta (s) -\frac{\lambda (e_{{-b/a}}(s,t_{0}))^{1/\lambda }}{\theta (\sigma (s))a^{1/\lambda }(s)} \biggr)v(s) \\ &\qquad{}- \frac{\lambda \psi (\sigma (s))v^{\frac{\lambda +1}{\lambda }}(s)}{a^{\frac{1}{\lambda }}(s)\psi ^{\frac{\lambda +1}{\lambda }}(s)} \biggr]\Delta s. \end{aligned}$$ (2.21) Now choose $$ A=\frac{(\lambda +1)^{\frac{\lambda }{\lambda +1}} \psi ^{\frac{\lambda }{\lambda +1}}(\sigma (s))}{a^{\frac{1}{\lambda +1}}(s)\psi (s)}v(s),\qquad B=\frac{a^{\frac{1}{\lambda +1}}(s)\psi (s)}{(\lambda +1)^{\frac{\lambda }{\lambda +1}} \psi ^{\frac{\lambda }{\lambda +1}}(\sigma (s))} \biggl\vert \varTheta (s)- \frac{\lambda (e_{{-b/a}}(s,t_{0}))^{1/\lambda }}{\theta (\sigma (s))a^{1/\lambda }(s)} \biggr\vert . $$ Then Lemma 2.2 yields $$\begin{aligned} &\biggl(\varTheta (s)-\frac{\lambda (e_{{-b/a}}(s,t_{0}))^{\frac{1}{\lambda }}}{\theta (\sigma (s))a^{\frac{1}{\lambda }}(s)} \biggr)v(s)- \frac{\lambda \psi (\sigma (s))v^{\frac{\lambda +1}{\lambda }}(s)}{a^{\frac{1}{\lambda }}(s)\psi ^{\frac{\lambda +1}{\lambda }}(s)} \\ &\quad \leqslant \frac{a(s)\psi ^{\lambda +1}(s)}{(\lambda +1)^{\lambda +1}\psi ^{\lambda }(\sigma (s))} \biggl\vert \varTheta (s)-\frac{\lambda (e_{{-b/a}}(s,t_{0}))^{\frac{1}{\lambda }}}{\theta (\sigma (s))a^{\frac{1}{\lambda }}(s)} \biggr\vert ^{\lambda +1}. \end{aligned}$$ By virtue of (2.21) and the above inequality, we conclude that $$\begin{aligned} &\int ^{t}_{t_{1}}\theta ^{\lambda } \bigl( \sigma (s) \bigr)\psi \bigl(\sigma (s) \bigr)\varPhi (s) \Delta s \\ &\quad \leqslant \theta ^{\lambda }(t_{1})v(t_{1})+ \int ^{t}_{t_{1}} \theta ^{\lambda } \bigl( \sigma (s) \bigr) \biggl[\frac{a(s)\psi ^{\lambda +1}(s)}{(\lambda +1)^{\lambda +1}\psi ^{\lambda }(\sigma (s))} \biggl\vert \varTheta (s)- \frac{\lambda (e_{{-b/a}}(s,t_{0}))^{1/\lambda }}{\theta (\sigma (s))a^{1/\lambda }(s)} \biggr\vert ^{\lambda +1} \biggr]\Delta s; \end{aligned}$$ consequently, $$\begin{aligned} &\int ^{t}_{t_{1}}\theta ^{\lambda } \bigl( \sigma (s) \bigr) \biggl[\psi \bigl(\sigma (s) \bigr) \varPhi (s)-\frac{a(s)\psi ^{\lambda +1}(s)}{(\lambda +1)^{\lambda +1} \psi ^{\lambda }(\sigma (s))} \biggl\vert \varTheta (s)- \frac{\lambda (e_{{-b/a}}(s,t_{0}))^{1/\lambda }}{\theta (\sigma (s))a^{1/\lambda }(s)} \biggr\vert ^{\lambda +1} \biggr]\Delta s\\ &\quad\leqslant \theta ^{\lambda }(t_{1})v(t_{1}), \end{aligned}$$ which leads to a contradiction with (2.15). The proof is complete. □ Example 2.1 Consider the second-order Euler differential equation (1.7), where \(p_{{0}}>0\) is a constant. Let \(a(t)=t^{2}, b(t)=0, p(t)=p_{{0}}, \delta (t)=t, \lambda =1, t_{0}=1\); clearly, conditions (H1)–(H3) and (1.3) are satisfied. Since \(\mathbb{T}=\mathbb{R}\), we see that $$ \theta (t)= \int ^{+\infty }_{t} \bigl[a^{-1}(s)e_{{-b/a}}(s,t_{0}) \bigr]^{1/\lambda }\Delta s= \int ^{+\infty }_{t}s^{-2}\,\mathrm{d}s= \frac{1}{t} $$ and $$ \overline{\theta }(t)=\lambda \theta ^{\lambda -1}(t) \bigl[a^{-1}(t)e_{{-b/a}}(t,t_{0}) \bigr]^{1/\lambda }=t^{-2},\qquad \pi (t)=1,\qquad \varPsi (t)=p_{{0}}. $$ Now, pick \(\varphi (t)=1\); then $$ \limsup_{t\rightarrow \infty } \int ^{t}_{t_{0}}\varphi (s) \biggl[p(s)- \frac{a(\delta (s))}{(\lambda +1)^{\lambda +1}(\delta ^{\Delta }(s))^{\lambda }} \biggl\vert \frac{\varphi ^{\Delta }(s)}{\varphi (s)}-\frac{b(s)}{a(s)} \biggr\vert ^{\lambda +1} \biggr]\Delta s =\limsup_{t\rightarrow \infty } \int ^{t}_{1}p_{{0}}\,\mathrm{d}s= \infty $$ and $$\begin{aligned} &\limsup_{t\rightarrow \infty } \int ^{t}_{t_{0}} \biggl[p(s) \theta ^{\lambda } \bigl(\sigma (s) \bigr) -\frac{a(s)\theta ^{\lambda (\lambda +1)}(s)}{(\lambda +1)^{\lambda +1}\theta ^{\lambda ^{2}}(\sigma (s))} \biggl( \frac{\overline{\theta }(s)}{\theta ^{\lambda }(s)}+ \frac{b(s)}{a(s)} \biggr)^{\lambda +1} \biggr]\Delta s \\ &\quad = \biggl(p_{{0}}-\frac{1}{4} \biggr)\limsup_{t\rightarrow \infty } \int ^{t}_{1}\frac{1}{s}\,\mathrm{d}s= \infty, \end{aligned}$$ provided that \(p_{{0}}>1/4\). Therefore, by Theorem 2.1, the Euler equation (1.7) is oscillatory when \(p_{{0}}>1/4\). This conclusion is sharp. Remark 2.1 Application of Theorem 1.2 or the corresponding results in [1, 2, 9] implies only that every solution \(x(t)\) of the Euler equation (1.7) is either oscillatory or satisfies \(\lim_{t\rightarrow \infty }x(t)=0\). The results in [3, 5] cannot be applied to (1.7) due to \(\int ^{\infty }_{1}t^{-2}\ln t \,\mathrm{d}t< \infty \). One can easily find that the results in [4, 7, 8, 12,13,14,15,16,17,18,19,20,21,22,23] cannot be applied to (1.7) either. Example 2.2 Consider the second-order dynamic equation $$ \bigl[t^{2}x^{\Delta }(t) \bigr]^{\Delta }+p_{0} \frac{2t-1}{t}x \biggl(\frac{t}{2} \biggr)=0,\quad t \in \mathbb{T}=2^{\mathbb{Z}},t\geqslant 2, $$ (2.22) where \(p_{{0}}>0\) is a constant.
Obviously, conditions (H1)–(H3) and (1.3) are satisfied, and we see that $$ \theta (t)= \int ^{\infty }_{t}s^{-2}\Delta s=\lim_{u\rightarrow \infty }\frac{u^{-1}-t^{-1}}{t^{-1}-1}= \frac{1}{t-1},\qquad \overline{\theta }(t)=\frac{1}{t^{2}}. $$ Now, pick \(\varphi (t)=1\); then we have $$\begin{aligned} &\limsup_{t\rightarrow \infty } \int ^{t}_{t_{0}}\varphi (s) \biggl[p(s)- \frac{a(\delta (s))}{(\lambda +1)^{\lambda +1}(\delta ^{\Delta }(s))^{\lambda }} \biggl\vert \frac{\varphi ^{\Delta }(s)}{\varphi (s)}-\frac{b(s)}{a(s)} \biggr\vert ^{\lambda +1} \biggr]\Delta s\\ &\quad =p_{0}\limsup_{t\rightarrow \infty } \int ^{t}_{2}\frac{2s-1}{s}\Delta s= \infty \end{aligned}$$ and $$\begin{aligned} &\limsup_{t\rightarrow \infty } \int ^{t}_{t_{0}} \biggl[p(s) \theta ^{\lambda } \bigl(\sigma (s) \bigr) -\frac{a(s)\theta ^{\lambda (\lambda +1)}(s)}{(\lambda +1)^{\lambda +1}\theta ^{\lambda ^{2}}(\sigma (s))} \biggl( \frac{\overline{\theta }(s)}{\theta ^{\lambda }(s)}+ \frac{b(s)}{a(s)} \biggr)^{\lambda +1} \biggr]\Delta s \\ &\quad =\limsup_{t\rightarrow \infty } \int ^{t}_{2} \biggl[ \biggl(p_{0}- \frac{1}{2} \biggr)\frac{1}{s}+\frac{1}{4s^{2}} \biggr] \Delta s= \infty, \end{aligned}$$ provided that \(p_{{0}}>1/2\). Therefore, by Theorem 2.1, equation (2.22) is oscillatory when \(p_{{0}}>1/2\). One can easily find that the results in [1,2,3,4,5, 7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23] cannot be applied to (2.22).
Zhang, Q., Gao, L.: Oscillation criteria for second-order half-linear delay dynamic equations with damping on time scales. Sci. Sin., Math. 40, 673–682 (2010)
Zhang, Q., Gao, L., Liu, S.: Oscillation criteria for second-order half-linear delay dynamic equations with damping on time scales. Sci. Sin., Math. 43, 793–806 (2013)
Zhang, Q., Gao, L., Liu, S.: Oscillation criteria for second-order half-linear delay dynamic equations with damping on time scales (II). Sci. Sin., Math. 41, 885–896 (2011)
Erbe, L., Hassan, T.S., Peterson, A.: Oscillation criteria for nonlinear damped dynamic equations on time scales. Appl. Math. Comput. 203, 343–357 (2008)
Zhang, Q.: Oscillation of second-order half-linear delay dynamic equations with damping on time scales. J. Comput. Appl. Math. 235, 1180–1188 (2011)
Bohner, M., Peterson, A.: Dynamic Equations on Time Scales: An Introduction with Applications. Birkhäuser, Boston (2001)
Saker, S.H.: Oscillation of second-order nonlinear neutral delay dynamic equations on time scales. J. Comput. Appl. Math. 187, 123–141 (2006)
Han, Z., Li, T., Sun, S.: Oscillation for second-order nonlinear delay dynamic equations on time scales. Adv. Differ. Equ. 2009, 756171 (2009)
Sun, Y., Han, Z., Sun, S.: Oscillation of a class of second order half-linear neutral delay dynamic equations with damping on time scales. Acta Math. Appl. Sin. 36, 480–494 (2013)
Agarwal, R.P., Bohner, M., Li, T.: Oscillation criteria for second-order dynamic equations on time scales. Appl. Math. Lett. 31, 34–40 (2014)
Han, Z., Li, T., Sun, S.: Remarks on the paper [Appl. Math. Comput. 207 (2009) 388–396]. Appl. Math. Comput. 215, 3998–4007 (2010)
Yang, J., Tan, W., Qin, X.: Oscillation analysis of certain second-order nonlinear damped dynamic equations on time scales. J. Zhejiang Univ. Sci. Ed. 43, 64–70 (2016)
Deng, X., Wang, Q., Zhou, Z.: Oscillation criteria for second order neutral dynamic equations of Emden–Fowler type with positive and negative coefficients on time scales. Sci. China Math. 60, 113–132 (2017)
Bohner, M., Li, X.: Oscillation of second-order p-Laplace dynamic equations with a nonpositive neutral coefficient. Appl. Math. Lett. 37, 72–76 (2014)
Zhang, C., Agarwal, R.P., Bohner, M.: New oscillation results for second-order neutral delay dynamic equations. Adv. Differ. Equ. 2012, 227 (2012)
Yang, J., Qin, X.: Oscillation criteria for certain second-order Emden–Fowler delay functional dynamic equations with damping on time scales. Adv. Differ. Equ. 2015, 97 (2015)
Tang, S., Gao, C., Li, T.: Oscillation theorems for second-order quasi-linear delay dynamic equations. Bull. Malays. Math. Sci. Soc. 36, 907–916 (2013)
Yang, J., Fang, B.: Oscillation analysis of certain second-order nonlinear delay damped dynamic equations on time scales. Math. Appl. 30, 16–26 (2017)
Yang, J.: Oscillation criteria for certain third-order delay dynamic equations. Adv. Differ. Equ. 2013, 1 (2013)
Yang, J.: Oscillation for a class of second-order Emden–Fowler dynamic equations on time scales. Acta Math. Appl. Sin. 39, 334–350 (2016)
Li, T., Saker, S.H.: A note on oscillation criteria for second-order neutral dynamic equations on isolated time scales. Commun. Nonlinear Sci. Numer. Simul. 19, 4185–4188 (2014)
Qiu, Y., Wang, H., Jiang, C., Li, T.: Existence of nonoscillatory solutions to third-order neutral functional dynamic equations on time scales. J. Nonlinear Sci. Appl. 11, 274–287 (2018)
The authors express their sincere gratitude to the editors and referees for the careful reading of the original manuscript and useful comments that helped to improve the presentation of the results and accentuate important details. This work was supported by the National Natural Science Foundation of China (Grant No. 51765060) and the Natural Science Fund Project of Hunan Province (Grant No. 12JJ3008). School of Science, Shaoyang University, Shaoyang, P.R. China Jimeng Li School of Data Science and Software Engineering, Wuzhou University, Wuzhou, P.R. China Jiashan Yang Both authors contributed equally to this work. They both read and approved the final version of the manuscript. Correspondence to Jiashan Yang. Li, J., Yang, J.: Oscillation for second-order half-linear delay damped dynamic equations on time scales. Adv. Differ. Equ. 2019, 241 (2019). https://doi.org/10.1186/s13662-019-2136-y Received: 27 March 2019. Accepted: 10 May 2019. Keywords: Functional dynamic equations; Riccati substitutions; Variable delay
communications biology Multiplexed quantitative proteomics provides mechanistic cues for malaria severity and complexity Vipin Kumar1, Sandipan Ray ORCID: orcid.org/0000-0002-9960-57681 na1 nAff7, Shalini Aggarwal1 na1, Deeptarup Biswas1, Manali Jadhav1, Radha Yadav2, Sanjeev V. Sabnis2, Soumaditya Banerjee3, Arunansu Talukdar3, Sanjay K. Kochar4, Suvin Shetty5, Kunal Sehgal6, Swati Patankar1 & Sanjeeva Srivastava1 Communications Biology volume 3, Article number: 683 (2020) Management of severe malaria remains a critical global challenge. In this study, using a multiplexed quantitative proteomics pipeline we systematically investigated the plasma proteome alterations in non-severe and severe malaria patients. We identified a few parasite proteins in severe malaria patients, which could be promising from a diagnostic perspective. Further, from host proteome analysis we observed substantial modulations in many crucial physiological pathways, including lipid metabolism, cytokine signaling, complement, and coagulation cascades in severe malaria. We propose that severe manifestations of malaria are possibly underpinned by modulations of the host physiology and defense machinery, which are evidently reflected in the plasma proteome alterations. Importantly, we identified multiple blood markers that can effectively define different complications of severe falciparum malaria, including cerebral syndromes and severe anemia. The ability of our identified blood markers to distinguish different severe complications of malaria may aid in developing new clinical tests for monitoring malaria severity. Malaria is a vector-borne infectious disease caused by the protozoan parasites of the Plasmodium genus1, and the vector involved is the female Anopheles mosquito. It is the most widespread tropical parasitic disease, with a worldwide occurrence of 228 million clinical cases and 0.4 million deaths in 2018 (ref. 2). India contributes substantially to the global malaria burden and has the largest population in the world at risk of malaria, with 85% of the total Indian population living in malarious zones3. Worrisomely, the subsidiary burdens of malaria, such as malnutrition and anemia, increase the risk of complications and severity of the disease4,5. Among the five parasites causing malaria in humans, Plasmodium falciparum and Plasmodium vivax have the most extensive global distributions and are capable of leading to severe, fatal clinical manifestations6. Particularly, P. falciparum infections often turn severe and life-threatening, specifically when managed inappropriately7,8. One of the prime causes behind the progression of this parasitic infection from mild through complicated to severe disease is missed or delayed diagnosis9,10. Of note, neither parasite density nor parasitemia can consistently define malaria severity11. Hyper-parasitemia does not necessarily provide primary prognostic significance in semi-immune individuals, as they often tolerate a high parasitemia burden without any physiological signs of disease or severe effects12. There are several limitations of the existing diagnostic methods for malaria, which include microscopic examination of thick and thin blood smears, polymerase chain reaction (PCR)-based molecular diagnostics, and rapid diagnostic tests (RDTs)13,14,15,16. Additionally, the RDT-based detection approaches target PfHRPII for P. falciparum only, pLDH for all Plasmodium species, and aldolase for P. falciparum and P. vivax.
Importantly, many population-based studies have shown partial or complete deletion of pfhrpII and pfhrpIII genes, which may lead to false-negative results in RDTs17,18,19. Overall, these issues amply highlight the need for identification of additional parasite proteins and host factors for better diagnosis of malaria. Moreover, the establishment of predictive blood biomarkers for malaria severity and complications would be highly promising for prognosis, monitoring disease progression and responses to therapy, and predicting outcomes. In recent years, plasma/serum proteomics studies have contributed substantially to elucidating the complex pathogenesis of malaria and other infectious diseases17,18. Intriguingly, several studies from our and other research groups have defined promising panels of plasma and serum markers in falciparum19,20,21,22,23,24 and vivax19,25,26 malaria. A considerable amount of further research is required to understand the complex pathophysiology of severe malaria27,28,29. In particular, the blood markers that can effectively define different complications of severe malaria have hitherto not been clearly demarcated. Indeed, the precise mechanisms that cause a transition from non-severe to severe, fatal clinical manifestations in malaria, which often happens very rapidly, remain largely obscure. These are the critical questions that remain to be addressed to understand the molecular basis of severe malaria. In this study, we performed comprehensive proteomics analysis of plasma samples from falciparum malaria patients with different severity levels and clinical manifestations to understand the mechanisms of malaria severity and its complications. We further carried out a comparative analysis with differentially abundant plasma proteins identified in vivax malaria (VM) and dengue patients to specify the alterations observed in falciparum malaria. Importantly, we identified several blood-based host protein markers for malaria severity and complexity, which can aid in monitoring disease progression. Further, we have also identified parasite proteins such as serine repeat antigen 4 and fructose-bisphosphate aldolase in severe malaria patients' plasma samples, which can help in the diagnosis of severe malaria. To the best of our knowledge, this is one of the most comprehensive blood-based proteomic studies on falciparum and VM. Our findings provided insights regarding malaria pathogenesis and molecular cues of severity and complicated disease manifestations associated with this parasitic infection. Analysis of clinicopathological parameters of malaria and dengue patients A total of 98 subjects were analyzed in the discovery-phase quantitative proteomics, while 111 subjects were included in the targeted validation study. The following cohorts were included in the present study—Healthy control (HC), Non-severe falciparum malaria (NSFM), Severe falciparum malaria (SFM), Cerebral malaria (CB), Severe anemia (SA), Control for severe anemia (CSA), Control for cerebral malaria (CCB), Non-severe dengue fever (NSD), Severe dengue (SD), Non-severe vivax malaria (NSVM), and Severe vivax malaria (SVM). The subjects were recruited from different malaria epidemic regions in India. The number of samples analyzed for each group is provided in Supplementary Data 1. Dengue patients were incorporated in this study as a non-malaria febrile infectious disease control.
In falciparum malaria patients, platelet counts and hemoglobin (Hb) levels were found to be significantly lower (p < 0.05) (in both NSFM and SFM) as compared to HC. The magnitude of the alterations (decrease) in platelet counts and Hb levels was more prominent in severe malaria. Liver function parameters, including total bilirubin, serum glutamic-oxaloacetic transaminase (SGOT), serum glutamic-pyruvic transaminase (SGPT), and alkaline phosphatase (ALP), were found to be higher in SFM patients as compared to NSFM and HC (Supplementary Fig. 1). Of note, Hb was significantly lower (p < 0.05) in SA as compared to CB and the other types of complications of SFM, while the other parameters were almost comparable. These liver function parameters were slightly higher in non-severe malaria as compared to HC, but the level of alteration was minimal (Supplementary Fig. 2). Lower blood levels of Hb were observed in malaria patients [both falciparum malaria (FM) and VM], but no significant alteration was observed in dengue fever (DF) as compared to HC. Platelet count was found to be extremely low in DF as compared to malaria and HC, while SGOT and SGPT levels were found to be higher in DF as compared to HC and malaria (FM and VM). ALP and total bilirubin levels were very high in FM patients compared to all the other study cohorts (HC, VM, and DF) (Supplementary Fig. 3). Workflow for comprehensive plasma proteomic analysis of malaria and dengue patients Malaria and dengue samples were confirmed using different diagnostic techniques, and the positive cases were incorporated in the quantitative proteomic analysis. TMT-based multiplexed quantitative proteomics was used to map the plasma proteome (host) alterations; such multiplexing using stable isotope labeling provides increased throughput, higher precision, better reproducibility, reduced technical variation, and fewer missing values30,31,32. A label-free quantitation (LFQ) approach was used for the detection and quantification of the parasite (P. falciparum) proteins in host plasma. Quality control (QC) checks of the datasets were performed by plotting density plots for FM, VM, and DF using raw and normalized abundances at the proteome scale (Supplementary Fig. 4a–c). The significantly (p < 0.05) altered proteins were considered for machine learning, and the elastic net regularized logistic regression method was applied to predict the best panel of proteins. Some selected targets were validated using mass spectrometry (MS)-based multiple reaction monitoring (MRM) assays. Eventually, we investigated the physiological pathways over-expressed in falciparum and VM (Fig. 1). Fig. 1: Schematic representation of the experimental strategy used in discovery-phase proteomics and for targeted validation of potential biomarkers in plasma samples. a Malaria patients were diagnosed by microscopy and RDT, and some randomly selected samples were confirmed further using PCR. Dengue patients were included as a non-malaria febrile infectious disease control and were diagnosed using IgM and NS1 antigen. b Depleted plasma samples were trypsin-digested and TMT-labeled for studying the plasma proteome (host) alterations in malaria patients. A label-free quantitation (LFQ) approach was used for the detection and quantification of the parasite (P. falciparum) proteins in host plasma. c Protein search was performed using Proteome Discoverer 2.2, and the PSM files were then analyzed using the MSstatsTMT package.
d Machine learning was performed using the elastic net regression method. e Top hits of differentially abundant proteins were selected for validation using multiple reaction monitoring (MRM) assays. *Patient numbers are based on available clinical data. Differential plasma proteomic analysis of non-severe and severe FM A comprehensive proteomic analysis was performed for FM patients. Overall, 239,034 peptide spectral matches (PSMs) and 11,530 peptides corresponding to 1495 proteins were identified across all the samples, among which 296 proteins were present in more than 60% of the clinical samples used for the label-based FM study (Fig. 2a). The proteins that were quantified with ≥1 unique peptide and detected in at least 60% of the samples were selected for the subsequent differential analysis. Importantly, 91% of these proteins (271 out of 296) were quantified with ≥2 unique peptides. Twenty-five plasma proteins were found to be significantly (p < 0.05) altered in SFM as compared to NSFM, and their abundance profiles enabled us to differentiate SFM and NSFM (Fig. 2b, Supplementary Fig. 5a, and Supplementary Data 2). Partial least-squares discriminant analysis (PLS-DA) was performed on the differentially abundant proteins (p < 0.05), and it distinguished the three study populations: HC, NSFM, and SFM (Fig. 2c and Supplementary Table 1). Supervised hierarchical clustering of the proteome profiles stratified the different groups of malaria patients (Fig. 2d). Significantly altered proteins from this study and other published literature20,21 were considered for protein network analysis (Fig. 2e and Supplementary Data 2) using NetworkAnalyst. Fig. 2: Comprehensive quantitative plasma proteomics analysis of non-severe and severe falciparum malaria. a Overview of the proteome coverage obtained in TMT-based quantitative analysis. b Differentially abundant proteins (p < 0.05) in SFM as compared to NSFM. c Three-dimensional PLS-DA plot showing clear segregation among HC, NSFM, and SFM. d Heat map representation showing abundances of the altered proteins in NSFM, SFM, and DF; different clusters were identified as 1, 2, and 3 on the basis of protein abundance in DF, malaria (NSFM+SFM), and SFM, respectively. e Physiological pathways associated with the differentially abundant plasma proteins identified in NSFM and SFM. Sequential numbering for the networks is provided based on the statistical significance (high to low significance, FDR < 0.05). NSFM non-severe falciparum malaria, SFM severe falciparum malaria, DF dengue fever, HC healthy control, CSA control for severe anemia malaria, CCB control for cerebral malaria. The platelet degranulation process was found to be highly active in FM. Plasminogen (PLG), Platelet factor 4 (PF4), Profilin-1 (PFN1), Kininogen-1 (KNG1), and Pro-platelet basic protein (PPBP) were found to be down-regulated in NSFM as compared to SFM, while Alpha1-antitrypsin (SERPINA1) and Alpha2-antiplasmin (SERPINF2) were dysregulated in SFM as compared to NSFM (Fig. 2e). Proteins involved in the binding and uptake of ligands by scavenger receptors, such as haptoglobin (HP), hemopexin (HPX), haptoglobin-related protein (HPR), and protein AMBP (AMBP), were down-regulated in SFM as compared to NSFM. However, plasma levels of several proteins, such as hemoglobin subunit delta (HBD), hemoglobin subunit beta (HBB), hemoglobin subunit alpha (HBA1), and carbonic anhydrase 1 (CA1), were almost similar in SFM and NSFM (Fig. 2e).
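To make the protein filtering and differential-testing steps described above concrete, the following is a minimal sketch, not the authors' actual Proteome Discoverer 2.2/MSstatsTMT pipeline, of how the ≥60% detection filter and a per-protein significance test could be reproduced on a protein-by-sample abundance matrix. The input file name, the column naming scheme, and the use of a plain Welch t-test with Benjamini–Hochberg correction in place of the MSstatsTMT model are all illustrative assumptions.

```python
# Illustrative sketch only: the study used Proteome Discoverer 2.2 + MSstatsTMT;
# here a plain Welch t-test with Benjamini-Hochberg correction stands in for
# the MSstatsTMT model. File name and column layout are assumptions, and the
# abundance matrix is assumed to be log2-transformed already.
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

# rows = proteins, columns = samples; NaN = protein not detected in that sample
abund = pd.read_csv("abundances.csv", index_col=0)          # hypothetical input
nsfm = [c for c in abund.columns if c.startswith("NSFM")]   # assumed naming
sfm = [c for c in abund.columns if c.startswith("SFM")]

# Keep proteins detected in more than 60% of samples (the filter in the text)
detected = abund.notna().mean(axis=1) > 0.60
filt = abund.loc[detected]

# Per-protein Welch t-test on log2 abundances, SFM vs. NSFM
res = pd.DataFrame(index=filt.index)
t, p = stats.ttest_ind(filt[sfm], filt[nsfm], axis=1,
                       equal_var=False, nan_policy="omit")
res["log2fc"] = filt[sfm].mean(axis=1) - filt[nsfm].mean(axis=1)
res["p"] = p
res["p_adj"] = multipletests(res["p"].fillna(1.0), method="fdr_bh")[1]
print(res.sort_values("p").head(25))   # cf. the 25 proteins altered in SFM
```

In the actual study, channel normalization and protein summarization were handled by the MSstatsTMT workflow; the sketch only illustrates the shape of the downstream filtering and testing.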
Gene Ontology (GO) analysis revealed that several vital biological processes, such as immune system activation and response to stimulus, involved more entities (altered proteins) in NSFM as compared to SFM. Biological adhesion and multicellular organismal processes were over-expressed in SFM as compared to NSFM (Supplementary Fig. 6a). Molecular functions associated with the altered proteins in NSFM were mainly regulatory and catalytic activities, while binding, molecular transducer, and transcription activities were enriched in SFM (Supplementary Fig. 6b). The cellular component analysis revealed overexpression of the extracellular region and cell in NSFM as compared to SFM. At the same time, protein-containing complex and membrane were over-expressed in SFM (Supplementary Fig. 6c). Differential plasma proteomic analysis of multiple complications of severe FM Plasma proteomics of different complications of SFM, such as severe anemia (n = 6) and cerebral malaria (CB; n = 6), exhibited 28 and 35 differentially abundant proteins (p < 0.05), respectively, in comparison to HC, and 17 differentially abundant proteins in SA as compared to CB (Fig. 3a, Supplementary Table 2 and Data 2); heat map profiles were able to distinguish the individual samples of CB and SA (Supplementary Fig. 5b). Five proteins in CB, 8 proteins in SA, and 15 proteins in other severe malaria cases were found to be specifically altered in various complications of FM (Fig. 3b). The 3D-PLS-DA plot of the differentially abundant proteins was able to segregate CB, SA, and SFM with other complications (Fig. 3c). Expectedly, the significantly altered proteins (p < 0.05) could effectively distinguish CB or SA as compared to HC (Fig. 3d, e). Supervised clustering indicated that a group of proteins could differentiate among the different complications of SFM. Clusters 1 and 2 showed differentially altered proteins in severe falciparum malaria (SFM_Others + CB + SA) as compared to the disease controls (CSA and CCB). Cluster 3 showed down-regulated proteins in CB, while cluster 4 showed down-regulated proteins in SA as compared to others (Fig. 3f). Hemostasis, ligand scavenging, and complement cascades were found to be associated with the differentially abundant plasma proteins identified in SFM_Others, CB, and SA (Fig. 3g). Fig. 3: Comparative proteomic analysis of different complications of severe falciparum malaria. a Differentially abundant proteins (p < 0.05) in severe malaria anemia (SA) as compared to cerebral malaria (CB). b Three-dimensional PLS-DA plot showing effective segregation among the different complications of SFM (CB, SA, and SFM with other complications). c Common and specific differentially altered proteins in CB, SA, and SFM with other complications. d Correlation analysis of differentially altered proteins in CB as compared to HC. e Correlation analysis of differentially altered proteins in SA as compared to HC. f Heat map representing discrimination of plasma protein abundances in CB, SA, and SFM with other complications, and their negative controls (CCB+CSA). Different clusters were identified as 1 and 2 in severe malaria (CB+SA+SFM), 3 in CB, and 4 in SA on the basis of protein abundance. g Physiological pathways associated with the differentially abundant plasma proteins identified in severe falciparum, cerebral, and severe anemia malaria. Sequential numbering for the networks is provided on the basis of the statistical significance (high to low significance, FDR < 0.05). SFM_Others: severe falciparum malaria, except CB and SA.
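The pathway and GO associations reported above rest on over-representation statistics. A minimal hypergeometric sketch of such a test is shown below; the study itself used NetworkAnalyst, and the pathway size and hit count in this example are assumed values for illustration, not results from the paper.

```python
# Minimal over-representation test of the kind underlying pathway/GO enrichment
# (the study used NetworkAnalyst; this hypergeometric sketch is illustrative).
from scipy.stats import hypergeom

background = 296      # proteins quantified in >60% of samples (from the text)
pathway_size = 30     # assumed: background proteins annotated to one pathway
hits = 8              # assumed: altered proteins falling in that pathway
altered = 28          # e.g., proteins altered in SA vs. HC (from the text)

# P(X >= hits) when drawing `altered` proteins at random from the background
p_value = hypergeom.sf(hits - 1, background, pathway_size, altered)
print(f"enrichment p = {p_value:.3g}")
```

Repeating such a test across all candidate pathways, with multiple-testing correction (the FDR < 0.05 threshold quoted in the figure legends), yields the ranked pathway lists shown in the network figures.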
Quantitative proteomics analysis of CB patients showed 12 proteins significantly altered (p < 0.05) as compared to NSFM. Most of these proteins were found to be down-regulated in CB. Out of these 12 altered proteins, 3 candidates—HGFA33, SERPINA3, and ALX1 protein—were found to be up-regulated, while the other 9 proteins were down-regulated as compared to NSFM. Five proteins were significantly altered specifically in CB as compared to NSFM. Of note, these proteins were unaltered in meningitis patients [a control used for cerebral malaria (CCB)] (Supplementary Data 2). Quantitative proteomics analysis of SA patients showed 12 significantly altered proteins as compared to NSFM (p < 0.05). Most of these proteins were found to be down-regulated in SA. Eight proteins (C1QB, TF, SAA1, IGFBP3, C3, APOE, CYST3, and B2M) were significantly altered in SA only. Importantly, plasma levels of these proteins were not altered in anemic subjects (CSA) (Supplementary Data 2). Differential plasma proteome landscape of non-severe and severe VM Further, we performed a comparative proteomic analysis of another form of malaria, caused by P. vivax. The abundance profile of each sample of HC, NSVM, and SVM showed uniform distribution across all the TMT 10-plex reactions (Supplementary Fig. 4b). We found 23 and 37 differentially abundant proteins (p < 0.05) in NSVM and SVM, respectively, in comparison to HC, and 45 differentially abundant proteins (p < 0.05) in SVM as compared to NSVM (Fig. 4a). Heat map profiles of these altered proteins were able to distinguish individual samples of NSVM and SVM (Supplementary Fig. 7). The altered proteins identified in P. vivax patients segregated into three groups of clinical conditions, as shown in the PLS-DA plot (Fig. 4b). Correlation analysis of the altered proteins (p < 0.05) in NSVM, SVM, and DF also showed three distinct clusters (Fig. 4c). Fig. 4: Differential plasma proteome maps of non-severe and severe vivax malaria and dengue. a Differentially abundant proteins (p < 0.05) in non-severe (NSVM) and severe vivax malaria (SVM) as compared to healthy controls (HC). b PLS-DA plots segregated HC, SVM, and NSVM patients. c Correlation analysis of differentially altered proteins in SVM as compared to NSVM. d Heat map representing discrimination of plasma protein abundances in NSVM, SVM, and dengue fever (DF); different clusters were identified as 1, 2, and 3 based on protein abundance in SVM, NSVM, and DF, respectively. e Physiological pathways associated with the differentially abundant plasma proteins identified in NSVM and SVM. Based on the abundance profiles of the altered proteins, we were also able to effectively define SVM, NSVM, and DF (Fig. 4d). A comparison of these three groups generated three distinct clusters: clusters 1, 2, and 3 represent the up-regulated proteins in SVM, NSVM, and DF, respectively (Fig. 4d). The differentially abundant proteins were enriched in four major pathways contributing towards VM, viz. the complement cascade, platelet degranulation along with platelet activation, lipid metabolism, and hemostasis. In the complement pathways, Complement C1q subcomponent subunit A (C1QA) was found to be significantly up-regulated in NSVM, whereas Complement C3 (C3) was significantly down-regulated in both NSVM and SVM patients.
In the platelet degranulation pathway, fructose-bisphosphate aldolase (ALDOA), Clusterin (CLU), and PLG were significantly down-regulated in both NSVM and SVM, whereas Serotransferrin (TF) exhibited significant up-regulation in all the VM patients. Furthermore, Alpha2-macroglobulin (A2M) exhibited up-regulation in NSVM, while its down-regulation was observed in SVM. Additionally, components of the lipid metabolism pathways, such as Apolipoprotein A-I (APOA1) and Apolipoprotein B-100 (APOB), were significantly down-regulated in both NSVM and SVM (Fig. 4e and Supplementary Data 3). The altered plasma proteins identified in NSVM were involved in the immune system process, metabolic process, and localization, while the proteins altered in SVM were associated with response to stimulus and diverse cellular processes (Supplementary Fig. 8a). The prime molecular functions for the majority of the altered proteins were catalytic activity and binding in SVM and NSVM, respectively (Supplementary Fig. 8b). GO analysis indicated almost similar cellular localizations for the altered proteins in NSVM and SVM, except for the extracellular region, which was more prominent in NSVM (Supplementary Fig. 8c). We identified eight proteins as commonly altered candidates in NSFM and NSVM. Of note, several vital physiological pathways/biological processes, such as platelet degranulation, response to elevated platelet cytosolic Ca2+, and platelet activation, signaling and aggregation, are associated with these eight commonly altered plasma proteins identified in NSFM and NSVM (Supplementary Table 3). Identification of P. falciparum proteins in host plasma samples We identified 23 parasite proteins in the plasma samples of SFM patients (n = 21) using an LFQ approach (Supplementary Data 4). We further selected 10 proteins that were identified with ≥2 unique peptides for validation using the MRM approach (Table 1). Six out of these 10 proteins—heat-shock protein (HSP) 70, HSP 90, enolase, actin I, fructose-bisphosphate aldolase (FBA), and serine repeat antigen 4 (SERA4)—were detected consistently (>80%) in the malaria patients. These parasite proteins have catalytic activity and may play a vital role in the survival and virulence of the pathogen in the host system. Table 1 Identification of parasite proteins (P. falciparum) in plasma samples of severe falciparum malaria patients. HSP 70 and HSP 90 were reported to be up-regulated at temperatures of 38 °C and above, which helps the parasite survive the erythrocytic stage of its life cycle under the hyperthermic conditions of the host34. Enolase, along with HSP 70 and iron superoxide dismutase, forms the DegP complex, which protects the parasite from heat and oxidative stress in the host system35. FBA and actin have been reported to interact with TRAP and TRAP-like protein (TLP) for sporozoite gliding and invasion36. SERA4, along with the other SERA member proteins, helps in maintaining the blood stage of the pathogen's life cycle. However, their precise physiological functions still remain unknown37 and need to be investigated further. Elastic net regularized logistic regression model for feature selection to predict malaria and its complications Machine learning was performed on the significantly altered (p < 0.05) plasma proteins identified in malaria and dengue. Of note, we were able to separate VM, FM, dengue, and HC samples based on the abundance profiles of the altered plasma proteins (Supplementary Fig. 9a).
The elastic net regularized logistic regression method was used to classify: dengue vs. malaria, FM vs. VM, and cerebral vs. severe malaria anemia (Fig. 5a). Fig. 5: Machine learning model to identify biomarker signatures. a Schematic overview of the machine learning model with nested k-fold cross-validation to identify the important proteins for effective diagnosis and prognosis of malaria and dengue. The three-dimensional PLS-DA plots of b dengue vs. malaria, c FM vs. VM, and d CB vs. SA. Heat map distribution of the top 20 (most significant) differentially abundant proteins: e malaria vs. dengue and f FM vs. VM. The elastic net regularized logistic regression model hyperparameters (alpha and lambda) and performance metrics (balanced accuracy, F1-score, and Kappa) for the three models were as follows: (i) dengue vs. malaria: 0.1, 0.26, 0.975, 0.967, and 0.96; (ii) FM vs. VM: 0, 80.2, 1, 1, and 1; and (iii) CB vs. SA: 0.04, 41.73, 1, 1, and 1, respectively. It is evident from the model performance metrics that the resulting elastic net regularized logistic regression model can predict and classify dengue vs. malaria, FM vs. VM, and cerebral vs. severe malaria anemia cases almost perfectly. These results are fairly stable across all (outer) k-fold iterations (Supplementary Data 5). A total of 44 proteins were selected out of 66 proteins, which could differentiate malaria (including FM and VM) from dengue patients. The three-dimensional PLS-DA plot for all altered proteins and heat map profiles of the top 20 significantly altered proteins (p < 0.05) across malaria and dengue effectively differentiated between these two infections (Fig. 5b, e). We found 49 proteins which can distinguish FM from VM; importantly, unsupervised clustering of the top 20 significantly altered proteins effectively separated the individual subjects in the respective groups (Fig. 5c, f). Intriguingly, 19 proteins were able to differentiate between cerebral and severe malaria anemia (Fig. 5d). However, these two different complications of SFM were not easily separable based on the clinicopathological parameters (Supplementary Fig. 2). The receiver operating characteristic (ROC) curves of six selected biomarkers indicate that the model performs very well, with an area under the ROC curve (AUC) > 0.8, even for a small number of proteins. There was no further improvement in AUC after including a larger number of proteins. Keeping parsimony in mind and using this as a criterion to decide the panel size, we defined the final panel of two biomarkers for each model using MRM-based mass spectrometric assays (Supplementary Data 5). Validation of potential protein biomarkers of malaria using MRM assays Finally, we validated the differential abundance of a few selected human (host) proteins and parasite proteins in plasma samples using MRM-based mass spectrometric assays. For the optimization of MRM assays, we considered 7048 transitions corresponding to 86 human plasma proteins (403 unique peptides), which were identified as the best classifiers in our machine learning analysis. Initially, we started with 40 different method files having a maximum of ~200 transitions each; we then refined our methods and finalized 847 transitions corresponding to 128 peptides and 46 proteins (Fig. 6a). MRM assays involved HC (n = 24 × 3), FM (n = 30 × 3), VM (n = 30 × 3), and DF (n = 27 × 3), along with 15 synthetic heavy peptides. The measurements obtained in MRM correlated well with the TMT-based discovery-phase proteomics (Supplementary Data 6).
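To illustrate the nested k-fold cross-validation scheme behind the elastic net models described above, here is a minimal sketch. Note that the reported alpha/lambda hyperparameters suggest a glmnet-style parameterization, whereas scikit-learn exposes the equivalents l1_ratio (alpha-like) and C (roughly the inverse of lambda); the input arrays, grid values, and fold counts below are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch of elastic net logistic regression with nested k-fold cross-validation.
# Inner folds tune the penalty; outer folds estimate generalization performance.
# Data files and grid values are hypothetical, not from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

X = np.load("protein_abundances.npy")   # hypothetical: samples x proteins
y = np.load("labels.npy")               # hypothetical: 0 = dengue, 1 = malaria

model = LogisticRegression(penalty="elasticnet", solver="saga", max_iter=5000)
grid = {"l1_ratio": [0.0, 0.1, 0.5, 1.0], "C": [0.01, 0.1, 1.0, 10.0]}

inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

search = GridSearchCV(model, grid, cv=inner, scoring="balanced_accuracy")
scores = cross_val_score(search, X, y, cv=outer, scoring="balanced_accuracy")
print(f"outer-fold balanced accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Fit on all data to inspect the selected proteins (nonzero coefficients)
best = search.fit(X, y).best_estimator_
print("proteins retained:", int(np.count_nonzero(best.coef_)))
```

Balanced accuracy is used as the scoring metric here to match the performance measure reported above; the sparsity induced by the elastic net penalty is what yields the reduced marker panels.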
Fig. 6: Validation of potential target proteins in malaria and dengue using MRM assays. a Workflow for the validation of potential target proteins in malaria and dengue using MRM; 86 proteins were considered for MRM method optimization. ROC curves for the best panels of protein biomarkers: b dengue vs. malaria, c falciparum vs. vivax, and d cerebral vs. severe anemia malaria. Violin plots comparing discovery- and validation-phase measurements for a few selected differentially abundant plasma proteins: e dengue vs. malaria, f falciparum vs. vivax malaria, and g cerebral vs. severe anemia malaria. All data are represented as median (*p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.001, and ****p ≤ 0.0001) in a Student's t-test.

Similarly, we validated the detection and quantification of the parasite proteins in human plasma samples. Ninety-five transitions corresponding to eight parasite proteins were optimized, and we were able to quantify those in the plasma samples using MRM assays (Supplementary Data 6). We monitored system suitability using heavy synthetic peptides; the peak areas of a few representative heavy synthetic peptides are shown in the Supplementary Information (Supplementary Fig. 10a–c). Top-ranked proteins were validated using the MRM approach. We analyzed ROC curves and determined the AUC for all the proteins that were common to both the machine learning model and MRM and were able to differentiate between malaria vs. dengue (Fig. 6b), FM vs. VM (Fig. 6c), and cerebral vs. severe malaria anemia (Fig. 6d and Supplementary Fig. 9b–d). Differential abundance of LRG1 and CP between dengue and malaria was observed in the MRM assays as well as in the TMT-based quantitation (Fig. 6e). Similarly, AZGP1 and HRG were able to distinguish between FM and VM (Fig. 6f), and SERPINA3 and AHSG were able to differentiate among the different complications of SFM (Fig. 6g). The elastic net regularized logistic regression model equations for the final biomarker models identified from MRM data are provided in the Supplementary Methods.

In this study, using a multiplexed quantitative approach for plasma proteomics, we have provided insights into the progression of malaria from non-severe to severe infection and its different complications. Apart from identifying potential disease-monitoring and prognostic protein biomarkers for malaria, the differentially abundant plasma proteins mapped onto pathways provided mechanistic cues for various aspects of the pathogenesis of severe malaria and the host immune responses against the parasites. We identified a landscape of differentially abundant plasma proteins associated with diverse biological functions in malaria patients. P. falciparum expresses var genes, which encode erythrocyte membrane protein 1 (PfEMP1). The variants of PfEMP1 are involved in cytoadherence and mediate binding of the infected erythrocytes to the endothelial vasculature6. Some of the parasite proteins identified in the plasma samples of SFM patients, such as FBA, HSP 70, HSP 90, enolase, and SRA4, can generate antibody responses, as we have shown previously in VM38. Similarly, Moussa et al.24 reported the presence of FBA, SRA protein, and histone H3 in plasma samples of children with CB. Most of these proteins play very important roles in catalytic activity and protein binding (Table 1). P. vivax, however, does not express var genes, and hence binding of the infected erythrocytes is much lower than in P. falciparum6.
We observed that the major complications associated with SFM are CB, SA, acidosis, and multiple organ failure. Most of the complications observed in the SFM patients were, however, similar to those observed in SVM, except for renal failure, splenic rupture, and hepatic dysfunction along with gastrointestinal symptoms39. Of note, we earlier identified five unique parasite proteins in plasma and parasite isolates from VM patients38. Several altered plasma or serum proteins identified in SFM patients, as described here and in our previous studies20,26, were mapped to biological adhesion and the extracellular matrix (Supplementary Fig. 6). This indicates roles for cell-to-cell adhesion-related host proteins such as von Willebrand factor (vWF), ICAM-1, VCAM-1, VTN, and LGALS3BP in P. falciparum infection. We observed the up-regulation of these proteins in SFM, which may help in erythrocyte invasion and adherence to endothelial cells. Surprisingly, these proteins were not found to be altered in SVM, indicating some clear differences between these two plasmodial infections (Fig. 7). Similarly, we mapped the proteome of SVM and observed dysregulation of biological functions such as catalytic activity and cellular processes, along with the classical complement system being more active in SVM than in SFM (Supplementary Fig. 11).

Fig. 7: Landscape of physiological pathways altered in severe falciparum and vivax malaria. Physiological pathways associated with the differentially abundant plasma proteins identified in severe falciparum malaria (SFM) and severe vivax malaria (SVM). The proteins written in black were identified in our study, while the red ones represent unaltered associated proteins within the pathways.

Additionally, we investigated the plasma proteome of the various complications of SFM. Platelets act as exocytotic cells: during degranulation they secrete a plethora of effector molecules at sites of vascular injury40. In our study, most of the proteins related to platelet degranulation were found to be down-regulated in FM. This could be due to the highly active state of the immune system in the initial stage of FM (NSFM), possibly reflected in the altered levels of proteins such as PF4, PPBP, PFN1, KNG1, CLU, and PLG in NSFM (Fig. 2e). PF4 (CXCL4) acts as a chemokine and initiates killing of the infected erythrocytes in malaria41,42,43. Pro-platelet basic protein (PPBP), a platelet-secreted chemokine and platelet activation marker, takes part in clearing the parasites by inducing macrophage chemotaxis and mediating neutrophil accumulation44 (Fig. 8a); this protein was significantly dysregulated in malaria patients in our study. PFN1 is a cytoskeletal protein involved in actin polymerization during parasite invasion45. Clusterin, involved in the innate immune response, has been reported to be down-regulated in malaria23. Many of the dysregulated plasma proteins identified in NSFM were strongly associated with immune responses (Supplementary Fig. 6a). Along with the findings obtained from plasma proteome profiling, hematological (Hb, platelets) and liver function (SGOT, SGPT, ALP, total bilirubin) parameters were found to be significantly altered in NSFM and SFM, reflecting the commencement of inflammatory responses in NSFM that then proceed towards disease severity.

Fig. 8: Possible molecular mechanisms driving the different complications of severe falciparum malaria.
a Disease progression from non-severe to the different complications of severe falciparum malaria: the infected erythrocytes bind to platelets with the aid of host proteins (PFN1), leading to the secretion of chemokines (PF4, PPBP) from platelets into the blood circulation. The activated platelets start degranulation, release a plethora of cytokines, and produce pro-inflammatory signals, which aggravate the inflammatory system. b Once the coagulation system is activated, prothrombin is converted into thrombin, which then aids the secretion of cytokines that enhance the severity of this parasitic infection; vWF strings attach to the infected erythrocytes and together form a cluster. c Pro-inflammatory signals activate the macrophages, which engulf the infected erythrocytes and destroy them with the help of the activated complement system. Infected erythrocytes are recognized by the spleen and destroyed rapidly. Some infected erythrocytes burst in circulation and release diverse types of free radicals, which are subsequently scavenged by host proteins such as HP and HPX. Cytokines and other mediators (hemozoin) slow down erythropoiesis, which leads to an inadequate number of erythrocytes; consequently, the patients suffer from an anemic condition. d The changes in and around the cerebral vessels that cause the breakdown of the blood–brain barrier. The increased level of thrombin and the production of cytokines via activated platelets lead to the expression of endothelial receptors such as ICAM-1 and VCAM-1. These receptors play a key role in the cytoadherence of the parasitized erythrocytes to the endothelial lining and obstruct blood flow. The release of toxins from the sequestered parasites leads to the recruitment of leukocytes and platelets. These mediators lead to endothelial cell activation, increased junctional permeability, and eventually the breakdown of the blood–brain barrier, followed by secondary neuropathological events that can lead to cerebral edema or coma. The schematic diagram is based on the information obtained from our quantitative plasma proteomics analyses.

We observed up-regulation of proteins related to the intrinsic pathway, such as vWF, F10, and F2, in SFM. Infected RBCs induce the expression of tissue factors on endothelial cells and monocytes, which results in the expression of cytokines via signaling pathways, ultimately leading to endothelial cell activation. vWF is a secondary marker of endothelial cell activation46, and we observed a high plasma level of this protein in SFM. It binds to the activated platelets and the infected erythrocytes and promotes sequestration of the infected erythrocytes. This activates the coagulation system, driving towards disease severity (Fig. 8b)47. Importantly, proteins related to cell-to-cell adhesion, such as ICAM-1, VCAM-1, and VTN, were found to be dysregulated in severe FM. Host proteins such as ICAM-1 and VCAM-1 are expressed on the endothelium and on leukocytes48,49,50,51,52,53,54 and are involved in the sequestration of the infected erythrocytes in the microvasculature with the assistance of the parasite-derived protein PfEMP1 (refs. 55,56). This leads to the activation of the endothelium, disruption of blood flow, and ultimately tissue hypoxia. These events lead to an increase in endothelial receptors and thereby to the shedding of soluble endothelial receptors, eventually leading to endothelial damage57.
We anticipate that this could be one of the possible mechanisms associated with the progression of NSFM to CB (Fig. 8c). Some of the up-regulated proteins in CB identified in our study were previously reported in the murine malaria model48,58, which substantially strengthens our findings. For instance, Bauer et al. reported the up-regulation of ICAM-1, P-selectin, and VCAM-1 on the brain vascular endothelium in P. berghei ANKA infection. Up-regulated ICAM-1 levels may aid parasite sequestration on the endothelial cells, causing brain injury and creating hypoxic conditions49. Blocking the ICAM-1 receptor can cause a more than 150% increase of schizonts in the peripheral blood in mice. Importantly, VCAM-1 plays an important role in the adhesion of the parasite to the blood vessels; however, it is limited to large blood vessels, unlike ICAM-1 (ref. 59).

Severe anemia in malaria results mainly from hemolysis and phagocytosis of the parasitized and non-parasitized RBCs and also, to a large extent, from the suppression of erythropoiesis driven by parasite hemozoin generated from the infected erythrocytes60. We observed increased levels of C3, an active component of the complement system. Complement C5, C6, and C7 form a C5–C6–C7 complex61, which binds to the cell membrane and forms the membrane attack complex with the help of C8 and C9 (refs. 62,63). In our study, we observed that all three complex-forming proteins (C5, C6, and C7) were down-regulated, while the free monomers (C8 and C9) were up-regulated in NSFM (Fig. 8d). This may indicate a higher involvement of the complement cascade during the initial stage of malaria61, leading to hemolysis of infected erythrocytes. Such destruction of a high number of erythrocytes results in increased systemic concentrations of free Hb, heme, and reactive oxygen species (ROS) in circulation39,64,65. We observed a significant elevation in the blood levels of Hb subunits alpha and beta in FM patients, as also described previously by Kassa et al.23. The free Hb is oxidized by free radicals, thereby releasing free heme. Of note, free heme is very toxic to cells and induces inflammation, macrophage activation, and oxidative stress. The down-regulation of the proteins HP, HPX, HPR, and AMBP indicates their involvement in the scavenging of free heme and ROS in FM. RBC lysis and a highly activated complement system may have direct effects on the destruction of erythrocytes, while cytokines indirectly inhibit erythropoiesis. This clearly indicates an increased removal of the infected and uninfected erythrocytes, which plays an essential role in severe anemia during malaria (Fig. 8d).

In summary, we provide here a comprehensive landscape of plasma proteome alterations at different severity levels of malaria. Our findings indicate the association of the dysregulated proteins with a few vital physiological pathways, such as platelet degranulation and integrin cell surface interaction, which in turn helps explain the mechanisms of fatal complications in SFM. We observed that proteins related to the inflammatory system are highly dysregulated in NSFM, which may eventually drive the severity of infection, leading to either cerebral syndrome or severe anemia. We also observed alterations in the plasma levels of PF4 and PPBP, which help to destroy the infected erythrocytes, while cytokines along with other mediators (hemozoin) slow down erythropoiesis41,44.
This may eventually lead to severe anemia in malaria patients. Importantly, dysregulation of hemostasis-related proteins was observed in severe anemia, while alterations in the blood levels of proteins associated with endothelial cell activation strongly correlated with CB. Parasite proteins that were consistently detected in almost all the severe malaria patients could be promising for developing different diagnostic approaches; of note, they could serve as indicators of severe infection, as they were specifically detected in severe malaria patients. However, it remains to be seen whether the blood-based protein markers for malaria severity and complexity identified in our study can be validated in larger, heterogeneous clinical cohorts. One potential caveat of this study is the possibility of asymptomatic infection in the control cohorts, although these were entirely devoid of any clinical symptoms of malaria. Moreover, genetic analysis of the human plasma samples for glucose-6-phosphate dehydrogenase (G6PD) deficiency and hemoglobinopathies was outside the scope of this study, leaving a gap in our understanding of certain key host factors that modulate host protein responses in malaria. These could be effective future continuations of the present investigation, especially as the findings move towards translational research. Collectively, our findings provide some novel mechanistic insights into malaria severity, and we anticipate that this will accelerate the opportunities for developing clinical tests integrating these host and parasite proteins for monitoring malaria severity and complexity.

Subject recruitment and blood collection

This study was conducted involving malaria and DF patients, and control subjects, from two different malaria-endemic regions of India. Subjects were enrolled at Calcutta Medical College (CMC), Kolkata, India, and Sardar Patel Medical College (SPMC), Bikaner, India. More precisely, this comprehensive proteomics study involved HC subjects; patients suffering from NSFM, SFM, and its different complications (severe malaria anemia [SA] and cerebral malaria [CB]) with their corresponding control subjects (anemic [CSA] and meningitis [CCB] patients); and NSD, SD, NSVM, and SVM patients. This study was approved by the institutional review boards and ethics committee of the Indian Institute of Technology Bombay (IITB-IEC/2016/026). Prior to sample collection, written informed consent was obtained from each participant after a detailed explanation of the experimental procedure in the language best understood by the potential participant. The clinicopathological details of all the subjects enrolled in this study were thoroughly documented (Supplementary Table 4). The diagnosis of malaria-positive samples was carried out primarily by microscopic examination of thin peripheral blood smears followed by RDTs (FalciVax, Zephyr Biomedicals). The parasitemia count in individual samples was not recorded (in particular, due to the scarcity of time in handling the extreme burden of malaria patients in tertiary care hospitals in a highly populated country like India). Additionally, samples were confirmed using PCR-based molecular diagnosis66. The brief workflow is described in the Supplementary Materials and Methods, and a schematic representation is provided in Supplementary Fig. 12 and Supplementary Tables 5 and 6.
Patients with any other infectious diseases, such as leptospirosis, chronic liver diseases, and mixed infections (infected with both P. falciparum and P. vivax), were excluded from this study. The case definitions of severe malaria67 were adopted from standard WHO guidelines (Supplementary Table 7). Malaria-negative CSA patients (anemic patients without malaria) were defined by hemoglobin <7 g/dl, and malaria-negative CCB patients (meningitis patients without malaria) were defined by clinically altered sensorium, neck rigidity, meningeal enhancement on brain MRI, and an abnormal CSF study. According to the recent WHO guideline68, we classified the dengue patients into two categories—(1) NSD and (2) SD—based on clinical features (blood pressure, bleeding manifestations, etc.) and hematological parameters (hematocrit, platelet counts, organ failure, etc.). In our study, IgM was used as the confirmatory marker for dengue infection; if NS1 was found to be positive, we retested for IgM during days 5–7 of fever to confirm the diagnosis. Blood samples were collected into ethylenediaminetetraacetic acid (EDTA) BD Vacutainer tubes and gently mixed by inverting 8–10 times. EDTA-anticoagulated blood (3 ml) was centrifuged at 1000g for 10 min, and plasma samples were stored at −80 °C until further processing69.

Species confirmation using nested PCR

Blood samples of malaria-infected patients were collected from the different endemic regions of India; all collected samples were microscopy- and RDT-positive. For further confirmation of the malaria parasite, nested PCR was performed using dried blood spots of the obtained samples, and only PCR-positive, singly infected patient samples were taken forward for the multi-omic study. For nested PCR from dried blood spots, 20 µl of RBC pellet was spotted onto Whatman filter paper 1; for each sample, 3–5 spots were made as per the availability of the RBC pellet. Two dried spots were punched into a 1.5 ml Eppendorf tube, and 200 µl of freshly prepared 0.5% saponin in PBS (or up to the volume required to submerge the punched spots) was added. A short spin was given to settle the droplets on the brim, and the samples were incubated for 10 min at RT. Six hundred microliters of PBS was added to wash the blood spots, and the supernatant was discarded; washing was repeated 2–3 times, and then the PCR reaction was set up.

Depletion of high-abundance proteins from plasma

Removal of highly abundant proteins is crucial for the detection of low-abundance proteins, many of which frequently act as potential biomarkers. Therefore, before processing the plasma samples for proteomic analyses, the top 12 most abundant proteins (albumin, IgG, alpha1-acid glycoprotein, alpha1-antitrypsin, alpha2-macroglobulin, apolipoprotein A-I, apolipoprotein A-II, fibrinogen, HP, IgA, IgM, and transferrin) were removed using a Pierce™ Top 12 abundant protein depletion column following the manufacturer's instructions (Thermo Scientific, Catalog no. 85164). Briefly, 15 µl of crude plasma was added directly to the resin slurry in the columns, and the mixture was incubated for 60 min at room temperature with gentle vortexing in between. The filtrate was collected in 2 ml Eppendorf tubes by centrifugation at 1000g for 2 min. The depleted plasma samples were concentrated using a vacuum centrifuge. Protein extracted from the plasma samples of HCs and infected patients was quantified using a Bradford assay kit (Bio-Rad) following the manufacturer's instructions.
The depleted plasma samples were loaded onto one-dimensional sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) gels (12.5% acrylamide-bisacrylamide) for quality control (Supplementary Fig. 13a–c). The gels were stained with staining solution (methanol 40%, acetic acid 7%, distilled water 53%, Coomassie blue 1 tablet) for 4 h and destained in destaining solution (methanol 40%, acetic acid 7%, distilled water 53%) for 5 h. The gels were scanned using an Image Scanner III (GE Healthcare).

Sample preparation for TMT-based quantitative proteomics analysis

Prior to the enzymatic digestion, protein quantification was performed using the Bradford assay (Bio-Rad) following the manufacturer's instructions. One hundred micrograms of depleted protein sample from each group (HC, CSA, CCB, SA, CB, SFM, and NSFM) was denatured using 15 µl of 6 M urea. The denatured proteins were reduced by adding tris(2-carboxyethyl)phosphine (TCEP; Sigma Aldrich) to a final concentration of 20 mM and incubating at 37 °C for 60 min. The reduced samples were alkylated by adding iodoacetamide (IAA; Sigma Aldrich) to a final concentration of 40 mM and incubating at room temperature in the dark for 30 min. The samples were then diluted eight times with 50 mM ammonium bicarbonate buffer, and trypsin (Pierce Trypsin) was added at a 1:30 (trypsin:protein) ratio for in-solution digestion. Samples were incubated at 37 °C for 18 h for efficient digestion, and the digestion was quenched by adding formic acid (FA) to a final concentration of 1%70. Subsequently, sample clean-up was performed using C18 ZipTips: the tips were activated with 50% ACN in 0.1% FA (40 µl, 1 min, 1500g, thrice) followed by 99% ACN in 0.1% FA (40 µl, 1 min, 1500g, thrice) and equilibrated with 0.1% FA (40 µl, 1 min, 1500g, thrice); samples were loaded (up to 80 µl, 1 min, 1000g, five times), washed with 0.1% FA (40 µl, 1 min, 1500g, thrice), and eluted with 50% ACN in 0.1% FA (60 µl, 1 min, 1000g, twice; then 60 µl, 3 min, 4000g).

TMT six-plex isobaric tags (TMTsixplex™ Isobaric Label Reagent Set, 1 × 0.8 mg, Thermo Fisher Scientific, Catalog number: 9006) were used for labeling the digested peptides of the FM and dengue (FC) samples, and TMT 10-plex isobaric tags (TMT10plex™ Isobaric Label Reagent Set, 1 × 0.8 mg, Thermo Fisher Scientific, Catalog number: 90110) were used for the VM samples, as per the manufacturer's instructions. In brief, digested peptides were reconstituted in dissolution buffer and vortexed to mix well. A pooled sample prepared from the individual patient samples was used as a reference for normalization across the different mass spectrometric datasets. TMT reagents were reconstituted in 40 µl of anhydrous acetonitrile (ACN), and the reagents were added to the corresponding aliquots of digested plasma protein samples following the labeling strategy (Supplementary Fig. 13d–f); 15 µg of each digested peptide sample was labeled at a ~1:13 (digested peptides:TMT reagents) ratio for efficient labeling. The solution was mixed well and incubated at room temperature (RT) for 1 h. The reactions were quenched by adding 2 µl of 5% hydroxylamine and incubating for 15 min at RT. All the respective samples were pooled and dried completely in a vacuum centrifuge.
Samples were then fractionated into nine fractions using high-pH reverse-phase chromatography following the manufacturer's instructions (Pierce™ High pH Reversed-Phase Peptide Fractionation Kit, Thermo Fisher Scientific, Catalog number: 84868).

Liquid chromatography–mass spectrometry/mass spectrometry (LC-MS/MS) analysis

Multiplexed TMT-labeled samples (6/10-plex) were analyzed as biological replicates using an Orbitrap Fusion Tribrid mass spectrometer interfaced with an Easy-nLC 1200 system (Thermo Fisher Scientific). The mobile phase consisted of Milli-Q water with 0.1% formic acid as solvent A and 0.1% formic acid/80% acetonitrile as solvent B. Each fraction was reconstituted in 15 µl of solvent A, and 1 µg of digested peptides was loaded onto a pre-analytical column (100 µm × 2 cm, nanoViper C18, 5 µm, 100 Å; Thermo Fisher Scientific). A gradient of 10–35% B in 103 min and 35–95% B in 2 min, followed by a hold at 95% B for 15 min, at a flow rate of 300 nl/min, was used on an analytical column (75 μm × 50 cm, 3 μm particle, and 100 Å pore size; Thermo Fisher Scientific). A single Orbitrap MS scan from 375 to 1700 m/z at a resolution of 60,000 with the automatic gain control (AGC) target set at 5e4 was followed by up to 20 MS/MS scans at a resolution of 30,000 with the AGC target set at 4e5. MS/MS spectra were collected with a normalized collision energy of 35% and an isolation width of 1.2 m/z. Dynamic exclusion was set to 40 s, and peptide match was set to on. Survey scans were performed in the Orbitrap mass analyzer, and data-dependent MS2 scans were performed in the Orbitrap mass analyzer using higher-energy collisional dissociation (HCD) following isolation with the instrument's quadrupole. The peptide intensity threshold was set to 5e3. Internal calibration was carried out using the lock mass option (m/z 445.1200025) from ambient air. The same parameters were used for LFQ (n = 21), except the collision energy, which was set to 30%.

Database search for peptide and protein identification

TMT 6-plex- and TMT 10-plex-based quantitative proteomic analyses were carried out using individual samples of malaria and dengue patients. Raw instrument files were processed using Proteome Discoverer (PD) version 2.2 (Thermo Fisher Scientific). In each TMT experiment, .raw files for all fractions were merged, and MS2 spectra were searched using the Sequest HT and Mascot (v2.6.0) search engines against the Homo sapiens fasta (71,523 sequence entries, dated 24/06/2018) from the UniProt database (Proteome ID: UP000005640, Organism ID: 9606). All searches were configured with dynamic modifications for the TMT reagents (+229.163 Da) on lysine and N-termini and oxidation of methionine residues (+15.9949 Da), static modification of carbamidomethyl (+57.021 Da) on cysteines, monoisotopic masses, and trypsin cleavage (max 2 missed cleavages). The peptide precursor mass tolerance was 10 ppm, and the MS/MS tolerance was 0.05 Da. The false discovery rate (FDR) for proteins and peptides was kept at 1%71. TMT signals were corrected for isotope impurities based on the manufacturer's instructions. In LFQ, the .raw files from the label-free method were searched against the database of Plasmodium falciparum 3D7 (Plasmo TaxID=36329_and_subtaxonomies) (v2017-08-25). The parasite data were downloaded from PlasmoDB (https://plasmodb.org/plasmo/) on 28/06/2018.
The search parameters were kept the same as mentioned above for the TMT 6-plex method, except for the dynamic modifications for the TMT reagents (+229.163 Da) on lysine and the N-termini of the peptides.

Machine learning and feature selection

The PSM values were processed using MSstatsTMT72,73, where run-to-run normalization using a reference pool and quantile normalization were performed. Sixty-six proteins were selected based on the adjusted p value (p < 0.05) in malaria and the p value (p < 0.05) in DF (Supplementary Data 7). Since missing values are associated with proteins with low expression levels, we imputed the missing values by drawing samples from a normal distribution with a mean down-shifted from the sample mean and a standard deviation that is a fraction of the standard deviation of the sample distribution. The elastic net regularized logistic regression method was used for the following classifications: (1) dengue vs. malaria: HC vs. dengue, HC vs. malaria, dengue vs. malaria; (2) type of malaria: HC vs. FM, HC vs. VM, FM vs. VM; (3) severity of FM: HC vs. NSFM, HC vs. SFM, NSFM vs. SFM; (4) severity of VM: HC vs. NSVM, HC vs. SVM, NSVM vs. SVM; and (5) severity of dengue: HC vs. NSD, HC vs. SD, NSD vs. SD. Of note, the elastic net regularized logistic regression model offers flexibility in two ways: (i) the l1 norm helps attain parsimony, in the sense that it optimally chooses the number of covariates (in the present dataset, the covariates are proteins) by driving the coefficients of unimportant covariates to zero, and (ii) the l2 norm helps address the issue of multicollinearity74,75,76,77,78,79,80,81,82. Nested k-fold cross-validation (k = 10 for malaria vs. dengue, k = 10 for falciparum vs. vivax, and k = 5 for cerebral vs. severe malaria anemia) was used, as it provides robust and almost unbiased parameter estimates and model performance evaluation even for small sample sizes83,84. The nested cross-validation approach involves (i) a k-fold inner cross-validation loop for hyperparameter tuning and model selection and (ii) a k-fold outer cross-validation loop for evaluating the model selected by the inner cross-validation. The entire dataset was randomly split into k folds; each fold in turn was used as the test set, while the remaining k−1 folds were used for training. For each split, a model was trained and validated on the training set using (inner) k-fold cross-validation and tested on the held-out test set. The hyperparameters alpha and lambda were tuned by inner k-fold cross-validation over a grid of values ranging from 0 to 1 with a step size of 0.1 for alpha and a grid of 100 values ranging from 10⁻² to 10² for lambda, respectively. To evaluate model performance, ROC–AUC, balanced accuracy, F1-score, and Kappa metrics were computed; these metrics were chosen because the classes were imbalanced. The average variable importance of a protein was measured as the weighted average of the variable importance of that protein in all k predictive models in the outer k-fold cross-validation, with the weights being the balanced accuracy of each model. This implies that the importance of a protein in the classification of samples is directly proportional to its occurrence in the predictive models that classify the samples better. A schematic diagram of the method used here is shown in Figs. 1d and 5a. The proteins with non-zero variable importance from an elastic net regularized logistic regression model were considered for further analysis.
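To make the scheme concrete, here is a minimal sketch in R (the language of the study's deposited code) of the down-shifted imputation and nested cross-validation described above. It is illustrative rather than the authors' implementation: the function names and data objects are ours, and the down-shift and width fractions (1.8 and 0.3) are commonly used defaults assumed here because the exact fractions are not stated in the text; the hyperparameter grids and fold counts follow the text.

    library(glmnet)   # elastic net regularized logistic regression
    library(caret)    # createFolds() for fold assignment

    # Impute left-censored missing values from a down-shifted normal distribution.
    impute_downshift <- function(v, shift = 1.8, width = 0.3) {
      mu  <- mean(v, na.rm = TRUE)
      sdv <- sd(v, na.rm = TRUE)
      v[is.na(v)] <- rnorm(sum(is.na(v)), mean = mu - shift * sdv, sd = width * sdv)
      v
    }

    # Nested k-fold CV: the inner loop tunes (alpha, lambda); the outer loop
    # evaluates the selected model on held-out folds.
    nested_cv_enet <- function(X, y, k_outer = 10, k_inner = 10) {
      alphas  <- seq(0, 1, by = 0.1)              # l1/l2 mixing grid
      lambdas <- 10^seq(-2, 2, length.out = 100)  # penalty grid
      outer_folds <- caret::createFolds(y, k = k_outer)
      sapply(outer_folds, function(test_idx) {
        X_tr <- X[-test_idx, , drop = FALSE]
        y_tr <- y[-test_idx]
        # Inner CV: pick the alpha whose cross-validated misclassification
        # error at its best lambda is smallest.
        fits <- lapply(alphas, function(a)
          cv.glmnet(X_tr, y_tr, family = "binomial", alpha = a,
                    lambda = lambdas, nfolds = k_inner, type.measure = "class"))
        best <- fits[[which.min(sapply(fits, function(f) min(f$cvm)))]]
        # Outer evaluation on the held-out fold.
        pred <- predict(best, newx = X[test_idx, , drop = FALSE],
                        s = "lambda.min", type = "class")
        mean(pred == as.character(y[test_idx]))   # per-fold accuracy
      })
    }

    # Usage sketch: X is a samples x proteins matrix, y a two-level factor.
    # X <- apply(protein_matrix, 2, impute_downshift)
    # fold_accuracy <- nested_cv_enet(X, factor(group_labels))

Balanced accuracy, F1-score, and Kappa on each outer fold can be computed analogously from the predicted and true labels.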
For validation of the selected proteins as good classifiers for the dengue vs. malaria, FM vs. VM, and CB vs. SA models, we created (i) heat maps and (ii) three-dimensional PLS-DA plots (Fig. 5b–f and Supplementary Data 5). To arrive at the best panel of biomarkers, six different biomarker models were created from different numbers of proteins selected out of all the important covariates identified in the machine learning model. The ROC curves for these biomarker models, shown in Supplementary Fig. 9b–d, enable comparison of the different models based on their corresponding AUC values and confidence intervals. A detailed discussion of the justification and performance of the elastic net regularized logistic regression model is provided in the Supplementary Information (Supplementary Methods and Supplementary Data 5). The elastic net regularized logistic regression model equations for the final biomarker models identified from MRM were as follows. Dengue vs. malaria:

$$\log_e\left( \frac{p_{\mathrm{dengue}}}{1 - p_{\mathrm{dengue}}} \right) = -0.4269 + 2.23 \times \mathrm{LRG1} - 1.7374 \times \mathrm{CP}.$$

Falciparum vs. vivax malaria:

$$\log_e\left( \frac{p_{\mathrm{VM}}}{1 - p_{\mathrm{VM}}} \right) = 0.0893 + 0.169 \times \mathrm{AZGP1} + 0.1425 \times \mathrm{HRG}.$$

Cerebral vs. severe malaria anemia:

$$\log_e\left( \frac{p_{\mathrm{CB}}}{1 - p_{\mathrm{CB}}} \right) = -0.2118 - 0.2365 \times \mathrm{SERPINA3} + 0.4309 \times \mathrm{AHSG},$$

where \(p_{\mathrm{Positive\ Class}}\) is the probability of the event that Y = Positive Class. For these three models, the hyperparameters (alpha and lambda) and model performance evaluation metrics (balanced accuracy, F1-score, Kappa, ROC–AUC, and ROC–AUC confidence interval) were as follows: (i) dengue vs. malaria: 0.22, 0.02, 0.72, 0.89, 0.46, 0.807 (0.666–0.947); (ii) FM vs. VM: 0.25, 4.47, 0.8, 0.77, 0.6, 0.865 (0.772–0.958); and (iii) CB vs. SA: 0.02, 41.10, 0.8, 0.83, 0.56, 0.944 (0.816–1), respectively. These models serve two purposes. (I) They help assess the impact of a particular protein on the estimated log odds, or the estimated odds, of an individual, for example, being a dengue patient as opposed to being a malaria patient. For illustration's sake, consider the model in Eq. (1). Note that this model equation yields:

$$\frac{\hat p_{\mathrm{dengue}}}{1 - \hat p_{\mathrm{dengue}}} = \exp(-0.4269 + 2.23 \times \mathrm{LRG1} - 1.7374 \times \mathrm{CP}) = \exp(-0.4269) \times \exp(2.23)^{\mathrm{LRG1}} \times \exp(-1.7374)^{\mathrm{CP}} = 0.6525 \times 9.2999^{\mathrm{LRG1}} \times 0.176^{\mathrm{CP}}.$$

This can be interpreted by observing that the estimated odds of an individual being a dengue patient, as opposed to a malaria patient, increase multiplicatively by 9.2999 for every unit increase in the value of the protein LRG1 and decrease multiplicatively by 0.176 for every unit increase in CP.
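A minimal R sketch of how this fitted equation can be applied follows; the coefficients are taken from Eq. (1) above, the function and variable names are ours, and the 0.5 cutoff anticipates the classification rule described next.

    # Estimated probability of dengue (vs. malaria) from Eq. (1).
    p_dengue <- function(LRG1, CP) {
      log_odds <- -0.4269 + 2.23 * LRG1 - 1.7374 * CP
      1 / (1 + exp(-log_odds))                      # inverse logit
    }
    # Classify with the 0.5 cutoff; a unit increase in LRG1 multiplies the
    # odds by exp(2.23) ~ 9.2999, a unit increase in CP by exp(-1.7374) ~ 0.176.
    classify <- function(LRG1, CP) {
      ifelse(p_dengue(LRG1, CP) >= 0.5, "dengue", "malaria")
    }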
Note that the equation involving the estimated odds \(\hat p_{\mathrm{dengue}}/(1 - \hat p_{\mathrm{dengue}})\) given in (I) further leads to

$$\hat p_{\mathrm{dengue}} = \frac{\exp(-0.4269 + 2.23 \times \mathrm{LRG1} - 1.7374 \times \mathrm{CP})}{1 + \exp(-0.4269 + 2.23 \times \mathrm{LRG1} - 1.7374 \times \mathrm{CP})} = \frac{0.6525 \times 9.2999^{\mathrm{LRG1}} \times 0.176^{\mathrm{CP}}}{1 + 0.6525 \times 9.2999^{\mathrm{LRG1}} \times 0.176^{\mathrm{CP}}},$$

and this, in turn, serves the second purpose (II): it enables one to classify a new patient as a dengue or a malaria patient depending on whether \(\hat p_{\mathrm{dengue}}\) is ≥0.5 or not for the specified values of LRG1 and CP of the patient under consideration. The other two logistic regression models, corresponding to falciparum vs. vivax malaria and cerebral vs. severe malaria anemia, can be interpreted analogously.

Interaction network and bioinformatics analysis

To investigate the complex interactions among the candidate marker proteins and to predict the pathways associated with the differentially altered proteins identified in FM, VM, and DF patients, diverse bioinformatic analyses were carried out. Pathway enrichment was performed using the Protein ANalysis THrough Evolutionary Relationships (PANTHER) classification system, version 12.0 (www.pantherdb.org)85, and the Reactome pathway Knowledgebase, version 62 (www.reactome.org)86. The multi-functional online software NetworkAnalyst (http://www.networkanalyst.ca/) was applied for constructing and visualizing the PPI networks. The batch selection option of NetworkAnalyst was used to narrow down the network nodes, and the clusters were highlighted with different colors.

Validation using MRM

Plasma proteins showing prominent differential abundance in malaria patients were further validated using MRM assays on an Altis triple quadrupole mass spectrometer (Thermo Fisher Scientific) equipped with an Easy-Spray electrospray ionization source (Thermo Fisher Scientific). Peptide separations were performed using a C18 column (Hypersil GOLD, 150 mm, 2.1 mm, 1.9 µm, Thermo Fisher Scientific). The mobile phase consisted of Milli-Q water with 0.1% formic acid as solvent A and 0.1% formic acid/80% acetonitrile as solvent B. The following chromatographic conditions were used: a 20-min gradient at a flow rate of 300 nl/min starting with 100% A (water), stepping up to 2% B (ACN) at 0 min, followed by 45% B at 16 min, a steep increase to 95% B at 17 min held for 1 min, and a steep decrease to 5% B at 19 min to re-equilibrate the column, held for another 1 min at 5% B. System suitability was evaluated using iRT peptides and digested BSA (Supplementary Fig. 10a). To assess the suitability of the selected peptides for MRM, 4.8 nmol of each heavy synthetic peptide (n = 17) was spiked into 10 µg of the digested samples, with 142 peptides having 901 transitions over a 20-min chromatographic gradient with a scheduled method. In total, 40 methods were used to accommodate all the transitions (approx. 176 transitions/method) for host and parasite proteins. During the first phase of optimization, six proteins were not detected and were hence removed from the list.
The refinement of the transitions was performed by running different plasma pool samples and some individual samples on different days to check the inter-day variation and the stability of the peptides (Supplementary Fig. 10b). Forty-six proteins whose peptides had good dotp values (dotp > 0.85) were finalized (Supplementary Fig. 10c). The peptides were scheduled, and the run was carried out as a single method to quantify the proteins in FM, VM, dengue, and HC samples. Targeted acquisition of the eluting ions was performed by the mass spectrometer operated in SRM-MS mode, with Q1 and Q3 set to 0.7 m/z full-width at half-maximum resolution and a cycle time of 2 s. A single scheduled method with a 2-min elution window was utilized for all MRM-MS runs. Data analysis was performed using Skyline-daily87.

Statistics and reproducibility

Proteins quantified with ≥1 unique peptide and detected in at least 60% of the samples were selected for the subsequent differential analysis. The PSM values were exported from Proteome Discoverer 2.2 (PD) for combined analysis of FM, VM, and DF (FM: n = 10 experimental sets (TMT 6-plex), 90 fractions; VM: n = 4 experimental sets (TMT 10-plex), 25 fractions; and DF: n = 4 experimental sets (TMT 6-plex), 24 fractions). Protein-level summarization and significance analysis of the proteins were performed in R using the MSstatsTMT package72,73. In brief, the PSMs were preprocessed in PD and converted into the required input format for MSstatsTMT using in-house R code. Protein abundance was calculated from the peptide quantification using the protein summarization script (MSstatsTMT), which includes normalization between MS runs using reference pool channels and imputation of missing values before summarizing peptide-level data into protein-level data. In the protein summarization method, MSstats assumes that missing values are censored and imputes them using an accelerated failure time model. Missing value imputation was performed only for those proteins quantified in ≥60% of the samples. Then, a moderated t-test was performed on the quantile-normalized values using the group comparison ("groupComparisonTMT") script in R, and p values <0.05 were considered statistically significant72,73. Proteins passing the p value <0.05 threshold of the moderated t-test were used for data visualization and pathway analysis. Benjamini–Hochberg (BH)-adjusted p values were also calculated for multiple comparisons using the MSstatsTMT package in R, and BH-corrected p values [adjusted p value <0.05] were used in the selection of proteins for the machine learning analysis (except for dengue).

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

All processed data associated with this study are present in the manuscript or in the Supplementary Materials. Raw MS data and search output files for the TMT-based quantitative proteomics analyses described in this article have been deposited to the ProteomeXchange Consortium via the PRIDE88 partner repository with the dataset identifier PXD014991. Targeted proteomic data are deposited in PeptideAtlas89 and can be accessed through the dataset identifier PASS01467. All raw and processed data are made available as Supplementary Datasets 1–7.
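The filtering and multiple-testing steps above can be summarized in a short R sketch; the thresholds follow the text, while the object names (protein_mat, p_values) are illustrative, and the moderated t-test itself would come from MSstatsTMT rather than the base-R calls shown here.

    # Keep proteins detected (non-missing) in at least 60% of samples.
    keep     <- rowMeans(!is.na(protein_mat)) >= 0.60
    filtered <- protein_mat[keep, , drop = FALSE]
    # p_values: per-protein moderated t-test p-values (e.g., from MSstatsTMT).
    p_adj <- p.adjust(p_values, method = "BH")   # Benjamini-Hochberg correction
    sig_visualization <- names(p_values)[p_values < 0.05]  # for plots/pathways
    sig_ml            <- names(p_values)[p_adj   < 0.05]   # for machine learning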
Code availability

The custom code for MS data analysis (TMT 6/10-plex) and machine learning has been deposited at https://github.com/vipin786/R-code-for-MS-data and is also publicly available via Zenodo (https://doi.org/10.5281/zenodo.4022347)90.

References

Bruce-Chwatt, L. J. Alphonse Laveran's discovery 100 years ago and today's global fight against malaria. J. R. Soc. Med. 74, 531–536 (1981).
WHO. World Malaria Report. https://apps.who.int/iris/bitstream/handle/10665/311696/WHO-DAD-2019.1-eng.pdf (2019).
Shah, N. K. et al. Antimalarial drug resistance of Plasmodium falciparum in India: changes over time and space. Lancet Infect. Dis. 11, 57–64 (2011).
Rowe, A. K. et al. The burden of malaria mortality among African children in the year 2000. Int. J. Epidemiol. 35, 691–704 (2006).
Paul, R. et al. Study of C reactive protein as a prognostic marker in malaria from Eastern India. Adv. Biomed. Res. 1, 41 (2012).
Phillips, M. A. et al. Malaria. Nat. Rev. Dis. Prim. 3, 17050 (2017).
Seringe, E. et al. Severe imported Plasmodium falciparum malaria, France, 1996–2003. Emerg. Infect. Dis. 17, 807–813 (2011).
Wångdahl, A. et al. Severity of Plasmodium falciparum and non-falciparum malaria in travelers and migrants: a nationwide observational study over 2 decades in Sweden. J. Infect. Dis. 220, 1335–1345 (2019).
Allan, P. J. & Tahir, H. I. S. How easily malaria can be missed. J. R. Soc. Med. 99, 201–202 (2006).
Ashley, E. A., Pyae Phyo, A. & Woodrow, C. J. Malaria. Lancet 391, 1608–1621 (2018).
Snow, R. W., Guerra, C. A., Noor, A. M., Myint, H. Y. & Hay, S. I. The global distribution of clinical episodes of Plasmodium falciparum malaria. Nature 434, 214–217 (2005).
Wilairatana, P. Prognostic factors in severe Falciparum malaria. Trop. Med. Surg. 2, 3 (2014).
Moody, A. Rapid diagnostic tests for malaria parasites. Clin. Microbiol. Rev. 15, 66–78 (2002).
Guthmann, J.-P. et al. Evaluation of three rapid tests for diagnosis of P. falciparum and P. vivax malaria in Colombia. Am. J. Trop. Med. Hyg. 75, 1209–1215 (2006).
Murray, C. K. & Bennett, J. W. Rapid diagnosis of malaria. Interdiscip. Perspect. Infect. Dis. 2009, 1–7 (2009).
Chaijaroenkul, W., Wongchai, T., Ruangweerayut, R. & Na-Bangchang, K. Evaluation of rapid diagnostics for Plasmodium falciparum and P. vivax in Mae Sot malaria endemic area, Thailand. Korean J. Parasitol. 49, 33 (2011).
Ray, S., Patel, S. K., Kumar, V., Damahe, J. & Srivastava, S. Differential expression of serum/plasma proteins in various infectious diseases: specific or nonspecific signatures. Proteomics Clin. Appl. 8, 53–72 (2014).
Geyer, P. E., Holdt, L. M., Teupser, D. & Mann, M. Revisiting biomarker discovery by plasma proteomics. Mol. Syst. Biol. 13, 942 (2017).
Ray, S. et al. Proteomic investigation of falciparum and vivax malaria for identification of surrogate protein markers. PLoS ONE 7, e41751 (2012).
Ray, S. et al. Proteomic analysis of Plasmodium falciparum induced alterations in humans from different endemic regions of India to decipher malaria pathogenesis and identify surrogate markers of severity. J. Proteomics 127, 103–113 (2015).
Bachmann, J. et al. Affinity proteomics reveals elevated muscle proteins in plasma of children with cerebral malaria. PLoS Pathog. 10, e1004038 (2014).
Kumar, M. et al. Identification of host-response in cerebral malaria patients using quantitative proteomic analysis. Proteomics Clin. Appl. 12, e1600187 (2018).
Kassa, F. A. et al. New inflammation-related biomarkers during malaria infection. PLoS ONE 6, e26495 (2011).
Moussa, E. M. et al. Proteomic profiling of the plasma of Gambian children with cerebral malaria. Malar. J. 17, 337 (2018).
Ray, S. et al. Clinicopathological analysis and multipronged quantitative proteomics reveal oxidative stress and cytoskeletal proteins as possible markers for severe vivax malaria. Sci. Rep. 6, 24557 (2016).
Ray, S. et al. Quantitative proteomics analysis of Plasmodium vivax induced alterations in human serum during the acute and convalescent phases of infection. Sci. Rep. 7, 4400 (2017).
Cunnington, A. J., Walther, M. & Riley, E. M. Piecing together the puzzle of severe malaria. Sci. Transl. Med. 5, 211ps18 (2013).
Mackintosh, C. L., Beeson, J. G. & Marsh, K. Clinical features and pathogenesis of severe malaria. Trends Parasitol. 20, 597–603 (2004).
Venkatesh, A. et al. Proteomics of Plasmodium vivax malaria: new insights, progress and potential. Expert Rev. Proteomics 13, 771–782 (2016).
Mertins, P. et al. Reproducible workflow for multiplexed deep-scale proteome and phosphoproteome analysis of tumor tissues by liquid chromatography-mass spectrometry. Nat. Protoc. 13, 1632–1661 (2018).
O'Connell, J. D., Paulo, J. A., O'Brien, J. J. & Gygi, S. P. Proteome-wide evaluation of two common protein quantification methods. J. Proteome Res. 17, 1934–1942 (2018).
Ray, S. et al. Phenotypic proteomic profiling identifies a landscape of targets for circadian clock-modulating compounds. Life Sci. Alliance 2, e201900603 (2019).
Carrolo, M. et al. Hepatocyte growth factor and its receptor are required for malaria infection. Nat. Med. 9, 1363–1369 (2003).
Su, X. Z. & Wellems, T. E. Sequence, transcript characterization and polymorphisms of a Plasmodium falciparum gene belonging to the heat-shock protein (HSP) 90 family. Gene 151, 225–230 (1994).
Sharma, S., Jadli, M., Singh, A., Arora, K. & Malhotra, P. A secretory multifunctional serine protease, DegP of Plasmodium falciparum, plays an important role in thermo-oxidative stress, parasite growth and development. FEBS J. 281, 1679–1699 (2014).
Nemetski, S. M. et al. Inhibition by stabilization: targeting the Plasmodium falciparum aldolase–TRAP complex. Malar. J. 14, 324 (2015).
Miller, S. K. et al. A subset of Plasmodium falciparum SERA genes are expressed and appear to play an important role in the erythrocytic cycle. J. Biol. Chem. 277, 47524–47532 (2002).
Venkatesh, A. et al. Comprehensive proteomics investigation of P. vivax-infected human plasma and parasite isolates. BMC Infect. Dis. 20, 188 (2020).
Coban, C., Lee, M. S. J. & Ishii, K. J. Tissue-specific immunopathology during malaria infection. Nat. Rev. Immunol. 18, 266–278 (2018).
Coppinger, J. A. Characterization of the proteins released from activated platelets leads to localization of novel platelet proteins in human atherosclerotic lesions. Blood 103, 2096–2104 (2004).
Srivastava, K. et al. Platelet factor 4 mediates inflammation in experimental cerebral malaria. Cell Host Microbe 4, 179–187 (2008).
Wang, D. et al. ELF4 facilitates innate host defenses against Plasmodium by activating transcription of Pf4 and Ppbp. J. Biol. Chem. 294, 7787–7796 (2019).
McMorran, B. J., Burgio, G. & Foote, S. J. New insights into the protective power of platelets in malaria infection. Commun. Integr. Biol. 6, e23653 (2013).
Brown, A. J., Sepuru, K. M., Sawant, K. V. & Rajarathnam, K. Platelet-derived chemokine CXCL7 dimer preferentially exists in the glycosaminoglycan-bound form: implications for neutrophil–platelet crosstalk. Front. Immunol. 8, 1248 (2017).
Moreau, C. A. et al. A unique profilin-actin interface is important for malaria parasite motility. PLoS Pathog. 13, e1006412 (2017).
Montgomery, R. R. The heads and the tails of malaria and VWF. Blood 127, 1081–1082 (2016).
Moxon, C. A. et al. Loss of endothelial protein C receptors links coagulation and inflammation to parasite sequestration in cerebral malaria in African children. Blood 122, 842–851 (2013).
Bauer, P. R., Van Der Heyde, H. C., Sun, G., Specian, R. D. & Granger, D. N. Regulation of endothelial cell adhesion molecule expression in an experimental model of cerebral malaria. Microcirculation 9, 463–470 (2002).
Favre, N. et al. Role of ICAM-1 (CD54) in the development of murine cerebral malaria. Microbes Infect. 1, 961–968 (1999).
Idro, R., Marsh, K., John, C. C. & Newton, C. R. J. Cerebral malaria: mechanisms of brain injury and strategies for improved neurocognitive outcome. Pediatr. Res. 68, 267–274 (2010).
Jakobsen, P. H. et al. Increased plasma concentrations of sICAM-1, sVCAM-1 and sELAM-1 in patients with Plasmodium falciparum or P. vivax malaria and association with disease severity. Immunology 83, 665–669 (1994).
Neva, F. A., Sheagren, J. N., Shulman, N. R. & Canfield, C. J. Malaria: host-defense mechanisms and complications. Ann. Intern. Med. 73, 295–306 (1970).
Normark, J. et al. Maladjusted host immune responses induce experimental cerebral malaria-like pathology in a murine Borrelia and Plasmodium co-infection model. PLoS ONE 9, e103295 (2014).
Woodfin, A. et al. ICAM-1-expressing neutrophils exhibit enhanced effector functions in murine models of endotoxemia. Blood 127, 898–907 (2016).
Berendt, A. R., Simmons, D. L., Tansey, J., Newbold, C. I. & Marsh, K. Intercellular adhesion molecule-1 is an endothelial cell adhesion receptor for Plasmodium falciparum. Nature 341, 57–59 (1989).
Ockenhouse, C. F. et al. Human vascular endothelial cell adhesion receptors for Plasmodium falciparum-infected erythrocytes: roles for endothelial leukocyte adhesion molecule 1 and vascular cell adhesion molecule 1. J. Exp. Med. 176, 1183–1189 (1992).
Miller, L. H., Baruch, D. I., Marsh, K. & Doumbo, O. K. The pathogenic basis of malaria. Nature 415, 673–679 (2002).
Schofield, L. & Grau, G. E. Immunological processes in malaria pathogenesis. Nat. Rev. Immunol. 5, 722–735 (2005).
Chakravorty, S. J. & Craig, A. The role of ICAM-1 in Plasmodium falciparum cytoadherence. Eur. J. Cell Biol. 84, 15–27 (2005).
Perkins, D. J. et al. Severe malarial anemia: innate immunity and pathogenesis. Int. J. Biol. Sci. 7, 1427–1442 (2011).
Ramos, T. N. et al. Cutting edge: the membrane attack complex of complement is required for the development of murine experimental cerebral malaria. J. Immunol. 186, 6657–6660 (2011).
Biryukov, S. & Stoute, J. A. Complement activation in malaria: friend or foe? Trends Mol. Med. 20, 293–301 (2014).
McDonald, C. R., Tran, V. & Kain, K. C. Complement activation in placental malaria. Front. Microbiol. 6, 1460 (2015).
Mendonça, V. R. R. et al. Association between the haptoglobin and heme oxygenase 1 genetic profiles and soluble CD163 in susceptibility to and severity of human malaria. Infect. Immun. 80, 1445–1454 (2012).
Nielsen, M. J., Møller, H. J. & Moestrup, S. K. Hemoglobin and heme scavenger receptors. Antioxid. Redox Signal. 12, 261–273 (2010).
Li, P. et al. Nested PCR detection of malaria directly using blood filter paper samples from epidemiological surveys. Malar. J. 13, 175 (2014).
WHO. Severe falciparum malaria. World Health Organization, Communicable Diseases Cluster. Trans. R. Soc. Trop. Med. Hyg. 94(Suppl. 1), S1–S90 (2000).
Horstick, O., Martinez, E., Guzman, M. G., Martin, J. L. S. & Ranzinger, S. R. WHO Dengue Case Classification 2009 and its usefulness in practice: an expert consensus in the Americas. Pathog. Glob. Health 109, 19–25 (2015).
Tuck, M. K. et al. Standard operating procedures for serum and plasma collection: early detection research network consensus statement standard operating procedure integration working group. J. Proteome Res. 8, 113–117 (2009).
Keshishian, H. et al. Multiplexed, quantitative workflow for sensitive biomarker discovery in plasma yields novel candidates for early myocardial injury. Mol. Cell Proteomics 14, 2375–2393 (2015).
Plubell, D. L. et al. Extended multiplexing of tandem mass tags (TMT) labeling reveals age and high fat diet specific proteome changes in mouse epididymal adipose tissue. Mol. Cell Proteomics 16, 873–890 (2017).
Huang, T., Choi, M., Hao, S. & Vitek, O. MSstatsTMT: Protein Significance Analysis in Shotgun Mass Spectrometry-Based Proteomic Experiments with Tandem Mass Tag (TMT) Labeling (Bioconductor version: Release (3.9)). https://doi.org/10.18129/B9.bioc.MSstatsTMT (2019).
Huang, T. et al. MSstatsTMT: statistical detection of differentially abundant proteins in experiments with isobaric labeling and multiple mixtures. Mol. Cell Proteomics, https://doi.org/10.1074/mcp.RA120.002105 (2020).
Ayers, K. L. & Cordell, H. J. SNP selection in genome-wide and candidate gene studies via penalized logistic regression. Genet. Epidemiol. 34, 879–891 (2010).
Cawley, G. C. & Talbot, N. L. C. On over-fitting in model selection and subsequent selection bias in performance evaluation. J. Mach. Learn. Res. 11, 2079–2107 (2010).
De Mol, C., De Vito, E. & Rosasco, L. Elastic-net regularization in learning theory. J. Complex. 25, 201–230 (2009).
Kirpich, A. et al. Variable selection in omics data: a practical evaluation of small sample sizes. PLoS ONE 13, e0197910 (2018).
Lu, F. & Petkova, E. A comparative study of variable selection methods in the context of developing psychiatric screening instruments. Stat. Med. 33, 401–421 (2014).
Marafino, B. J., John Boscardin, W. & Adams Dudley, R. Efficient and sparse feature selection for biomedical text classification via the elastic net: application to ICU risk stratification from nursing notes. J. Biomed. Inform. 54, 114–120 (2015).
Shen, L. et al. in Multimodal Brain Image Analysis (eds Liu, T., Shen, D., Ibanez, L. & Tao, X.) Vol. 7012, 27–34 (Springer Berlin Heidelberg, 2011).
Sirimongkolkasem, T. & Drikvandi, R. On regularisation methods for analysis of high dimensional data. Ann. Data Sci. 6, 737–763 (2019).
Zhu, X. et al. Predictive model of the first failure pattern in patients receiving definitive chemoradiotherapy for inoperable locally advanced non-small cell lung cancer (LA-NSCLC). Radiat. Oncol. 15, 43 (2020).
Rhenman, A. et al. Which set of embryo variables is most predictive for live birth? A prospective study in 6252 single embryo transfers to construct an embryo score for the ranking and selection of embryos. Hum. Reprod. 30, 28–36 (2015).
Vabalas, A., Gowen, E., Poliakoff, E. & Casson, A. J. Machine learning algorithm validation with a limited sample size. PLoS ONE 14, e0224365 (2019).
Mi, H., Muruganujan, A., Casagrande, J. T. & Thomas, P. D. Large-scale gene function analysis with the PANTHER classification system. Nat. Protoc. 8, 1551–1566 (2013).
Fabregat, A. et al. The reactome pathway knowledgebase. Nucleic Acids Res. 44, D481–D487 (2016).
MacLean, B. et al. Skyline: an open source document editor for creating and analyzing targeted proteomics experiments. Bioinformatics 26, 966–968 (2010).
Perez-Riverol, Y. et al. PRIDE Inspector Toolsuite: moving toward a universal visualization tool for proteomics data standard formats and quality assessment of ProteomeXchange datasets. Mol. Cell Proteomics 15, 305–317 (2016).
Farrah, T. et al. PASSEL: the PeptideAtlas SRM experiment library. Proteomics 12, 1170–1175 (2012).
Kumar, V. Multiplexed quantitative proteomics provides mechanistic cues for malaria severity and complexity. https://doi.org/10.5281/zenodo.4022347 (2020).

Acknowledgements

The active support of Suman Ghosh from the Department of Medicine, Medical College Hospital Kolkata, and Dharmendra Rojh from the Department of Medicine, Malaria Research Center, S.P. Medical College, Bikaner, in the clinical sample collection process is gratefully acknowledged. We are also grateful to Sandip K. Patel, Saicharan Ghantasala, Nikita Gahoi, and Apoorva Venkatesh from the Department of Biosciences and Bioengineering, Indian Institute of Technology Bombay, for their insights and suggestions regarding the quantitative proteomics experiments. We also acknowledge Ting Huang (computational biology and proteomics, Northeastern University) and Rohan Agarwal and Saurabh Rajguru (Department of Chemical Engineering, Indian Institute of Technology Bombay) for their kind support in data analysis using MSstatsTMT. We acknowledge the MASSFIITB facility at IIT Bombay, supported by the Department of Biotechnology (BT/PR13114/INF/22/206/2015), for all MS-related experiments. This work was supported by Department of Biotechnology, India grants No. BT/PR12174/MED/29/888/2014 and BT/INF/22/SP23026/2017 and by the Ministry of Human Resource Development, Government of India (MHRD-UAY Phase-II Project IITB_001) to S.S. V.K. and S.A. were supported by IIT Bombay fellowships.

Author information

These authors contributed equally: Sandipan Ray, Shalini Aggarwal. Present address of Sandipan Ray: Department of Systems Pharmacology and Translational Therapeutics, University of Pennsylvania, Philadelphia, PA 19104, USA.

Affiliations: Department of Biosciences and Bioengineering, Indian Institute of Technology Bombay, Mumbai, 400076, India (Vipin Kumar, Sandipan Ray, Shalini Aggarwal, Deeptarup Biswas, Manali Jadhav, Swati Patankar & Sanjeeva Srivastava); Department of Mathematics, Indian Institute of Technology Bombay, Mumbai, 400076, India (Radha Yadav & Sanjeev V. Sabnis); Medicine Department, Medical College Hospital Kolkata, 88, College Street, Kolkata, 700073, India (Soumaditya Banerjee & Arunansu Talukdar); Department of Medicine, Malaria Research Centre, S.P. Medical College, Bikaner, 334003, India (Sanjay K. Kochar); Dr. L H Hiranandani Hospital, Mumbai, 400076, India (Suvin Shetty); Sehgal Path Lab, Mumbai, 400053, India (Kunal Sehgal).

Contributions: V.K., S.S., S.P., and S.R. conceived and designed the experiments. V.K., M.J., and S.A. performed the MS-based quantitative proteomics experiments, and the data were analyzed by V.K., S.A., S.R., R.Y., and S.V.S. Bioinformatics analyses were performed by V.K., S.A., and D.B. Clinical samples and clinicopathological details were collected by S.B., A.T., and S.K. S.S. supervised the entire study and secured funding. The manuscript was written by V.K., S.R., and S.S. with input from all authors.
All authors agreed on the interpretation of data and approved the final version of the manuscript. Correspondence to Sanjeeva Srivastava. Supplementary Data 1 Kumar, V., Ray, S., Aggarwal, S. et al. Multiplexed quantitative proteomics provides mechanistic cues for malaria severity and complexity. Commun Biol 3, 683 (2020). https://doi.org/10.1038/s42003-020-01384-4 Communications Biology ISSN 2399-3642 (online)
CommonCrawl
Living Reviews in Solar Physics 16, 3 (2019)

Flare-productive active regions

Shin Toriumi and Haimin Wang

Strong solar flares and coronal mass ejections, here defined not only as the bursts of electromagnetic radiation but as the entire process in which magnetic energy is released through magnetic reconnection and plasma instability, emanate from active regions (ARs) in which high magnetic non-potentiality resides in a wide variety of forms. This review focuses on the formation and evolution of flare-productive ARs from both observational and theoretical points of view. Starting from a general introduction of the genesis of ARs and solar flares, we give an overview of the key observational features during the long-term evolution in the pre-flare state, the rapid changes in the magnetic field associated with the flare occurrence, and the physical mechanisms behind these phenomena. Our picture of flare-productive ARs is summarized as follows: subject to the turbulent convection, the rising magnetic flux in the interior deforms into a complex structure and gains high non-potentiality; as the flux appears on the surface, an AR with large free magnetic energy and helicity is built, which is represented by \(\delta \)-sunspots, sheared polarity inversion lines, magnetic flux ropes, etc.; the flare occurs when sufficient magnetic energy has accumulated, and the drastic coronal evolution affects magnetic fields even in the photosphere. We show that the improvement of observational instruments and modeling capabilities has significantly advanced our understanding in the last decades. Finally, we discuss the outstanding issues and future perspective and further broaden our scope to the possible applications of our knowledge to space-weather forecasting, extreme events in history, and corresponding stellar activities.

Keywords: Active regions: magnetic fields · Active regions: structure · Coronal mass ejections: initiation and propagation · Flares: dynamics · Flares: models · Magnetohydrodynamics

The online version of this article (https://doi.org/10.1007/s41116-019-0019-7) contains supplementary material, which is available to authorized users.

1 Introduction

Ever since sunspot observations with telescopes started in the beginning of the seventeenth century, vast amounts of observational data have been collected. Triggered by the momentous discovery of solar flares by Carrington (1859) and Hodgson (1859) and by the report of the existence of magnetic fields in sunspots by Hale (1908), the close relationship between the production of solar flares and the magnetism of active regions (ARs) has been extensively argued. Advances in ground-based and space-borne telescopes have accelerated this trend. In recent decades, new instruments such as Hinode (Kosugi et al. 2007), the Solar Dynamics Observatory (SDO; Pesnell et al. 2012), and the Goode Solar Telescope (GST; Cao et al. 2010) have delivered rich observational information and enabled us to study flares and ARs in unprecedented detail. Moreover, the ever-increasing capability of numerical simulations performed on supercomputers has improved the advanced modeling of these phenomena and deepened our understanding of their physical background.

From experience we know that there are flare-productive and flare-quiet ARs. Then, some of the key questions are:

What are the important morphological and magnetic properties of the flare-productive ARs that differentiate these from flare-quiet ARs?
What are the key observational features that are created during the course of large-scale, long-term AR evolution?

What subsurface dynamics and physical mechanisms produce such observed properties and features?

What rapid changes occur in magnetic fields during the flare eruptions?

The understanding of the flaring of ARs is not only motivated by academic curiosity but also desired by the practical demand of space weather forecasts, which is growing more rapidly than ever before. Needless to say, the flaring activity of our host star directly affects the condition of the near-Earth environment through emitting coronal mass ejections (CMEs), electromagnetic radiation, and high energy particles. As detections of stellar flares and starspots on solar-like stars continue to increase, it is a key remaining issue for solar physicists to reveal the conditions of strong flare eruptions based on the rich information of solar ARs and flares.

Therefore, we set as the primary aim of this review article the summary of the current understanding of the formation and evolution of flare-productive ARs that has been brought about through decades of effort in observational and theoretical investigations. For this aim, we first highlight key observational properties of flaring ARs during the course of long-term and large-scale evolution. We then proceed to the theoretical studies that try to understand the physical origins of these observed properties. We switch our focus to the drastic evolution during the main stage of the flare and discuss the possibility that the changes in coronal fields affect the photospheric conditions. After we summarize what we have learned so far, especially in the age of Hinode, SDO, and GST, our discussion extends further to the possibilities of space weather forecasting and historical data analysis, and even to the connection with stellar flares and CMEs. Although we carefully avoid stepping into the details too much, we provide references to excellent reviews, since the main topic of this article, i.e., the development of flaring ARs, is closely related to a wide spectrum of phenomena from solar dynamo, flux emergence and AR formation to sunspots, flares and CMEs.

The rest of this article is structured as follows. Section 2 provides the general introduction to the AR formation, solar flares and CMEs, and their relationships. Section 3 reviews the key morphological and magnetic properties of flare-productive ARs that are observed during the long-term and large-scale evolution. Then, in Sect. 4, we show the theoretical and numerical attempts to model and understand how these properties are created. Section 5 is dedicated to the discussion on rapid changes associated with flare eruptions. Finally, the summary and discussion are given in Sects. 6 and 7, respectively.

Fig. 1 Huge flare-productive AR NOAA 12192. Images are obtained by the SDO and Hinode satellites as well as the Solar Flare Telescope at NAOJ.

2 Active regions and solar flares

Figure 1 shows example images of the Sun. In the southern hemisphere, one may find a large sunspot group (top left: surrounded by a box), in which the magnetic field is strongly concentrated (top middle: magnetogram by SDO's Helioseismic and Magnetic Imager (HMI); Scherrer et al. 2012; Schou et al. 2012) and the bright loop structures are clearly seen in the EUV image (top right: 171 Å channel of SDO's Atmospheric Imaging Assembly (AIA); Lemen et al. 2012).
This region, numbered 12192 by the National Oceanic and Atmospheric Administration (NOAA), appeared in October 2014 as one of the largest spot groups ever observed, with a maximum spot area of 2750 MSH (millionths of the solar hemisphere), and produced numerous solar flares including six X-class events on the Geostationary Operational Environmental Satellite (GOES) scale. These centers of activity are called ARs (see van Driel-Gesztelyi and Green 2015, for the history of the definition of ARs). In the simplest cases, ARs take the form of a simple bipole structure. However, as the detailed observation by Hinode's Solar Optical Telescope (SOT; Tsuneta et al. 2008) shows, ARs are sometimes composed of a number of magnetic elements of various size scales (bottom panels), and the flare productivity is known to increase with the "complexity" of the ARs. In this section, we introduce the present knowledge of how the ARs and sunspots are generated, how they become unstable and produce flares and CMEs, and how these features, i.e., the spots and flares, are related.

2.1 Flux emergence and AR formation

It is generally thought that ARs are created as a result of the emergence of toroidal magnetic flux from the deeper convection zone (flux emergence: Parker 1955; Babcock 1961). In most dynamo models (Charbonneau 2010; Brun and Browning 2017), the toroidal flux is generated and amplified by turbulence and shear in the tachocline, the thin shear layer at the base of the solar convection zone. There are alternative possibilities such as the dynamo working in the near-surface shear layer (Brandenburg 2005) and the amplification of advected horizontal fields by convection (Stein and Nordlund 2012). Magnetic flux systems created through these processes emerge to the solar surface and eventually generate ARs. Below we introduce the emergence processes in the interior and to the atmosphere from both theoretical and observational viewpoints. For more comprehensive discussion, interested readers may also consult the review papers by Fisher et al. (2000), Charbonneau (2010) and Brun and Browning (2017), which are specialized in magnetism in the solar interior, Zwaan (1985) and van Driel-Gesztelyi and Green (2015) for observational properties, and Archontis (2008), Fan (2009a), Cheung and Isobe (2014) and Schmieder et al. (2014), which elaborate on theories and models of flux emergence.

2.1.1 Emergence in the interior: theory

Parker (1955) demonstrated that a horizontal flux tube, a horizontal bundle of magnetic field lines, will rise due to magnetic buoyancy. Let us assume pressure balance between the inside and outside of a thin flux tube,

$$\begin{aligned} p_{\mathrm{e}}=p_{\mathrm{i}}+\frac{B^{2}}{8\pi }, \end{aligned}$$

where \(p_{\mathrm{i}}\) and \(p_{\mathrm{e}}\) are the pressure inside and outside the flux tube, whose average field strength is B. When the plasma is in local thermodynamic equilibrium, i.e., \(T_{\mathrm{e}}=T_{\mathrm{i}}=T\), the above equation can be rewritten as

$$\begin{aligned} \rho _{\mathrm{e}}=\rho _{\mathrm{i}}+\frac{B^{2}}{8\pi }\frac{m}{k_{\mathrm{B}}T}, \end{aligned}$$

where \(\rho \) is the density, m the mean molecular mass, and \(k_{\mathrm{B}}\) the Boltzmann constant.
It is obvious from this equation that the flux tube is buoyant (\(\rho _{\mathrm{i}}<\rho _{\mathrm{e}}\)), and the buoyancy per unit volume is

$$\begin{aligned} f_{\mathrm{B}}=(\rho _{\mathrm{e}}-\rho _{\mathrm{i}})g =\frac{B^{2}}{8\pi }\frac{mg}{k_{\mathrm{B}}T} =\frac{B^{2}}{8\pi H_{\mathrm{p}}}, \end{aligned}$$

where \(H_{\mathrm{p}}=k_{\mathrm{B}}T/(mg)\) is the local pressure scale height. In most parts of the interior, the plasma-\(\beta \) (\(\equiv 8\pi p/B^{2}\)) is (much) greater than unity. For a magnetic flux at the base of the convection zone with a field strength of \(10^{5}\, \mathrm{G}\), which is 10 times stronger than the field strength that is in equipartition with the local kinetic energy density, the plasma-\(\beta \) is of the order of \(10^{5}\) (e.g., Fan 2009a). In such a situation, the rising flux can still be affected by external flow fields of thermal convection.

A large number of numerical models have been developed and have revealed various physical mechanisms of flux emergence and observed AR characteristics. For example, magnetohydrodynamic (MHD) simulations show that a horizontal magnetic layer at the base of the convection zone in mechanical equilibrium can break up and develop into buoyant magnetic flux tubes through the magnetic buoyancy instability (Cattaneo and Hughes 1988; Matthews et al. 1995; Fan 2001a). In order to keep the flux tube coherent, it was suggested that the flux tube needs twist, i.e., the azimuthal component of the magnetic field should wrap around the tube's axis (Parker 1979a; Longcope et al. 1996; Moreno-Insertis and Emonet 1996). Abbett et al. (2000) found that, in 3D simulations, the amount of twist necessary for the tube to retain its coherency is reduced substantially compared to the 2D limit. The effect of the Coriolis force on the rising flux tube, including the asymmetry between the leading and following spots of bipolar ARs, has been studied by simulations with the assumption that the flux tube is thin enough that the cross-sectional evolution can be neglected (thin flux tube approximation: e.g., Spruit 1981; Choudhuri and Gilman 1987; Fan et al. 1993; D'Silva and Choudhuri 1993; Caligari et al. 1995). The emergence in the convective interior and its interaction with the flow fields have been considered in simulations that apply the anelastic MHD approximation (e.g., Gough 1969; Fan et al. 2003; Fan 2008; Jouve and Brun 2009; Nelson et al. 2011; Weber et al. 2011; Jouve et al. 2013). The top panels of Fig. 2 illustrate the anelastic simulation by Nelson et al. (2013), who modeled the buoyant rise of \(\varOmega \)-shaped loops generated self-consistently from a bundle of toroidal flux (magnetic wreath).

However, these assumptions become inappropriate in the uppermost convection zone above a depth of about \(20\, \mathrm{Mm}\) (Fan 2009a). This difficulty motivated Toriumi and Yokoyama (2010, 2011) to conduct fully-compressible MHD simulations that seamlessly connect the different atmospheric layers from a depth of \(40\, \mathrm{Mm}\) in the interior to the solar corona. They found that, as illustrated in the 3D models in Fig. 2d–f, the rising flux tube, starting at \(-\,20\, \mathrm{Mm}\), temporarily slows down and undergoes horizontal expansion (pancaking) while generating escaping plasma flows before it resumes emergence into the photosphere and beyond. This process, termed "two-step emergence," is widely observed in larger-scale models from the interior to the atmosphere (see Sect. 3.3.5 of Cheung and Isobe 2014).
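To put numbers on these estimates, the following minimal Python sketch (ours, purely illustrative) evaluates the pressure scale height, the buoyant force \(f_{\mathrm{B}}=B^{2}/(8\pi H_{\mathrm{p}})\), and the plasma-\(\beta \) for a \(10^{5}\, \mathrm{G}\) tube. The interior values assumed near the base of the convection zone (temperature, density, gravity, mean molecular mass) are typical model numbers of our choosing, not values quoted in this article.

    import math

    # Assumed conditions near the base of the convection zone (illustrative).
    k_B = 1.38e-16        # Boltzmann constant [erg/K]
    m   = 0.6 * 1.67e-24  # mean molecular mass [g]
    T   = 2.2e6           # temperature [K]
    rho = 0.2             # density [g/cm^3]
    g   = 5.2e4           # gravitational acceleration [cm/s^2]
    B   = 1.0e5           # field strength of the flux tube [G]

    p    = rho * k_B * T / m             # gas pressure (ideal gas law)
    beta = 8.0 * math.pi * p / B**2      # plasma-beta = 8*pi*p/B^2
    H_p  = k_B * T / (m * g)             # pressure scale height k_B*T/(m*g)
    f_B  = B**2 / (8.0 * math.pi * H_p)  # buoyancy per unit volume
    a_B  = f_B / rho                     # resulting acceleration

    print(f"plasma-beta ~ {beta:.1e}")              # ~1e5, as quoted above
    print(f"H_p ~ {H_p / 1e8:.0f} Mm")              # ~60 Mm
    print(f"buoyant acceleration ~ {a_B:.2f} cm/s^2")

The resulting acceleration of only a fraction of a \(\mathrm{cm\ s}^{-2}\) illustrates why, at \(\beta \sim 10^{5}\), convective flows can still deflect the rising flux.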
As an alternative approach, Abbett and Fisher (2003) and Chen et al. (2017) joined global-scale anelastic models and local MHD simulations from the near-surface layer upwards and investigated a fuller history of emergence.

Fig. 2 a–c Emergence of buoyant \(\varOmega \)-loops from a magnetic wreath self-consistently generated in an anelastic dynamo model. Panels b and c demonstrate the local evolution within a domain extending from \(0.72\,R_{\odot }\) (\(-\,195\, \mathrm{Mm}\) from the solar surface) to \(0.96\,R_{\odot }\) (\(-\,28\, \mathrm{Mm}\)), with volume rendering indicating the toroidal field strength. Image reproduced by permission from Nelson et al. (2013), copyright by AAS. d–f Flux emergence simulation in a single computational domain that seamlessly covers from the convection zone to the corona with a vertical extent from \(-\,40\) to \(+\,50\, \mathrm{Mm}\) (here shown up to \(+\,20\, \mathrm{Mm}\)). The rising flux tube, initially placed at \(-\,20\, \mathrm{Mm}\), decelerates and expands horizontally before it appears on the photosphere and erupts into the corona. Normalizing units are \(H_{0}=200\, \mathrm{km}\) for length, \(\tau _{0}=25\, \mathrm{s}\) for time, and \(B_{0}=300\, \mathrm{G}\) for magnetic field strength. Image reproduced by permission from Toriumi and Yokoyama (2012), copyright by ESO.

2.1.2 Emergence in the interior: observation

Several attempts have been made to detect the subsurface emerging magnetic flux using local helioseismology (see the review by Gizon and Birch 2005). One of the earliest works, Braun (1995), reported on the p-mode scattering starting about 2 days before the spot formation in the emerging AR NOAA 5247. The following case studies mainly focused on the wave-speed perturbation and subsurface flow fields before the flux appearance: Chang et al. (1999), Jensen et al. (2001), Komm et al. (2008), Kosovichev and Duvall (2008), Zharkov and Thompson (2008) and Kosovichev (2009). However, in most cases, it was difficult to detect significant seismic signatures associated with the emerging flux, probably because of the fast rising motion and accordingly short observation time, which leads to a low signal-to-noise ratio.

A recent observation by Ilonidis et al. (2011), however, detected strong seismic perturbations in NOAA 10488 at depths between 42 and 75 Mm, up to 2 days before the photospheric flux reaches its maximum flux growth rate. The estimated rising speed from 65 Mm to the surface is about \(0.6\, \mathrm{km\ s}^{-1}\), which corresponds to a rise time of roughly \(10^{5}\, \mathrm{s}\) (about 30 h), consistent with a detection up to 2 days in advance (see also Braun 2012; Ilonidis et al. 2013; Kholikov 2013; Kosovichev et al. 2018). Statistical studies by Komm et al. (2009, 2011b, 2012) showed indications of upflows, rotations, and increased vorticity in the subsurface layer. Leka et al. (2013), Birch et al. (2013) and Barnes et al. (2014) analyzed more than 100 emerging regions and found that there are statistically significant seismic signatures in average subsurface flows and the apparent wave speed, at least one day prior to the emergence, although their individual samples did not show discernible signal greater than the noise level. Other possible precursors of flux emergence on the surface are the reduction in acoustic oscillation power (Hartlep et al. 2011; Toriumi et al. 2013b), f-mode amplification (Singh et al. 2016), and horizontal divergent flows (Toriumi et al. 2012, 2014a).

2.1.3 Birth of ARs: observation

As the rising magnetic flux reaches the photosphere, it starts to build up an AR if the flux is sufficiently large.
Figure 3a and its accompanying movie show various aspects of a newly emerging flux region. In a magnetogram (Stokes-V/I map), the emerging flux is scattered throughout the region as a number of small-scale magnetic elements of positive and negative polarities. These elements merge with and cancel each other in the middle of the region and gradually form pores and, if the emerged flux is sufficient, they eventually create sunspots (Zwaan 1978). Zwaan (1985) introduced the hierarchy of magnetic elements. Sunspots with a flux of \(5\times 10^{20}\, \mathrm{Mx}\) or more have a penumbra and the umbral field is 2900–\(3300\, \mathrm{G}\), sometimes exceeding \(4000\, \mathrm{G}\), while the flux of pores is \(2.5\times 10^{19}\)–\(5\times 10^{20}\, \mathrm{Mx}\) and the field strength is \({\sim }\,2000\, \mathrm{G}\). If the flux is less than \(10^{20}\, \mathrm{Mx}\), the emerging regions do not develop beyond ephemeral regions (Harvey and Martin 1973).

Fig. 3 a "Textbook" flux emergence in AR NOAA 12401 observed simultaneously by Hinode, the Interface Region Imaging Spectrograph (IRIS; De Pontieu et al. 2014), and SDO (2015 August 19). From top left to bottom right are the IRIS slit-jaw image of 1400 Å, raster-scan intensitygram at the Mg ii k line core (k3: 2796 Å), intensitygram at the Mg ii triplet line (2798 Å), Dopplergram produced from the Si iv 1403 Å spectrum (blue, white, and red correspond to \(-\,10\), 0, and \(+\,40\, \mathrm{km\ s}^{-1}\), respectively), SDO/AIA 1600 Å, Hinode/SOT/FG Ca ii H, SOT/SP Stokes-V/I, and SDO/HMI intensitygram. The white arrow in the top left panel indicates the direction of the disk center. In the accompanying movie, the Ca ii H and Stokes-V/I maps are replaced by the AIA 1700 Å image and HMI magnetogram, respectively. (For movie see Electronic Supplementary Material.) Image and movie reproduced by permission from Toriumi et al. (2017a), copyright by AAS. b Schematic model of flux emergence. Image reproduced by permission from Shibata et al. (1989), copyright by AAS. The original version of this illustration appeared in Shibata's review note in 1979.

From the observation of repeated emergence and cancellation of photospheric magnetic elements, Strous et al. (1996) and Strous and Zwaan (1999) suggested that this behavior is due to the rising of undulatory (sea-serpent) field lines. Georgoulis et al. (2002), Bernasconi et al. (2002) and Pariat et al. (2004) suggested that Ellerman bombs, the bursty intensity enhancements in H\(\alpha \) line wings (Ellerman 1917), are located at the dipped parts, at which magnetic reconnection takes place to disconnect emerged flux from un-emerged, mass-laden parts of the flux tube (resistive emergence model). UV bursts in the transition region lines are similarly found at the cancellation sites (Peter et al. 2014; Young et al. 2018). Brightenings seen in 1400 Å, 1600 Å, and Ca ii H of Fig. 3a correspond to Ellerman bombs and UV bursts.

Soon after the magnetic flux shows up, an arch filament system (AFS) appears as parallel dark fibrils, probably the manifestation of rising magnetic fields (Bruzek 1967, 1969, see the Mg ii k3 image of Fig. 3a). Bipolar plages are observed in the chromospheric Ca ii H and K lines at the footpoints of the AFS (Kawaguchi and Kitai 1976, brightenings above the pores in Fig. 3a). The Hinode analysis of AFS by Otsuji et al. (2007, 2010) shows the horizontal expansion and upward acceleration of emerging flux, which strongly supports the "two-step emergence" scenario (Sect. 2.1.1).
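The element hierarchy quoted above (Zwaan 1985; Harvey and Martin 1973) can be written down as a simple lookup; the toy Python function below is our own illustrative encoding (the thresholds come from the text, while the function itself and its handling of the overlapping pore/ephemeral-region ranges are our assumptions).

    def classify_magnetic_element(flux_mx):
        """Toy classifier for the Zwaan (1985) hierarchy (illustrative)."""
        if flux_mx >= 5e20:
            return "sunspot (penumbra; umbral field 2900-3300 G, at times >4000 G)"
        elif flux_mx >= 1e20:
            return "pore (field strength ~2000 G)"
        elif flux_mx >= 2.5e19:
            return "small pore or ephemeral region (overlapping range)"
        else:
            return "ephemeral region"

    for flux in (1e19, 5e19, 2e20, 1e21):
        print(f"{flux:.1e} Mx -> {classify_magnetic_element(flux)}")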
The observational characteristics of emerging flux regions are schematically summarized by Shibata et al. (1989) in the illustration in Fig. 3b.

2.1.4 Birth of ARs: theory

The MHD modeling of flux emergence from the photospheric layer to the corona was pioneered by Shibata et al. (1989), who simulated the 2D emergence due to the Parker instability, the undular mode of the magnetic buoyancy instability (Parker 1979a). They successfully reproduced the observed dynamical features such as the rising motion of the AFS and the strong downflow along the field lines. Since then, the flux emergence process has been widely studied both in 2D and 3D (e.g., Shibata et al. 1990; Kaisig et al. 1990; Nozawa et al. 1992; Magara 2001; Matsumoto and Shibata 1992; Matsumoto et al. 1993; Fan 2001b; Magara and Longcope 2001; Archontis et al. 2004; Isobe et al. 2005; Murray et al. 2006).

Fig. 4 3D flux emergence simulation from around the photospheric height. a, b Selected field lines of the emerging flux tube. c–e Vertical magnetic field \(B_{z}\), the horizontal magnetic field (black arrows), and the horizontal velocity field (red arrows). f Top-down view of panel b with vertical velocity \(v_{z}\). g–i Line-of-sight (LOS) magnetic field, horizontal velocity, and H\(\alpha \) image of NOAA AR 5617, respectively. Image reproduced by permission from Fan (2001b), copyright by AAS.

Figure 4 shows a typical example of flux emergence simulations by Fan (2001b), which models the buoyant rise of a twisted flux tube from just beneath the photosphere (\(-\,1.5\, \mathrm{Mm}\)) and upwards. The initial flux tube, which is horizontal and endowed with a density deficit at the middle with respect to the surroundings, starts rising due to the magnetic buoyancy and deforms into an \(\varOmega \)-loop (panel a). As the flux tube penetrates into the upper atmosphere, a yin–yang pattern of positive and negative polarities (vertical field \(B_{z}\)) is produced in the photosphere (panels c–e), which resembles the polarity layout in the actual AR (panel g). Due to the initial twist, magnetic field lines in the atmosphere show a twisted structure, which also mimics the observed helical nature of the AFS (panel i).

Forbes and Priest (1984) and Yokoyama and Shibata (1995, 1996) investigated the interaction between emerging flux and the preexisting coronal loop (the model proposed by Heyvaerts et al. 1977) and successfully reproduced jet ejections (see also Miyagoshi and Yokoyama 2003; Moreno-Insertis et al. 2008; Nishizuka et al. 2008; Murray et al. 2009; Archontis et al. 2010; Takasao et al. 2013; Moreno-Insertis and Galsgaard 2013). Magnetic flux cancellation at the emerging undular fields and the resultant production of Ellerman bombs were modeled by Isobe et al. (2007) in 2D and Archontis and Hood (2009) in 3D.

With the growing ability of computation resources, simulations have become more realistic and now take into account the effect of thermal convection on flux emergence. For instance, Cheung et al. (2008) performed 3D radiative MHD simulations of the emergence of an initially horizontal flux tube in the granular convection. They found that, due to vigorous convective flows at the top of the convection zone, the rising tube is highly structured by the surface granulation pattern, which is well in agreement with the Hinode/SOT observations.
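The initial condition behind many of these calculations, including the Fan (2001b) run shown in Fig. 4, is simple to state: a horizontal tube with a Gaussian axial field profile and a uniform twist rate q, so that the azimuthal field is \(B_{\theta }=q\,r\,B_{\mathrm{axial}}\). The Python sketch below is our illustration of this common prescription; the specific parameter values are ours and not those of any particular published run.

    import numpy as np

    B0 = 3000.0   # axial field strength on the tube axis [G] (illustrative)
    a  = 1.5e8    # Gaussian radius of the tube [cm]
    q  = 1.0 / a  # uniform twist rate [1/cm]; q*a ~ 1 is a moderate twist

    def tube_field(y, z, y0=0.0, z0=-1.5e8):
        """(Bx, By, Bz) of a twisted horizontal tube with its axis along x."""
        ry, rz = y - y0, z - z0
        r = np.hypot(ry, rz)
        Bx = B0 * np.exp(-(r / a) ** 2)  # Gaussian axial profile
        Btheta = q * r * Bx              # uniform-twist azimuthal field
        with np.errstate(invalid="ignore", divide="ignore"):
            By = np.where(r > 0, -Btheta * rz / r, 0.0)  # azimuthal unit
            Bz = np.where(r > 0,  Btheta * ry / r, 0.0)  # vector (-rz, ry)/r
        return Bx, By, Bz

    # Sample the tube cross-section on a small grid below the photosphere.
    y = np.linspace(-3e8, 3e8, 5)
    z = np.linspace(-3e8, 0.0, 5) - 1.5e8
    Y, Z = np.meshgrid(y, z)
    Bx, By, Bz = tube_field(Y, Z)
    print("max |Bx| =", Bx.max(), "G; max |B_theta| =", np.hypot(By, Bz).max(), "G")

The twist rate q is the knob that decides whether the tube stays coherent as it rises (Sect. 2.1.1) and, if too large, whether it becomes kink-unstable (Sect. 4.1.1).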
The series of numerical simulations of similar setups consistently showed that the granular cells are expanded and elongated as the horizontal flux approaches and that the surface convection makes undular field lines (dipped fields at the downflow lanes), which reconnect with each other and drain down the plasma from the surface layer (Abbett 2007; Cheung et al. 2007; Isobe et al. 2008; Martínez-Sykora et al. 2008, 2009; Tortosa-Andreu and Moreno-Insertis 2009; Fang et al. 2010). The realistic modeling by Archontis and Hansteen (2014) and Hansteen et al. (2017) successfully reproduced the small-scale reconnection events at the dipped fields and showed that they can be observed as Ellerman bombs or UV bursts depending on the reconnection heights. Throughout these processes, the magnetic elements grow larger and, eventually, the sunspots are formed (Cheung et al. 2010; Rempel and Cheung 2014).

2.2 Solar flares and CMEs

In most astronomical contexts, the term "flare" refers to the abrupt increase in intensity of electromagnetic waves, and the flares on the Sun are detected over a wide range of the spectrum such as X-rays, (E)UV, radio, and even white light. In fact, the discovery of flares was made as a remarkable intensity enhancement in white light (the Carrington event on 1859 September 1; Carrington 1859; Hodgson 1859). Figure 5 is the original whole-disk drawing by Carrington, which shows a large spot group that produced the strong white light flare. Nowadays, flare strengths are grouped by the peak soft X-ray flux over 1–8 Å, measured by GOES, into the logarithmic classes A, B, C, M, X, corresponding to \(10^{-8}\), \(10^{-7}\), \(10^{-6}\), \(10^{-5}\), \(10^{-4}\, \mathrm{W\ m}^{-2}\) at Earth, respectively, so X1.2 and M3.4 represent \(1.2\times 10^{-4}\, \mathrm{W\ m}^{-2}\) and \(3.4\times 10^{-5}\, \mathrm{W\ m}^{-2}\), respectively. The Carrington flare is arguably considered as the most powerful event ever, with an estimated magnitude of X45 (\(\pm \, 5\)) and a bolometric energy of \(5\times 10^{32}\, \mathrm{erg}\) (Tsurutani et al. 2003; Cliver and Svalgaard 2004; Boteler 2006; Cliver and Dietrich 2013).

Fig. 5 Carrington's original whole-disk drawing on 1859 September 1. Carrington (1859) and Hodgson (1859) observed the white light flare in the large sunspot region in the northern hemisphere. This manuscript is currently preserved in the archive of the Royal Astronomical Society (RAS) as RAS MSS Carrington 3.2: Drawings of sunspots, showing the whole of the Sun's disk, v.2, f.313a. For a better visualization, the thickness of the limb and axes is enhanced. Image reproduced by permission from Hayakawa et al. (2018), copyright by AAS and RAS.

Fig. 6 X3.4-class flare in AR NOAA 10930. The panels show the full-disk magnetogram from the Michelson Doppler Imager (MDI) aboard the Solar and Heliospheric Observatory (SOHO), GOES soft X-ray light curves for 1–8 Å (red) and 0.5–4.0 Å (blue), and the Hinode/SOT/FG Ca ii H image (see also the accompanying movie), whose FOV is indicated by a yellow box in the magnetogram. Hinode image courtesy of Joten Okamoto (ISAS/JAXA and NAOJ). The bottom panel displays the computationally extrapolated magnetic field lines before the X3.4 flare using the NLFFF method. The red isosurface shows where the electric current is highest. Image reproduced by permission from Schrijver et al. (2008), copyright by AAS.

Solar flares are now considered as the conversion process of (free) magnetic energy to kinetic and thermal energy as well as particle acceleration, most probably through magnetic reconnection.
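As an aside, the GOES classification described above is simple logarithmic arithmetic, and a short sketch makes the convention unambiguous. This Python snippet is our own illustration, not an official GOES tool.

    # Class letters and their 1-8 A peak-flux floors [W/m^2], as quoted above.
    GOES_CLASSES = [("A", 1e-8), ("B", 1e-7), ("C", 1e-6), ("M", 1e-5), ("X", 1e-4)]

    def flux_to_class(flux):
        """Convert a GOES 1-8 A peak flux [W/m^2] into a class string."""
        letter, floor = GOES_CLASSES[0]
        for l, f in GOES_CLASSES:
            if flux >= f:
                letter, floor = l, f
        return f"{letter}{flux / floor:.1f}"  # X class is open-ended (e.g., X45)

    def class_to_flux(cls):
        """Convert a class string such as 'M3.4' back into W/m^2."""
        return dict(GOES_CLASSES)[cls[0].upper()] * float(cls[1:])

    print(flux_to_class(1.2e-4))  # -> X1.2
    print(flux_to_class(3.4e-5))  # -> M3.4
    print(class_to_flux("X45"))   # -> 0.0045, the estimated Carrington flux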
Figure 6 shows the GOES X3.4-class flare in AR NOAA 10930. From this figure and the corresponding movie, one may find that the flare occurs between the two major sunspots, particularly at the polarity inversion line (PIL: also called the neutral line), where the vertical field \(B_{z}\) or the line-of-sight (LOS) field \(B_{\mathrm{LOS}}\) remains zero and the sign flips across it. The most pronounced feature is the pair of flare ribbons that spreads along and away from the PIL (Bruzek 1964; Asai et al. 2004). The magnetic field in the corona, which is computationally extrapolated from the photospheric magnetogram using the non-linear force-free field (NLFFF) method (Sect. 4.3.1), shows a helical topology above the PIL. Such a highly non-potential, twisted magnetic structure called a magnetic flux rope is often observed in soft X-rays prior to the flare occurrence (see Sect. 3.3.1).

Fig. 7 a Schematic illustration of the standard flare model. Image reproduced by permission from Shiota et al. (2005), copyright by AAS. The thick solid lines represent magnetic field lines. Shaded, hatched, and dotted regions display the features observed in soft X-rays, EUV, and H\(\alpha \), respectively. b Observationally inferred magnetic field structure of CMEs in the interplanetary space. Image reproduced by permission from Marubashi (1989), copyright by Kluwer.

Various observational characteristics of the flares, not only the ribbons and the flux rope but also the cusp-shaped loops seen in soft X-rays (Tsuneta et al. 1992), the hard X-ray loop-top source (Masuda et al. 1994), inflows toward a current sheet (Yokoyama et al. 2001), etc., altogether lend support to the well-established flare model based on the magnetic reconnection scenario, referred to as the standard model, or the CSHKP model after its major contributors (Carmichael 1964; Sturrock 1966; Hirayama 1974; Kopp and Pneuman 1976, see Fig. 7a). In this paradigm and its updated versions (e.g., Forbes and Malherbe 1986; Shibata et al. 1995; Aulanier et al. 2012; Janvier et al. 2013), the key features are explained as follows. The magnetic flux rope becomes unstable and erupts into the higher atmosphere, entraining the overlying coronal field. The legs of the coronal field are drawn into a current sheet underneath the flux rope as inflows and reconnect with each other. The outflows from the reconnection region further boost the flux rope eruption. The post-reconnection field lines form a cusp structure, while the accelerated electrons from the reconnection site precipitate along the field lines and heat the chromosphere to produce flare ribbons.

The flux rope, if ejected successfully, expands and develops into the magnetic skeleton of a CME that travels through interplanetary space. This is well demonstrated by in-situ observations of magnetic fields at vantage points, e.g., in front of the Earth (Burlaga et al. 1981; Klein and Burlaga 1982; Marubashi 1986). Figure 7b shows a schematic illustration of the inferred topology. The helical nature of the magnetic field of the CMEs is strongly suggestive of their solar origins.

Regarding the onset of flux rope eruption and subsequent ejection of CMEs, various theories have been extensively proposed and investigated, such as flux emergence (Heyvaerts et al. 1977), breakout (Antiochos et al. 1999; DeVore and Antiochos 2008), tether-cutting (Moore et al.
2001), emerging-flux trigger (Chen and Shibata 2000), kink instability (Török and Kliem 2005; Fan and Gibson 2007), and torus instability (Kliem and Török 2006), along with a more recent concept of the double-arc instability (Ishiguro and Kusano 2017). In any case, there appears to be a consensus, at least, that the flare/CME occurrence is caused through the dynamical coupling between the unstable eruption of a flux rope (an ideal MHD process) and magnetic reconnection of surrounding arcades (a resistive MHD process).

It should be noted, however, that not all strong flares are accompanied by CMEs (e.g., Yashiro et al. 2006). The best example is the giant AR NOAA 12192 (Fig. 1). Throughout the disk passage, this AR produced numerous energetic flares including the six X-class ones, but surprisingly none of them were CME-eruptive. Sun et al. (2015) showed that in this AR, the decay index \(n=-\partial \ln {B_{\mathrm{h}}}/\partial \ln {z}\), which measures the decreasing rate of the horizontal magnetic field \(B_{\mathrm{h}}\) with height z, remains below the critical value \(n_{\mathrm{c}}\approx 1.5\) for the torus instability until a large altitude and thus only failed eruptions took place (Inoue et al. 2016; Jiang et al. 2016a; Amari et al. 2018). The confinement of flux rope eruptions by a strong overlying field is also shown by statistical studies on a number of ARs (Wang et al. 2017a; Vasantharaju et al. 2018; Jing et al. 2018). The same mechanism explains the observed result by Toriumi et al. (2017b) that the ratio of reconnected flux (in the flare ribbons) to the total AR flux is, on average, smaller for failed events than for eruptive cases. DeRosa and Barnes (2018) showed that X-class flares located near coronal fields that are open to the heliosphere are eruptive at a higher rate than those lacking access to open fields.

The topics we have discussed above are only the most representative aspects of the flares and CMEs. In order to keep our primary focus on the formation and evolution of flare-productive ARs, however, we stop the discussion at this point and yield the rest to reviews by, e.g., Schrijver (2009), Fletcher et al. (2011) and Benz (2017) for observational overviews and Priest and Forbes (2002), Forbes et al. (2006), Chen (2011), Shibata and Magara (2011) and Janvier et al. (2015) for theoretical and modeling aspects.

2.3 Categorizations of sunspots and flare productivity

The number of sunspots varies with the 11 year solar activity cycle (Schwabe 1843; Hathaway 2015). Early in a cycle, the spots appear at higher latitudes, up to \(40^{\circ }\), and, throughout the cycle, the latitude gradually drifts lower to the equator (Spörer's law: Carrington 1858). This behavior is illustrated by the Maunder butterfly diagram (Fig. 8 top). In each bipolar AR, the preceding spot tends to appear closer to the equator than the following spot (Joy's rule: Hale et al. 1919). As magnetic observations started in the beginning of the twentieth century (Hale 1908), Hale's polarity rule was discovered: for each cycle, the bipolar ARs are aligned in the east–west orientation with opposite preceding magnetic polarities on the opposite hemispheres. Soon, it was also noticed that the polarities of the preceding spots alternate between successive cycles, and these features are now altogether called the Hale–Nicholson rule (Fig. 8 bottom: Hale and Nicholson 1925).

Fig. 8 (Top) Sunspot butterfly diagram showing the total spot area as a function of time and latitude. Image courtesy of Hathaway.
In each cycle, the latitudes of ARs shift to the equator (Spörer's law). (Bottom) Schematic diagram showing the polarity alignments. The preceding spots appear closer to the equator than the following spots (Joy's rule). In each cycle, the preceding polarities on one hemisphere are the same and are opposite to those on the other hemisphere, and the order of the polarities reverses in the successive cycle (Hale–Nicholson rule). These are merely the overall trends and there exist many exceptional ARs.

Fig. 9 Example spot images for the three indices of the McIntosh classification. Image reproduced by permission from McIntosh (1990), copyright by Kluwer.

Along with such long-term characteristics, which impose strong constraints on dynamo models, the structure of each sunspot group is also recognized as an important factor (see reviews by Solanki 2003; Borrero and Ichimoto 2011). One method of categorizing the sunspots is the Zurich classification (Cortie 1901; Waldmeier 1938), which was further developed as the McIntosh classification (McIntosh 1990). The McIntosh classification uses three letters to describe the white-light properties of the spots, which are the size, penumbral type, and distribution (see Fig. 9). The combination of the three letters shows the morphological complexity of ARs and, according to Bornmann and Shaw (1994), the flare production rate increases along the diagonal line in the 3D parameter space from the simplest corner "A/B/Hxx" to the most complex end "Fkc". Other studies show essentially a consistent result: morphologically complex ARs produce more flares (e.g., Atac 1987; Gallagher et al. 2002; Ternullo et al. 2006; Norquist 2011; Lee et al. 2012; McCloskey et al. 2016). The primary advantage of this method is that the spots are categorized simply from the white light observation and thus it requires no magnetic measurement.

Another categorization method is the Mount Wilson classification, which refers to the magnetic structures of ARs. The original scheme of this method has the following three identifiers (Fig. 10 top: Hale et al. 1919; Hale and Nicholson 1938): \(\alpha \), a unipolar spot group; \(\beta \), a simple bipolar spot group of both positive and negative polarities; and \(\gamma \), a complex spot group in which spots of both polarities are distributed so irregularly as to prevent classification as a \(\beta \) group. Often more than one identifier is appended to each AR to indicate even more complex structures, such as \(\beta \gamma \), a bipolar spot group which is so complex that preceding or following spots are accompanied by minor polarities. It was shown that the flare productivity is related to this categorization. Giovanelli (1939) found that the probability of the flare eruption is proportional to the spot area and that it increases with the spot complexity (in the order of \(\alpha \), \(\beta \), \(\beta \gamma \), and \(\gamma \)). Consistent results were reported by Kleczek (1953), Bell and Glazer (1959) and Greatrix (1963).

Fig. 10 (Top) Sample diagrams of the Mount Wilson classification. (Bottom) Peak flare magnitudes as a function of maximum sunspot area. Image reproduced by permission from Sammis et al. (2000), copyright by AAS.
Note that the tick marks of the horizontal axis should be corrected as, from left to right, \(1\times 10^{-5}\), \(1\times 10^{-4}\), \(1\times 10^{-3}\), and \(1\times 10^{-2}\) in the unit of the hemisphere, or equivalently, 10, 100, 1000, and 10,000 MSH.

Later, the \(\delta \) group, a spot group in which umbrae of opposite polarities are separated by less than 2\(^{\circ }\) and situated within the common penumbra, was added to the Mount Wilson classification by Künzel (1960, 1965). In this scheme, the most complex ARs are the spots appended with \(\beta \gamma \delta \). Ever since Künzel (1960) showed that the \(\delta \)-spots are highly flare-productive, a number of statistical investigations have been carried out and showed consistent results (e.g., Mayfield and Lawrence 1985; Sammis et al. 2000; Tian et al. 2002; Ternullo et al. 2006; Guo et al. 2014; Toriumi et al. 2017b; Yang et al. 2017b). The bottom panel of Fig. 10 is a diagram of the peak GOES soft X-ray flux versus the maximum sunspot area for various ARs by Sammis et al. (2000). Here, one may easily find the clear positive correlation that the flare magnitude increases with the spot area. However, this diagram also shows that more complex regions produce stronger flares. For example, all \(\ge \) X4-class flares occur in ARs of area greater than 1000 MSH and classified as the most complex \(\beta \gamma \delta \). Other studies show the correlations and associations between the \(\delta \)-spots and the production of proton flares (here meaning flares that emit energetic protons: Warwick 1966; Sakurai 1970), white-light flares (Neidig and Cliver 1983), \(\gamma \)-ray flares (Xu et al. 1991), and fast CMEs (Wang and Zhang 2008).

Yet another important finding is that the inverted or anti-Hale spot groups, i.e., the ARs violating Hale's polarity rule, are flare productive (Smith and Howard 1968; Zirin 1970; Tang 1982). In most cases, polarities of ARs follow the Hale–Nicholson rule described earlier in this subsection and the spot groups violating this rule are very small in number (the appearance rate being 3–9%; Richardson 1948; Wang and Sheeley 1989; Khlystova and Sokoloff 2009; Stenflo and Kosovichev 2012; McClintock et al. 2014). However, it is known that once this structure is created, an AR tends to produce strong flares. For example, Tian et al. (2002) selected the 25 most violent ARs in Cycles 22 and 23 based on five criteria: the largest spot area \(>1000\, \mathrm{MSH}\); X-ray flare index (related to the sum of peak flare intensities) \(>5.0\); 10.7 cm radio flux \(>1000\, \mathrm{s.f.u.}\); proton flux (\(>10\, \mathrm{MeV}\)) \(>400\, \mathrm{p.f.u.}\); and geomagnetic \(A_{p}\) index \(>50\). They found that most of them (68%) violate the Hale–Nicholson rule. Surveying 104 \(\delta \)-spots, Tian et al. (2005a) showed that about 34% violate Hale's rule but follow the hemispheric current helicity rule, which describes the dominance of negative (positive) current helicity in the northern (southern) hemisphere (e.g., Pevtsov et al. 1995, see also Sect. 3.3.3). Tian et al. (2005a) found that such ARs have a much stronger tendency to produce X-class flares.

Fig. 11 Great flare event on 1946 July 25 in RGO 14585, the fourth largest sunspot group since the late nineteenth century. A gorgeous two-ribbon flare breaks out in the huge, compact sunspot region. (Left) Sunspots observed in Ca ii K1v. (Right) Very large flare ribbons observed in H\(\alpha \). Image reproduced by permission from Toriumi et al.
(2017b), copyright by AAS and Paris Observatory.

In this subsection, we reviewed several schemes of sunspot categorization and showed that ARs producing larger flares tend to have: a larger spot area; morphological and magnetic complexity, which is qualitatively indicated by the McIntosh and Mount Wilson schemes; and anti-Hale alignment. However, for producing strong flares, probably it is not enough to satisfy just one of these conditions. For example, the largest sunspot group since the late nineteenth century, RGO (Royal Greenwich Observatory) 14886 in April 1947 (maximum spot area of 6132 MSH), is reported as flare quiet. The spot image shown in Fig. 3 of Aulanier et al. (2013) indicates that this region has a simple bipolar structure (\(\beta \)-spot). On the other hand, the fourth largest in history, RGO 14585 in July 1946 (4279 MSH), shown in Fig. 11, produced great flares and geomagnetic storms with a ground-level enhancement (Ellison 1946; Forbush 1946; Dodson and Hedeman 1949). The spot image reveals that this region is strongly packed as if it were a \(\delta \)-spot and, judging from the Mount Wilson drawing, it is very likely true. Therefore, it is important to find if there exist critical conditions for the strong flares and, if so, what they are, by conducting observational and theoretical studies of all kinds to investigate the magnetic structure of flaring ARs and their evolution.

3 Long-term and large-scale evolution: observational aspects

Observationally, the changes of magnetic fields that are associated with flares are often divided into two regimes: the long-term, gradual evolution of large-scale fields and the rapid changes associated with (i.e., on time scales comparable to) the flare occurrence. In what follows (Sects. 3 and 4), we review the first topic, the long-term evolution, which is essentially related to the energy build-up process in the pre-flare state.

3.1 Formation and development of \(\delta \)-spots

The role of long-term magnetic development in flare production was first recognized by Martres et al. (1968), who pointed out that the flares are often associated with evolving magnetic structures (Structure magnétique évolutive) of opposite polarities, in which one is growing and the other decreasing. Through accumulating a vast amount of observational data, observers gradually found certain regularities of flare-productive ARs. After 18 years of observations at Big Bear Solar Observatory (BBSO), Zirin and Liggett (1987) summarized and classified the formation of \(\delta \)-spots that produce great flares in three ways:

Type 1: A complex of spots emerging all at once with different dipoles intertwined. This type is tightly packed with a large umbra and called "island \(\delta \) sunspot";

Type 2: A single \(\delta \)-spot produced by emergence of satellite spots near large older spots; and

Type 3: A \(\delta \)-configuration formed by collision between two separate but growing bipoles. The overall polarity layout is quadrupolar and the preceding spot of one bipole collides with the following spot of the other.

Fig. 12 Examples of Type 1 \(\delta \)-spots. a AR McMath 11976 in August 1972. H\(\alpha -0.5\) Å image on August 3. Umbrae numbered F1, F2, F3, P1, P2, and P3 all share a common penumbra. Image reproduced by permission from Zirin and Tanaka (1973), copyright by D. Reidel. b NOAA 5395 in March 1989. He D3 image and magnetogram on March 10. Image reproduced by permission from Wang et al. (1991), copyright by AAS.

Figure 12 shows two typical examples of Type 1.
The AR in Fig. 12a, McMath 11976, appeared in August 1972 and produced great flares (Zirin and Tanaka 1973). This region emerged as a tight complex of sunspots with inverted magnetic polarity (i.e., an anti-Hale region). The negative spot P1 pushed into the positive spots (F1, F2, and F3) and caused a steep magnetic gradient at the central PIL. The filament on the north (fil 1), which may be the extension of the central PIL, repeatedly erupted due to the continuous spot motion. Another example is NOAA 5395 in March 1989 (Fig. 12b: Wang et al. 1991). This region also had a closely packed structure of multiple spots and produced great flares including X4.5 (March 10) and X10 (March 12). This region is known to have produced the geomagnetic storm that triggered the severe power outage in Quebec, Canada, on March 13 to 14 (e.g., Allen et al. 1989; Cliver and Dietrich 2013). The analysis shows that, at one edge of the large positive spot F1, negative polarities successively emerged and moved around the main spots, creating clockwise-spiraling penumbral fields around it (Wang et al. 1991; Tang and Wang 1993; Ishii et al. 1998). The series of strong flares occurred along the PIL surrounding the main positive spots. Similar island-\(\delta \) sunspots are observed to show significant flaring activity, such as the flares in McMath 13043 (July 1974), the X20 event in NOAA 5629 (August 1989), X13 in NOAA 5747 (October 1989), and X12 in NOAA 6659 (June 1991) (Tanaka 1991; Tang and Wang 1993; Schmieder et al. 1994).

Fig. 13 AR NOAA 10930 in December 2006 as an example of a Type 2 \(\delta \)-spot obtained by Hinode/SOT. Daily evolution of continuum, magnetic fields, and Ca ii H is shown over the field of view of \(128''\times 96''\). Images reproduced by permission from Kubo et al. (2007), copyright by ASJ.

Type 2 events are the flare eruptions caused by newly emerging satellite spots in the penumbra of an existing spot (Rust 1968), and Zirin and Liggett (1987) classified spot groups Mount Wilson 19469 and 20130 into this category (Patterson and Zirin 1981; Tang 1983). Figure 13 shows a clear example of this type, NOAA 10930 in December 2006 (Kubo et al. 2007). Within the southern penumbra of the main negative spot, a positive spot appears and drifts around to the east while showing a counter-clockwise rotation. As a result, an X3.4-class flare occurred on December 13 at the PIL between the main and the satellite spots (also refer to Fig. 6 and its corresponding movie).

Fig. 14 AR NOAA 11158 in February 2011 as an example of a Type 3 \(\delta \)-spot. Image reproduced by permission from Toriumi et al. (2014b), copyright by Springer. Two emerging bipoles P1–N1 and P2–N2 collide against each other and produce a sheared PIL within a \(\delta \)-spot at the region center. The series of flares occur at the extended PIL between N1 and P2. Plus signs indicate the magnetic flux-weighted centroids of the four polarities. EUV images (panels e and f) show the field connectivity between N1 and P2.

Figure 14 shows NOAA 11158 in February 2011, the typical case of a Type 3 \(\delta \)-spot (Toriumi et al. 2014b). Because of the collision of two emerging bipoles P1–N1 and P2–N2, a highly sheared PIL with a steep magnetic gradient is produced in the central \(\delta \)-spot (N1 and P2) and a series of flares including the X2.2-class event (February 15) occur. Similar structures are seen in a variety of ARs, such as NOAA 8562/8567, 6850, 7220/7222, 10314, and 10488 (van Driel-Gesztelyi et al. 2000; Kálmán 2001; Morita and McIntosh 2005; Poisson et al.
2013; Liu and Zhang 2006).

Fig. 15 a Evolution patterns responsible for great flare occurrence and their explanations by an emerging twisted knot model. Mode A is a shearing process with spot growth and Mode B is an unshearing process with spot disappearance. Intersections represent the photosphere at times \(t_{1}\), \(t_{2}\) and \(t_{3}\). Image reproduced by permission from Tanaka (1991), copyright by Kluwer. b, c Inferred 3D topologies for NOAA 7912 and 10314. Images reproduced by permission from López Fuentes et al. (2000) and Poisson et al. (2013), copyrights by AAS and COSPAR, respectively.

Fig. 16 3D model made of flexible wires for explaining the evolution of NOAA 4021. Image reproduced by permission from Ishii et al. (2000), copyright by ASJ.

How are these complex structures formed? Zirin and Liggett (1987) mentioned that "because Types 1 and 2 erupt in the same place, and Type 3 requires large dipoles that are not close by mere accident, the \(\delta \) configuration must be the product of a subsurface phenomenon." However, we cannot directly observe below the surface. One way to reconstruct the 3D topology of emerging magnetic fields is to study it using sequential images (e.g., white light and magnetograms). For example, Tanaka (1991) studied the evolution of the flare-active Type 1 \(\delta \)-spots McMath 13043 and 11976 and explained the observed proper motions, with the non-Hale spots turning to obey the rule, by the emergence of knotted twisted flux tubes (twisted knot model: Fig. 15a). This scenario was supported by many successive researchers (e.g., Fig. 15b) and it was suggested that the deformation of emerging \(\varOmega \)-loops is due to the helical kink instability (e.g., Lites et al. 1995; Leka et al. 1996; López Fuentes et al. 2000, 2003; Holder et al. 2004; Tian et al. 2005a, b; Nandy 2006; Takizawa and Kitai 2015) (see Sect. 4.1.1 for theoretical investigations on the kink instability and "Appendix" for the story of the original advocates of this instability as the formation mechanism of the \(\delta \)-spots). Poisson et al. (2013) explained the formation of the Type 3 \(\delta \)-spot NOAA 10314 as the ascent of a single large \(\varOmega \)-loop whose top is curled downward and has a U-loop below the photosphere (Fig. 15c; see also Pevtsov and Longcope 1998; van Driel-Gesztelyi et al. 2000; Takizawa and Kitai 2015). Ishii et al. (2000) and Kurokawa et al. (2002) even used flexible wires to manually model the inferred 3D configurations (Fig. 16). From vertically stacked sequential magnetograms, Chintzoglou and Zhang (2013) inferred the subsurface topology of NOAA 11158 (Fig. 14). These observations consistently show that the emerging flux tubes of \(\delta \)-spots do not have a simple \(\varOmega \)-shape but are deformed within the convection zone, prior to emergence.

Fig. 17 Classification of flaring ARs. Image reproduced by permission from Toriumi et al. (2017b), copyright by AAS. (Top) Polarity distributions. Magnetic elements (spots) are indicated by circles with plus and minus signs. The PIL or filament involved in the flare is shown with an orange line, while proper motions of the polarities are indicated with green arrows. (Middle) Possible 3D structures of magnetic fields. Solar surface is indicated with a horizontal slice. (Bottom) Sample events. Gray scale shows the magnetogram, overlaid by temporally stacked flare ribbons (orange and turquoise). Red plus signs show the area-weighted centroids of the ribbons. The white lines at the bottom right indicate the length of \(50''\).

Toriumi et al.
(2017b) surveyed all \(\ge \) M5-class flares within 45\(^{\circ }\) from disk center for six years from May 2010 and classified the host ARs into four groups depending on their developments (Fig. 17): (1) Spot-spot, a complex, compact \(\delta \)-spot, in which a long, sheared PIL extends across the whole AR (equivalent to a Type 1 \(\delta \)-spot); (2) Spot-satellite, in which a newly emerging bipole appears in the vicinity of a preexisting main spot (i.e., Type 2); and (3) Quadrupole, in which a \(\delta \)-spot is created by the collision of two bipoles (i.e., Type 3). However, they also noticed that even X-class events do not require \(\delta \)-spots or strong-gradient PILs. Instead, some events occur between two independent ARs, situations called (4) Inter-AR events (Dodson and Hedeman 1970). For example, the X1.2 event on 2014 January 7 occurred between NOAA 11944 and 11943 (Möstl et al. 2015; Wang et al. 2015). Figure 17 also provides possible 3D topologies, which were later modeled by numerical simulations (see Sect. 4.1.5).

Through the analysis of Mount Wilson classifications from 1992 to 2015, Jaeggli and Norton (2016) discussed the possible production mechanism of complex ARs. They found that while the fractions of \(\alpha \)- and \(\beta \)-spots remain constant over cycles (about 20% and 80%, respectively), that of complex ARs appended with \(\gamma \) and/or \(\delta \) increases drastically from 10% at solar minimum to more than 30% at maximum. According to the authors, this may indicate that complex ARs are produced by the collision of simpler ARs around the surface layer through the higher rate of flux emergence during solar maximum. This idea may be related to the successive emergence model (Kurokawa 1987) and perhaps to the concepts of "complexes of activities" and "sunspot nests" (Bumba and Howard 1965; Gaizauskas et al. 1983; Castenmiller et al. 1986; Gaizauskas et al. 1994).

3.2 Photospheric features

3.2.1 Strong-field, strong-gradient, highly-sheared PILs and magnetic channels

Because flares are the release of magnetic energy via magnetic reconnection, it is natural that these events are observed around the PILs, where the electric currents are strongly enhanced (see, e.g., Fig. 6). Since this fact was first pointed out by Severny (1958), the importance of the PILs in the flare occurrence has been repeatedly emphasized (e.g., Zirin and Tanaka 1973; Hagyard et al. 1984; Wang et al. 1996; Schrijver 2007). The photospheric characteristics of the flaring PILs are summarized as follows.

Strong field: Both the vertical fields surrounding the PIL and the transverse fields along the PIL are very strong. Tanaka (1991) and Zirin and Wang (1993b) reported on the detection of strong transverse fields of up to 4300 G (see also Jaeggli 2016; Wang et al. 2018a). Livingston et al. (2006) also pointed out that part of the exceptionally strong fields they found are likely related to the transverse fields in light bridges of \(\delta \)-spots (i.e., PILs). Okamoto and Sakurai (2018) noticed fields as high as 6250 G in a PIL, which is probably the highest value ever measured on the Sun, including in sunspot umbrae.

Strong gradient: The horizontal gradient of the vertical field across the PIL is steep, indicating that positive and negative polarities are tightly pressed against each other (Moreton and Severny 1968; Wang et al. 1991, 1994b). The gradient is sometimes up to several \(100\, \mathrm{G\ Mm}^{-1}\) (Wang and Li 1998; Jing et al. 2006; Song et al. 2009).
Strong shear: The transverse field is directed almost parallel to the PIL. The shear angle is often measured in the frame where \(0^{\circ }\) is the azimuth of a potential field (Hagyard et al. 1984; Lu et al. 1993), and large shears of \(80^{\circ }\)–\(90^{\circ }\) are observed at flaring PILs (Hagyard et al. 1990; Hagyard 1990). Figure 18 clearly shows that the transverse fields at the PIL of NOAA 10930 are along the direction of the PIL (marked by the box).

The strong-field, strong-gradient, highly-sheared PILs may be the direct manifestation of the non-potentiality of magnetic fields and, therefore, these features are often used for the prediction of flares and CMEs. Falconer et al. (2002, 2006) measured the lengths of PILs of, e.g., strong transverse field (\(>150\, \mathrm{G}\)), large shear angle (\(>45^{\circ }\)), and steep gradient (\(> 50\, \mathrm{G\ Mm}^{-1}\)) and demonstrated that these parameters predict the occurrence of CMEs. Schrijver (2007) evaluated the total unsigned flux near the strong-gradient PILs and showed that it gives the upper limit of the possible GOES flare class.

Fig. 18 Hinode/SOT/SP vector magnetogram of AR NOAA 10930, which produced the X3.4-class flare (see Figs. 6, 13). The image shows the LOS magnetic fields (gray scale), transverse fields (green arrows), positive and negative polarities (red and blue contours), and the PILs (black contours). The FOV is \(66''\times 66''\). The area around the sheared PIL is marked with a rectangular box. Image reproduced by permission from Wang et al. (2008), copyright by AAS.

Fig. 19 Temporal evolution of the X3.4-class flare in AR NOAA 10930. Background shows the LOS magnetogram, over which the PILs are plotted with green lines. The red contours show the Ca ii H line enhancement. The pre-flare brightening (such as B1) continuously occurs around the central PIL (yellow circle). The flare ribbons originate and expand from this region (see, e.g., progenitor brightening of B2). Image reproduced by permission from Bamba et al. (2013), copyright by AAS.

Another important feature of the flaring PILs is the "magnetic channel", which is an alternating pattern of elongated positive and negative polarities (Zirin and Wang 1993a; Wang et al. 2002a). Figure 18 displays the magnetic channel in NOAA 10930 (see the PIL marked by the box). Wang et al. (2008) and Lim et al. (2010) showed that high resolution with high polarimetric accuracy is needed to adequately resolve such small-scale structures (width \(\lesssim 1''\)). Figure 19 clearly shows that the pre-flare brightening continues around this structure and the flare ribbons originate from here (see also the movie of Fig. 6). From these observations, Bamba et al. (2013) suggested that such fine-scale magnetic structures galvanize the whole system into producing flare eruptions (Toriumi et al. 2013a; Bamba et al. 2017; Bamba and Kusano 2018).

Fig. 20 BBSO/GST observation of the magnetic field in AR NOAA 12371 before the M6.5-class flare at 18:23 UT on 2015 June 22. a, b GST/NIRIS photospheric vertical magnetic field (scaled between \(\pm \, 1500\, \mathrm{G}\)) at 17:35 UT, superimposed with arrows representing horizontal magnetic field vectors. The box in a denotes the FOV of b, in which the magnetic channel structure can be obviously observed. c Distribution of magnetic shear in terms of a product of the field strength and shear angle. The overplotted yellow contour in a–c is the PIL.
The significance of the sheared PIL, magnetic channel, and small-scale trigger was also verified by a super high-resolution observation by BBSO/GST. Figure 20 shows the GST/NIRIS magnetogram of AR NOAA 12371. Here, Wang et al. (2017b) found that the field is highly sheared with respect to the PIL, especially in the precursor brightening region [panels (a) and (b)]. This signifies a high degree of non-potentiality, as reflected by the concentration of magnetic shear along the PIL [panel (c)]. In the region around the initial precursor brightening enclosed by the box in panel (b), they observed a miniature version of a magnetic channel with a scale of only 3000 km, which can also be recognized as the flare-triggering field. Importantly, the evolutions of both polarities within the channel are temporally associated with the occurrence of precursor episodes [panel (d)].

3.2.2 Flow fields and spot rotations

Given the high-\(\beta \) condition in the photosphere, it was speculated that such flaring PILs are generated by the sheared, converging flow fields around them. In fact, Harvey and Harvey (1976) observed strong shear flows along the flaring PILs and associated these flows with the occurrence of flares (Meunier and Kosovichev 2003; Yang et al. 2004; Deng et al. 2006; Shimizu et al. 2014). Also, Keil et al. (1994) showed that the flare kernels correspond to the locations of convergence in the horizontal flows. The converging flow and the sustained cancellation of positive and negative polarities on the two sides of the PIL are thought to be the key process in building up a magnetic flux rope (van Ballegooijen and Martens 1989, see also Sect. 3.3.1 of this article for detailed discussion).

The large-scale spot motions drive the flow fields around the PILs and, because of the frozen-in state of the field, the magnetic structures are reconfigured. For instance, Krall et al. (1982) revealed that the shear flow at the PIL is associated with rapid spot motions, which enhance the magnetic shear at the PIL and lead to a series of flares. Wang (1994) observed that magnetic shear development is intrinsically related to the newly emerging flux.

Fig. 21 Velocity field of the southern sunspot in AR NOAA 10930 over the FOV of \(42''\times 38''\). The radius of the circle in the lower-left corner corresponds to a speed of \(0.22\, \mathrm{km\ s}^{-1}\), and the color of an arrow corresponds to its direction. Image reproduced by permission from Min and Chae (2009), copyright by Springer

Strong spot rotations (both the spot rotating around its center and the spot rotating around its counterpart in the same AR) are also often observed in the pre-flare state. Figure 21 is a clear example of rotating sunspots in AR NOAA 10930 (Min and Chae 2009). This figure highlights that the southern spot rotates in the counter-clockwise direction before the X3.4-class flare occurs. Brown et al. (2003) analyzed rotating sunspots in seven ARs and found that the spots rotate around their umbral centers by up to 200\(^{\circ }\) in 3–5 days.
The coronal loops are twisted as the spots rotate, and six of the seven ARs showed flares and/or CMEs (Régnier and Canfield 2006; Zhang et al. 2007, 2008; Vemareddy et al. 2012; Ruan et al. 2014; Vemareddy et al. 2016). Brown et al. (2003) considered that the spot rotation is caused by the flux tube emergence (see Sect. 4.1 for the discussion). The observed association of spot rotations and eruptions is consistent with the theoretical suggestion by Stenflo (1969) and Barnes and Sturrock (1972) that such spot rotations accumulate flare energy in the atmosphere. Yan et al. (2008) surveyed 186 rotating sunspots in 153 ARs and statistically investigated the relationship between the spot rotation and the flare productivity. They found that ARs with sunspots whose rotation direction is opposite to the global differential rotation are more likely to produce M- and X-class flares.

These flow fields and spot motions strongly suggest that many, if not all, flaring ARs are produced by the emergence of strongly twisted magnetic flux. Through these processes, the magnetic flux transports the energy and magnetic helicity (Sect. 3.2.3) from the subsurface layer to the atmosphere.

3.2.3 Injection of magnetic helicity

Magnetic helicity is a measure of magnetic structures such as twists, kinks, and internal linkage (Elsasser 1956) and is a useful tool to quantify and characterize the complexity of flaring ARs. The magnetic helicity of the magnetic field \({\mathbf {B}}\) fully contained in a volume V (i.e., the normal component \(B_{n}\) vanishes at any point of the surface S) is defined as
$$\begin{aligned} H=\int _{V} {{\mathbf {A}}}\cdot {{\mathbf {B}}}\, dV, \end{aligned}$$
where \({{\mathbf {A}}}\) is the vector potential of \({{\mathbf {B}}}\), i.e., \({{\mathbf {B}}}=\nabla \times {{\mathbf {A}}}\). H is invariant to gauge transformations and, in ideal MHD, H is a conserved quantity. Even under resistive MHD, where magnetic reconnection can occur, it has been shown that the dissipation of H is much slower than that of magnetic energy (Berger 1984). However, in many practical situations, the field lines cross the surface of the volume of interest S (e.g., the photosphere) and thus it is convenient to use the relative magnetic helicity (Berger and Field 1984; Finn and Antonsen Jr 1985):
$$\begin{aligned} H_{\mathrm{R}}=\int _{V} ({{\mathbf {A}}}+{{\mathbf {A}}_{0}})\cdot ({{\mathbf {B}}}-{{\mathbf {B}}_{0}})\, dV, \end{aligned}$$
where \({{\mathbf {A}}}_{0}\) and \({{\mathbf {B}}}_{0}\) are the reference vector potential and magnetic field, respectively (\({{\mathbf {B}}}_{0}\) has the same \(B_{n}\) distribution on S). \(H_{\mathrm{R}}\) is also a gauge-invariant quantity, and often the potential field \({{\mathbf {B}}}_{\mathrm{p}}\) \((=\nabla \times {{\mathbf {A}}}_{\mathrm{p}})\) is chosen as the reference field:
$$\begin{aligned} H_{\mathrm{R}}=\int _{V} ({{\mathbf {A}}}+{{\mathbf {A}}_{\mathrm{p}}})\cdot ({{\mathbf {B}}}-{{\mathbf {B}}_{\mathrm{p}}})\, dV. \end{aligned}$$
One way to calculate the relative helicity in the coronal volume is to rely on 3D magnetic extrapolations, as it is not yet possible to fully measure the magnetic fields in the atmosphere (Sect. 4.3.1).
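The gauge invariance of \(H_{\mathrm{R}}\) can be verified in one line; the following standard argument is supplied here for completeness (it is not drawn from a specific reference above). Under a gauge transformation \({{\mathbf {A}}}\rightarrow {{\mathbf {A}}}+\nabla \psi \), the change of \(H_{\mathrm{R}}\) is
$$\begin{aligned} \varDelta H_{\mathrm{R}} =\int _{V} \nabla \psi \cdot ({{\mathbf {B}}}-{{\mathbf {B}}_{0}})\, dV =\oint _{S} \psi \, ({{\mathbf {B}}}-{{\mathbf {B}}_{0}})\cdot {{\mathbf {n}}}\, dS -\int _{V} \psi \, \nabla \cdot ({{\mathbf {B}}}-{{\mathbf {B}}_{0}})\, dV =0, \end{aligned}$$
because both fields are divergence-free and, by construction, share the same normal component \(B_{n}\) on S; the same argument applies to a transformation of \({{\mathbf {A}}}_{0}\).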
Alternatively, it is also possible to monitor the helicity flux (helicity injection rate) through the photosphere over the AR,
$$\begin{aligned} \frac{dH_{\mathrm{R}}}{dt} = 2\int \left[ ({{\mathbf {A}}}_{\mathrm{p}}\cdot {{\mathbf {B}}})v_{n} -({{\mathbf {A}}}_{\mathrm{p}}\cdot {{\mathbf {v}}})B_{n} \right] \, dS, \end{aligned}$$
where \({{\mathbf {v}}}\) is the velocity of the plasma and \(v_{n}\) is the component normal to the surface. This parameter has been commonly used to investigate the accumulation of helicity during the course of AR evolution (Chae 2001; Chae et al. 2001; Green et al. 2002; Nindos et al. 2003; Chae et al. 2004). Note that in the last equation, the first and second terms in the bracket are called the "emergence term" and "shear term," respectively.
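As a concrete illustration, the surface integral of Eq. (7) reduces to sums over co-temporal maps once \({{\mathbf {A}}}_{\mathrm{p}}\), \({{\mathbf {B}}}\), and \({{\mathbf {v}}}\) are available on a grid. A minimal Python sketch follows; the array names, the uniform grid, and the trapezoidal accumulation are illustrative assumptions, not a published pipeline:

```python
import numpy as np

def helicity_flux(apx, apy, bx, by, bn, vx, vy, vn, dx, dy):
    """Evaluate the two terms of dH_R/dt (Eq. 7) on a uniform grid.

    apx, apy: horizontal components of the potential-field vector potential A_p;
    bx, by, bn: magnetic field (bn normal to the surface); vx, vy, vn: velocity.
    CGS: returns Mx^2 s^-1 for B in G, A_p in G cm, v in cm s^-1, lengths in cm.
    Returns (emergence term, shear term).
    """
    dS = dx * dy
    emergence = 2.0 * np.sum((apx * bx + apy * by) * vn) * dS
    shear = -2.0 * np.sum((apx * vx + apy * vy) * bn) * dS
    return emergence, shear

def accumulated_helicity(rates, dt):
    """Running time integral of a series of injection rates (trapezoidal rule)."""
    rates = np.asarray(rates)
    return np.concatenate(([0.0], np.cumsum(0.5 * (rates[1:] + rates[:-1]) * dt)))
```

Integrating the rate in time in this way is how an accumulated injection such as \(\varDelta H|_{S}\), compared with the coronal helicity below, is built up.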
Fig. 22 a Temporal evolution of the magnetic helicity injection rate (solid line) and the GOES soft X-ray flux (dotted line) over 6.5 h. The arrows indicate the X-ray intensity peaks of homologous flares in AR NOAA 8100. Image reproduced by permission from Moon et al. (2002a), copyright by AAS. b Temporal variation of magnetic helicity. Plotted are the coronal helicity derived from the NLFFF extrapolation \(H_{\mathrm{r}}\) (red dots), the accumulated amount of helicity injection through the photosphere \(\varDelta H|_{S}\) (blue dots), total unsigned magnetic flux (black), and GOES flux (gray). The uncertainty in \(H_{\mathrm{r}}\) is indicated by the error bars. The uncertainty in \(\varDelta H|_{S}\) is generally 0.5%, too small to be plotted. Image reproduced by permission from Jing et al. (2012), copyright by AAS

Many observational studies have shown the temporal relationship between the helicity injection and the occurrence of flares and CMEs (Moon et al. 2002a, b; Chae et al. 2004; Magara and Tsuneta 2008; Park et al. 2008, 2012). For instance, Moon et al. (2002a, b) revealed that a significant amount of helicity was impulsively injected around the peak time of the X-ray flux of the flare events they studied, especially for the strong ones (Fig. 22a). The authors attributed the observed impulsive helicity injection to the horizontal velocity anomalies near the PIL. However, because the location of helicity injection is near the flaring site (e.g., H\(\alpha \) flare ribbons), the possibility cannot be ruled out that the observation is affected by an artifact of the magnetogram (SOHO/MDI) due to emission caused by particle precipitation that changes the spectral line's shape. From long-term monitoring, Park et al. (2008, 2012) found that the helicity first increases monotonically and then remains almost constant just before the flares. In some events, the sign of the injected helicity reverses; in such cases, the flares are more energetic and impulsive and the accompanying CMEs are faster and more recurrent. Park et al. (2010a) and Jing et al. (2012) compared the accumulated helicity injection, measured by integrating Eq. (7) over time, with the coronal helicity derived from the NLFFF extrapolation (Sect. 4.3.1) and found close correlations between the two parameters (see Fig. 22b).

From the viewpoint of the helicity budget, the CME works as a carrier of helicity that is taken away from a flaring AR and leads the magnetic system of the AR to lower energy states (see illustration in Fig. 7b: Rust 1994; Démoulin et al. 2002; Green et al. 2002). However, accumulated helicity may also be reduced by the annihilation of two magnetic systems of opposite helicity sign (through magnetic reconnection). Several observations show that magnetic systems with oppositely signed helicity commonly exist in a given AR and that the interaction of these systems plays a key role in driving flares and CMEs (Kusano et al. 2002; Wang et al. 2004c; Chandra et al. 2010; Romano et al. 2011; Zuccarello et al. 2011). This scenario is further supported by the MHD simulations of Kusano et al. (2004, 2012), in which the emergence of reversed shear near the PIL triggers the eruption.

Fig. 23 Peak helicity injection rate during the observing interval versus the median helicity flux over the interval. Non-X-flaring reference regions (345) are plotted as plus signs and X-flare regions (48) as boxed crosses. The necessary condition for the production of an X-flare is a peak helicity flux \(>6\times 10^{36}\, \mathrm{Mx}^{2}\, \mathrm{s}^{-1}\). Image reproduced by permission from LaBonte et al. (2007), copyright by AAS

Statistical investigations of a number of ARs clearly demonstrate that flare-productive ARs have a significantly higher amount of helicity than flare-quiet ARs (Nindos and Andrews 2004; Park et al. 2010b). LaBonte et al. (2007) compared 48 X-flare-producing ARs and 345 non-X-flaring regions and derived an empirical threshold for the occurrence of an X-class flare: the peak helicity flux must exceed \(6\times 10^{36}\, \mathrm{Mx}^{2}\, \mathrm{s}^{-1}\) (see Fig. 23). Tziotziou et al. (2012, 2014) found a consistent monotonic scaling between the relative helicity and the free magnetic energy for both observational data sets and MHD simulations (Moraitis et al. 2014). However, it should be noted that these results do not take into account the area of ARs. Because the magnetic helicity in a flux system scales as the square of that system's magnetic flux, normalizing the helicity by the flux squared allows one to compare how strongly the magnetic configuration is stressed across ARs of different sizes (Démoulin and Pariat 2009).

As mentioned above, flaring ARs exhibit a fairly complicated distribution of both positive and negative signs of magnetic helicity. The helicity flux distribution can be measured by computing and mapping the density of the helicity flux in Eq. (7): \(G_{A}=2[({{\mathbf {A}}}_{\mathrm{p}}\cdot {{\mathbf {B}}})v_{n}-({{\mathbf {A}}}_{\mathrm{p}}\cdot {{\mathbf {v}}})B_{n}]\), or simply \(G_{A}=-2({{\mathbf {A}}}_{\mathrm{p}}\cdot {{\mathbf {v}}})B_{n}\). However, Pariat et al. (2005) showed that \(G_{A}\) is not a proper helicity flux density, as \(G_{A}\) can be nonzero (i.e., the \(G_{A}\) map can show variations) even for simple translational motions that do not inject any magnetic helicity. They therefore proposed an alternative proxy of the helicity flux density, \(G_{\varPhi }\), which takes into account the magnetic field connectivity and thus requires 3D magnetic extrapolations. Dalmasse et al. (2013, 2014) developed a method to compute \(G_{\varPhi }\) and applied it to observational data of the complex flaring AR NOAA 11158 (Fig. 14), showing that this proxy reliably and accurately maps the distribution of photospheric helicity injection.

3.2.4 Magnetic tongues and importance of structural complexity

In vertical (or LOS) magnetograms, newly emerging regions, especially those of AR scale, display "magnetic tongue" structures, the extended magnetic polarities at both sides of the PIL (Fig. 24a), first mentioned by López Fuentes et al. (2000).
The magnetic tongues that resemble the yin-yang pattern are thought to be the vertical projection of the poloidal component of the twisted emerging magnetic flux tube (Fig. 24b), and thus the layout of the tongues and the direction of the PILs are used as proxies of the magnetic helicity sign of emerging fields (Sect. 3.2.3: Luoni et al. 2011; Takizawa and Kitai 2015; Poisson et al. 2015, 2016). Multiple observational studies showed that such yin-yang tongues are seen in flaring ARs, along with other observational characteristics including sigmoids, sheared coronal loops, and J-shaped flare ribbons (Li et al. 2007; Green et al. 2007; Canou et al. 2009; Chandra et al. 2009; Mandrini et al. 2014). This may indicate that the flaring ARs tend to possess substantial magnetic helicity.

Fig. 24 a Sample images of magnetic tongues resembling the yin-yang pattern. The left panel shows a tongue with negative helicity (left-handed twist), while the right panel is for positive helicity (right-handed twist). Image reproduced by permission from Takizawa and Kitai (2015), copyright by Springer. b Model of a twisted flux tube with a half-torus shape. The magnetic tongue (red-blue), separated by the PIL (straight line), is explained by the emergence of a twisted flux tube. In this case, the magnetic tongue has positive helicity due to the emergence of a flux tube with right-handed twist. Image reproduced by permission from Poisson et al. (2016), copyright by Springer

One of the important conclusions from the series of statistical investigations in Sect. 2.3 was that the magnetic fields of flare-productive ARs exhibit higher degrees of complexity. While classical sunspot categorizations (e.g., the McIntosh and Mount Wilson schemes) simply provide qualitative indices of the ARs' complexity, one well-studied quantitative measure of the complexity is the fractal dimension, an indication of the self-similarity of structures (Mandelbrot 1983). From a fractal dimension analysis of full-disk magnetograms over 7.5 years, McAteer et al. (2005) found that the flare productivity, in terms of both GOES magnitude and frequency, correlates well with the fractal dimension. They derived threshold fractal dimensions of 1.2 and 1.25 as a necessary requirement for an AR to produce M- and X-class flares, respectively, within the next 24 h. Interestingly, McAteer et al. (2005) also found that the frequency distributions of the fractal dimension for different Mount Wilson classes (\(\alpha \), \(\beta \), \(\beta \gamma \), \(\beta \gamma \delta \)) are similar to each other, with a mean fractal dimension of 1.32. Perhaps this result indicates that, for the production of strong flares, complexity at mid-to-small scales (smaller than the whole AR: detected by the fractal dimension analysis) has to exist along with the large-scale complexity (AR size: characterized by the Mount Wilson class).
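For reference, the fractal dimension of a magnetogram is typically estimated by box counting on the thresholded strong-field pattern. A minimal sketch follows; the 100 G threshold and the dyadic box sizes are illustrative choices, not those of McAteer et al. (2005):

```python
import numpy as np

def fractal_dimension(magnetogram, threshold=100.0):
    """Box-counting dimension of the region where |B| exceeds the threshold [G]."""
    mask = np.abs(magnetogram) > threshold
    n = min(mask.shape)
    sizes = [2 ** k for k in range(1, int(np.log2(n)))]
    counts = []
    for s in sizes:
        # Count boxes of side s that contain at least one above-threshold pixel.
        m = mask[: n // s * s, : n // s * s].reshape(n // s, s, n // s, s)
        counts.append(max(np.count_nonzero(m.any(axis=(1, 3))), 1))
    # Dimension = slope of log N(s) versus log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```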
The importance of structural complexity in flare production is also demonstrated by the power spectra of magnetograms. Abramenko (2005) calculated the power-law index \(\alpha \) of the magnetic power spectrum \(E(k)\sim k^{-\alpha }\) of the magnetograms for 16 ARs, where k is the spatial wavenumber, and compared \(\alpha \) with the flare index FI, which represents the flare productivity of a given AR:
$$\begin{aligned} FI=\frac{1}{\tau } \left[ 100\sum _{i}I_{X}+10\sum _{j}I_{M}+1.0\sum _{k}I_{C}+0.1\sum _{l}I_{B} \right] , \end{aligned}$$
where \(I_{X}\), \(I_{M}\), \(I_{C}\), and \(I_{B}\) are the GOES magnitudes of the X-, M-, C-, and B-class flares, respectively, that occurred in a given AR in the period of \(\tau \) days, and the indices i, j, k, and l designate flares in each class. As shown in Fig. 25, it was revealed that higher flare productivity is associated with a steeper spectrum: the power-law index is \(\alpha >2.0\) for ARs producing X-class flares and \(\alpha \approx 5/3\) for flare-quiet ARs (i.e., the regime of classical Kolmogorov turbulence; Kolmogorov 1941). Although not mentioned in Abramenko (2005), the above result might also be explained by the observation that larger ARs tend to produce stronger flares (e.g., Sammis et al. 2000): the spatial power spectrum of a large AR would have more power at low wavenumbers but the same power at higher wavenumbers, which leads to a steeper power spectrum for a larger AR.

The works introduced in this subsubsection essentially show the fractal, multi-fractal, and/or turbulent nature of flaring ARs (Abramenko et al. 2002, 2003; Abramenko and Yurchyshyn 2010; McAteer et al. 2010; Georgoulis 2012). Regarding practical flare prediction, however, Georgoulis (2005) revealed that the fractal dimension does not have significant predictability. Rather, they suggested that the temporal evolution of the fractal diagnostics may be practically useful in flare prediction.
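For concreteness, the flare index of Eq. (8) amounts to simple bookkeeping over an AR's GOES event list; a minimal sketch, with an event list invented purely for illustration:

```python
# GOES class weights of Eq. (8): X = 100, M = 10, C = 1.0, B = 0.1, each
# multiplied by the magnitude within the class and normalized by tau days.
WEIGHTS = {"X": 100.0, "M": 10.0, "C": 1.0, "B": 0.1}

def flare_index(events, tau_days):
    """events: GOES classes such as 'M5.0'; tau_days: length of the period."""
    return sum(WEIGHTS[e[0]] * float(e[1:]) for e in events) / tau_days

# Hypothetical AR with one X2.2, an M6.6, and an M1.0 flare in 5 days:
print(flare_index(["X2.2", "M6.6", "M1.0"], tau_days=5))  # -> 59.2
```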
Fig. 25 Power-law index \(\alpha \) for 16 ARs of different flare index (denoted as A in this panel). The dashed vertical line indicates \(\alpha =5/3\) of Kolmogorov's turbulence theory. The positive relationship between the flare productivity and the power-law index is clearly illustrated. Image reproduced by permission from Abramenko (2005), copyright by AAS

3.2.5 (Im)balance of electric currents

The magnetic energy that is released in solar flares stems from the non-potential magnetic field associated with electric currents. An important and long-standing question about the electric current is whether or not the current is neutralized in ARs, and, if not, to what extent and how (e.g., Melrose 1991, 1995, 1996; Parker 1996). For the violation of current neutralization, two basic mechanisms have been proposed: (1) the magnetic field lines are stressed and twisted by photospheric and sub-photospheric flow motions (e.g., Klimchuk and Sturrock 1992; Török and Kliem 2003; Dalmasse et al. 2015); and (2) the current is provided by the emergence of twisted, i.e., current-carrying, flux tubes (e.g., Leka et al. 1996; Longcope and Welsch 2000; Fan 2001b). Current neutralization is investigated by examining whether the total electric current integrated over a single magnetic polarity of an AR vanishes. This is equivalent to asking whether the main (direct) current of a flux tube is surrounded by a shielding (return) current of equal strength and opposite direction.

A number of observers have tried to address this issue by measuring the longitudinal (vertical) component of the electric current density from the vector magnetogram,
$$\begin{aligned} j_{z}=\frac{c}{4\pi } \left[ {\mathbf {\nabla }}\times {{\mathbf {B}}} \right] _{z} =\frac{c}{4\pi } \left( \frac{\partial B_{y}}{\partial x} - \frac{\partial B_{x}}{\partial y} \right) , \end{aligned}$$
where c is the speed of light. Whereas Wilkinson et al. (1992) stated that their data do not convincingly show a non-neutralized current system, many observations have consistently suggested the existence of twisted flux systems, in favor of scenario (2) (see the variety of observations introduced in previous sections). To cite a case, Wheatland (2000) examined vector magnetograms of 21 ARs and found that the electric currents in the positive and negative polarities deviated significantly from zero in more than half of the ARs studied, indicating that AR currents are typically not neutralized. Using vector magnetograms of the highest quality from Hinode/SOT/SP, Georgoulis et al. (2012) investigated the distribution of currents in a flaring/eruptive AR (NOAA 10930) and a flare-quiet one (NOAA 10940). They found that substantial non-neutralized currents are injected along the photospheric PILs and that more intense PILs yield stronger non-neutralized currents. From statistical studies, Liu et al. (2017b) and Kontogiannis et al. (2017) showed that flare- and CME-producing ARs are characterized by strong non-neutralized currents. However, because the measurement of electric currents is strongly hampered by the limited resolution and ambiguities of the magnetogram, it has always been a challenging task to accurately evaluate the distribution of currents as in Eq. (9). Therefore, to figure out whether ARs are born with net currents, it is desirable to enlist the aid of numerical modeling (Török et al. 2014, see Sect. 4.1).
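In practice, Eq. (9) and the neutralization check reduce to finite differences and a masked sum over one polarity. A minimal numpy sketch in Gaussian-CGS units (the array names and grid conventions are illustrative assumptions):

```python
import numpy as np

C_LIGHT = 2.998e10  # speed of light [cm s^-1]

def vertical_current_density(bx, by, dx, dy):
    """j_z = c/(4 pi) (dBy/dx - dBx/dy), Eq. (9); arrays indexed [y, x].

    Returns j_z in statA cm^-2 for B in G and grid spacings in cm.
    """
    dby_dx = np.gradient(by, dx, axis=1)
    dbx_dy = np.gradient(bx, dy, axis=0)
    return C_LIGHT / (4.0 * np.pi) * (dby_dx - dbx_dy)

def net_current(jz, bz, dx, dy, polarity=1):
    """Total current over one polarity; near zero if the AR is neutralized."""
    mask = np.sign(bz) == polarity
    return np.sum(jz[mask]) * dx * dy
```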
Fig. 26 (Top left) Hinode/X-Ray Telescope (XRT; Golub et al. 2007) image of the sigmoid observed on February 12, 2007. (Top right) Field lines traced from the NLFFF extrapolation model. The cyan field lines belong to the potential arcade. The yellow J-shaped and the green S-shaped field lines are part of the flux rope, and the short red field lines lie under the flux rope. The background shows the LOS magnetogram. Image reproduced by permission from Savcheva et al. (2012a), copyright by AAS. (Bottom) Filament formation model based on the flux cancellation scenario. Field lines above the PIL (dashed line) become sheared and converged due to the photospheric motions (panels a–c). Magnetic reconnection then produces a long overlying loop (A–D in panel d) and a short field line that submerges (B–C). Overlying arcades are further sheared and converged to produce a flux rope (panels e and f). Image reproduced by permission from van Ballegooijen and Martens (1989), copyright by AAS

3.3 Atmospheric and subsurface evolutions

3.3.1 Formation of flux ropes: sigmoids and filaments

In flare-productive ARs, free magnetic energy is stored in non-potential coronal fields that harbor a significant amount of shear and twist. When observed in soft X-rays, these coronal fields display forward or inverse S-shaped structures, first observed by Acton et al. (1992) and called "sigmoids" (Rust and Kumar 1996); see the review by Gibson et al. (2006). Figure 26 (top) shows a typical example of a sigmoid. One may find that its structure is in good agreement with the extrapolated coronal fields, which show the form of a magnetic flux rope.

From a statistical analysis of the data from Yohkoh's Soft X-ray Telescope (SXT; Tsuneta et al. 1991), Canfield et al. (1999) revealed that ARs are significantly more likely to be eruptive if they are either sigmoidal or large: 51% of all ARs analyzed were sigmoidal, and they accounted for 65% of the observed eruptions. This result attracted interest in sigmoids as precursors of flare eruptions, and the trend was confirmed later by Canfield et al. (2007), Savcheva et al. (2014) and Kawabata et al. (2018).

Sigmoids are often accompanied by H\(\alpha \) filaments (e.g., Pevtsov et al. 1996; Pevtsov 2002), and they form above and along the PILs in the evolving ARs. It is therefore important to understand the formation mechanism of sigmoids in relation to the large-scale/long-term evolution of the photospheric fields (as we saw earlier in Sects. 3.1 and 3.2). In fact, the series of sigmoid observations indicates that they are created in the manner anticipated in the filament formation model by van Ballegooijen and Martens (1989) [see Fig. 26 (bottom)], in which the shearing and converging flow around the PIL drives flux cancellation and twists up the arcade fields to create a flux rope (see also Martens and Zwaan 2001).

Fig. 27 Day-to-day evolution of AR NOAA 10977. (Top) SOHO/MDI magnetogram saturating at \(\pm \, 100\, \mathrm{G}\). (Bottom) Hinode/XRT C Poly filter images showing the transition from a sheared arcade to a sigmoid. Images reproduced by permission from Green et al. (2011), copyright by ESO

Figure 27 is one of the most compelling examples of sigmoid formation through spot evolution (Green et al. 2011). At the central PIL of this AR, about one third of the magnetic flux cancels in the 2.5 days before the flare eruption, and the photospheric field shows an apparent shearing motion (top panels). At the same time, the coronal structure transforms first from a weakly to a highly sheared arcade and then to a sigmoid that lies over the PIL (bottom panels). The sigmoid flux rope eventually erupts during the GOES B1.4-class flare, leaving an arcade structure in soft X-ray images (Sterling and Hudson 1997; Hudson et al. 1998; Sterling et al. 2000). A similar long-term transition of coronal fields from a sheared arcade or a pair of J-shaped loops to a sigmoid was also observed by Tripathi et al. (2009), Green and Kliem (2009) and Savcheva et al. (2012b).

From these observations, one can infer that the twisted flux rope in a flaring AR is formed above the PIL by the photospheric driving before the eruption. It is then natural to speculate that magnetic helicity is the cause of the flux rope structure. To this end, Yamamoto et al. (2005) analyzed three sigmoidal ARs and found that in two regions, the magnetic helicity injected through the sigmoid footpoints is comparable to the helicity content of the sigmoid loops. However, this was not true for the other AR, which may be because the sigmoid consists of multiple loops. They concluded that, excluding the latter complex AR, the magnetic twist of sigmoids is consistent with the helicity injected from the sigmoid footpoints.
Investigating various filament eruption events associated with sigmoids, Green et al. (2007) showed that the structure of a sigmoid agrees with the helicity of the associated filament (e.g., a forward S-shaped sigmoid for a positive-helicity filament) and that the rotation of the filament apex during the eruption is consistent with the helicity of the filament (e.g., clockwise rotation for a positive-helicity filament). The authors found that these behaviors agree with the kink instability scenario as numerically modeled by Török and Kliem (2005).

Fig. 28 DEM maps of AR NOAA 11158 with the FOV of \(1200''\times 480''\) centered at \((600'', -\,268'')\). The color indicates the total EM contained within the \(\log {(T\, \mathrm{[K]})}\) range indicated in the bottom left corner of each panel. Image reproduced by permission from Cheung et al. (2015), copyright by AAS

Thermal structures of sigmoidal ARs have been investigated by differential emission measure (DEM) analysis (for a detailed account of this method, see Sects. 7 and 8 of Del Zanna and Mason 2018). For instance, the DEM maps of AR NOAA 11158 in Fig. 28, calculated from six EUV images of SDO/AIA by Cheung et al. (2015), clearly reveal that a hot core structure (\(\log {(T \mathrm{[K]})}>6.6\)) is embedded in the center of the AR and covered by cooler overlying loops (\(\log {(T \mathrm{[K]})}\lesssim 6.3\)). Syntelis et al. (2016) analyzed the pre-eruptive phase of NOAA 11429, which was responsible for two consecutive X-class flares with fast CMEs, using data from both AIA and Hinode's EUV Imaging Spectrometer (EIS; Culhane et al. 2007). They found that the mean DEM of the flux ropes in the temperature range of \(\log {(T \mathrm{[K]})}=6.8\)–7.1 gradually increased by an order of magnitude about 5 h before the CME eruption. This increase was associated with the rising of the flux rope and may be related to the observed heating in CME cores (Cheng et al. 2012; Hannah and Kontar 2013), although the physical relationship with instabilities is not clear.

3.3.2 Broadening of EUV spectral lines prior to flares

Another possible atmospheric response to the photospheric evolution is the pre-flare non-thermal broadening of coronal EUV spectral lines. The observed line width consists of the thermal width, the instrumental width, and the non-thermal (excess) broadening, which are related via
$$\begin{aligned} W_{\mathrm{obs}}^{2}=W_{\mathrm{inst}}^{2}+4\ln {2} \left( \frac{\lambda }{c} \right) ^{2} \left( v_{\mathrm{t}}^{2}+v_{\mathrm{nt}}^{2} \right) , \end{aligned}$$
where \(W_{\mathrm{obs}}\) and \(W_{\mathrm{inst}}\) are the observed and instrumental widths, respectively, \(\lambda \) the wavelength of the emission line, c the speed of light, \(v_{\mathrm{t}}\) the thermal velocity, and \(v_{\mathrm{nt}}\) the non-thermal velocity.
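Inverting this relation for the non-thermal velocity is a one-liner once the thermal speed \(v_{\mathrm{t}}=\sqrt{2k_{\mathrm{B}}T/m}\) of the emitting ion is fixed; a sketch in CGS units (the formation temperature and the FWHM values in the commented call are illustrative guesses, not EIS calibration numbers):

```python
import numpy as np

K_B = 1.381e-16  # Boltzmann constant [erg K^-1]
C = 2.998e10     # speed of light [cm s^-1]

def v_nonthermal(w_obs, w_inst, lam, t_ion, m_ion):
    """Solve W_obs^2 = W_inst^2 + 4 ln2 (lam/c)^2 (v_t^2 + v_nt^2) for v_nt.

    w_obs, w_inst: observed/instrumental FWHM, same length unit as lam;
    lam: rest wavelength; t_ion: formation temperature [K]; m_ion: ion mass [g].
    Returns v_nt in cm s^-1.
    """
    v_t_sq = 2.0 * K_B * t_ion / m_ion
    v_sq = (w_obs**2 - w_inst**2) * (C / lam) ** 2 / (4.0 * np.log(2.0))
    return np.sqrt(v_sq - v_t_sq)

# Illustrative call for Fe xii 195.12 A (lengths in cm, Fe mass ~ 56 m_p);
# these inputs give v_nt ~ 2.9e6 cm/s, i.e., ~ 29 km/s:
# v_nonthermal(w_obs=0.068e-8, w_inst=0.056e-8, lam=195.12e-8,
#              t_ion=1.4e6, m_ion=56 * 1.673e-24)
```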
Fig. 29 (Top) GOES soft X-ray light curve from December 9 to 13, 2006. The X3.4-class flare occurs at 02:14 UT on December 13. (Bottom) Helicity injection rate (\(dH_{\mathrm{R}}/dt\)) in units of \(10^{36}\, \mathrm{Mx}^{2}\, \mathrm{s}^{-1}\), measured from Hinode/SOT/SP data by Magara and Tsuneta (2008) (asterisks with dashed line). The median of the top 95th percentile of non-thermal velocities observed in the AR core (\(v_{\mathrm{nt}}\)) for the Hinode/EIS Fe xii 195 Å line is also plotted (solid line). The vertical dash-dotted line denotes the time of the third EIS measurement of December 12. Image reproduced by permission from Harra et al. (2009), copyright by AAS

Alexander et al. (1998), Ranns et al. (2000) and Harra et al. (2001) showed that the non-thermal broadening peaks in the early phase of, or even tens of minutes before, the flare occurrence, and suggested that the broadening indicates turbulence that is related to the flare triggering mechanism. However, Harra et al. (2009) revealed that the pre-flare broadening starts much earlier. They measured the non-thermal velocity of the Fe xii 195 Å line using Hinode/EIS and found that, as shown in Fig. 29, the increase in the line width begins up to one day before the X-class flare occurs, after the helicity injection saturates (Magara and Tsuneta 2008). Imada et al. (2014) revisited this event and showed that this pre-flare broadening occurs in concurrence with an upflow of about 10–\(30\, \mathrm{km\ s}^{-1}\). They speculated that the upflow indicates the expansion of outer coronal loops and that this rising motion (observed as a Doppler blueshift) causes the excess broadening.

3.3.3 Helioseismic signatures in the interior

Given the complex features of magnetic fields in flaring ARs, it is natural to ask if there is any subsurface counterpart. One of the earliest attempts to apply local helioseismology techniques to search for a statistical relation between the subsurface flow field and the flare occurrence was made by Mason et al. (2006): Fig. 30 (top). They applied the ring-diagram method to 408 ARs from the Global Oscillation Network Group (GONG) data and 159 ARs from the SOHO/MDI data to measure the vorticity of flows (\({\varvec{\omega }}=\nabla \times {{\mathbf {v}}}\)) and compared it with the total flare intensity [equivalent to the flare index FI: Eq. (8)]. It was found that the maximum unsigned vorticity components at a depth of about 12 Mm, calculated from synoptic maps of global subsurface flows that are generated by averaging the ring-diagram flow fields over 7 days (Haber et al. 2002), correlate well with flare intensities greater than \(3.2\times 10^{-5}\, \mathrm{W\ m}^{-2}\). For flare activity below this value, the relation was not apparent. Komm and Hill (2009) expanded the analysis to 1009 ARs, including non-flaring ones. As shown in the bottom panels of Fig. 30, they demonstrated a clear relation between the magnetic flux density (total magnetic flux averaged over area, in units of G) and the vorticity for flaring ARs (correlation coefficient \(CC=0.75\)). The non-flaring ARs show a similar trend, but the correlation is weaker (\(CC=0.5\)) and the mean values of flux and vorticity are smaller. The authors concluded that the inclusion of vorticity helps to distinguish between flaring and non-flaring regions.

Fig. 30 (Top) Vorticity distribution beneath a sample AR. The upper panel shows the latitudinal distribution of the unsigned magnetic flux across AR NOAA 10096 (solid) and that binned over 15\(^{\circ }\) (dashed), whereas the lower panel displays the zonal vorticity component (the east–west component: \(\omega _{x}\)) as a function of latitude and depth, with arrows denoting the meridional flows. Strong zonal vorticity of opposite sign is concentrated at the location of the AR. Image reproduced by permission from Mason et al. (2006), copyright by AAS. (Bottom) Total flare intensity of ARs during their disk passage (in units of \(10^{-3}\, \mathrm{W\ m}^{-2}\), i.e., relative to an X10 flare) as a function of unsigned maximum magnetic flux density and unsigned subsurface vorticity at \(-\,12\, \mathrm{Mm}\), plotted in linear scale to focus on large values (left) and logarithmic scale to focus on small ones (right). The colors indicate the maximum intensity of each subset. Black symbols are non-flaring ARs. Image reproduced by permission from Komm and Hill (2009), copyright by AGU
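The two flow diagnostics used in these studies and in those below, the vorticity \({\varvec{\omega }}=\nabla \times {{\mathbf {v}}}\) and the kinetic helicity density \({{\mathbf {v}}}\cdot (\nabla \times {{\mathbf {v}}})\), reduce to finite differences once a gridded subsurface flow map is in hand. A minimal sketch, assuming a uniform Cartesian grid (not the actual ring-diagram inversion grids):

```python
import numpy as np

def curl(vx, vy, vz, dx, dy, dz):
    """Vorticity omega = curl(v) on a uniform 3D grid, arrays indexed [z, y, x]."""
    wx = np.gradient(vz, dy, axis=1) - np.gradient(vy, dz, axis=0)
    wy = np.gradient(vx, dz, axis=0) - np.gradient(vz, dx, axis=2)
    wz = np.gradient(vy, dx, axis=2) - np.gradient(vx, dy, axis=1)
    return wx, wy, wz

def kinetic_helicity_density(vx, vy, vz, dx, dy, dz):
    """h_k = v . (curl v), the quantity tracked by Reinard et al. (2010)."""
    wx, wy, wz = curl(vx, vy, vz, dx, dy, dz)
    return vx * wx + vy * wy + vz * wz
```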
Reinard et al. (2010) put more focus on the temporal evolution of subsurface flow fields. By analyzing 1023 ARs with the ring-diagram method, they showed that (1) at first, about 2–3 days before the flare occurrence, the kinetic helicity density, \({{\mathbf {v}}}\cdot {\varvec{\omega }}={{\mathbf {v}}}\cdot (\nabla \times {{\mathbf {v}}})\), has a large spread in values with depth, but the spread decreases on the days of the flares, and that (2) the degree of shrinking is greater for stronger flares. The observed tendency lends support to the interpretation that subsurface rotational turbulent flows twist the magnetic fields into unstable configurations and drive the flare eruptions. Komm et al. (2011a) further applied discriminant analysis to various magnetic and subsurface flow parameters and found that the subsurface parameters improve the ability to distinguish between flaring and non-flaring ARs. The most important parameter is the structure vorticity, which estimates the horizontal gradient of the horizontal vorticity components. As an independent ring-diagram study, Lin (2014) compared the flare activity levels of 77 ARs with quantities that describe the subsurface structural disturbances. According to the author, there was no remarkable correlation between these parameters.

Another approach is to apply time-distance helioseismology. Using sequential SDO/HMI data of five flare-productive ARs, Gao et al. (2012, 2014) compared the kinetic helicity density measured from the subsurface velocity maps with the current helicity density calculated from the photospheric vector magnetograms, \({{\mathbf {B}}}\cdot (\nabla \times {{\mathbf {B}}})\), and found a good correlation between the two values. They found that eight out of a total of 11 events show a drastic amplitude change of the kinetic helicity density, and five of them are accompanied by flares stronger than the M5.0 level within 8 h, either before or after the amplitude change. The spread of the kinetic helicity density in depth also showed strong variations, which confirms the observational result of Reinard et al. (2010). Braun (2016) applied helioseismic holography to more than 250 ARs observed between 2010 and 2014. They found that individual ARs mostly show variations associated with non-flare-related evolution, although the correlations between the flare soft X-ray flux and subsurface flow indices are in general similar to those found previously by Komm and Hill (2009). Moreover, they detected no remarkable precursors or other temporal changes that are specifically associated with the flare occurrences.

It should be pointed out that, whereas a considerable number of results have been reported, there is no clear physical model that explains the statistical correlations found between flaring and the various properties of subsurface flows. For instance, it is not clear why the subsurface vorticity is correlated with the AR flux, and better so for the flaring ARs than for the non-flaring ARs (Fig. 30). Therefore, further investigation, probably with the aid of numerical simulations, is required to interpret the observational results. The difficulty also resides in the observational techniques. In many cases, the existence of strong magnetic flux (i.e., ARs) is treated as a small perturbation when solving the linear inverse problem in seismology.
However, this may not be true (see Gizon and Birch 2005, Sect. 3.7). Development of seismology techniques, again with the assistance of modeling, may overcome this shortcoming and deepen our understanding of subsurface evolutions.

3.4 Summary of this section

In this section, we have reviewed the important observational characteristics that are created in the long-term and large-scale evolution of flare-productive ARs. Many of these characteristics manifest the morphological and magnetic complexity of such ARs and prove the inherent high non-potentiality of the magnetic system.

The \(\delta \)-spots, in which the umbrae of both polarities share a common penumbra (Sect. 2.3), are formed in three ways (Sect. 3.1): Type 1 (Spot-spot), the tightly packed sunspot with multiple bipoles intertwined; Type 2 (Spot-satellite), where a newly emerging flux appears in close proximity to a pre-existing spot; and Type 3 (Quadrupole), the head-on collision of two neighboring bipoles. However, X-class flares also emanate from between two separated ARs, albeit rarely (Inter-AR). The \(\delta \)-spots develop the strong-field, strong-gradient, highly-sheared PILs, which sometimes show a magnetic channel, a narrow lane structure consisting of elongated flux threads of opposite polarities (Sect. 3.2.1). These magnetic evolutions are caused by the shearing and converging flows around the PIL, whereas remarkable sunspot rotations, both self and mutual, are also observed (Sect. 3.2.2). The injection of magnetic helicity is found to be temporally correlated with flare productivity, while X-class flares require a significantly higher amount of helicity injection (Sect. 3.2.3). The magnetic tongue structure is thought to be the manifestation of the emergence of twisted magnetic flux and is used as a proxy of the magnetic helicity sign (Sect. 3.2.4). In studies addressing the old question of whether AR currents are neutralized or not, the preponderance of recent evidence supports the view that electric currents are not neutralized, particularly in regions prone to exhibit large flares (Sect. 3.2.5). Twisted flux ropes, observed as H\(\alpha \) filaments and soft X-ray sigmoids, can be produced in the atmosphere above the PILs by the shearing and converging flows and the helicity injection; they eventually erupt in flares and evolve into CMEs (Sect. 3.3.1). Though more extensive surveys are desired, several works have shown that flaring ARs exhibit steeper power spectra, probably reflecting the morphological and magnetic complexity (Sect. 3.2.4), coronal upflows with excess broadening of EUV emission lines in response to the helicity injection (Sect. 3.3.2), and characteristic vorticity properties in the convection zone (Sect. 3.3.3).

Fig. 31 Evolution of AR NOAA 12673 and the formation of the flaring PIL. Image reproduced by permission from Yang et al. (2017a), copyright by AAS. a1–a3 SDO/HMI vector magnetogram at 12:00 UT on September 3, top view of the extrapolated field lines, and corresponding AIA 171 Å image, respectively. b1–b3 Similar to panels a1–a3, but for 09:48 UT on September 6. In panels a1 and b1, green arrows are overlaid to indicate bipoles A, B, C, and D, and the yellow arrow shows the pre-existing sunspot. c1 Free energy density corresponding to panel b1, overlaid with the vertical magnetic field contours at \(\pm \, 800\, \mathrm{G}\). c2, c3 Twist number \(T_{w}\) (Berger and Prior 2006) and squashing factor Q (Demoulin et al. 1996; Titov et al. 2002) distribution in the x–z plane along the cut labeled in panel c1. In panel b2, the blue field lines connect the opposite patches of bipole C and bipole D, respectively, and the red field lines indicate a flux rope along the PIL. In panels c2 and c3, the green dotted curves outline the general shape of the flux rope.
AR NOAA 12673, which appeared in September 2017 and produced numerous flares including the X9.3-class event, exemplifies the important features introduced in this section. Figure 31 by Yang et al. (2017a) shows the overall evolution and the formation of the flaring PIL. This AR rotated onto the visible disk as a simple \(\alpha \)-spot of positive polarity. On September 3, two bipolar systems A and B suddenly emerged to the east of the pre-existing central spot (panels a1–a3), and two additional bipoles C and D emerged in a more north–south direction within the first two pairs, forming a highly complex \(\delta \)-spot (panels b1–b3). This evolution is reminiscent of a Type 2 \(\delta \)-spot, but at the same time the collision of the secondary bipoles C and D is also reminiscent of the Type 3 structure. Sun and Norton (2017) pointed out that this AR showed one of the fastest flux emergence rates ever observed.

Fig. 32 High-resolution observations of the flaring PIL of AR NOAA 12673. a, b Hinode/SOT/SP LOS and transverse magnetic field strength, respectively. Note that in many pixels near the PIL, the transverse fields are saturated at 5000 G due to the limitation of the inversion algorithm. c BBSO/GST TiO image. The two white boxes in a–c mark the two strong transverse field areas at the PIL, where twisted photospheric light-bridge structures of the \(\delta \)-configuration are present. d NIRIS Stokes-U profile of a selected strong transverse field pixel at the PIL within the northern box. The direct measurement of Zeeman splitting yields a field strength of 5570 G. Image reproduced by permission from Wang et al. (2018a), copyright by AAS

As the negative polarity of D rapidly intrudes into the positive polarities, it produces a strong-field, strong-gradient, highly-sheared PIL (Fig. 31: location where the free energy is enhanced in panel c1). According to Yang et al. (2017a), because the pre-existing central spot blocks the free development of the newly-emerging fields, the \(B_{z}\) gradient at the PIL becomes much enhanced. As Fig. 32 illustrates, Wang et al. (2018a) detected exceptionally strong transverse fields of up to \(5570\, \mathrm{G}\) around this PIL. In the corona above this PIL, a flux rope structure is clearly reproduced by the NLFFF modeling (Fig. 31: red field lines in panel b2), which agrees well with the sigmoidal structure. Moreover, Verma (2018), Yan et al. (2018) and Vemareddy (2019) reported on the PIL shear flows, spot rotations, and helicity injection, respectively, which in combination seem to have activated the X9.3-class flare.

4 Long-term and large-scale evolution: theoretical aspects

As we saw in the preceding sections, the long history of solar observation has uncovered a vast number of key observational features that differentiate the flare-productive ARs from the quiescent ones. The essential questions are, of course: how are these features created, and what is the underlying physics? The other side of solar physics, the theoretical and numerical studies, may provide answers to these questions.
Because a substantial number of simulation models already exist, we introduce, as a guideline for the reader, three genres of modeling, following the discussion in Cheung and Isobe (2014). The first group is the data-inspired models, which assume an ideal simulation setup that is "inspired" by the observations. Flux emergence and flux cancellation models fall into this group. The second group is the data-constrained models, in which the models use observational data at a single moment to drive the computations. The series of extrapolated magnetic fields, computed from sequential photospheric magnetograms, is one representative model of this group. However, it is less likely that such static solutions are applicable to flare-producing, i.e., dynamically evolving, ARs. Another approach within the data-constrained models is therefore to use the extrapolated field as the initial condition and solve the time-dependent MHD equations to trace the temporal evolution. The third group, the data-driven models, goes further and utilizes a temporal sequence of observational data, such as a series of magnetograms, to drive the models. The flux emergence and flux cancellation models are introduced in Sects. 4.1 and 4.2, respectively. The data-constrained and data-driven models, which are still relative newcomers, are jointly presented in Sect. 4.3.

4.1 Flux emergence models

The fundamental premise of the formation and evolution of flaring ARs is that solar ARs are produced ultimately by emerging flux from the convection zone. Therefore, it is not surprising that many theoretical models have focused on the evolution process of flaring ARs from below the surface of the Sun, which we call the flux emergence models. These models leverage the 3D flux emergence simulations, such as those in Sect. 2.1.4, and try to capture some aspects of the observed magnetic features of flaring ARs. In fact, even classical models that assume a simple \(\varOmega \)-loop can explain some of the observed features.

Magnetic tongues: As the series of observational studies predicted, magnetic tongues, the extended magnetic patches on both sides of the PIL, are well reproduced by the emergence of a twisted flux tube (see, e.g., Fig. 4e). Archontis and Hood (2010) compared the magnetogram of AR NOAA 10808 with that produced in their numerical simulation and showed that the pattern of magnetic tongues depends on the azimuthal field of the emerging flux tube.

Flux ropes and sigmoids: It was Manchester et al. (2004) who first reproduced the flux rope structures self-consistently in a 3D flux emergence simulation. In their model, where the buoyant segment of the flux tube is shorter than in Fan (2001b)'s model, the upper part of the emerged flux tube becomes detached from the main body and forms a coronal flux rope that erupts into the higher atmosphere as in a CME. Archontis and Török (2008) explained the formation of a flux rope as magnetic reconnection between a set of emerging loops. Because the original flux tube is twisted, the emerged loops are sheared above the PIL and reconnect with each other, forming a flux rope structure. Archontis et al. (2009) revealed that the electric current sheets, which originally have a pair of J-shaped configurations, are joined to form a sigmoid structure as observed in soft X-rays. Similar sigmoid structures were observed in the models by, e.g., Magara (2006), Fan (2009b) and Archontis and Hood (2012).
Shear flows: The essential driver of the shear flows in the emergence simulations is the Lorentz force acting in opposite directions on the two sides of the PIL (Manchester 2001). When the twisted flux tube emerges into the atmosphere, the rapid expansion deforms the field lines of the flux tube and drives the shear flows around the PIL. Fan (2001b) and Manchester et al. (2004) explained the twisting up of the coronal field as a shear Alfvén wave propagating upward, while Fan (2009b) interpreted it as a torsional Alfvén wave. The horizontal velocity vectors of Fig. 4e clearly display the shear flows around the PIL.

Helicity injection: The injection of magnetic helicity flux through the photosphere was investigated by Magara and Longcope (2003), who revealed that in the earliest stage the emergence term dominates, which then declines, and the shear term becomes the main source of the helicity injection for the rest of the period (see Sect. 3.2.3 for the definition of the terms). The helicity transport by the shear term is explained by the horizontal shearing and rotational motions at the footpoints of the emerged magnetic fields (Longcope and Welsch 2000; Fan 2009b).

Spot rotation: This can be considered a subtopic of the helicity injection. Longcope and Welsch (2000) proposed a theoretical model that treats both the expanded twisted flux tube in the corona and the part remaining in the convection zone. In this model, as a twisted tube emerges, a torsional Alfvén wave propagates downward into the convection zone due to the mismatch of twists between the two layers and causes the spot rotation. Magara and Longcope (2003) and Magara (2006) found that the rotational flows are formed in each of the spots soon after the rising flux tube becomes vertical, whereas Fan (2009b) showed that significant vortical motions develop as a torsional Alfvén wave propagates along the flux tube. Sturrock et al. (2015) used a toroidal tube model (Hood et al. 2009) and revealed that the two sunspots genuinely rotate (i.e., the rotation is not an apparent effect). They explained the rotation by the unbalanced torque produced by magnetic tension.

(Im)balance of electric currents: Török et al. (2014) considered the emergence of a flux tube that contains neutralized electric currents (i.e., the situation where the direct current along the axis is balanced by the return current at the tube's periphery). As significant emergence to the surface begins, the current rapidly deviates from the neutralized state, and the total direct current remains several times larger than the return current throughout the whole evolution. They suggested that when the tube approaches the surface, the return current is pushed aside by the direct current. Also, most of the return currents remain beneath the surface because the tube does not undergo a bodily emergence. It was therefore concluded that ARs are born on the surface with substantial net electric currents.

The above features are formed as parts of the relaxation processes in which the twist of the flux tube is released through the emergence from the convection zone to the corona. However, in most of these numerical models, which assume a simple buoyant emergence of flux tubes, other important characteristics of flaring ARs, such as tightly-packed \(\delta \)-spots with strong-field, strong-gradient, highly-sheared PILs, are not reproduced.
The two photospheric footpoints of the emerging \(\varOmega \)-loops tend to separate monotonically and never form a converged, \(\delta \)-shaped structure. Therefore, to overcome this difficulty, one needs to assume subsurface magnetic fields with not-so-simple configurations.

4.1.1 Kinked tube model

The idea of the emergence of a kink-unstable magnetic flux tube is inspired by the observations of flare-productive ARs, especially of Type 1 \(\delta \)-spots (see Sect. 3.1). These regions have a compact morphology and strong twists, and the tilt often deviates so much from parallel to the equator that it sometimes even violates Hale's polarity rule. The 3D configurations inferred from the proper motions of the spots strongly suggest the emergence of "a knotted twisted flux tube" (Tanaka 1991, see Fig. 15a of this article).

Fig. 33 Conversion of twist and writhe. When a straight twisted ribbon (top) is loosened, the original twist converts into the writhe of the coiled ribbon (bottom). In an analogous way, a twisted flux tube deforms into a curled shape if the twist is sufficiently strong, which is the helical kink instability.

According to Kurokawa (1991), it was Piddington (1974) who first proposed the concept of emerging twisted flux tubes as the energy source in the Alfvén wave theory of solar flares. In the "Appendix", we review the history of who first suggested the kink instability as the formation mechanism of the \(\delta \)-spots.

The helical kink instability is the instability of a highly-twisted flux tube, in which the twist of the tube (turning of the field lines around the tube's axis) is converted to writhe (turning of the axis itself) due to helicity conservation (see Fig. 33: Berger and Field 1984; Moffatt and Ricca 1992). It was applied to laboratory plasma (e.g., Shafranov 1957; Kruskal et al. 1958) and to coronal plasma (e.g., Gold and Hoyle 1960; Anzer 1968; Raadu 1972; Hood and Priest 1980, 1981), before Linton et al. (1996) considered the kink instability of flux tubes in a high-\(\beta \) plasma. For a uniformly twisted cylindrical flux tube with the axial and azimuthal fields of \(B_{x}(r)\) and \(B_{\phi }(r)=qrB_{x}(r)\), respectively, where r is the radial distance from the tube's axis and the twist q is constant, the flux tube becomes unstable against the kink instability when q exceeds a critical value
$$\begin{aligned} q_{\mathrm{cr}}=a^{-1}, \end{aligned}$$
where \(a^{-2}\) is the coefficient of the \(r^{2}\) term in the Taylor series expansion of the axial field \(B_{x}\) about the tube axis: \(B_{x}(r)=B_{\mathrm{tube}}(1-a^{-2}r^{2}+\cdots )\). In the case of the commonly used Gaussian flux tubes, in which
$$\begin{aligned} B_{x}(r)=B_{\mathrm{tube}}\exp {\left( -\frac{r^{2}}{R_{\mathrm{tube}}^{2}}\right) } \end{aligned}$$
and
$$\begin{aligned} B_{\phi }(r)=qrB_{x}(r), \end{aligned}$$
with \(R_{\mathrm{tube}}\) being the typical radius of the tube, the critical twist is simply expressed as \(q_{\mathrm{cr}}=R_{\mathrm{tube}}^{-1}\). Linton et al. (1996) also argued that, as the flux tube rises through the convection zone, the originally stable tube may become unstable because the tube expands (\(R_{\mathrm{tube}}\) increases) due to the decreasing surrounding pressure, which lowers the critical twist (\(q_{\mathrm{cr}}\) decreases).

Fig. 34 Emergence of a kink-unstable flux tube. Image reproduced by permission from Fan et al. (1998b), copyright by AAS. (Left) Snapshot of the flux tube during its rise as viewed from the side. The color shading indicates the absolute magnetic field strength. (Right) Horizontal cross-section of the upper portion of the flux tube (indicated by the yellow plane in the left panel). The contours denote the vertical magnetic field \(B_{z}\), with solid (dotted) contours representing positive (negative) \(B_{z}\). The arrows show the horizontal magnetic field.
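For the Gaussian tube just defined, the criterion is simply \(q>1/R_{\mathrm{tube}}\), and the destabilization-by-expansion argument of Linton et al. (1996) follows directly; a small sketch with illustrative numbers:

```python
def is_kink_unstable(q, r_tube):
    """Gaussian flux tube: kink-unstable when the twist q exceeds 1/R_tube."""
    return q > 1.0 / r_tube

# A tube that is stable at depth can become unstable during its ascent if the
# expansion (growing R_tube) lowers q_cr = 1/R_tube below the tube's twist q.
q = 0.8  # twist [Mm^-1], illustrative
for r_tube in (1.0, 1.5, 2.0):  # tube radius [Mm] growing during the rise
    print(r_tube, is_kink_unstable(q, r_tube))  # -> False, True, True
```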
The first 3D non-linear simulation of kink-unstable emergence was performed by Matsumoto et al. (1998) to reproduce the sequence of sigmoidal ARs (top left panel of Fig. 26). Linton et al. (1998, 1999) performed linear and nonlinear calculations of the kink instability in a uniform medium without taking into account the effects of gravity and the stratification of the external plasma. Using a 3D anelastic MHD code, Fan et al. (1998b, 1999) calculated the emergence in an adiabatically stratified atmosphere representing the solar convection zone (Fig. 34) and found that, due to the kink instability, the writhing of the tube increases the buoyancy at the apex and accelerates the emergence. The horizontal cross-section of the tube shows a compact bipolar pair of \(B_{z}\) with a highly sheared horizontal field along the PIL, and the line connecting the two polarities is deflected by more than 90\(^{\circ }\) from its original orientation. These structures are highly reminiscent of the \(\delta \)-spots.

Fig. 35 3D magnetic structure and photospheric and chromospheric fields \(B_{z}\). Yellow and blue field lines denote the field lines passing by the current sheet between the two arcades. White field lines denote those enveloping the arcade. Purple and white field lines denote those created by reconnection between the blue and yellow magnetic loops. a–c Bird's eye view. d Top view. e Schematic diagram of the magnetic field lines. f Schematic diagram of the magnetic field structure shown in panel d. Images reproduced by permission from Takasao et al. (2015), copyright by AAS. (For movie see Electronic Supplementary Material)

However, because these emergence simulations were confined to the convection zone, it remained unclear whether the kinked tubes can really produce the observed characteristics when they emerge into the atmosphere. To overcome this issue, Takasao et al. (2015) performed a fully compressible MHD simulation in which a subsurface kink-unstable flux tube rises from the convection zone seamlessly into the solar corona. In their model, the rising flux tube develops a knotted structure as in the previous simulations (e.g., Fan et al. 1998b; Linton et al. 1999) and, at the top-most convection zone, it undergoes a strong horizontal expansion due to the strong stratification and deforms into a pancake-like shape (two-step emergence, a commonly observed feature of large-scale flux emergence models: see Sect. 2.1.1 and Fig. 2). Interestingly, as opposed to the simple bipolar structure observed in the kinked-tube simulations limited to the convection zone (right panel of Fig. 34), the photospheric magnetogram in Fig. 35 shows a quadrupolar structure consisting of the main bipolar pair of large roundish spots that appears in the earlier phase and the narrow, elongated middle pair formed later. The middle pair is created by the submergence of dipped fields, which are part of the emerged magnetic fields (see also the accompanying movie).
35 show that magnetic reconnection takes place between the two emerging loops (blue and yellow field lines) and creates lower-lying and overlying post-reconnection field lines (purple and white field lines, respectively). Here, the lower-lying fields are almost parallel to the central PIL. It is also found that, as a consequence of the Lorentz force exerted by the two emerging loops (expanding arcades) on both sides of the central PIL, a strong converging flow is excited around it and the horizontal magnetic field becomes aligned more nearly parallel to it. Later, Knizhnik et al. (2018) surveyed the evolution of kink-unstable tubes by varying the twist intensity. They revealed, for example, that the separation of the two polarities on the surface becomes smaller (i.e., more compact) with increasing twist, which underpins the kink instability as a promising candidate for explaining \(\delta \)-spot formation. It should be noted that the twists assumed in these simulations may be too strong compared to the twists of actual ARs. Pevtsov et al. (1994, 1995) quantified the twist of ARs by calculating the force-free parameter \(\alpha \), the constant of a force-free field \(\nabla \times {{\mathbf {B}}}=\alpha {{\mathbf {B}}}\) (see Sect. 4.3.1) measured from the vector field as $$\begin{aligned} \alpha =\frac{\left[ \nabla \times {{\mathbf {B}}}\right] _{z}}{B_{z}} =\frac{1}{B_{z}} \left( \frac{\partial B_{x}}{\partial y}-\frac{\partial B_{y}}{\partial x} \right) , \end{aligned}$$ and averaging it over the AR to obtain one global estimate of the twist. The observed \(\alpha \) is typically of the order of 0.01–\(0.1\, \mathrm{Mm}^{-1}\) (e.g., Pevtsov et al. 1995; Leka et al. 1996; Longcope et al. 1998), which yields \(q\lesssim 0.1\, \mathrm{Mm}^{-1}\) under the simple relation \(\alpha \approx 2q\) (Longcope and Klapper 1997), though there remains a possibility that the observed ARs are biased toward regular, flare-quiet ones owing to selection effects. On the other hand, the threshold twist for the kink instability is, say, \(q_{\mathrm{cr}}=1\, \mathrm{Mm}^{-1}\) for a typical tube radius of 1 Mm in the deeper convection zone. Therefore, the twists of the flux tubes assumed in the simulations, \(q>q_{\mathrm{cr}}=1\, \mathrm{Mm}^{-1}\), are at least one order of magnitude larger than the observed AR twists, \(q\lesssim 0.1\, \mathrm{Mm}^{-1}\), even though each elementary bipole in ARs may satisfy the assumed condition (Longcope et al. 1999). 4.1.2 Multi-buoyant segment model Type 3 \(\delta \)-spots like the quadrupolar AR NOAA 11158 (Fig. 14), in which two emerging bipoles collide against each other to form a \(\delta \)-structure with a flaring PIL in between, are suggestive of a subsurface linkage of the two bipoles. That is, the observed bipoles are the two emerging sections of a single subsurface flux system, perhaps distorted by convective buffeting during its rise (Fig. 15c). (Left) Evolution of the buoyant flux tube in the 3D convective flow for the case where the initial axial field is comparable to the equipartition field (\(B_{\mathrm{tube}}=B_{\mathrm{eq}}\)). The image shows the volume rendering of the absolute magnetic field strength of the flux tube. (Right) Two different views of the same tube at the final state, showing that the apex is pushed down by a local downflow. Image reproduced by permission from Fan et al. 
(2003), copyright by AAS An emerging flux tube can be affected by the convection when the hydrodynamic force dominates the restoring magnetic tension of the bent flux tube (Fan 2009a): $$\begin{aligned} \frac{B_{\mathrm{tube}}^{2}}{4\pi L} \lesssim C_{\mathrm{D}} \frac{\rho v^{2}}{\pi R_{\mathrm{tube}}}, \end{aligned}$$ which yields $$\begin{aligned} B_{\mathrm{tube}} \lesssim \left( \frac{C_{\mathrm{D}}}{\pi } \frac{L}{R_{\mathrm{tube}}} \right) ^{1/2} B_{\mathrm{eq}} \sim \mathrm{a\ few}\ B_{\mathrm{eq}}, \end{aligned}$$ where \(B_{\mathrm{eq}}=(4\pi \rho )^{1/2}v\) is the equipartition field strength, at which the magnetic energy density is comparable to the kinetic energy density of the convective flows, \(B_{\mathrm{eq}}^{2}/(8\pi )=\rho v^{2}/2\); here, L and v are the size scale and speed of the convection, respectively, and \(C_{\mathrm{D}}\) is the aerodynamic drag coefficient, which is of order unity. At the bottom of the convection zone, \((L/R_{\mathrm{tube}})^{1/2}=3\)–5 and \(B_{\mathrm{eq}}\sim 10\, \mathrm{kG}\) (Fan 2009a). In fact, Fan et al. (2003) numerically demonstrated that flux tubes of \(B_{\mathrm{tube}}\sim B_{\mathrm{eq}}\) are significantly influenced by turbulent convection. As Fig. 36 shows, the sections of the emerging flux tube within convective upflows are strongly pushed up while the downdraft sections are pinned down. Intriguingly, the apex of the rising \(\varOmega \)-tube encounters another local downdraft and takes on an M-shaped structure. (Top) Emergence of a double-buoyant-segment flux tube. Shown is the temporal evolution of the vertical field at the surface (photospheric magnetogram). Two emerging bipoles P1–N1 and P2–N2 collide at the center and form a sheared PIL with a compact \(\delta \)-spot structure. (Bottom) Relative motion of the photospheric polarities N1 and P2 for a AR NOAA 11158 (Fig. 14), b the simulation with a single double-buoyant-segment tube (i.e., top panels), and c another simulation with two parallel tubes. The center of each diagram indicates the position of N1 and the horizontal axis is parallel to the x-axis. The approach of the two polarities in NOAA 11158 is reproduced only in the single-tube model. Image reproduced by permission from Toriumi et al. (2014b), copyright by Springer Such a situation was modeled by Toriumi et al. (2014b), who reproduced NOAA 11158 (Fig. 14) by simulating the emergence of a single horizontal flux tube that rises at two sections along the tube. As the photospheric magnetogram of Fig. 37 (top) displays, the two buoyant segments produce a pair of emerging bipoles P1–N1 and P2–N2, and the inner polarities (N1 and P2) become tightly packed to create a \(\delta \)-spot. The strong confinement of the central polarities happens because the two emerging loops (P1–N1 and P2–N2) are joined by a dipped field beneath the photosphere. These authors also modeled the emergence of two buoyant flux tubes that are placed closely in parallel (but not connected). In this case, the inner polarities of the two emerging bipoles move closer but simply fly by each other and never form a compact \(\delta \)-spot. The bottom panels of Fig. 37 compare the relative motion of the two inner polarities (the time evolution of the vector from N1 to P2) for NOAA 11158, the single-tube case, and the double-tube case. In the actual AR (see also Fig. 14), P2 continuously drifts along the southern edge of N1 from east to west in a counter-clockwise direction and approaches N1, producing a highly-sheared, strong-gradient PIL. 
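The relative-motion diagrams at the bottom of Fig. 37 boil down to tracking the flux-weighted centroids of the two inner polarities in a magnetogram sequence. The following minimal Python sketch illustrates such a measurement; the array names, the boolean region masks roi_n1 and roi_p2, and the pixel size are hypothetical inputs, and this is an illustration of the diagnostic rather than the actual analysis code of Toriumi et al. (2014b).

```python
import numpy as np

def centroid(bz, mask, sign):
    """Flux-weighted centroid (x, y) of one polarity inside a region mask."""
    w = np.where(mask, np.clip(sign * bz, 0.0, None), 0.0)  # keep one sign only
    y, x = np.mgrid[0:bz.shape[0], 0:bz.shape[1]]
    return np.array([(x * w).sum(), (y * w).sum()]) / w.sum()

def separation_history(bz_series, roi_n1, roi_p2, pixel_size_mm):
    """Separation of N1 (negative) and P2 (positive) centroids vs. time [Mm]."""
    return np.array([
        np.linalg.norm(centroid(bz, roi_p2, +1.0) - centroid(bz, roi_n1, -1.0))
        * pixel_size_mm
        for bz in bz_series
    ])
```

The bottom diagrams of Fig. 37 are, in essence, plots of the vector between these two centroids over time.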
Between the two simulation cases, only the single-tube case shows the monotonic decrease of the distance. Therefore, they concluded that, of the two scenarios, this Type 3 quadrupolar AR is more likely created from a single multi-buoyant-segment flux tube. Simulation results by Fang and Fan (2015), showing the 3D structure of the M-shaped emerging loops (red lines) at three different time steps. The plane shows the photospheric magnetogram. Note that the notation of the four polarities is different from that in Figs. 14 and 37. In the final state, magnetic reconnection between the two loops (red) produces overlying (magenta) and low-lying (blue) field lines. Image reproduced by permission, copyright by AAS Exactly the same situation was investigated later by Fang and Fan (2015), but in a much larger computational domain of realistic AR size, with an adaptive mesh refinement code to resolve fine-scale structures. Figure 38 shows three snapshots from their simulation, in which the M-shaped emerging loop produces two arcades in the corona and, through magnetic reconnection, overlying and lower-lying field lines, as expected from the coronal observations of NOAA 11158 (see bottom panels of Fig. 14). The striking consistency between the more realistic simulation and the observation further supports the scenario of multi-buoyant-segment flux tubes for the Type 3 \(\delta \)-spots. 4.1.3 Interacting tube model Another possible origin of the complexity of ARs is the subsurface interaction of multiple rising flux systems. Based on the study of potential flow around circular cylinders, Parker (1978, 1979b) predicted that when two cylindrical flux tubes rise in a fluid one above the other, the lower tube is attracted toward the other because of the wake of the tube ahead and, when rising side by side, the tubes attract each other due to the Bernoulli effect. However, from 2D simulations of the cross-sectional evolution, Fan et al. (1998a) found that the interaction of the two tubes is much more complicated. When the tubes rise side by side, because the wake behind each tube interacts with that of the other, each tube sheds a succession of eddies of alternating signs and gains a Magnus force in the lateral direction, leading to repeated attractive and repulsive motions during their ascent. On the other hand, when the tubes do not have the same initial height, the tube behind is drawn into the wake of the tube ahead and eventually merges with it. At the interface between the two tubes, dissipation of oppositely directed field components (twists) occurs. (Top) Polar plots showing the types of interaction of right-handed (R) and left-handed (L) twist tubes. Each radial spoke corresponds to a simulation RLi, where one R tube is in the reference position and another tube is in front of it, rotated by an angle \(i\pi /4\) clockwise to it in such a way that RL0/RR0 is parallel and RL4/RR4 is anti-parallel. The solid curves show \(2(\mathrm{KE}_{\mathrm{peak}}-\mathrm{KE}_{0})/\mathrm{ME}_{0}\), where KE\(_{\mathrm{peak}}\) is the peak global kinetic energy during the simulation, KE\(_{0}\) is the initial global kinetic energy, and ME\(_{0}\) is the initial global magnetic energy. The dashed curves show the global magnetic energy near the end of the simulations normalized by ME\(_{0}\). The dotted circles are the normalized energy levels of 0.15 and 0.3. (Bottom) Merge interaction of RR0. 
Isosurface of \(|B|_{\mathrm{max}}/3\) and field lines for three time steps are shown. Image reproduced by permission from Linton et al. (2001), copyright by AAS Linton et al. (2001) focused more on the magnetic reconnection between two strongly-twisted flux tubes in a 3D low-\(\beta \) volume (i.e., the solar corona) to study the triggering of flares and eruptions. They found that, depending on the helicity (twist handedness) and the relative angle of the tube axes, the interaction can be classified into four distinct classes (see Fig. 39): (1) bounce, in which the two tubes bounce off each other with very little reconnection, occurring for example between parallel counter-helicity tubes (RL0); (2) merge, in which the tubes merge due to reconnection of azimuthal components, e.g., between parallel co-helicity tubes (RR0: bottom of Fig. 39); (3) slingshot, in which the tubes reconnect and "slingshot" away in a manner analogous to classical 2D reconnection, e.g., between anti-parallel counter-helicity tubes (RL4); and (4) tunnel, in which field lines of the tubes undergo reconnection twice and the tubes pass through each other, occurring when co-helicity tubes are placed in the orthogonal direction like RR6. These interactions were also investigated by Sakai and Koide (1992). Linton and Antiochos (2005) and Linton (2006) demonstrated that the situations may differ depending on the level of twist and the balance of magnetic flux contained in the two tubes. (Top) Two snapshots from the simulation of interacting orthogonal flux tubes. The field lines are colored according to the local \(B_{z}\), while the red isosurface gives a constant-|B| layer. (Bottom) Synthesized magnetogram at the photospheric height, in which darker and lighter colors represent \(B_{z}<0\) and \(B_{z}>0\), respectively. The green and blue lines are selected field lines, traced from the upper and lower tubes, respectively. Image reproduced by permission from Murray and Hood (2007), copyright by ESO Murray and Hood (2007) simulated the interaction of emerging flux tubes in a stratified high-\(\beta \) medium representing the solar interior. They examined cases where two horizontal tubes are placed in such a way that the lower one is buoyant whereas the upper one remains stable. For the case of parallel tubes, or LL0 (the mirror symmetry of RR0) following the notation of Linton et al. (2001), they found that the tubes gradually merge, though not totally, and the photospheric magnetogram shows a simple yin–yang pattern similar to that of the single-tube case (like in Fig. 4). Of more interest is the case with orthogonal tubes in Fig. 40, or LL2 (corresponding to RR6), where the two tubes are expected to perform a slingshot reconnection due to their lower degrees of twist (Linton 2006). The authors found that, contrary to this expectation, the two tubes do not undergo a complete slingshot because the tubes differ greatly in strength. The resultant magnetogram becomes much more complicated. As Fig. 40 illustrates, the polarity layout is at first positive–negative from left to right when the upper tube emerges. However, as the lower tube reaches the photosphere, the layout reveals a quadrupolar structure and transitions to negative–positive, eventually recovering the classical yin–yang pattern. (Top) Simulation results of global-scale toroidal loops for the case with the same axial field but opposite handedness (RL0), which is illustrated as the cartoon. 
The panels in the first row and in the middle of the second row show the radial magnetic field in the near-top layer at \(0.93R_{\odot }\). The right panel in the second row shows the radial current, on which the contours of the radial field at 80% (thick) and 20% (thin) of its maximum (solid) and minimum (dashed) are overplotted. The magenta arrows point to the PILs. Due to the bounce interaction of the emerging tubes, the surface magnetogram shows two emerging bipoles with different helicity signs. (Bottom) The same as the top panels but for the case with the same handedness and axial field (RR0). In this merging case, the emerging region consists of a large single bipole but shows a higher degree of non-neutralized currents. Image reproduced by permission from Jouve et al. (2018), copyright by AAS The interaction of two emerging flux tubes inside the solar interior was also examined by Jouve et al. (2018) on a global scale. By extending their anelastic MHD models of flux emergence in a spherical convective shell with large-scale mean flows (e.g., Jouve et al. 2013), they conducted simulations of pairs of emerging toroidal loops that have different combinations of twist handedness and axial direction. They found that if the two loops are given opposite handedness and the same axial direction, or the same handedness but opposite axial direction, they bounce off each other as they rise, in good agreement with RL0 and RR4 of Linton et al. (2001). Consequently, as in the top panels of Fig. 41, the map of the radial magnetic field near the top boundary (a proxy for the solar surface) shows a quadrupolar region consisting of two emerging bipoles. On the other hand, the case with parallel co-helicity loops (corresponding to RR0) yields a simple bipolar pattern due to the merging of the loops [Fig. 41(bottom)], just like the first model of Murray and Hood (2007). However, in such a case, the non-neutralized currents, suggested to be the origin of eruptive events (Sect. 3.2.5), are much more pronounced than in the other cases because the return currents contained in the periphery of each loop are annihilated at the current sheet between the merging loops. In the series of simulation runs by Jouve et al. (2018), a variety of AR structures, from simple bipolar to complex quadrupolar ones, are formed by the interaction of two rising flux tubes. Since the magnetograms investigated in this study are taken at \(0.93R_{\odot }\) (i.e., about 50 Mm below the actual solar surface) owing to the limitation of anelastic models, further investigations with fully compressible calculations that allow direct access to the surface are needed to clarify how much of the emerging flux actually reaches the photosphere and which AR configurations are possible there. ARs with a much higher degree of complexity were modeled by Prior and MacTaggart (2016), who simulated the buoyant emergence of braided magnetic fields from the convection zone to the corona. For instance, their "pigtail" field, in which three flux tubes are entangled with each other, develops a magnetogram with a number of positive and negative polarities intertwined: see Fig. 13 of Prior and MacTaggart (2016). 4.1.4 Effect of turbulent convection As we have discussed in Sect. 
2.1 and above, thermal convection exerts a diverse range of impacts on the emerging flux, and a series of realistic simulations has revealed the dynamic interactions between the magnetic fields and convective flows, such as the boosting up and pinning down of large-scale emerging fields (Fan et al. 2003; Jouve and Brun 2009), the elongation of surface granular cells (Martínez-Sykora et al. 2008; Cheung et al. 2008), and the local undulation of emerging fields (Tortosa-Andreu and Moreno-Insertis 2009; Fang et al. 2010; Cheung et al. 2010). Temporal evolution of the vertical magnetic field at the solar surface at a 3:45:00, b 4:15:00, c 5:10:00, d 5:35:00, e 6:23:00, and f 7:41:00 from the start of the simulation. Arrows show the horizontal velocity field. Noticeable shearing/converging flows are highlighted with the boxes. Image reproduced by permission from Fang et al. (2012b), copyright by AAS Comparison of the model field (blue) with the extrapolated potential field (red) at the times of 04:05:00, 04:25:00, 04:45:00, and 05:05:00, plotted on the photospheric magnetogram. The formation of the non-potential sigmoidal field is clearly seen. Image reproduced by permission from Fang et al. (2012a), copyright by AAS Fang et al. (2012a, b) simulated the buoyant rise of a twisted flux tube from a convection zone in which turbulent convection resides. Figure 42 shows the evolution of the photospheric magnetograms, which reveals the rapid growth of magnetic concentrations (spots) with an unsigned total flux of up to \(1.37\times 10^{21}\, \mathrm{Mx}\) (at \(t=5\) h), the strong spot rotations (see the large negative spot at \(x=6\, \mathrm{Mm}\)), and the shearing and converging motions around the PIL. Here, both the shearing and rotational motions are driven by the Lorentz force, and these motions transfer the magnetic energy and helicity into the corona (consistent with, e.g., Manchester 2001; Fan 2001b). The authors found that the convection-driven converging flow produces a strong magnetic gradient and flux cancellation at the PIL. Together with the shear flow, the field lines above the PIL undergo a tether-cutting reconnection and produce long overlying sheared arcades and short submerging loops (Moore et al. 2001). Comparison of the model and extrapolated field lines in Fig. 43 clearly illustrates the development of a non-potential, sigmoidal structure above the PIL that is covered by the more potential coronal loops. A similar convective emergence simulation was also performed by Chatterjee et al. (2016), who employed a horizontal magnetic flux sheet instead of a tube at the start of the simulation. The flux sheet breaks up into several flux bundles due to the undular-mode instability (Fan 2001a) and develops into a large-scale U-shaped loop, which appears in the photosphere as a pair of colliding flux concentrations (i.e., a \(\delta \)-spot). The strong cancellation between the two spots manifests itself as a series of flare eruptions with magnitudes comparable to GOES C- and B-class events (Korsós et al. 2018). Through the creation of a \(\delta \)-spot and the flaring activity, they observed the repeated formation of cool dense filaments above the PIL and the ejection of helical flux ropes. Another intriguing possibility of \(\delta \)-spot formation was suggested by Mitra et al. (2014), who conducted a direct numerical simulation of a strongly stratified dynamo with forced turbulence. 
Their 3D computational box holds two-layered turbulence: helical turbulence sustaining a large-scale dynamo in the lower layer and non-helical turbulence in the upper layer. As a result, they observed the formation of strong bipolar flux concentrations with super-equipartition fields, which sometimes move closer together to take on a \(\delta \)-spot configuration. While the large-scale magnetic field in the deeper layer is created through a large-scale dynamo (\(\alpha \) effect), the spontaneous spot formation in the upper layer may be due to the so-called negative effective magnetic pressure instability (NEMPI), which is caused by the suppression of the turbulent hydromagnetic pressure and tension by the mean magnetic field (Brandenburg et al. 2011). 4.1.5 Toward the general picture The numerical simulations introduced above suggest the possibility that different types of flare-productive ARs have different subsurface origins and evolution histories (Zirin and Liggett 1987; Toriumi et al. 2017b). For example, the \(\delta \)-spots of Types 1 (Spot-spot) and 3 (Quadrupole) may be produced from kinked and multi-buoyant-segment flux systems, respectively (Linton et al. 1999; Fan et al. 1999; Takasao et al. 2015; Toriumi et al. 2014b; Fang and Fan 2015). 3D numerical simulations of the four representative types of flare-productive ARs, as introduced in Fig. 17. Images and movie reproduced by permission from Toriumi and Takasao (2017), copyright by AAS. (Top) Polarity distributions. (Second) Schematic diagrams showing the numerical setup. (Third) Surface vertical magnetic fields (magnetogram). The green arrows for the Spot-satellite case point to the satellite spots, which originate from the parasitic flux tube. (Bottom) Magnetic field lines. The green field lines are for the parasitic tube and the parallel tube. (For movie see Electronic Supplementary Material.) The accompanying movie shows the temporal evolutions for the four cases In order to scrutinize the differences between the above three cases plus another type of X-flaring AR, the Inter-AR case, created by two independent but closely neighboring episodes of flux emergence, Toriumi and Takasao (2017) conducted a systematic survey of flux emergence simulations using numerical conditions that differ as little as possible, and explored the formation of \(\delta \)-spots, flaring PILs, and their evolution processes. Figure 44 summarizes the numerical conditions and results. For the Spot-spot case, the initial twist strength is intensified so as to exceed the critical value for the kink instability (Linton et al. 1996, see also Sect. 4.1.1). The Spot-satellite case is modeled by introducing a parasitic flux tube above the main tube in a direction perpendicular to it, a situation similar to the interacting tube models in Sect. 4.1.3. The Spot-satellite may also be produced from a single bifurcating tube, which, however, was not considered for the sake of simplicity. The Quadrupole flux tube has two buoyant sections along the axis, resembling the simulations in Sect. 4.1.2. Finally, for the Inter-AR case, two flux tubes are placed in parallel. As the movie of Fig. 44 demonstrates, all cases except for Inter-AR produce \(\delta \)-shaped polarities with strongly-sheared, strong-gradient PILs in their cores that are coupled with flow motions, but the most drastic evolution appears in the Spot-spot case. As discussed in Sect. 4.1.1, the knotted apex enhances the buoyancy, leading to the fastest emergence among the four cases. 
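As a quick numerical check of the twist values involved here, one can compare the kink-instability threshold for a Gaussian tube, \(q_{\mathrm{cr}}=R_{\mathrm{tube}}^{-1}\) (Sect. 4.1.1), with the twist inferred from the observed force-free parameter via \(\alpha \approx 2q\). A minimal sketch, using only the representative numbers already quoted in the text:

```python
# Kink-instability threshold vs. twist inferred from observed alpha.
R_tube = 1.0              # typical tube radius in the deeper convection zone [Mm]
q_cr = 1.0 / R_tube       # critical twist for a Gaussian tube [Mm^-1]

alpha_max = 0.1           # upper end of observed force-free alpha [Mm^-1]
q_obs = alpha_max / 2.0   # alpha ~ 2q (Longcope and Klapper 1997)

print(f"q_cr = {q_cr:.2f} Mm^-1, q_obs <= {q_obs:.2f} Mm^-1")
print(f"required/observed twist ratio >= {q_cr / q_obs:.0f}")
```

The resulting factor of \({\gtrsim }\,20\) restates the caveat of Sect. 4.1.1: the twist imposed in the Spot-spot setup is well above what AR-averaged \(\alpha \) measurements suggest.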
The total unsigned magnetic flux in the photosphere $$\begin{aligned} \varPhi =\int _{z=0} |B_{z}|\, dS \end{aligned}$$ and the free magnetic energy stored in the atmosphere $$\begin{aligned} \varDelta E_{\mathrm{mag}}\equiv E_{\mathrm{mag}}- E_{\mathrm{pot}} =\int _{z>0} \frac{{{\mathbf {B}}}^{2}}{8\pi }\, dV -\int _{z>0} \frac{{{\mathbf {B}}}_{\mathrm{pot}}^{2}}{8\pi }\, dV, \end{aligned}$$ where \({{\mathbf {B}}}_{\mathrm{pot}}\) is the potential field, are also largest for the Spot-spot case. Modeled 3D magnetic structures for the four types of flare-producing ARs in Toriumi and Takasao (2017). The purple field lines are the newly formed flux rope structure, created through magnetic reconnection of the emerged loops indicated with yellow and green lines. Except for the Spot-spot case, the flux ropes are exposed and have access to outer space. In contrast, the Spot-spot flux rope is covered by the overlying arcade. Image reproduced by permission, copyright by AAS It is also suggested from these models that the difference in the initial simulation setup may determine the fate of a CME eruption. As shown in Fig. 45, in the Spot-satellite, Quadrupole, and Inter-AR cases, the newly formed flux rope above the sheared PIL is exposed to outer space, an ideal situation for a successful CME eruption. However, in the Spot-spot case, the flux rope is trapped and confined by the overlying loops. Such strong confinement may explain the flare-rich but CME-poor nature of the Spot-spot AR NOAA 12192 (see Fig. 1 and the discussion on successful and failed eruptions in Sect. 2.2). In addition, this model is able to account for the formation of "magnetic channels," another important feature of the flaring PILs (Zirin and Wang 1993a, see Sect. 3.2.1). In the magnetogram of the Spot-spot case (Fig. 44), one may find that the central PIL has an elongated alternating pattern of positive and negative polarities, resembling the magnetic channel. This structure is produced because the photospheric fields are highly inclined toward the horizontal and almost parallel to the PIL, with slight undulations. The series of simulations above provides a unified, general view of the birth of flare-productive ARs. Within the solar interior, probably due to convective evolution, the emerging flux systems that form \(\delta \)-spots are severely twisted to take on tortuous structures, partially pinned down to bear multiple rising segments, bifurcated into entangled branches, or collide with other flux systems and undergo mutual interactions. All of these processes tend to enhance the free magnetic energy. As the fluxes reach the photosphere, complex magnetic structures, prominently manifested by \(\delta \)-spots, sheared PILs, sheared coronal arcades, and flux ropes, develop. The \(\delta \)-spots are likely generated by multiple emerging loops instead of a single \(\varOmega \)-loop, and the different patterns of polarity layouts, such as Types 1, 2, and 3, stem from the difference in the subsurface evolution. Even two separate, seemingly independent ARs may intensify the free energy if located in close proximity (the Inter-AR case). If accumulated sufficiently, the stored free energy is released in the form of flares and CMEs. One possibility that was not considered in Toriumi and Takasao (2017) is the situation where new, delayed flux emerges into a pre-existing flux system (i.e., the concepts of successive emergence, complexes of activity, and sunspot nests in Sect. 3.1). 
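As an aside, the two global diagnostics used in this comparison, \(\varPhi \) and \(\varDelta E_{\mathrm{mag}}\) defined above, are straightforward to evaluate on a discretized snapshot. Below is a minimal sketch in Gaussian units, assuming Cartesian field arrays and a potential field b_pot computed from the same bottom boundary (e.g., with the Fourier method sketched in Sect. 4.3.1); it is illustrative, not the diagnostic code of the cited simulations.

```python
import numpy as np

def unsigned_flux(bz_surface, dx, dy):
    """Phi = integral of |Bz| dS over the photospheric plane z = 0 [Mx]."""
    return np.abs(bz_surface).sum() * dx * dy

def free_energy(b, b_pot, dx, dy, dz):
    """Delta E_mag = int (B^2 - B_pot^2) / (8 pi) dV over the z > 0 volume [erg].

    b and b_pot have shape (3, nz, ny, nx): the model and potential fields.
    """
    dV = dx * dy * dz
    return ((b ** 2).sum() - (b_pot ** 2).sum()) / (8.0 * np.pi) * dV
```

Tracking both quantities over time yields the kind of comparison made between the four cases above.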
Schrijver (2007) interpreted the formation of flaring PILs with this successive-emergence idea, and Welsch and Li (2008) broadly agreed. This situation is qualitatively similar to the Spot-satellite case, in which a minor bipole appears in close proximity to the major sunspot, but the scale is much larger. Therefore, toward a more complete view, we may need to take this successive-emergence case into account. 4.2 Flux cancellation models It is thought that coronal flux ropes can also form post-emergence as a coronal response to photospheric driving. Antiochos et al. (1994) and DeVore and Antiochos (2000) demonstrated that a sheared arcade lying above a PIL, produced by shearing motion in the photosphere (without convergence), contains a dipped structure that supports the prominence material. In the theory of van Ballegooijen and Martens (1989) (see Fig. 26), coronal loops above the PIL become sheared and converged due to photospheric motions and eventually reconnect with each other to form a flux rope. Most of the simulations based on this theory, often referred to as "flux cancellation" models, deal with the evolution of coronal field lines within a computational box above the photospheric surface, i.e., the situation after the magnetic flux has emerged. Flux cancellation model by Amari et al. (2003a). Image reproduced by permission, copyright by AAS. a Initial bipolar potential fields (i.e., \(t=0\)). A pair of counter-clockwise twisting motions is imposed at the bottom boundary from \(t=0\) to \(t_{\mathrm{s}}\), followed by a viscous relaxation from \(t=t_{\mathrm{s}}\) to \(t_{0}\). b Field lines of the magnetic configuration after the converging flow is applied from \(t_{0}=400\tau _{\mathrm{A}}\) to \(450\tau _{\mathrm{A}}\), where the unit \(\tau _{\mathrm{A}}\) denotes the Alfvén transit time. Shown is the case for \(t_{\mathrm{s}}=200\tau _{\mathrm{A}}\), in which the sheared loops are obvious around the PIL. c The state after the convergence is applied to \(t=498\tau _{\mathrm{A}}\). A helical flux rope, a low-lying arcade, and an overlying arcade are now formed through magnetic reconnection between the sheared loops. d The convergence is further applied to \(t=530\tau _{\mathrm{A}}\). The flux rope erupts upward, successively entraining the overlying arcades Figure 46 shows the representative 3D calculation by Amari et al. (2003a). Here, the original potential field (panel a) is twisted by two co-rotating vortices imposed at the photospheric boundary. After the system is relaxed (panel b), a converging motion is applied, and magnetic reconnection between the sheared loops leads to the formation of a twisted flux rope, with a small low-lying arcade below and an overlying arcade above (panel c). As the reconnection proceeds, the unstable flux rope is ejected (panel d). Several types of mechanisms have been considered for instigating the flux cancellation of sheared loops (see, e.g., Mackay et al. 2010; Aulanier 2014). Other than the converging flow (Amari et al. 2003a; Aulanier et al. 2010), proposed mechanisms include the decrease of photospheric flux through shearing motion (Amari et al. 2000, 2010), turbulent diffusion (Amari et al. 2003b; Mackay and van Ballegooijen 2006; Yeates and Mackay 2009; Aulanier et al. 2010), and the reversal of magnetic shear (Kusano et al. 2004). Flux rope formation and eruption by opposite-polarity-type emerging flux. Image reproduced by permission from Kusano et al. (2012), copyright by AAS. 
Green tubes show the field lines whose connectivity differs from the initial state, while the blue tubes in panels a and d are the original sheared arcades. The gray scale at the bottom indicates the vertical field \(B_{z}\) (white, positive; black, negative) and red contours denote the strong current layer. The initial sheared arcades (blue lines in panel a) undergo reconnection triggered by the emerging flux at the bottom boundary, and a helical flux rope is created (panels b–d). The flux rope is ejected, leaving a current sheet underneath (panels e–h) Kusano et al. (2012) investigated the process in which the sheared arcade field above the PIL reconnects to create a flux rope and erupts, triggered by emerging flux from the photospheric surface (rather than by converging flow or diffusion). This model sheds light on the importance of small-scale magnetic structures, which are often observed around flaring PILs, in the destabilization of the entire system (Toriumi et al. 2013a; Bamba et al. 2013; Wang et al. 2017b). In the particular simulation case of Fig. 47, emerging flux with a field direction opposite to that of the arcades triggers the reconnection and produces an erupting flux rope. From a systematic survey of the orientations of the arcade and the emerging flux, it was found that there exist two kinds of emerging flux capable of initiating the cancellation: the opposite-polarity type (shown in Fig. 47) and the reversed-shear type (comparable to Kusano et al. 2004). As a more recent development, Xia et al. (2014), Xia and Keppens (2016) and Kaneko and Yokoyama (2017) performed 3D flux cancellation simulations that take into account the effects of thermodynamical processes. Due to the strong radiative cooling, coronal plasma within the helical field lines of the flux rope condenses and piles up on the dipped part at the bottom. In this way, these authors reproduced filaments (prominences) in a more realistic manner than models lacking thermodynamical processes. 4.3 Data-constrained and data-driven models 4.3.1 Field extrapolation methods One way to trace the development of the coronal magnetic field is to sequentially compute the field lines from the routinely measured photospheric magnetograms by using extrapolation methods, which neglect non-magnetic forces (such as the pressure gradient) and assume that the Lorentz force vanishes, i.e., the force-free condition, $$\begin{aligned} {\mathbf {j}}\times {{\mathbf {B}}}=0, \end{aligned}$$ where \({\mathbf {j}}\) is the current density $$\begin{aligned} {\mathbf {j}}=\frac{c}{4\pi }{\mathbf {\nabla }}\times {{\mathbf {B}}}. \end{aligned}$$ The potential (current-free) field is the simplest approximation, under which \(\nabla \times {{\mathbf {B}}}=0\). The field can then be written as $$\begin{aligned} {{\mathbf {B}}}=-\nabla \psi , \end{aligned}$$ where \(\psi \) is the scalar potential, which, combined with the solenoidal condition (\(\nabla \cdot {{\mathbf {B}}}=0\)), is further rewritten as $$\begin{aligned} \nabla ^{2}\psi =0. \end{aligned}$$ The potential coronal field is calculated by solving this equation using the normal component of the photospheric field \(B_{z}\) as the boundary condition. Schrijver et al. (2005) and Schrijver (2016) assessed the non-potentiality of the coronal fields of 95 and 41 ARs by comparing potential-field extrapolations to the corresponding coronal images from the Transition Region and Coronal Explorer (TRACE; Handy et al. 1999) and SDO/AIA, respectively. 
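The potential-field boundary-value problem above has a classical closed-form solution on a periodic Cartesian patch: in Fourier space, each horizontal harmonic of \(B_{z}\) decays with height as \(e^{-kz}\). The following sketch assumes square pixels, periodic horizontal boundaries, and an approximately flux-balanced magnetogram; it is a pedagogical illustration, not one of the production extrapolation codes used in the studies cited here.

```python
import numpy as np

def potential_field(bz0, dx, nz, dz):
    """Potential-field extrapolation of a photospheric Bz map.

    Each horizontal Fourier mode of the scalar potential decays as exp(-k z),
    so B is built mode by mode from the boundary condition Bz(z=0) = bz0.
    Assumes square pixels of size dx, periodic horizontal boundaries, and a
    nearly flux-balanced bz0 (the net-flux k = 0 mode is dropped).
    """
    ny, nx = bz0.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)           # shape (ny, nx)
    k = np.hypot(KX, KY)
    k[0, 0] = 1.0                          # placeholder; that mode is zeroed
    bz_hat0 = np.fft.fft2(bz0)
    bz_hat0[0, 0] = 0.0                    # drop the mean (net-flux) component
    bx = np.empty((nz, ny, nx)); by = np.empty_like(bx); bz = np.empty_like(bx)
    for iz in range(nz):
        bz_hat = bz_hat0 * np.exp(-k * iz * dz)
        bx[iz] = np.fft.ifft2(-1j * KX / k * bz_hat).real  # Bx = -d(psi)/dx
        by[iz] = np.fft.ifft2(-1j * KY / k * bz_hat).real  # By = -d(psi)/dy
        bz[iz] = np.fft.ifft2(bz_hat).real
    return bx, by, bz
```

At z = 0 the scheme returns the input magnetogram, which provides a convenient sanity check.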
These studies concluded that, in most cases, significant non-potentiality exists in ARs with flux that has newly emerged within \({\sim }\,30\) h, or when opposite-polarity concentrations are evolving and in close contact. The force-free condition, Eq. (19), is also expressed as $$\begin{aligned} \nabla \times {{\mathbf {B}}}=\alpha {{\mathbf {B}}}, \end{aligned}$$ where \(\alpha \) is called the force-free parameter. If \(\alpha \) is constant everywhere in the coronal volume under consideration, the magnetic field is called a linear force-free field (LFFF); otherwise, it is a non-linear force-free field (NLFFF). In these models, all components of the vector magnetogram are used as the bottom boundary condition. As Figs. 26 and 31 show, the NLFFF extrapolations provide realistic coronal fields comparable to the actual observations. By applying NLFFF methods to the complex quadrupolar AR NOAA 11967, Liu et al. (2016b) and Kawabata et al. (2017) investigated the topology of the coronal fields and elucidated the homologous occurrence of X-shaped flares. However, it has been shown that the NLFFF models are sensitive to the quality of the photospheric boundary conditions and thus often do not faithfully reproduce the observed coronal loop structures (e.g., DeRosa et al. 2009, 2015). Moreover, the input vector magnetograms are subject to the intrinsic ambiguity in the direction of the transverse magnetic field, which fundamentally hampers any magnetogram-driven coronal field reconstruction. Representative NLFFF techniques include the optimization method, the MHD relaxation method, and the flux-rope insertion method. For the basis and comparison of various extrapolation methods, we refer the reader to DeRosa et al. (2009, 2015), Wiegelmann and Sakurai (2012) and Inoue (2016). 4.3.2 Data-constrained models Even if one applies the most sophisticated NLFFF extrapolation techniques to accurate sequential magnetograms from Hinode/SOT and SDO/HMI, the obtained temporal evolution is still far from the real one because these models unavoidably assume a static state. One approach to overcome this issue is time-evolving data-constrained modeling. In this more physics-based method, the temporal evolution is obtained by solving the MHD equations with the reconstructed coronal field as the initial condition. Jiang et al. (2013) were the first to apply this method to an actual AR. As in Fig. 48, they reconstructed the initial coronal field of AR NOAA 11283 with the NLFFF model and demonstrated the CME eruption from this AR. According to the authors, due to small numerical errors in the extrapolation (i.e., their NLFFF was not perfectly force-free), the system became unstable and the flux rope erupted via the torus instability. Data-constrained MHD simulation of the flux rope eruption in AR NOAA 11283. Yellow and cyan lines are the magnetic field lines traced from the same positive polarity. Another set of field lines (white) pass through the null point, reconnect, and open. The bottom boundary is the photospheric magnetogram. The sigmoidal flux rope (yellow field lines at \(t=0\), reproduced with NLFFF) becomes unstable and is launched. Image reproduced by permission from Jiang et al. (2013), copyright by AAS Since then, the data-constrained approach has become a hot topic (Kliem et al. 2013; Amari et al. 2014). Inoue et al. (2014, 2015) modeled the X2.2-class event in NOAA 11158 (Fig. 
14) and found that, interestingly, the flux rope at the core of this AR does not erupt directly but rather reconnects with the ambient weakly twisted fields. Then, the ambient field transforms into a flux rope, which eventually exceeds the critical height of the torus instability. Muhamad et al. (2017) applied this method to NOAA 10930 (e.g., Figs. 6 and 19) and, by inserting emerging flux at the PIL from the bottom boundary, they succeeded in triggering the flux rope eruption, in line with the flare-triggering scenario of Kusano et al. (2012). The dramatic eruption in the X9.3 flare in NOAA 12673, which we introduced in Sect. 3.4, was modeled by Inoue et al. (2018b). They found that, as in Fig. 49, multiple compact flux ropes lying along the sheared PIL reconnect with each other and merge into a large, highly twisted flux rope that eventually erupts. 4.3.3 Data-driven models An even more realistic reconstruction of the evolving coronal field is obtained by sequentially updating the photospheric boundary condition; such models are called data-driven models. The first data-driven approach we show here is the magneto-frictional method (Yang et al. 1986), in which the magnetic field evolves due to the Lorentz force, $$\begin{aligned} {{\mathbf {v}}}=\frac{1}{\nu c}\, {\mathbf {j}}\times {{\mathbf {B}}}, \end{aligned}$$ where \(\nu \) is the frictional coefficient. In this formulation, the (pseudo) velocity is simply proportional to the Lorentz force. Cheung and DeRosa (2012) applied this method to the sequential magnetograms of NOAA 11158 and reproduced flux ropes that were ejected in the series of M- and X-class flares in this AR. The formation and evolution of an eruptive flux rope in the X9.3-class flare in AR NOAA 12673. The top and second rows provide the field lines and magnetogram (\(B_{z}\)) viewed from two different angles, and the bottom row shows the distribution of the electric current in a vertical cross-section. In this model, multiple flux ropes along the PIL at the initial stage (\(t=0.28\)) reconnect and merge into a single flux rope (\(t=3.1\)), which eventually erupts into the higher atmosphere (\(t=7.3\)). Image reproduced by permission from Inoue et al. (2018b), copyright by AAS Another recent, still nascent attempt is to directly solve the MHD equations while sequentially replacing the magnetogram, to self-consistently reconstruct the coronal evolution (Wu et al. 2006). This was demonstrated by Jiang et al. (2016a, b) for ARs NOAA 11283 and 12192, respectively. Hayashi et al. (2018) calculated the photospheric electric field \({\mathbf {E}}\) from the sequential magnetogram \({{\mathbf {B}}}\) and drove the model of NOAA 11158 through Faraday's law $$\begin{aligned} \frac{\partial {{\mathbf {B}}}}{\partial t}=-c\nabla \times {\mathbf {E}}, \end{aligned}$$ instead of solving the induction equation $$\begin{aligned} \frac{\partial {{\mathbf {B}}}}{\partial t}=\nabla \times ({{\mathbf {v}}}\times {{\mathbf {B}}}). \end{aligned}$$ Here, \({\mathbf {E}}\) is determined, for instance, by solving Ohm's law (\({\mathbf {E}}=-{{\mathbf {v}}}\times {{\mathbf {B}}}/c\)) using the velocity \({{\mathbf {v}}}\) obtained with flow-tracking techniques (see Welsch et al. 2007, and references therein). As Fig. 50 displays, the initial coronal field, obtained by matching the potential field to the observed vector magnetogram and relaxing it, undergoes substantial elongation and twisting, especially above the central PIL, in response to the shear motion in the photosphere. 
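A single magneto-frictional update, in the \({{\mathbf {v}}}\propto {\mathbf {j}}\times {{\mathbf {B}}}\) form given above, can be sketched as follows; the uniform grid, the explicit Euler time step, and the absorption of the constants c and \(4\pi \) into the frictional coefficient nu are simplifying assumptions of this illustration, not the scheme of Cheung and DeRosa (2012).

```python
import numpy as np

def curl(f, d):
    """Curl of a vector field f with shape (3, nz, ny, nx); axes are (z, y, x)."""
    fx, fy, fz = f
    return np.stack([
        np.gradient(fz, d, axis=1) - np.gradient(fy, d, axis=0),  # (curl f)_x
        np.gradient(fx, d, axis=0) - np.gradient(fz, d, axis=2),  # (curl f)_y
        np.gradient(fy, d, axis=2) - np.gradient(fx, d, axis=1),  # (curl f)_z
    ])

def magnetofrictional_step(b, d, dt, nu):
    """One explicit update: v = (j x B)/nu, then dB/dt = -curl(E), E = -v x B."""
    j = curl(b, d)                     # current density (constants folded into nu)
    v = np.cross(j, b, axis=0) / nu    # frictional (pseudo) velocity
    e = -np.cross(v, b, axis=0)        # ideal Ohm's law
    return b - dt * curl(e, d)
```

In a data-driven run, the bottom slice of b would additionally be overwritten with (or electrically driven by) each new magnetogram before the step.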
Data-driven model of NOAA 11158, performed with a time-evolving photospheric electric field. The initial relaxed coronal field (a) is stretched and sheared over time, especially above the central PIL. Image reproduced by permission from Hayashi et al. (2018), copyright by AAS A data-driven, dynamic model is supposed to calculate the coronal field that matches the changing photospheric magnetogram. An accurate model would, in principle, produce a flare or eruption at the same time as the actual Sun does. Inevitable simplifications of the model and inaccuracies in its initial state, however, suggest that it may be difficult to reproduce flares or eruptions, because the observed, gradual photospheric change (before and around the flare onset) might be insufficient to cause any drastic change in the (inaccurate) model's coronal field. Another caveat is that the model is limited by the temporal cadence of the driving data. Using a flux emergence simulation as the ground-truth data set, Leake et al. (2017) performed a data-driven simulation with the assumption that the photospheric information is provided every 12 min (the default cadence of the SDO/HMI vector magnetogram). They showed that the data-driven models can reproduce slowly emerging ARs over 25 h with only a \({\sim }\, 1\)% error in the free magnetic energy. However, the modeling was largely affected by rapidly evolving features. Even if one applies interpolation to the driving data, the coarse sampling generates a strobe effect, in which smoothly evolving features appear to jump across the photosphere. For an emerging bipole with a spatial extent of \(L=1\, \mathrm{Mm}\) and an apparent horizontal velocity of \(v_{\mathrm{h}}=20\, \mathrm{km\ s}^{-1}\), the sampling interval needs to be less than \(L/v_{\mathrm{h}}=50\, \mathrm{s}\). Note that this may be partly overcome by using faster-cadence LOS magnetograms. In this section, we presented theoretical investigations that try to address the subsurface origin and the physical mechanisms behind the large-scale/long-term evolution of flare-producing ARs. We first showed at the beginning of Sect. 4.1 that classical flux emergence simulations of \(\varOmega \)-loop emergence can explain several characteristics, such as magnetic tongues, the formation of flux ropes and sigmoids, the generation of shear flows and spot rotation, helicity injection, and non-neutralized currents. However, most of these models do not reproduce other important features of flaring ARs, such as the highly-sheared PIL between closely neighboring opposite-polarity sunspots. From the observational evidence of the emergence of top-curled flux tubes, the helical kink instability was invoked as a possible production mechanism of \(\delta \)-sunspots (Sect. 4.1.1). 3D models demonstrate that (1) a tightly twisted tube develops a kink instability; (2) the rise speed of the kinked tube is accelerated due to the enhanced buoyancy; and (3) the tube reproduces a quadrupolar polarity pattern with a sheared PIL on the photospheric surface. These models can reproduce the observed characteristics of Type 1 (Spot-spot) \(\delta \)-spots. Type 3 (Quadrupole) \(\delta \)-spots may be produced by the emergence of a flux tube with multiple buoyant segments (Sect. 4.1.2). Such a top-dent configuration is in fact created in a large-scale convective emergence model. Inspired by the observation of the quadrupolar AR NOAA 11158, the emergence of a flux tube that rises at two sections along the axis was investigated. 
It was found that the time evolution of the photospheric polarities, i.e., the collision, shearing, and converging motions of the central bipole, is fairly consistent with that of the actual AR. Such evolution was not achieved by a pair of emerging flux tubes placed in parallel. Together with the follow-up study, the multi-buoyant-segment model is considered a likely candidate for quadrupolar \(\delta \)-spots. The interaction of emerging flux systems is also recognized as a source of complexity (Sect. 4.1.3). In fact, 3D simulations showed that complex-shaped ARs can be created by the interaction of multiple tubes in the solar interior. One interesting consequence of the interaction, both aerodynamic and bodily, is that even simple bipolar ARs may originate from multiple flux systems through merging. In this case, the non-neutralized currents can be significant because the return currents are annihilated. Turbulent convection has a multitude of effects on the rising flux (Sect. 4.1.4). The convective emergence simulations revealed that the two polarities on the photosphere undergo shearing and rotational motions due to the Lorentz force and that the converging motion at the PIL causes flux cancellation, which leads to the production of a flux rope in the atmosphere. It was also found that the strong collision of opposite polarities results in a series of flare eruptions. With the aim of obtaining a unified perspective on the production of flaring ARs, a comparison of different modeling setups was performed (Sect. 4.1.5). It was assumed that the Spot-spot, Spot-satellite, Quadrupole, and Inter-AR types are produced by the emergence of a kink-unstable tube, two interacting tubes, a multi-buoyant-segment tube, and two independent tubes, respectively. Although all models except for the Inter-AR case successfully reproduced \(\delta \)-spots with flaring PILs, the Spot-spot case showed by far the fastest rise and the largest free magnetic energy. Therefore, the differences in the observed evolution on the solar surface likely stem from the subsurface history, probably shaped by turbulent convection, such as strong twisting, downward pinning, and collision with other flux systems. Flux rope formation and the consequent eruption have been extensively surveyed in the sheared arcade and flux cancellation models (Sect. 4.2). Many of these simulation models are based on the filament formation theory of van Ballegooijen and Martens (1989): the coronal fields are tied to the photospheric bottom boundary, and photospheric motions, such as shearing, converging, and/or diffusion, drive the overall evolution. However, reversed-shear motions and small-scale emerging flux at the PIL have also been suggested as triggers of the magnetic reconnection between coronal arcades. Flux cancellation models that take into account the effects of thermodynamics now reproduce the condensation of filament plasma due to radiative cooling. Along with the extrapolation methods (Sect. 4.3.1), recent progress in the more physics-based modeling of the coronal field has been facilitated by developments in magnetographs, especially the advanced vector magnetograms of Hinode/SOT and SDO/HMI. There are two classes of methods in this category: data-constrained models, where a single snapshot is used to create the initial coronal field (Sect. 4.3.2), and data-driven models, where the bottom boundary is sequentially updated to drive the calculation (Sect. 4.3.3). 
These methods, although still under development, provide the means to trace the evolution of coronal fields in a more realistic manner, such as the formation of flux ropes in response to the photospheric motion and the resultant eruptions, and may open the door to real-time space weather forecasting. 5 Rapid changes of magnetic fields associated with flares As we saw in the previous sections, the gradual magnetic field evolution (on a time scale of hours to days) is the key factor for the energy build-up of solar eruptions. Then, can solar eruptions in the corona cause rapid (within minutes) magnetic field changes in the photosphere? The changes in the photosphere in response to coronal eruptions have been expected to be small because the photospheric plasma density is much larger than that of the corona. Aulanier (2016) reviewed this topic from both observational and modeling perspectives and provided a physical analysis of this issue, called the "tail wags the dog" problem. Under certain circumstances, the coronal eruption can cause rapid changes in the photospheric magnetic topology. Earlier, Hudson et al. (2008) and Fisher et al. (2012) quantitatively assessed the back reaction on the solar surface and interior resulting from the coronal field evolution required to release energy, and predicted that after flares the photospheric magnetic field would become more horizontal at the flaring PILs. Their analysis is based on the principle of energy and momentum conservation and builds upon the proposal by Hudson (2000) that the coronal field should, in an overall sense, contract or implode if there is a net decrease in magnetic energy (coronal implosion). This is one of the very few models that specifically predicts both that magnetic destabilization associated with flares can be accompanied by rapid and permanent changes of the photospheric magnetic fields and what the pattern of those field changes should be. One special case related to this scenario is the tether-cutting reconnection model for sigmoids (Moore et al. 2001; Moore and Sterling 2006), which involves a two-stage reconnection process. At the eruption onset, the near-surface reconnection between the two sigmoid elbows produces a low-lying shorter loop across the PIL and a larger twisted flux rope connecting the two far ends of the sigmoid. The second-stage reconnection occurs when the large-scale loop cuts through the arcade fields, which causes the erupting flux rope to evolve into a CME and electrons to precipitate and produce flare ribbons (see Fig. 7a for an illustration). Scrutinizing the magnetic topology close to the surface, one would find a permanent change of the magnetic fields that conforms to the scenario described above: the magnetic fields turn more horizontal near the flaring PIL due to the newly formed short loops there. Whereas an earlier review by Wang and Liu (2015) summarizes certain aspects of research up to that time, focusing primarily on the results obtained before the SDO era, this section summarizes more recent observational findings of rapid magnetic field and sunspot structure changes associated with flares and briefly discusses the related theoretical insights. 5.1 Magnetic transients Before the discovery of the persistent photospheric magnetic field changes associated with flares, some studies reported observations of the so-called "magnetic transients", i.e., rapid but short-lived changes in the LOS magnetic fields. 
In the earlier studies (e.g., Tanaka 1978; Patterson 1984), these apparent transient reversals of magnetic polarity associated with flare footpoint emissions were interpreted as real physical changes in the magnetic topology. Later studies demonstrated that the short-lived magnetic transients are an observational effect due to changes in the profiles of the observed spectral lines caused by the flare emission (Kosovichev and Zharkova 2001; Qiu and Gary 2003; Zhao et al. 2009), so they are sometimes called magnetic anomalies. The most comprehensive study on this topic is a recent paper by Sun et al. (2017), who analyzed the 135-s cadence HMI data and demonstrated the line profile changes and the associated field signatures of the transients (Fig. 51). Non-LTE modeling by Hong et al. (2018) explained the profile changes of the Fe I 6173 Å line that HMI uses and provided a quantitative assessment of the magnetic transients. Song et al. (2018) suggested that magnetic transients and white-light flares are closely related spatially and temporally. Flare-induced artifact as "magnetic transient." a Difference map of the intensitygram. Symbol "T" marks the sample pixel. b Difference map of the magnetic field in the radial direction \(B_{r}\). c Temporal evolution of the sample pixel. Red symbols show the frames affected by flare emission. Green curves show the fitted step-like function for the horizontal field \(B_{h}\) and the radial field \(B_{r}\) and a fitted third-order polynomial for the formal uncertainty of the field strength \({\sigma }_{B}\); green bands show the \(1\sigma \) fitting confidence interval. d Stokes profiles of the sample pixel at two instances, near (red) and before (gray) the flare peak. Image reproduced by permission from Sun et al. (2017), copyright by AAS Azimuth angle changes in association with the flare emission of 2015 June 22. The FOV is \(40'' \times 40''\). (Top left) SDO/HMI white-light map. (Top right) Running difference image in the H\(\alpha \) blue wing (line core \(-1.0\) Å), showing the eastern flare ribbon. The bright part is the leading front and the dark part is the following component. (Bottom left) The GST/NIRIS LOS magnetogram, scaled in the range of \(-\,2500\, \mathrm{G}\) (blue) to \(2500\, \mathrm{G}\) (yellow). (Bottom right) Running difference map of the azimuth angle generated by subtracting the map taken at 17:58:45 UT from the one taken at 18:00:12 UT. The dark signal pointed to by the pink arrow represents the sudden, transient increase of the azimuth angle at 18:00:12 UT. Image reproduced by permission from Xu et al. (2018), copyright by the authors All the above magnetic transients concern the LOS component of the magnetic fields. Taking advantage of the unprecedented resolution provided by the 1.6-m GST at BBSO, Xu et al. (2018) showed a sudden rotation of the magnetic field vector by about 12\(^{\circ }\)–20\(^{\circ }\) counterclockwise, in association with the M6.5-class flare on 2015 June 22. Such changes of the azimuth angle of the transverse magnetic field are well pronounced within a ribbon-like structure (\({\sim }\, 600\, \mathrm{km}\) in width), moving co-spatially and co-temporally with the flare emission as seen in the H\(\alpha \) line (see Fig. 52). However, they are not related to the magnetic transients discussed above. A strong spatial correlation between the azimuth transient and the ribbon front indicates that energetic electron beams are very likely the cause of the rotation. 
During the rotation, the measured azimuth becomes closer to that of the potential field, which indicates a process of energy release (untwisting motion) in the associated flare loop. The magnetic fields restored their original direction after the flare ribbons swept through the area. This was the first time that a transient field rotation was observed. Possible explanations of this phenomenon include (1) the effect of induced magnetic fields; (2) the effect of downdrafting plasma; (3) the polarization of emission lines due to return currents and/or filamentary chromospheric evaporation (different from the original concept of the magnetic transient); and (4) the effect of Alfvén waves. The authors claimed that the observed field change cannot be explained by existing models. This new, transient magnetic signature in the photosphere may offer a new diagnostic tool for future modeling of magnetic reconnection and the resulting energy release. 5.2 Rapid, persistent magnetic field changes In the early 1990s, the Caltech solar group discovered obvious rapid and permanent changes of vector magnetic fields associated with flares using BBSO data (Wang 1992; Wang et al. 1994a). They found that the transverse field shows much more prominent changes than the LOS component. Some of the results appeared puzzling: the magnetic shear angle (an indicator of non-potentiality), defined as the angular difference between the potential magnetic field and the measured field (see Sect. 3.2.1), increases following flares. It is well known that, for a flare to occur, the coronal magnetic field has to evolve to a more relaxed state to release energy. For this reason, there have been some doubts about these earlier measurements, especially because the data were obtained from ground-based observatories that may suffer from effects such as atmospheric seeing and the lack of continuous observing coverage. Kosovichev and Zharkova (2001) studied high-resolution SOHO/MDI magnetogram data for the "Bastille Day Flare" on 2000 July 14 and found regions with a permanent decrease of magnetic flux, which are related to the release of magnetic energy. Using high-cadence GONG data, Sudol and Harvey (2005) found solid evidence of step-wise field changes associated with a number of flares. The time scale of the changes is as fast as 10 min (the GONG cadence is 1 min), and the magnitude of the change is of the order of 100 G. Petrie and Sudol (2010), Johnstone et al. (2012), Cliver et al. (2012) and Burtseva and Petrie (2013) surveyed more comprehensively the rapid and permanent changes of LOS magnetic fields with GONG data, which were indeed associated with almost all of the X-class flares they studied. The above studies using the LOS field data demonstrated the step-wise property of the flare-related photospheric magnetic field changes. However, the underlying cause of those changes was not clearly revealed. The work by Cameron and Sammis (1999) was the first to use near-limb magnetograph observations to characterize flare-related changes of magnetic fields, taking advantage of the projection effect. In a number of papers, it was found that, for the LOS magnetic field, the limb-ward flux generally increases, while the disk-ward flux in the flaring ARs decreases (Wang et al. 2002b; Wang 2006; Yurchyshyn et al. 2004; Spirock et al. 2002; Wang and Liu 2010). 
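The "step-wise" changes reported in these studies are typically quantified by fitting each pixel's time series with a step-like function; Sudol and Harvey (2005), for instance, used a linear background plus an arctangent step. A minimal sketch with scipy (the function and variable names are ours, and the initial-guess heuristics are illustrative only):

```python
import numpy as np
from scipy.optimize import curve_fit

def step_model(t, a, b, c, t0, n):
    """Linear background plus an arctangent step centered at t0."""
    return a + b * t + c * (1.0 + (2.0 / np.pi) * np.arctan(n * (t - t0)))

def fit_step(t, b_los, t_flare):
    """Fit a LOS field time series [G] around a flare at t_flare [min]."""
    p0 = [b_los[0], 0.0, 0.5 * (b_los[-1] - b_los[0]), t_flare, 0.5]
    popt, _ = curve_fit(step_model, t, b_los, p0=p0)
    a, b, c, t0, n = popt
    # Total step amplitude is 2c; pi/n gives a characteristic step duration.
    return {"step_G": 2.0 * c, "t_mid_min": t0, "duration_min": np.pi / abs(n)}
```

Amplitudes of order 100 G and time scales of order 10 min, as quoted above, are then read off directly from the fit parameters.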
Such behavior suggests that after flares, the overall magnetic field structure of ARs may change from a more vertical to a more horizontal configuration, which is consistent with the scenario that the Lorentz force change pushes down the field lines. Note that most of the observations listed in Wang and Liu (2010) were made by SOHO/MDI, which has a cadence of up to 1 min. The drastic change in the inclination angle of magnetic fields in sunspots associated with flare eruptions was also detected by Ye et al. (2016) using vector magnetograms from SDO/HMI, and the observational result was consistent with the expectation of the coronal implosion scenario.

Fig. 53 TRACE white-light images associated with six major flares. The rapid changes of \(\delta \)-sunspot structures are observed. The top, middle, and bottom rows show the pre-flare images, the post-flare images, and the difference images between them after some smoothing, respectively. The white pattern in the difference image indicates the region of penumbral decay, while the dark pattern indicates the region of darkening of the penumbra. The white dashed line denotes the flaring PIL and the black line represents a spatial scale of 30". Image reproduced by permission from Liu et al. (2005), copyright by AAS

As more and more evidence indicates irreversible photospheric magnetic field changes following flares, it is natural to ask whether these changes are detectable in the white-light structures of ARs. The white-light signatures of topological changes were indeed discovered in a number of papers (e.g. Wang et al. 2004a; Liu et al. 2005; Deng et al. 2005; Li et al. 2009; Wang et al. 2009, 2013, 2018b). The most prominent changes are the enhancement (i.e., darkening) of penumbral structure near the flaring PILs and the decay of penumbral structure on the peripheral sides (outer edges) of \(\delta \)-spots. Figure 53 clearly demonstrates some examples of such spot structure changes. The difference image between the pre- and post-flare states always shows a dark patch at the flaring PIL that is surrounded by a bright ring. They correspond to the enhancement of the central sunspot penumbrae and the decay of the peripheral penumbrae, respectively. These examples were discussed in detail by Liu et al. (2005), who showed that (1) these rapid changes are associated with flares and are permanent, and (2) the decay of sunspot penumbrae is related to the magnetic field at the outer edge of the AR turning to a more vertical direction, while the darkening of sunspot structure near the central PIL is related to the magnetic field turning to a more horizontal direction. Chen et al. (2007) statistically studied over 400 events using TRACE white-light data and found that the significance of sunspot structure change is positively correlated with the magnitude of flares. Using Hinode/SOT G-band data, Wang et al. (2012a) further studied the intrinsic linkage of penumbral decay to magnetic field changes. They took advantage of the high spatio-temporal resolution of the Hinode/SOT data and observed that in sections of peripheral penumbrae swept by flare ribbons, the dark fibrils completely disappear while the bright grains evolve into faculae where the magnetic flux becomes even more vertical. These results again suggest that the horizontal component of the penumbral magnetic field is straightened upward (i.e., turning from horizontal to vertical) due to magnetic field restructuring associated with flares.
Also notably, the flare-related enhancement of penumbral structure near central flaring PILs has been unambiguously observed with BBSO/GST. Using GST TiO images with unprecedented spatial (0.1") and temporal (15 s) resolution, Wang et al. (2013) reported on a rapid formation of sunspot penumbra at the PIL associated with the 2012 July 2 C7.4 flare (see Fig. 54 and the corresponding movie). The most striking observation is that the solar granulation evolves into the typical pattern of penumbra consisting of alternating dark and bright fibrils. Interestingly, a new \(\delta \)-sunspot is created by the appearance of such a penumbral feature, and this penumbral formation also corresponds to the enhancement of the horizontal field. A similar pattern of penumbral formation was shown by Wang et al. (2018b).

Fig. 54 BBSO/GST H\(\alpha \) center (a) and blue-wing (b) images at the peak of the 2012 July 2 C7.4 flare, showing the flare ribbons and possible signatures of a flux rope eruption (the arrows in panel b). The GST TiO images about 1 h before (c) and 1 h after (d) the flare clearly show the formation of penumbra (pointed to by the arrow in panel d). The same post-flare TiO image in panel e is superimposed with positive (white) and negative (black) HMI LOS field contours, and NLFFF lines (pink). f Perspective views of the pre- and post-flare 3D magnetic structures including the core field (a flux rope) and the arcade field from NLFFF extrapolations. The collapse of arcade fields is obvious. g TiO time slices for a slit (black line in panel d) across the newly formed penumbra area. The dashed and solid lines denote the times of the start, peak, and end of the flare in GOES 1–8 Å. The sudden turning off of the convection associated with the flare is clearly shown. Images reproduced by permission from Wang et al. (2013) and Jing et al. (2014), copyright by AAS. (For movie see Electronic Supplementary Material)

Fig. 55 (Left) HMI vector magnetogram on 2012 March 7 showing the flare-productive AR NOAA 11429 right before the X5.4 flare. (Right) Temporal evolution of various magnetic properties of a compact region (green contour in the left panel) at the central PIL, in comparison with the light curves of the GOES 1–8 Å soft X-ray flux (gray) and its derivative (black). Note that in panel d, the inclination is measured from the horizontal direction. The shaded interval denotes the flare period in the GOES flux. Image reproduced by permission from Wang et al. (2012c), copyright by AAS

A very clear demonstration of flare-related changes in vector magnetic fields came from the analysis of SDO/HMI vector data by Wang et al. (2012b). The analysis of the X2.2 flare in AR NOAA 11158 on 2011 February 15 clearly demonstrated a rapid, irreversible increase of the horizontal magnetic field at the flaring PIL. The mean horizontal field increased by about 500 G within 30 min after the flare. The authors also found that the photospheric field near the flaring PIL became more sheared and more inclined towards the horizontal, consistent with the earlier results (e.g., Wang 1992; Wang et al. 1994a; Liu et al. 2005). Following that initial study, a number of papers using HMI data demonstrated consistent changes of magnetic fields (Liu et al. 2012; Sun et al. 2012; Wang et al. 2012c; Petrie 2012, 2013; Yang et al. 2014; Castellanos Durán et al. 2018). The patterns found are consistent in the sense that the transverse field is enhanced in a region across the central flaring PIL.
Figure 55 shows the typical time profiles of such field changes.

Fig. 56 Modeled and observed field changes from before (01:00 UT; a, c, and e) to after (04:00 UT; b, d, and f) the 2011 February 15 X2.2 flare. a, b Current density distribution on a vertical cross section indicated in c–f. c, d HMI horizontal field strength. Contour levels are 1200 G and 1500 G. e, f HMI vertical field. Contour levels are \(\pm \, 1000\, \mathrm{G}\) and \(\pm \, 2000\, \mathrm{G}\). Image reproduced by permission from Sun et al. (2012), copyright by AAS

Associated with the above findings in the 2D photospheric magnetic fields, there must be a corresponding magnetic field evolution in 3D above the photosphere. The NLFFF extrapolation serves as a powerful tool to reconstruct the 3D magnetic topology of the solar corona (see Sect. 4.3.1 for the extrapolation methods). Using Hinode/SOT magnetic field data, Jing et al. (2008) showed that the magnetic shear (indicating non-potentiality) only increases at lower altitudes while it still largely relaxes in the higher corona; therefore, the total free magnetic energy in the 3D volume should still decrease after the energy release of a flare. Using HMI data, Sun et al. (2012) clearly showed that the electric current density indeed increases at the flaring PIL near the surface while it decreases higher up, which may explain the overall decrease of free magnetic energy together with a local enhancement at low altitude (see Fig. 56). The above results may also imply that magnetic fields collapse toward the surface. Such a collapse was even detected in the C7.4 flare on 2012 July 2, as reported by Jing et al. (2014) and shown in Fig. 54. The collapse (or contraction) of magnetic arcades as reflected by NLFFF models across the C7.4 flare is spatially and temporally correlated with the formation of sunspot penumbra on the surface (Wang et al. 2013), as observed in high-resolution observations of GST. The physics of this phenomenon is not fully understood: it could be due to newly reconnected magnetic fields above the PIL, or perhaps the reduction of local magnetic pressure due to a removal/weakening of the magnetic flux rope instigates the collapse.

Fig. 57 (Top) Temporal evolution of the horizontal magnetic field measured by HMI and Hinode/SOT in a compact region around the PIL, in comparison with X-ray light curves for the M6.6 flare on 2011 February 13. The red curve is the fit of the HMI data with a step function. (Bottom) Extrapolated NLFFF lines before and after the event, demonstrating a process of magnetic reconnection consistent with the tether-cutting reconnection model. Images reproduced by permission from Liu et al. (2012, 2013), copyright by AAS

Using vector magnetograms from HMI together with those from Hinode/SOT with high polarization accuracy and spatial resolution, Liu et al. (2012) revealed a similar rapid and persistent increase of the transverse field associated with the M6.6 flare on 2011 February 13, together with the collapse of coronal currents toward the surface at the sigmoid core region. Liu et al. (2013) further compared the NLFFF extrapolations before and after the event (see Fig. 57). The results provide direct evidence of the tether-cutting reconnection model. There are four flare footpoints. About 10% of the flux (\({\sim }\, 3\times 10^{19}\, \mathrm{Mx}\)) from the inner footpoints (e.g., FP2 and FP3 of loops FP2–FP1 and FP3–FP4) undergoes a footpoint exchange to create shorter loops of FP2–FP3.
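The step-like time profiles in Figs. 51 and 57 are typically quantified by fitting the field strength at each pixel with a step function (e.g., Sudol and Harvey 2005). The following is a minimal, illustrative sketch of such a fit on a single time series; the smoothed-step functional form and all variable names are our own assumptions, not the exact parameterization used in those papers.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_profile(t, b0, db, t0, tau):
    """Background b0 plus a smoothed step of amplitude db,
    centered at time t0 with transition timescale tau."""
    return b0 + 0.5 * db * (1.0 + np.tanh((t - t0) / tau))

# Illustrative data: field strength B(t) in Gauss, 12-min sampling
t = np.arange(0.0, 600.0, 12.0)                  # minutes
b_obs = 1200.0 + 150.0 * (t > 300) + np.random.normal(0, 10, t.size)

# Initial guesses: background, step amplitude, step time, timescale
p0 = [b_obs[0], b_obs[-1] - b_obs[0], t.mean(), 10.0]
popt, pcov = curve_fit(step_profile, t, b_obs, p0=p0)

b0, db, t0, tau = popt
print(f"step amplitude ~ {db:.0f} G at t ~ {t0:.0f} min "
      f"(transition timescale ~ {tau:.0f} min)")
```

The fitted amplitude and timescale play the role of the ~100 G, ~10 min changes reported from the GONG data; the uncertainty of the fit follows from the returned covariance matrix.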
The Liu et al. (2013) result presents the rapid, irreversible changes of the transverse field and the corresponding 3D field changes in the corona. A more comprehensive investigation, including the 3D magnetic field restructuring and flare energy release as well as the helioseismic response of two homologous flares, the 2011 September 6 X2.1 and September 7 X1.8 flares in AR NOAA 11283, was performed by Liu et al. (2014). Their observational and modeling results depicted a coherent picture of coronal implosions, in which the central field collapses while the peripheral field turns vertical, consistent with what was found by Liu et al. (2005).

Fig. 58 Area affected by rapid field changes, corrected for foreshortening of the LOS magnetic field, as a function of the peak GOES soft X-ray flux of 75 events. Color-coded circles denote the center-to-limb distance \(\mu \) (cosine of the heliocentric angle) of each event. The line is the best fit to a power law with a correlation coefficient of 0.6. Image reproduced by permission from Castellanos Durán et al. (2018), copyright by AAS

There are two research directions that are particularly worth mentioning here. (1) Joint analysis of photospheric magnetic fields and coronal topology. Petrie (2016) studied two X-class flares observed by SDO and the Solar Terrestrial Relations Observatory (STEREO; Kaiser et al. 2008) and found that the rapid changes of magnetic fields at the PIL are associated with coronal loop contraction. Gömöry et al. (2017) analyzed VTT (Vacuum Tower Telescope) data covering an M-class flare and found an enhancement of the transverse magnetic field of approximately 550 G. This transverse field was found to bridge the PIL and connect umbrae of opposite polarities in the \(\delta \)-spot. At the same time, a newly formed system of loops appeared co-spatially in the corona as seen in 171 Å passband images of SDO/AIA. Therefore, the rapid photospheric magnetic field evolution is part of a 3D magnetic field restructuring. (2) Statistical study of a large number of events. Castellanos Durán et al. (2018) carried out a statistical analysis of permanent LOS magnetic field changes during 18 X-, 37 M-, 19 C-, and 1 B-class flares using data from SDO/HMI. They investigated the properties of the permanent changes, such as their frequency, areas, and locations. They detected changes of the LOS field in 59 out of 75 flares and found that strong flares are more likely to show changes. Figure 58 demonstrates the correlation between the affected LOS field change area and the peak GOES flux. It is apparent that larger flares produce more prominent field changes.

5.3 Sudden sunspot rotation and flow field changes

The evolution of magnetic fields is closely associated with photospheric flow motions, so studying the flow fields along with the magnetic field evolution is very important. Several methods of flow tracking have been developed, as summarized and compared by Welsch et al. (2007). One particular method is the differential affine velocity estimator (DAVE; Schuck 2005, 2006) that uses the induction equation to derive flow fields. A substantially improved version, DAVE for vector magnetograms (DAVE4VM; Schuck 2008), derives not only the horizontal but also the vertical component of the flows, and thus can analyze flux emergence (i.e., vertical motions) in addition to horizontal motions. Wang et al. (2014) showed some initial results of the flare-related acceleration of sunspot rotation derived by DAVE using SDO/HMI observations of AR NOAA 11158.
The rotational speeds of the two sunspots increase significantly during and right after the X2.2 flare. Moreover, the direction of the enhanced sunspot rotation agrees with that of the change of the horizontal Lorentz force. Using the estimated torque and moment of inertia, Wang et al. (2014) estimated the angular acceleration of the sunspots. Although there are some uncertainties in the measurements and assumptions, the values agree with the observed angular acceleration of the suddenly rotating sunspot immediately after the flare.

Fig. 59 BBSO/GST chromospheric H\(\alpha +1\) Å images showing flare ribbons (a, b) and the corresponding photospheric TiO images (c, d). In panel a, sunspots are labeled as f1 and f2, with the dotted lines contouring the vertical magnetic field at 1300 G. In panels c and d, the superimposed arrows (color-coded by direction; see the color wheel) depict the differential sunspot rotation tracked with DAVE. The thick white curves are the co-temporal flare ribbon. e Temporal evolution of the overall sunspot rotation, showing the orientation angle of f1 from an ellipse fit (blue) and its approximation using an acceleration plus a deceleration function. f Temporal evolution of the vorticity derived from the DAVE velocity vectors, indicating the accelerated sunspot rotation. Image reproduced by permission from Liu et al. (2016a), copyright by the authors. (For movie see Electronic Supplementary Material)

Liu et al. (2016a) used GST data to analyze the flow motions of the 2015 June 22 M6.5 flare. It is particularly striking that the rotation is not uniform over the sunspot: as the flare ribbon sweeps across, its different portions accelerate (up to \(50^{\circ }\,\mathrm{h}^{-1}\)) at different times corresponding to peaks of the flare hard X-ray emission. Associated with the rotation, the intensity and magnetic field of the sunspot change significantly, and the Poynting and helicity fluxes temporarily reverse their signs, indicating that the energy propagation that causes the rotation is from the higher atmosphere down to the photosphere. Figure 59 demonstrates the key results of that study (see also the corresponding movie).

Fig. 60 Flow field in the BBSO/GST TiO band. a, b Pre-flare (at 17:34:23 UT) and post-flare (at 19:22:30 UT) TiO images overplotted with arrows illustrating the flow vectors derived with DAVE. For clarity, arrows pointing northward (southward) are coded yellow (magenta). c, d Azimuth maps of the corresponding flow vectors in panels a, b, also overplotted with the PIL, precursor kernel, and region R contours. The shear flow region P showing the most obvious flare-related enhancement is outlined using the dashed ellipse, with its major axis quasi-parallel to the PIL. Image reproduced by permission from Wang et al. (2018b), copyright by AAS

Wang et al. (2018b) analyzed the same AR with GST and HMI data. For a penumbral segment in the negative field adjacent to the PIL, they found an enhancement of penumbral flows (up to an unusually high value of \(2\, \mathrm{km\ s}^{-1}\)) and an extension of penumbral fibrils after the first peak of the flare hard X-ray emission. They also found an area at the PIL, co-spatial with a precursor brightening kernel, that exhibits a gradual increase of shear flow velocity (up to \(0.9\, \mathrm{km\ s}^{-1}\)) after the flare. The enhancing penumbral and shear flow regions are also accompanied by an increase of the horizontal field and a decrease of the magnetic inclination angle measured from the horizontal.
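The vorticity curve in Fig. 59f is derived from the DAVE velocity vectors; given horizontal velocity maps on a uniform grid, the vertical vorticity is simply the curl of the horizontal flow. A minimal sketch of this step follows (array names and the pixel scale are illustrative assumptions; this is not the DAVE code itself):

```python
import numpy as np

def vertical_vorticity(vx, vy, dx):
    """Vertical component of the curl of a horizontal flow field:
    omega_z = d(vy)/dx - d(vx)/dy, for velocity maps on a uniform
    grid with pixel size dx (same units in both directions)."""
    dvy_dx = np.gradient(vy, dx, axis=1)   # d(vy)/dx along columns
    dvx_dy = np.gradient(vx, dx, axis=0)   # d(vx)/dy along rows
    return dvy_dx - dvx_dy

# Illustrative test: a solid-body rotation patch at 50 deg/h,
# comparable to the peak rotation rates reported by Liu et al. (2016a)
omega = np.deg2rad(50.0) / 3600.0                  # rad/s
y, x = np.mgrid[-50:50, -50:50] * 70.0             # km, ~0.1" pixels
vx, vy = -omega * y, omega * x                     # km/s
vort = vertical_vorticity(vx, vy, dx=70.0)         # 1/s
print(f"mean vorticity ~ {vort.mean():.2e} s^-1 "
      f"(solid-body expectation 2*omega = {2 * omega:.2e})")
```

For solid-body rotation the recovered vorticity equals twice the angular rate, which provides a quick sanity check of the sign and axis conventions before applying the same operator to tracked flow maps.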
The results of Liu et al. (2016a) and Wang et al. (2018b) further confirm the concept of a back reaction of the coronal restructuring on the photosphere as a result of flare energy release. Figure 60 shows the evolution of the flow fields covering the flare.

5.4 Theoretical interpretations

The modeling efforts of ARs and related eruptions are summarized in Sect. 4. Here we review certain points in explaining the magnetic field restructuring following flares. Longcope and Forbes (2014) reviewed solar eruption models and classified them into three categories, tether-cutting, breakout, and loss-of-equilibrium, all of which can be catastrophic. The tether-cutting model assumes a two-step reconnection that leads to eruption in the form of flares and CMEs, in particular for sigmoidal ARs (e.g., Moore and Labonte 1980; Moore et al. 2001; Moore and Sterling 2006). The first-stage reconnection occurs near the solar surface at the onset of the eruption and produces a low-lying shorter loop across the PIL, which explains the observed enhancement of transverse fields after flares. It also produces a much longer twisted flux rope connecting the two far ends of the sigmoid that triggers the second stage of the eruption: the twisted flux rope becomes unstable and erupts outward to form a full CME. It is possible that in the earlier phase of the eruption, contraction of the shorter flare loop occurs. This has received increasing attention recently (e.g., Ji et al. 2006) and possibly corresponds to the first stage of the tether cutting. The ribbon separation described in the standard flare models such as the CSHKP model (Sect. 2.2) manifests the second stage. This model may explain other observational findings, such as (1) the transverse magnetic field at flaring PILs increases rapidly and persistently immediately following the flares (Wang et al. 2002b, 2004b; Wang and Liu 2010); (2) penumbral decay occurs in the peripheral penumbral areas of \(\delta \)-spots, indicating that the magnetic field lines turn more vertical after a flare in these areas (Wang et al. 2004a; Liu et al. 2005); and (3) hard X-ray images of the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI; Lin et al. 2002) show four footpoints, two inner ones and two outer ones, and sometimes the hard X-ray emitting sources change from a confined footpoint structure to an elongated ribbon-like structure after the flare reaches intensity maximum (Liu et al. 2007a, b). In an attempt to quantitatively compare observations and modeling, Li et al. (2011) compared an idealized MHD simulation of emerging flux triggering a flare with observations. They selected a lower level in the simulation to examine the near-surface magnetic structure evolution. Changes of magnetic field orientation and strength in the photosphere after flares/CMEs are indeed found in the simulation. The most obvious match is at the flaring PIL, where field lines in the simulation are found to be more inclined towards the horizontal, and the transverse field strength increased after the eruption. At the outer side of the simulated sunspot penumbral area, field lines turn to a more vertical direction with a decreased transverse field strength. These findings are consistent with the observed penumbral enhancement at the PIL and the decay of peripheral penumbrae (Liu et al. 2005). The simulation also shows a downward net Lorentz force pressing onto the photosphere, confirming the related observations.
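The net vertical Lorentz force change invoked here, and formalized in the back-reaction picture of Fisher et al. (2012) discussed below, can be estimated directly from vector magnetograms taken before and after a flare, via \(\delta F_{z} \propto \sum (\delta B_{z}^{2}-\delta B_{h}^{2})\,dA\). A minimal sketch in cgs units follows; the array names and the uniform-pixel-area assumption are illustrative:

```python
import numpy as np

def delta_fz(bz_pre, bh_pre, bz_post, bh_post, pix_area_cm2):
    """Change in the vertical Lorentz force exerted on the photosphere:
    dFz = (1/8pi) * sum(dBz^2 - dBh^2) * dA  (B in G, area in cm^2,
    force in dyne). A negative value means an extra downward force,
    as expected when the horizontal field strengthens after a flare."""
    d_bz2 = bz_post**2 - bz_pre**2
    d_bh2 = bh_post**2 - bh_pre**2
    return np.sum(d_bz2 - d_bh2) * pix_area_cm2 / (8.0 * np.pi)

# Illustrative numbers: a ~500 G horizontal-field increase over a
# compact PIL patch of 100 pixels (~0.5" HMI pixels, ~3.6e7 cm wide)
pix_area = (3.6e7) ** 2                   # cm^2 per pixel
bh_pre = np.full(100, 1000.0)             # G
bh_post = np.full(100, 1500.0)            # G, +500 G after the flare
bz = np.full(100, 300.0)                  # vertical field, unchanged
print(f"dFz ~ {delta_fz(bz, bh_pre, bz, bh_post, pix_area):.2e} dyne")
```

With these toy numbers the downward force change comes out on the order of \(10^{21}\)–\(10^{22}\) dyne, broadly the magnitude discussed in the back-reaction literature, though real estimates depend on the actual field maps and region selection.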
Fig. 61 (Top) Temporal evolution of the modeled 3D dynamics of the eruptive flux rope on 2011 February 13 in AR NOAA 11158, together with the \(B_{z}\) distribution at the bottom. (Bottom) Comparison of the simulation results with observations. a Flare ribbons during the M6.6 flare, observed by Hinode at 17:35 UT. b Synthetic flare ribbons measured from the total displacement of the field lines, superimposed on the \(B_{z}\) distribution. The area corresponds to the one surrounded by the white square in panel a. c, d \(B_{h}\) distributions obtained from the simulation, just prior to and during the eruption, respectively. \(B_{h}\) increased prominently across the main PIL (marked by the black lines). Image reproduced by permission from Inoue et al. (2018a), copyright by the authors

Recently, Inoue et al. (2018a) performed an MHD simulation that takes into account the observed photospheric magnetic field to reveal the dynamics of a solar eruption in a realistic magnetic environment. In this simulation, they confirmed that the tether-cutting reconnection occurring locally above the PIL creates a twisted flux tube, which is lifted into a torus-unstable area where it loses equilibrium, destroys the force-free state, and drives the eruption. Figure 61 shows that the simulation not only reproduces the flare ribbons well but also demonstrates the irreversible transverse field enhancement at the photospheric PIL. Although the authors did not emphasize this point, the peripheral penumbral decay is also apparent in the simulated data. The same event has been analyzed in detail observationally by Liu et al. (2012, 2013). Note that Inoue et al. (2015) demonstrated similar field changes for the X2.2 flare in the same AR. The rapid field change coincides with the onset of the flare. As we mentioned earlier, Hudson et al. (2008) and Fisher et al. (2012) introduced the back-reaction concept. The authors made the prediction that after flares, at the flaring PIL, the photospheric magnetic fields become more horizontal. The analysis is based on the simple principle of energy and momentum conservation: the upward erupting momentum must be compensated by the downward momentum as the back reaction. In addition, the field change should be stepwise (i.e., permanent) because it results from the removal of magnetic energy and magnetic pressure from the corona. This is one of the few models that specifically predict the rapid and permanent changes of photospheric magnetic fields associated with flares, and it is supported by the observed Lorentz force changes (e.g., Wang et al. 2012b, c; Liu et al. 2012; Sun et al. 2012; Petrie 2013, 2014, 2019). As a more recent study, Wang et al. (2018c) analyzed four flare events using SDO/AIA and STEREO and demonstrated the existence of real contractions of loops. They identified two categories of implosion: (1) a rapid contraction at the beginning of the flare impulsive phase, as magnetic free energy is removed rapidly by a filament eruption; and (2) a continuous loop shrinkage during the entire flare impulsive phase that corresponds to an ongoing conversion of magnetic free energy in a coronal volume.

Fig. 62 Series of snapshots, from left to right, of a realistic numerical simulation of an eruptive flare. The colored lines show representative coronal magnetic field lines plotted from fixed footpoints in the photosphere: the cyan field lines represent the erupting flux rope, and the red (green) field lines are those that eventually reconnect with the pink (yellow) field lines.
The gray-scale plane shows the time-varying electric current densities in the photosphere. The blue arrows show the displacement of the ribbons and the cyan curved arrows indicate how sunspot rotation is initiated as flare ribbons move across sunspots. Image reproduced by permission from Aulanier (2016), copyright by Macmillan

Finally, the sudden sunspot rotation is, to some extent, demonstrated in the simulation of Aulanier (2016) (see Fig. 62). Note that these simulations usually assume the line-tying condition, i.e., footpoint motions are not allowed (see Sect. 4.2 for details). Nevertheless, the trend observed slightly above the photosphere can indicate the direction of the rotational force, although a quantitative comparison is very difficult.

How close have we come to a complete picture of the formation and evolution of flare-producing ARs? Thanks to the advancement of observation techniques and modeling efforts, we have acquired a substantial amount of knowledge that may set the grounds for a more complete understanding. In this section, we summarize our current understanding of the genesis and evolution and the key observational features of these ARs.

6.1 The era with Hinode, SDO, and GST

To a great degree, our understanding of flaring ARs has been pushed forward by the ceaseless improvement of observing instruments, and the progress in the last decade has been made in particular by Hinode, SDO, and GST. In fact, many parts of this review article are based on the outcomes of these missions. Since its launch in September 2006, the Hinode spacecraft has sent us various important observables. By virtue of the seeing-free conditions of space, one of its trio of instruments, SOT, has acquired high-resolution vector magnetograms, revealed the detailed structure of flaring PILs, and showed us its importance in triggering flares and CMEs (Sect. 3.2.1). With the vector magnetograms, though not quite satisfactorily, we can now extrapolate the coronal field with NLFFF techniques, which are used as the initial condition of data-constrained simulations (Sect. 4.3). Moreover, through simultaneous multi-wavelength observations in concert with XRT and EIS, Hinode realized even more comprehensive tracing of the dynamical evolution over the different atmospheric layers. The flux rope formation due to the photospheric shear motion and the non-thermal broadening of EUV lines in response to the helicity injection are good examples of Hinode's multi-wavelength probing of flare-producing ARs (Sects. 3.3.1 and 3.3.2). Every day, a vast amount of observational data is ceaselessly poured to the ground from SDO (launched in February 2010), including photospheric intensitygrams, Dopplergrams, (vector) magnetograms, and (E)UV images. Its constant full-disk observation enables us to statistically investigate the evolution of ARs from appearance to eventual flare eruption in unprecedented detail. Together with EIS and XRT, the multi-filter (multi-temperature) observations of AIA provided the thermal diagnostics of ARs such as DEM inversions (Sect. 3.3.1). The steady supply of vector magnetograms by HMI revealed the rapid changes of not only the LOS field but also the transverse field on time scales of down to \({\sim }\, 10\) min (Sect. 5.2). Several new attempts to utilize vector data have started. For instance, series of vector magnetograms are used in data-driven simulations to sequentially update the boundary condition of coronal field models (Sect. 4.3.3).
Various photospheric parameters calculated from the vector data are used for predicting flares and CMEs (see discussion in Sect. 7.2.1). Thanks to the high spatial resolution of the 1.6-m aperture and the long duty cycle, BBSO/GST (scientific observation initiated in January 2009) has played a key role in obtaining insights into the rapid changes of photospheric (high-\(\beta \)) fields in response to the dynamical evolution of coronal (low-\(\beta \)) fields during the course of flares and CMEs (Sect. 5). The most important science outputs made by BBSO/GST related to flare-AR science include (1) the detailed structure, development, and destabilization of a flux rope, (2) the sudden flare-induced rotation of sunspots and the evolution of photospheric flow fields, and (3) the tiny and transient flare precursors in the lower atmosphere. Through these discoveries, we now know that the answer to the "tail wags the dog" problem, i.e., whether the coronal eruption can cause changes in the photospheric field, is yes. The advancement of instruments has also motivated the development of numerical modeling. For instance, the long-term monitoring of flare-productive ARs by Hinode and SDO from birth to eruption inspired the flux emergence models and gave a clue to the formation mechanisms of \(\delta \)-spots (e.g., NOAA 11158 in February 2011: Sect. 4.1.2). Fine-scale flare-triggering fields and rapid magnetic changes during flares, which are observable only with advanced instruments, have been compared with the results of flare simulations (Sects. 4.2 and 5.4). Filtergram images of various wavelengths by XRT and AIA provide the means to diagnose the coronal fields (e.g., XRT images and NLFFF extrapolations of sigmoids: Sect. 3.3.1). All of these results underscore the importance of the direct comparison of observation and modeling in unraveling the formation and evolution of flare-producing ARs.

6.2 From birth to eruption

In this subsection, we summarize some of the key aspects related to the genesis of flare-producing ARs and the eventual energy release, which have been uncovered by the observational and theoretical studies presented in this review article.

Subsurface evolution: The dynamo-generated toroidal flux loops start rising in the convection zone (Sect. 2.1). Subject to the background turbulent convection, some of them may lose a simple \(\varOmega \)-shape and deform into a helical structure, a top-dent configuration, or bifurcated multiple branches, or may collide with other flux systems (Sect. 4.1). Through these processes, the rising flux systems gain non-potentiality, which is represented by free magnetic energy and magnetic helicity.

Formation of \(\delta \)-spots: On their appearance in the photosphere, some of these rising flux loops form \(\delta \)-sunspots, in which umbrae of positive and negative polarities are so close that they share a common penumbra (Sect. 2.3). Most of the \(\delta \)-spots are generated by multiple emerging loops rather than a single \(\varOmega \)-loop, and the diversity of polarity layouts stems from differences in the subsurface history, but strong flares also emanate from non-\(\delta \) sunspots, such as the Inter-AR case (Sect. 3.1).

Development of flaring PIL and photospheric features: Due to shearing and converging motions, the PIL between the opposite polarities obtains a strong transverse field with a high gradient and shear (Sect. 3.2). This is the outcome of the Lorentz force, and this force also causes the rotational motion of sunspots (Sect. 4.1).
Formation of flux rope: The coronal fields lying above the PIL become sheared in sync with the photospheric driving, cancel against each other, and form a magnetic flux rope. This helical structure is observed as a sigmoid in soft X-rays and as a filament (prominence) in H\(\alpha \) (Sects. 3.3 and 4.2).

Flare occurrence and CME eruption: When the energy is sufficiently accumulated, the solar flare is eventually initiated (Sect. 2.2). The flux rope becomes destabilized and erupts, often as a CME into interplanetary space, leaving behind a variety of remarkable observational features on the Sun. The drastic evolution of the coronal fields causes rapid and profound changes in the magnetic and flow fields even in the photosphere (Sect. 5). If the confinement by the overlying arcade in an AR is too strong, however, the flux rope may not develop into a CME.

As is obvious from the fact that helical structures appear in many parts of the story above, the whole process of AR formation, flare eruption, and CME propagation appears to be, overall, a large-scale transport of magnetic helicity and energy from the solar interior all the way to outer space (Low 1996, 2002; Démoulin 2007). In this sense, the formation of \(\delta \)-spots, where abundant evidence of non-potentiality is observed, is accepted as a natural consequence of the helicity that is delivered from the interior.

6.3 Key observational features and quantities

In the long history of observations of ARs producing strong flares and CMEs, various features have been investigated. Perhaps these features can be summarized into three important factors: (a) the size, (b) the complexity, and (c) the evolution. Given the large magnetic energy accumulated in these ARs, it is reasonable that they are larger in spot area, or naturally in total magnetic flux. However, as we saw in Sect. 2.2, the largest spot in history, RGO 14886, was not flare active, probably because this AR had a simple bipolar (i.e., potential) magnetic field. To increase the free magnetic energy that is released through flare eruptions, ARs need to contain morphological and magnetic complexity, which is manifested as dispersed polarities (i.e., \(\gamma \)-spots), strong-field, strong-gradient, highly-sheared PILs in \(\delta \)-spots, magnetic tongues, flux ropes, sigmoids, etc. These complex structures manifest during the course of AR evolution, observed as flux emergence of various scales, shearing motion on both sides of a PIL, and rotational motion of the sunspots. Of course, such evolutionary processes may serve as a trigger of the eventual flare eruption.

Table 1 Some selected parameters in the literature that address the productivity of X-class flares

Parameter | Criterion for production of X-class flares | Reference
Spot area | 40% of \(\ge \) 1000 MSH \(\beta \gamma \delta \)-spots | Sammis et al. (2000)
PIL total unsigned flux (R-value) | 20% of \(\log {(R)}=5.0\) (within the next 24 h) | Schrijver (2007)
Fractal dimension | \(\ge 1.25\) | McAteer et al. (2005)
Power-law index | \(>2.0\) | Abramenko (2005)
Peak helicity injection rate | \(\ge 6\times 10^{36}\, \mathrm{Mx}^{2}\, \mathrm{s}^{-1}\) | LaBonte et al. (2007)
Total non-neutralized current | \(\ge 4.6\times 10^{12}\, \mathrm{A}\) | Kontogiannis et al. (2017)
Maximum non-neutralized current | \(\ge 8\times 10^{11}\, \mathrm{A}\) | Kontogiannis et al. (2017)
Normalized helicity gradient variance | 1.13 (1 day before the flare) | Reinard et al. (2010)
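Most entries in Table 1 are snapshot quantities, a point taken up in the text below. By contrast, a dynamic quantity such as the flux growth rate (the time derivative of the total unsigned flux; Sun and Norton 2017, discussed below) requires a magnetogram sequence. A minimal sketch, assuming a series of \(B_{z}\) maps at a fixed cadence (array names, noise threshold, and cadence are illustrative assumptions):

```python
import numpy as np

def unsigned_flux(bz_map, pix_area_cm2, threshold=30.0):
    """Total unsigned magnetic flux (Mx) of one magnetogram,
    ignoring pixels below a noise threshold (G)."""
    bz = np.where(np.abs(bz_map) >= threshold, np.abs(bz_map), 0.0)
    return np.sum(bz) * pix_area_cm2

# Illustrative series: Bz maps at 12-min cadence
cadence_s = 720.0
pix_area = (3.6e7) ** 2                       # cm^2, ~0.5" HMI pixel
maps = [np.random.normal(0, 50, (100, 100)) + 200.0 * (i / 10.0)
        for i in range(10)]                   # toy emerging-flux series
phi = np.array([unsigned_flux(m, pix_area) for m in maps])

# Flux growth rate dPhi/dt in Mx/h, via finite differences
dphi_dt = np.gradient(phi, cadence_s) * 3600.0
print(f"peak growth rate ~ {dphi_dt.max():.2e} Mx/h")
```

The same finite-difference treatment applies to any of the snapshot parameters above when a time series of magnetograms is available.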
As we have seen in many parts of this review, there is a multitude of statistical investigations that reveal the quantitative differences between flaring and quiescent ARs. In Table 1, we select several parameters from the literature that are suggested to differentiate (and may subsequently predict) X-class flares. One may notice from this table and other references in this article that many of the variables investigated so far are snapshot parameters, i.e., those derived from an observation at a single moment. However, since it is the AR evolution that drives the flaring activity, we need to understand the importance of dynamic parameters, i.e., those that describe the temporal change of the magnetic fields. One of the most striking examples is the very fast flux emergence in the super-flaring AR NOAA 12673 (Fig. 31). Sun and Norton (2017) showed that the flux growth rate (i.e., the time derivative of the unsigned total magnetic flux) in this AR was greater than any value reported in the literature, and its X9.3 flare occurred a couple of days after this remarkable emergence was detected. Therefore, such time-derivative quantities, like the growth rate sketched above, might be key to predicting flares and CMEs (Sect. 7.2.1; see also Leka and Barnes 2003, 2007).

Despite the remarkable progress made to date, many outstanding questions remain. However, some of them will be answered if observational and numerical techniques improve further in the near future. In this section, we list some of the important questions and discuss the possibilities of utilizing our knowledge of flare-productive ARs in related science fields.

7.1 Outstanding questions and future perspectives

Observationally, we still do not have a "visual" image of the subsurface emerging flux, and thus we cannot establish whether the complex 3D configuration of flaring ARs deduced from the surface evolution is real or not. In a statistical sense, on average, these ARs show enhanced vorticity before they cause flare eruptions (Sect. 3.3.3). However, we still do not have robust methods of imaging the rising flux because (local) helioseismic probing is hampered by the fast emergence and the low signal-to-noise ratio. The existence of strong flux may not be treatable as a small perturbation, which is assumed when solving the linear inverse problem in seismology. Advancements in helioseismology techniques, probably with the support of numerical modeling, are desired to overcome this difficulty. Turbulent convection plays a crucial role in producing the morphological and magnetic complexity of these ARs. The generation of \(\varOmega \)-loops from the magnetic wreath in the global anelastic simulations begins to establish the concept of the "spot-dynamo" (Fig. 2: see Nelson et al. 2013; Brun et al. 2015). However, due to the limitation of the anelastic approximation, it is difficult to trace the story after the flux loops pass through the uppermost convection zone (about \(-\,20\, \mathrm{Mm}\) and upward). Compressible simulations that enable access (very close) to the solar surface, such as that by Hotta et al. (2014), may reveal the dynamical interaction between the magnetic field and turbulent convection in much greater detail. The genesis of magnetic helicity, namely, the twist and writhe of emerging flux (observed in the form of magnetic shear, spot rotations, magnetic tongues, sigmoids, etc.: Sect. 3), is still a big mystery (Longcope et al. 1999).
Regarding the formation of flaring ARs, it is also an interesting question how and why a super-strong transverse field appears at the PIL of a \(\delta \)-spot instead of at the core of the sunspot umbra. These issues may be solved by an advancement of numerical models. There has been a dichotomy of theories as to whether a magnetic flux rope is created well before the eruption or at the very moment of it (see, e.g., Forbes et al. 2006, p. 266). Thanks to the NLFFF, data-constrained, and data-driven models, the flux rope now appears to be created before the eruption, at least in the flare-productive ARs, through the continued shearing along the PIL. These numerical methods may be advanced even further and provide a conclusive answer. For example, vector field measurements in higher atmospheric layers may enable more accurate extrapolations. In the current force-free methods, it is assumed that the input photospheric vector field is force-free (Sect. 4.3.1). However, this is apparently not the case because the photosphere is in the realm of high-\(\beta \) plasma (i.e., the photospheric plasma is largely affected by non-magnetic forces such as the pressure gradient), which is why a smoothing of the photospheric vector field is required before the extrapolation is applied. Chromospheric low-\(\beta \) fields, obtained by future instruments such as the Daniel K. Inouye Solar Telescope (DKIST), may give better boundary conditions for the force-free extrapolations and the data-constrained and data-driven models. Moreover, magnetic information at multiple altitudes would allow us to calculate the partial derivatives in the vertical direction (i.e., \(\partial B_{x}/\partial z\) and \(\partial B_{y}/\partial z\)) and may provide better estimates of the total (vector) current density, horizontal velocity, electric field, and Lorentz force density. Stereoscopic monitoring of the Sun from multiple vantage points, for instance by spacecraft around the Earth and at the Lagrangian L5 point or by off-ecliptic explorers like Solar Orbiter, is helpful in various respects (Akioka et al. 2005; Schrijver et al. 2015; Gibson et al. 2018). Apart from the early warning of space weather events like Earth-directed CMEs and violent ARs beyond the east limb, it may help to probe the deeper interior with local helioseismology, resolve the ambiguity of magnetic measurements, and assess the topology of entangled coronal fields (see results from STEREO). With advanced spectroscopic and imaging instruments, atmospheric evolution such as the build-up and eruption of flux ropes and the non-thermal broadening of EUV lines (Sect. 3.3) may be revealed in further detail. All these new capabilities will greatly improve our understanding of the nature of flare-productive ARs. The detection of flare-related activities with ground-based large-aperture telescopes has, in most cases, been done by GST (Sect. 5). To better understand the fine-scale dynamics in AR build-up and flare eruption, it is necessary to increase the detection rate of these events by enhancing the observing time. One possible idea is to organize an international network of high-resolution telescopes, such as DKIST (4-m aperture in Maui), the New Vacuum Solar Telescope (NVST; 1-m aperture in Yunnan), the Swedish Solar Telescope (SST; 1-m aperture in La Palma), GREGOR (1.5-m aperture in Tenerife), and the European Solar Telescope (EST; 4-m aperture under contemplation), and conduct long-running monitoring of a target AR.
Several key observations of dynamic activities in flaring ARs have already been made with NVST (Xue et al. 2016, 2017) and SST (Guglielmino et al. 2016; Robustini et al. 2018). Therefore, the combination of these telescopes may open up an unexplored discovery space and provide insights into the evolution of small-scale magnetic features over the very long run (days to weeks).

7.2 Broader impacts on related science fields

7.2.1 Prediction and forecasting of solar flares and CMEs

Probably one of the most practical applications of the knowledge of flaring ARs we have acquired is the prediction of flares and CMEs. Statistical investigations of various events that introduced parameters such as those in Table 1 characterized the flare-productive ARs. In recent decades, knowledge-based flare prediction using these quantities has developed significantly.

Table 2 The 13 flare-predictive parameters derived from the SDO/HMI vector data (Bobra and Couvidat 2015). F-score indicates the scoring of each parameter (values not listed here)

Parameter | Formula
Total unsigned current helicity | \(H_{c_{\mathrm{total}}}\propto \sum |B_{z}\cdot J_{z}|\)
Total magnitude of Lorentz force | \(F\propto \sum B^{2}\)
Total photospheric magnetic free energy density | \(\rho _{\mathrm{tot}}\propto \sum ({{\mathbf {B}}}^\mathrm{Obs}-{{\mathbf {B}}}^\mathrm{Pot})^{2}dA\)
Total unsigned vertical current | \(J_{z_{\mathrm{total}}}=\sum |J_{z}|dA\)
Absolute value of the net current helicity | \(H_{c_{\mathrm{abs}}}\propto \left| \sum B_{z}\cdot J_{z}\right| \)
Sum of the modulus of the net current per polarity | \(J_{z_{\mathrm{sum}}}\propto \left| \sum ^{B_{z}^{+}} J_{z}dA\right| +\left| \sum ^{B_{z}^{-}} J_{z}dA\right| \)
Total unsigned flux | \(\varPhi =\sum |B_{z}|dA\)
Area of strong field pixels in the active region | \(\mathrm{Area}=\sum \mathrm{Pixels}\)
Sum of z-component of Lorentz force | \(F_{z}\propto \sum (B_{x}^{2}+B_{y}^{2}-B_{z}^{2})dA\)
Mean photospheric magnetic free energy | \(\overline{\rho }\propto \frac{1}{N}\sum ({{\mathbf {B}}}^\mathrm{Obs}-{{\mathbf {B}}}^\mathrm{Pot})^{2}\)
Sum of flux near polarity inversion line | \(\varPhi =\sum |B_{LoS}|dA\) within R mask
Sum of z-component of normalized Lorentz force | \(\delta F_{z}\propto \frac{\sum (B_{x}^{2}+B_{y}^{2}-B_{z}^{2})}{\sum B^{2}}\)
Fraction of area with shear \(> 45^{\circ }\) | Area with shear \(> 45^{\circ }\)/total area

Nowadays, these methods employ machine-learning algorithms. For example, Bobra and Couvidat (2015) extracted various photospheric parameters from the SDO/HMI vector magnetograms for individual ARs, trained the machine, and obtained a good predictive performance for \(\ge \) M1.0 flares. The parameters investigated are listed in Table 2; they are basically the previously suggested variables (Leka and Barnes 2003; Fisher et al. 2012; Schrijver 2007). It should be noted that most of them are "extensive," i.e., a given parameter increases with AR size (Tan et al. 2007; Welsch et al. 2009; Sun et al. 2015; Toriumi and Takasao 2017). Many of the parameters listed in Table 2 are, again, snapshot ones (see Sect. 6.3), and the inclusion of dynamic parameters may be helpful in flare predictions (Leka and Barnes 2003, 2007). For instance, to the flare-predictive parameters in Table 2, Nishizuka et al. (2017) added information that indicates the flare history and chromospheric pre-flare brightening, as well as the time derivatives of various observables. By training the machine with three different algorithms, the authors obtained a prediction score higher than that of Bobra and Couvidat (2015).
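To make such a pipeline concrete, the following is a minimal, purely illustrative sketch of the workflow: a support vector machine (the classifier used by Bobra and Couvidat 2015) trained on a table of AR parameters against flare/no-flare labels, scored with the true skill statistic commonly reported in this literature. The synthetic feature matrix merely stands in for SHARP-like quantities of Table 2; this is not the authors' actual code or data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Synthetic stand-in for a SHARP-like parameter table:
# rows = AR samples, columns = features (e.g., total unsigned flux,
# unsigned current helicity, PIL flux, ...); label 1 = flared (>= M1.0)
n = 1000
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.8, n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", class_weight="balanced").fit(
    scaler.transform(X_tr), y_tr)

tn, fp, fn, tp = confusion_matrix(
    y_te, clf.predict(scaler.transform(X_te))).ravel()
# True skill statistic (TSS), a standard flare-forecast metric
tss = tp / (tp + fn) - fp / (fp + tn)
print(f"TSS = {tss:.2f}")
```

Class balancing and the choice of skill score matter in this application because flaring ARs are rare; the TSS is favored precisely because it is insensitive to the class imbalance ratio.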
The Nishizuka et al. (2017) study clearly highlights the value of dynamic parameters. However, it is worth noting that increasing the number of parameters does not necessarily improve the prediction performance. In fact, Leka and Barnes (2007) and Bobra and Couvidat (2015) found that there was little value in adding more than a few parameters. This is because a model with many parameters (i.e., large degrees of freedom) tends to overfit the training data and, in that case, the model may perform worse on the validation data. Today, while there remains a view that the occurrence of flares is a "stochastic" process (e.g., the avalanche model by Lu and Hamilton 1991) and therefore "deterministic" forecasting might be fundamentally impossible (Schrijver 2009), knowledge-based prediction is growing much more rapidly than ever before (e.g., Qahwaji and Colak 2007; Colak and Qahwaji 2009; Yu et al. 2010; Ahmed et al. 2013; Muranushi et al. 2015; Bobra and Ilonidis 2016; Liu et al. 2017a; Jonas et al. 2018; Huang et al. 2018; Nishizuka et al. 2018). Together with the attempts to build physics-based (i.e., modeling-based) algorithms (Sects. 4.3.2 and 4.3.3), the recent development of this field suggests that real-time space weather forecasting will come true in the very near future.

7.2.2 Investigating extreme space-weather events in history

The strongest flare ever observed, with an estimated GOES class of \({\sim }\)X45, is the Carrington flare in September 1859 (see Sect. 2.2). To understand the mechanisms and trends of such extreme space weather events that may affect the Earth (like their occurrence frequency; Schrijver et al. 2012; Riley 2012; Curto et al. 2016), it is crucial to increase the sample size by surveying the greatest events in history. However, these events often do not have observations of sufficient data quality for scientific analysis. In the modern age, the data analyzed are often digitized intensity images at various wavelengths and LOS or vector magnetograms. For the historical events, however, the available records may be photographic plates or perhaps only sunspot drawings. But still, there are several ways to elucidate how and why the strong events occurred. For instance, there are several attempts to obtain magnetic information from historical sunspot drawings. For the great storm of May 1921 (Silverman and Cliver 2001; Kappenman 2006), Lundstedt et al. (2015) reconstructed "magnetograms" by applying their torus model to the daily Mount Wilson drawings of sunspot magnetic fields and studied the development of the target AR. They found that spot rotations and flux emergence occurred in the AR, and they pointed out the close association between the drastic spot evolution and the eventual magnetic storm. Another approach is to reconstruct vector magnetograms from existing LOS magnetograms by applying one of the machine-learning methods called transfer learning (Pan and Qiang 2010). One of the purposes of this method is to convert some source data to target data; with this method, one may use SDO/HMI vector magnetograms (for Cycle 24) and SOHO/MDI LOS magnetograms (for Cycle 23) as the source data and target data, respectively, and reproduce "vector magnetograms" for ARs of Cycle 23. Because there were many more strong flares in Cycle 23, such vector data may help investigate the driving mechanisms of extreme events. In many respects, studying historical records is beneficial to understanding the activity of the Sun.
It may tell us how strong the events the Sun can produce are, how frequently these events occur, and what impact they make on our magnetic environment. Although it is not easy to derive useful information from such records, we can still take advantage of the current knowledge of flaring ARs. Attempts to examine drastic spot evolution and to reconstruct magnetograms may give us clues to understand the nature of severe space-weather events.

7.2.3 Connection with stellar flares and CMEs

The production of stellar flares and CMEs is now of great importance, not only from the viewpoint of the mass and angular momentum loss rates, especially of active young stars (e.g., Aarnio et al. 2012), but also in the search for the habitability of orbiting exoplanets. The type II radio burst, which is believed to be produced by MHD shocks in front of a CME propagating into interplanetary space (Gurnett 1995), is currently the best way of detecting stellar CMEs (Osten and Wolk 2017). In this regard, Crosley and Osten (2018a, b) attempted to detect type II bursts on the nearby, magnetically-active, well-characterized M dwarf star EQ Peg. During 20 h of simultaneous radio and optical observation, they detected four optical flare signatures but no radio features identifiable as type II bursts. Two radio bursts were found during an additional 44 h of radio-only observation; however, their characteristics were not consistent with those of type II events. From the statistics of solar flares and CMEs (Yashiro et al. 2006), all four detected flares are empirically predicted to have associated CMEs, but none was detected at radio wavelengths in this data set. As an independent analysis, Leitzinger et al. (2014) searched for flares and CMEs on 28 young late-type (K to M) stars in the open cluster Blanco-1. From the 5-h observation, they found four H\(\alpha \) flares from three M stars and one K star. Interestingly, however, they also did not detect any clear indications of CMEs, such as spectral asymmetries of the H\(\alpha \) line caused by large Doppler velocities. Although we cannot rule out the possibility that the signals were below the detection sensitivity, it is worth discussing the reasons for the "failed" eruptions by employing the knowledge of the flare-productive ARs of the Sun. As we saw, for instance, in Sects. 2.2 and 4.1.5, a flare eruption tends to fail when the overlying coronal loops are strong and slowly decaying with height (Wang et al. 2017a; Vasantharaju et al. 2018; Jing et al. 2018). Observations and numerical modeling of flaring ARs show that, for failed events, a magnetic flux rope is often trapped in the AR core and does not have access to open fields (Toriumi et al. 2017b; Toriumi and Takasao 2017; DeRosa and Barnes 2018). As the Zeeman Doppler Imaging by Morin et al. (2008) suggests, active M dwarfs tend to be covered by strong magnetic patches over the entire stellar surface. Due to the strong confinement by coronal loops extending from these patches, we may expect fewer successful CME eruptions even if energetic stellar flares occur (Drake et al. 2016). The confinement may also be due to a strong large-scale dipolar field, as numerically modeled by Alvarado-Gómez et al. (2018). Thanks to the advancement of observational capabilities, many more "superflares" are now detected on solar-like G-type stars (Maehara et al. 2012; Shibayama et al. 2013). Indications of huge starspots with large magnetic energy are seen in these stars (e.g., Notsu et al. 2013).
By conducting spectroscopic and polarimetric observations of the properties of superflares and starspots, and by comparing them with numerical models of solar–stellar flares and ARs, the production mechanisms, the similarities and diversities, and the stellar space-weather impacts may be revealed in detail in the near future.

Footnotes

1. The GST was formerly called the New Solar Telescope (NST).
2. This is why a study report on the future of solar physics, published by the Next Generation Solar Physics Mission (NGSPM)'s Science Objectives Team (SOT), chartered by NASA, JAXA, and ESA, cites the formation mechanism of flare-productive ARs as one of the most important science targets. At the time of this writing, the report is available at https://hinode.nao.ac.jp/SOLAR-C/SOLAR-C/Documents/NGSPM_report_170731.pdf. Also, observation and modeling of such ARs is recognized as an important target in the International Space-weather Roadmap (Schrijver et al. 2015).
3. Millionths of the solar hemisphere. \(1\, \mathrm{MSH}\sim 3\times 10^{6}\, \mathrm{km}^{2}\).
4. McIntosh (1990) mentioned that "[r]arely will the measured magnetic class conflict with" his definitions of unipolar and bipolar groups.
5. It is implicitly assumed here that the net helicity flux through S other than the photosphere is zero.
6. It is also suggested that the flux ropes emerge bodily from below the surface (e.g., Lites et al. 1995; Okamoto et al. 2008).
7. Not to be confused with the magnetic helicity density, \({{\mathbf {B}}}\cdot {{\mathbf {A}}}={{\mathbf {A}}}\cdot (\nabla \times {{\mathbf {A}}})\): see Sect. 3.2.3.
8. Note that the kink instability is also suggested as one driving mechanism of CME eruption: see Sect. 2.2.
9. In local thermodynamical equilibrium (LTE), it is assumed that the state of plasma is described simply by the Saha–Boltzmann equations, i.e., as a function of the local kinetic temperature and electron density alone. Non-LTE indicates that this assumption is not valid.
10. This part is based on Zirin and Liggett (1987).

Acknowledgements

S.T. benefited from fruitful discussions held in the series of Flux Emergence Workshops, the Project for Solar-Terrestrial Environment Prediction (PSTEP), the solar–stellar team sponsored by the International Space Science Institute (ISSI), and the Nagoya University ISEE/CICR International Workshop on Data-driven Models. S.T. would like to thank Mark C.M. Cheung, Yuhong Fan, George H. Fisher, Manuel Güdel, Hiroki Kurokawa, Mark G. Linton, Rachel A. Osten, and Takashi Sekii for providing valuable comments, discussions, and continuous support. H.W. thanks Chang Liu for his contribution in writing the 2015 RAA review paper that prepared some knowledge for this review. We thank the anonymous referees and the editor Carolus J. Schrijver for very helpful comments. Data are courtesy of the science teams of Hinode, SOHO, and SDO. Hinode is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is operated by these agencies in cooperation with ESA and NSC (Norway). SOHO is a project of international cooperation between ESA and NASA. HMI and AIA are instruments on board SDO, a mission for NASA's Living With a Star program. We thank Sian Prosser, the Royal Astronomical Society, for providing the sunspot drawing by Carrington. The work was supported by JSPS KAKENHI Grant Nos. JP16K17671 (PI: S. Toriumi) and JP15H05814 (PI: K. Ichimoto) and by the NINS program for cross-disciplinary study (Grant Nos.
01321802 and 01311904) on Turbulence, Transport, and Heating Dynamics in Laboratory and Solar/Astrophysical Plasmas: "SoLaBo-X". H.W. acknowledges the support of US NSF under Grant AGS-1821294 and US NASA under Grants 80NSSC17K0016, 80NSSC18K1133, 80NSSC18K1705, and 80NSSC19K0257.

Electronic Supplementary Material

41116_2019_19_MOESM1_ESM.mov (8.4 mb) Movie of Fig. 6 showing the X3.4-class flare in AR NOAA 10930. Hinode/SOT/FG Ca ii H image. Courtesy of Joten Okamoto (ISAS/JAXA and NAOJ). (mov 8.37MB)

41116_2019_19_MOESM2_ESM.mp4 (1.1 mb) Movie of Fig. 35 showing the 3D magnetic structure and the photospheric field Bz. Top view (left) and bird's-eye view (right). Yellow and blue field lines denote the field lines passing by the current sheet between the two arcades. (mp4 1.10 MB)

Movie of Fig. 44 showing 3D numerical simulations of the four representative types of flare-productive ARs, as introduced in Fig. 17. (Top) Surface vertical magnetic fields (magnetogram). (Bottom) Magnetic field lines. The green field lines are for the parasitic tube and the parallel tube. (mp4 2.94 MB)

41116_2019_19_MOESM4_ESM.mpeg (12.3 mb) Movie of Fig. 54 showing BBSO/GST TiO images of the 2012 July 2 C7.4 flare, clearly showing the formation of penumbra. (mpeg 2.2MB)

41116_2019_19_MOESM5_ESM.mov (15.1 mb) Movie of Fig. 59 showing BBSO/GST photospheric TiO images. The yellow box shows the rotating sunspots. (mov 15MB)

Movie of Fig. 3 showing "textbook" flux emergence in AR NOAA 12401 observed simultaneously by Hinode, IRIS, and SDO (2015 August 19). From top left to bottom right are the IRIS slit-jaw image at 1400 Å, the raster-scan intensitygram at the Mg ii k line core (k3: 2796 Å), the intensitygram at the Mg ii triplet line (2798 Å), the Dopplergram produced from the Si iv 1403 Å spectrum (blue, white, and red correspond to −10, 0, and +40 km s−1, respectively), SDO/AIA 1600 Å and 1700 Å, and the SDO/HMI magnetogram and intensitygram. The white arrow in the top left panel indicates the direction of the disk center. (mpeg 24.0MB)

Appendix: Original advocates of the kink instability

Little is known about who first proposed the helical kink instability as the formation mechanism of \(\delta \)-spots. Almost certainly, it was Linton et al. (1996) who first investigated this instability in the context of \(\delta \)-spot formation. However, the authors did not explicitly claim in their paper that they were the first to propose this idea. Before that, from the observed proper motions of \(\delta \)-spots, Tanaka (1991) suggested in his posthumous publication that the rise of a knotted twisted flux tube creates the \(\delta \)-spots (the twisted knot model). Although his illustration, adopted as Fig. 15a in this review, is highly evocative of the kink instability, the term "kink instability" was not used at all in his paper. It is now almost impossible to find out whether Katsuo Tanaka and his longtime collaborator Harold Zirin held the idea of the kink instability at that time, because both of them are deceased. Here we give a brief history between Tanaka (1991) and Linton et al. (1996), which George H. Fisher, Mark G. Linton, and Yuhong Fan related to us. While Fisher was working on the thin flux tube model (Sect. 2.1.1) with Fan at the University of Hawaii, he conceived the idea of adding magnetic twist to the thin flux tube, inspired by Mouschovias and Poland (1978), who proposed magnetic twist as a driver of flux rope eruptions.
Although the concept of Mouschovias and Poland (1978) was based more on the lateral kink instability, in which the displacement of a twisted flux tube is in a single perpendicular direction (i.e., an \(\varOmega \)-loop in a 2D plane), Fisher had the misconception that the driving mechanism was the helical kink mode, in which the direction of the displacement rotates along the tube axis (see Priest 2014, Sect. 7.5.3 for the two modes). When Fisher and Linton started working on this issue after Fisher moved to the University of California, Berkeley in 1992, Dana W. Longcope showed that the thin flux tube model cannot represent the helical kink instability because this model, in principle, assumes that every physical quantity is uniform over the tube's cross section and thus does not include internal motion within the tube. In the meantime, they studied the textbook of Zirin (1988), especially its discussion of the "island \(\delta \)" sunspot, as well as the seminal work by Tanaka (1991). These two publications stimulated them to propose the helical kink instability as the origin of the island \(\delta \)-spot. Over 1994 and 1995, Linton performed an energy principle analysis of the instability with Longcope and, eventually, the work resulted in Linton et al. (1996). Moreover, the presentation by Linton et al., probably at the 26th AAS/SPD meeting in 1995, evolved into a collaboration with Russell B. Dahlburg on MHD simulations, which was published later as Linton et al. (1998). Therefore, it is not easy to narrow the idea down to a single originator. Nevertheless, the above story is a good example of a coincidental misconception serendipitously bearing fruit.
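As a quantitative reference point for this story, the threshold usually quoted for the helical (m = 1) kink mode of a twisted cylindrical flux tube is sketched below. This is the textbook Kruskal–Shafranov-type condition, not the specific stability boundary derived by Linton et al. (1996), which depends on the radial twist profile of the tube:

\[
\varPhi (a)=\frac{L\,B_{\phi }(a)}{a\,B_{z}(a)}\gtrsim \varPhi _{\mathrm{c}},
\]

where \(L\) is the tube length, \(a\) its radius, and \(B_{\phi }\) and \(B_{z}\) the azimuthal and axial field components. For a periodic column \(\varPhi _{\mathrm{c}}=2\pi \) (Kruskal et al. 1958), while line-tying of the footpoints raises the threshold to \(\varPhi _{\mathrm{c}}\approx 2.5\pi \) for a uniform-twist force-free coronal loop (Hood and Priest 1981).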
References

Aarnio AN, Matt SP, Stassun KG (2012) Mass loss in pre-main-sequence stars via coronal mass ejections and implications for angular momentum loss. ApJ 760:9. https://doi.org/10.1088/0004-637X/760/1/9. arXiv:1209.6410
Abbett WP (2007) The magnetic connection between the convection zone and corona in the quiet Sun. ApJ 665:1469–1488. https://doi.org/10.1086/519788
Abbett WP, Fisher GH (2003) A coupled model for the emergence of active region magnetic flux into the solar corona. ApJ 582:475–485. https://doi.org/10.1086/344613
Abbett WP, Fisher GH, Fan Y (2000) The three-dimensional evolution of rising, twisted magnetic flux tubes in a gravitationally stratified model convection zone. ApJ 540:548–562. https://doi.org/10.1086/309316. arXiv:astro-ph/0004031
Abramenko VI (2005) Relationship between magnetic power spectrum and flare productivity in solar active regions. ApJ 629:1141–1149. https://doi.org/10.1086/431732
Abramenko V, Yurchyshyn V (2010) Intermittency and multifractality spectra of the magnetic field in solar active regions. ApJ 722:122–130. https://doi.org/10.1088/0004-637X/722/1/122. arXiv:1012.1586
Abramenko VI, Yurchyshyn VB, Wang H, Spirock TJ, Goode PR (2002) Scaling behavior of structure functions of the longitudinal magnetic field in active regions on the Sun. ApJ 577:487–495. https://doi.org/10.1086/342169
Abramenko VI, Yurchyshyn VB, Wang H, Spirock TJ, Goode PR (2003) Signature of an avalanche in solar flares as measured by photospheric magnetic fields. ApJ 597:1135–1144. https://doi.org/10.1086/378492
Acton L, Tsuneta S, Ogawara Y, Bentley R, Bruner M, Canfield R, Culhane L, Doschek G, Hiei E, Hirayama T (1992) The Yohkoh mission for high-energy solar physics. Science 258:618–625. https://doi.org/10.1126/science.258.5082.618
Ahmed OW, Qahwaji R, Colak T, Higgins PA, Gallagher PT, Bloomfield DS (2013) Solar flare prediction using advanced feature extraction, machine learning, and feature selection. Sol Phys 283:157–175. https://doi.org/10.1007/s11207-011-9896-1
Akioka M, Nagatsuma T, Miyake W, Ohtaka K, Marubashi K (2005) The L5 mission for space weather forecasting. Adv Space Res 35:65–69. https://doi.org/10.1016/j.asr.2004.09.014
Alexander D, Harra-Murnion LK, Khan JI, Matthews SA (1998) Relative timing of soft X-ray nonthermal line broadening and hard X-ray emission in solar flares. ApJ 494:L235–L238. https://doi.org/10.1086/311175
Allen J, Frank L, Sauer H, Reiff P (1989) Effects of the March 1989 solar activity. Eos Trans AGU 70:1479. https://doi.org/10.1029/89EO00409
Alvarado-Gómez JD, Drake JJ, Cohen O, Moschou SP, Garraffo C (2018) Suppression of coronal mass ejections in active stars by an overlying large-scale magnetic field: a numerical study. ApJ 862:93. https://doi.org/10.3847/1538-4357/aacb7f. arXiv:1806.02828
Amari T, Luciani JF, Mikić Z, Linker J (2000) A twisted flux rope model for coronal mass ejections and two-ribbon flares. ApJ 529:L49–L52. https://doi.org/10.1086/312444
Amari T, Luciani JF, Aly JJ, Mikić Z, Linker J (2003a) Coronal mass ejection: initiation, magnetic helicity, and flux ropes. I. Boundary motion-driven evolution. ApJ 585:1073–1086. https://doi.org/10.1086/345501
Amari T, Luciani JF, Aly JJ, Mikić Z, Linker J (2003b) Coronal mass ejection: initiation, magnetic helicity, and flux ropes. II. Turbulent diffusion-driven evolution. ApJ 595:1231–1250. https://doi.org/10.1086/377444
Amari T, Aly JJ, Mikić Z, Linker J (2010) Coronal mass ejection initiation: on the nature of the flux cancellation model. ApJ 717:L26–L30. https://doi.org/10.1088/2041-8205/717/1/L26. arXiv:1005.4669
Amari T, Canou A, Aly JJ (2014) Characterizing and predicting the magnetic environment leading to solar eruptions. Nature 514:465–469. https://doi.org/10.1038/nature13815
Amari T, Canou A, Aly JJ, Delyon F, Alauzet F (2018) Magnetic cage and rope as the key for solar eruptions. Nature 554:211–215. https://doi.org/10.1038/nature24671
Antiochos SK, Dahlburg RB, Klimchuk JA (1994) The magnetic field of solar prominences. ApJ 420:L41–L44. https://doi.org/10.1086/187158
Antiochos SK, DeVore CR, Klimchuk JA (1999) A model for solar coronal mass ejections. ApJ 510:485–493. https://doi.org/10.1086/306563. arXiv:astro-ph/9807220
Anzer U (1968) The stability of force-free magnetic fields with cylindrical symmetry in the context of solar flares. Sol Phys 3:298–315. https://doi.org/10.1007/BF00155164
Archontis V (2008) Magnetic flux emergence in the Sun. J Geophys Res 113:A03S04. https://doi.org/10.1029/2007JA012422
Archontis V, Hansteen V (2014) Clusters of small eruptive flares produced by magnetic reconnection in the Sun. ApJ 788:L2. https://doi.org/10.1088/2041-8205/788/1/L2. arXiv:1405.6420
Archontis V, Hood AW (2009) Formation of Ellerman bombs due to 3D flux emergence. A&A 508:1469–1483. https://doi.org/10.1051/0004-6361/200912455
Archontis V, Hood AW (2010) Flux emergence and coronal eruption. A&A 514:A56. https://doi.org/10.1051/0004-6361/200913502. arXiv:1003.2333
Archontis V, Hood AW (2012) Magnetic flux emergence: a precursor of solar plasma expulsion. A&A 537:A62. https://doi.org/10.1051/0004-6361/201116956
Archontis V, Török T (2008) Eruption of magnetic flux ropes during flux emergence. A&A 492:L35–L38. https://doi.org/10.1051/0004-6361:200811131. arXiv:0811.1134
Archontis V, Moreno-Insertis F, Galsgaard K, Hood A, O'Shea E (2004) Emergence of magnetic flux from the convection zone into the corona. A&A 426:1047–1063. https://doi.org/10.1051/0004-6361:20035934
Archontis V, Hood AW, Savcheva A, Golub L, Deluca E (2009) On the structure and evolution of complexity in sigmoids: a flux emergence model. ApJ 691:1276–1291. https://doi.org/10.1088/0004-637X/691/2/1276
Archontis V, Tsinganos K, Gontikakis C (2010) Recurrent solar jets in active regions. A&A 512:L2. https://doi.org/10.1051/0004-6361/200913752. arXiv:1003.2349
Asai A, Yokoyama T, Shimojo M, Masuda S, Kurokawa H, Shibata K (2004) Flare ribbon expansion and energy release rate. ApJ 611:557–567. https://doi.org/10.1086/422159
Atac T (1987) Statistical relationship between sunspots and major flares. Ap&SS 129:203–208. https://doi.org/10.1007/BF00717871
Aulanier G (2014) The physical mechanisms that initiate and drive solar eruptions. In: Schmieder B, Malherbe JM, Wu ST (eds) Nature of prominences and their role in space weather, IAU Symposium, vol 300. Cambridge University Press, Cambridge, pp 184–196. https://doi.org/10.1017/S1743921313010958. arXiv:1309.7329
Aulanier G (2016) Solar physics: when the tail wags the dog. Nature Phys 12:998–999. https://doi.org/10.1038/nphys3938
Aulanier G, Török T, Démoulin P, DeLuca EE (2010) Formation of torus-unstable flux ropes and electric currents in erupting sigmoids. ApJ 708:314–333. https://doi.org/10.1088/0004-637X/708/1/314
Aulanier G, Janvier M, Schmieder B (2012) The standard flare model in three dimensions. I. Strong-to-weak shear transition in post-flare loops. A&A 543:A110. https://doi.org/10.1051/0004-6361/201219311
Aulanier G, Démoulin P, Schrijver CJ, Janvier M, Pariat E, Schmieder B (2013) The standard flare model in three dimensions. II. Upper limit on solar flare energy. A&A 549:A66. https://doi.org/10.1051/0004-6361/201220406. arXiv:1212.2086
Babcock HW (1961) The topology of the Sun's magnetic field and the 22-year cycle. ApJ 133:572. https://doi.org/10.1086/147060
Bamba Y, Kusano K (2018) Evaluation of applicability of a flare trigger model based on a comparison of geometric structures. ApJ 856:43. https://doi.org/10.3847/1538-4357/aaacd1. arXiv:1802.00134
Bamba Y, Kusano K, Yamamoto TT, Okamoto TJ (2013) Study on the triggering process of solar flares based on Hinode/SOT observations. ApJ 778:48. https://doi.org/10.1088/0004-637X/778/1/48. arXiv:1309.5465
Bamba Y, Inoue S, Kusano K, Shiota D (2017) Triggering process of the X1.0 three-ribbon flare in the great active region NOAA 12192. ApJ 838:134. https://doi.org/10.3847/1538-4357/aa6682. arXiv:1704.00877
Barnes CW, Sturrock PA (1972) Force-free magnetic-field structures and their role in solar activity. ApJ 174:659. https://doi.org/10.1086/151527
Barnes G, Birch AC, Leka KD, Braun DC (2014) Helioseismology of pre-emerging active regions. III. Statistical analysis. ApJ 786:19. https://doi.org/10.1088/0004-637X/786/1/19. arXiv:1307.1938
Bell B, Glazer H (1959) Some sunspot and flare statistics. Smithsonian Contrib Astrophys 3:25
Benz AO (2017) Flare observations. Living Rev Sol Phys 14:2. https://doi.org/10.1007/s41116-016-0004-3
Berger MA (1984) Rigorous new limits on magnetic helicity dissipation in the solar corona. Geophys Astrophys Fluid Dyn 30:79–104. https://doi.org/10.1080/03091928408210078
Berger MA, Field GB (1984) The topological properties of magnetic helicity. J Fluid Mech 147:133–148. https://doi.org/10.1017/S0022112084002019
Berger MA, Prior C (2006) The writhe of open and closed curves. J Phys A: Math Gen 39:8321–8348. https://doi.org/10.1088/0305-4470/39/26/005
Bernasconi PN, Rust DM, Georgoulis MK, Labonte BJ (2002) Moving dipolar features in an emerging flux region. Sol Phys 209:119–139. https://doi.org/10.1023/A:1020943816174
Birch AC, Braun DC, Leka KD, Barnes G, Javornik B (2013) Helioseismology of pre-emerging active regions. II. Average emergence properties. ApJ 762:131. https://doi.org/10.1088/0004-637X/762/2/131. arXiv:1303.1391
Bobra MG, Couvidat S (2015) Solar flare prediction using SDO/HMI vector magnetic field data with a machine-learning algorithm. ApJ 798:135. https://doi.org/10.1088/0004-637X/798/2/135. arXiv:1411.1405
Bobra MG, Ilonidis S (2016) Predicting coronal mass ejections using machine learning methods. ApJ 821:127. https://doi.org/10.3847/0004-637X/821/2/127. arXiv:1603.03775
Bornmann PL, Shaw D (1994) Flare rates and the McIntosh active-region classifications. Sol Phys 150:127–146. https://doi.org/10.1007/BF00712882
Borrero JM, Ichimoto K (2011) Magnetic structure of sunspots. Living Rev Sol Phys 8:4. https://doi.org/10.12942/lrsp-2011-4. arXiv:1109.4412
Boteler DH (2006) The super storms of August/September 1859 and their effects on the telegraph system. Adv Space Res 38:159–172. https://doi.org/10.1016/j.asr.2006.01.013
Brandenburg A (2005) The case for a distributed solar dynamo shaped by near-surface shear. ApJ 625:539–547. https://doi.org/10.1086/429584. arXiv:astro-ph/0502275
Brandenburg A, Kemel K, Kleeorin N, Mitra D, Rogachevskii I (2011) Detection of negative effective magnetic pressure instability in turbulence simulations. ApJ 740:L50. https://doi.org/10.1088/2041-8205/740/2/L50. arXiv:1109.1270
Braun DC (1995) Sunspot seismology: new observations and prospects. In: Ulrich RK, Rhodes EJ Jr, Däppen W (eds) GONG '94: Helio- and astro-seismology from the Earth and space, ASP Conference Series, vol 76. Astronomical Society of the Pacific, San Francisco, p 250
Braun DC (2012) Comment on "Detection of emerging sunspot regions in the solar interior". Science 336:296. https://doi.org/10.1126/science.1215425
Braun DC (2016) A helioseismic survey of near-surface flows around active regions and their association with flares. ApJ 819:106. https://doi.org/10.3847/0004-637X/819/2/106. arXiv:1602.00038
Brown DS, Nightingale RW, Alexander D, Schrijver CJ, Metcalf TR, Shine RA, Title AM, Wolfson CJ (2003) Observations of rotating sunspots from TRACE. Sol Phys 216:79–108. https://doi.org/10.1023/A:1026138413791
Brun AS, Browning MK (2017) Magnetism, dynamo action and the solar–stellar connection. Living Rev Sol Phys 14:4. https://doi.org/10.1007/s41116-017-0007-8
Brun AS, Browning MK, Dikpati M, Hotta H, Strugarek A (2015) Recent advances on solar global magnetism and variability. Space Sci Rev 196:101–136. https://doi.org/10.1007/s11214-013-0028-0
Bruzek A (1964) On the association between loop prominences and flares. ApJ 140:746. https://doi.org/10.1086/147969
Bruzek A (1967) On arch-filament systems in spotgroups. Sol Phys 2:451–461. https://doi.org/10.1007/BF00146493
Bruzek A (1969) Motions in arch filament systems. Sol Phys 8:29–36. https://doi.org/10.1007/BF00150655
Bumba V, Howard R (1965) Large-scale distribution of solar magnetic fields. ApJ 141:1502. https://doi.org/10.1086/148238
Burlaga L, Sittler E, Mariani F, Schwenn R (1981) Magnetic loop behind an interplanetary shock: Voyager, Helios, and IMP 8 observations. J Geophys Res 86:6673–6684. https://doi.org/10.1029/JA086iA08p06673
Burtseva O, Petrie G (2013) Magnetic flux changes and cancellation associated with X-class and M-class flares. Sol Phys 283:429–452. https://doi.org/10.1007/s11207-013-0241-8. arXiv:1210.4200
Caligari P, Moreno-Insertis F, Schussler M (1995) Emerging flux tubes in the solar convection zone. 1: Asymmetry, tilt, and emergence latitude. ApJ 441:886–902. https://doi.org/10.1086/175410
Cameron R, Sammis I (1999) Tangential field changes in the great flare of 1990 May 24. ApJ 525:L61–L64. https://doi.org/10.1086/312328
Canfield RC, Hudson HS, McKenzie DE (1999) Sigmoidal morphology and eruptive solar activity. Geophys Res Lett 26:627–630. https://doi.org/10.1029/1999GL900105
Canfield RC, Kazachenko MD, Acton LW, Mackay DH, Son J, Freeman TL (2007) Yohkoh SXT full-resolution observations of sigmoids: structure, formation, and eruption. ApJ 671:L81–L84. https://doi.org/10.1086/524729
Canou A, Amari T, Bommier V, Schmieder B, Aulanier G, Li H (2009) Evidence for a pre-eruptive twisted flux rope using the THEMIS vector magnetograph. ApJ 693:L27–L30. https://doi.org/10.1088/0004-637X/693/1/L27
Cao W, Gorceix N, Coulter R, Ahn K, Rimmele TR, Goode PR (2010) Scientific instrumentation for the 1.6 m New Solar Telescope in Big Bear. Astron Nachr 331:636. https://doi.org/10.1002/asna.201011390
Carmichael H (1964) A process for flares. In: Hess WN (ed) The physics of solar flares, proceedings of the AAS-NASA symposium held 28–30 October 1963 at the Goddard Space Flight Center, Greenbelt, MD. NASA Special Publication SP-50. NASA, Washington, DC, pp 451–456
Carrington RC (1858) On the distribution of the solar spots in latitudes since the beginning of the year 1854, with a map. MNRAS 19:1–3. https://doi.org/10.1093/mnras/19.1.1
Carrington RC (1859) Description of a singular appearance seen in the Sun on September 1, 1859. MNRAS 20:13–15. https://doi.org/10.1093/mnras/20.1.13
Castellanos Durán JS, Kleint L, Calvo-Mozo B (2018) A statistical study of photospheric magnetic field changes during 75 solar flares. ApJ 852:25. https://doi.org/10.3847/1538-4357/aa9d37. arXiv:1711.08631
Castenmiller MJM, Zwaan C, van der Zalm EBJ (1986) Sunspot nests: manifestations of sequences in magnetic activity. Sol Phys 105:237–255. https://doi.org/10.1007/BF00172045
Cattaneo F, Hughes DW (1988) The nonlinear breakup of a magnetic layer: instability to interchange modes. J Fluid Mech 196:323–344. https://doi.org/10.1017/S0022112088002721
Chae J (2001) Observational determination of the rate of magnetic helicity transport through the solar surface via the horizontal motion of field line footpoints. ApJ 560:L95–L98. https://doi.org/10.1086/324173
Chae J, Wang H, Qiu J, Goode PR, Strous L, Yun HS (2001) The formation of a prominence in active region NOAA 8668. I. SOHO/MDI observations of magnetic field evolution. ApJ 560:476–489. https://doi.org/10.1086/322491
Chae J, Moon YJ, Park YD (2004) Determination of magnetic helicity content of solar active regions from SOHO/MDI magnetograms. Sol Phys 223:39–55. https://doi.org/10.1007/s11207-004-0938-9
Chandra R, Schmieder B, Aulanier G, Malherbe JM (2009) Evidence of magnetic helicity in emerging flux and associated flare. Sol Phys 258:53–67. https://doi.org/10.1007/s11207-009-9392-z. arXiv:0906.1210
Chandra R, Pariat E, Schmieder B, Mandrini CH, Uddin W (2010) How can a negative magnetic helicity active region generate a positive helicity magnetic cloud? Sol Phys 261:127–148. https://doi.org/10.1007/s11207-009-9470-2. arXiv:0910.0968
Chang HK, Chou DY, Sun MT (1999) In search of emerging magnetic flux underneath the solar surface with acoustic imaging. ApJ 526:L53–L56. https://doi.org/10.1086/312366
Charbonneau P (2010) Dynamo models of the solar cycle. Living Rev Sol Phys 7:3. https://doi.org/10.12942/lrsp-2010-3
Chatterjee P, Hansteen V, Carlsson M (2016) Modeling repeatedly flaring \(\delta \) sunspots. Phys Rev Lett 116(10):101101. https://doi.org/10.1103/PhysRevLett.116.101101. arXiv:1601.00749
Chen PF (2011) Coronal mass ejections: models and their observational basis. Living Rev Sol Phys 8:1. https://doi.org/10.12942/lrsp-2011-1
Chen PF, Shibata K (2000) An emerging flux trigger mechanism for coronal mass ejections. ApJ 545:524–531. https://doi.org/10.1086/317803
Chen WZ, Liu C, Song H, Deng N, Tan CY, Wang HM (2007) A statistical study of rapid sunspot structure change associated with flares. Chin J Astron Astrophys 7:733–742. https://doi.org/10.1088/1009-9271/7/5/14
Chen F, Rempel M, Fan Y (2017) Emergence of magnetic flux generated in a solar convective dynamo. I. The formation of sunspots and active regions, and the origin of their asymmetries. ApJ 846:149. https://doi.org/10.3847/1538-4357/aa85a0. arXiv:1704.05999
Cheng X, Zhang J, Saar SH, Ding MD (2012) Differential emission measure analysis of multiple structural components of coronal mass ejections in the inner corona. ApJ 761:62. https://doi.org/10.1088/0004-637X/761/1/62. arXiv:1210.7287
Cheung MCM, DeRosa ML (2012) A method for data-driven simulations of evolving solar active regions. ApJ 757:147. https://doi.org/10.1088/0004-637X/757/2/147. arXiv:1208.2954
Cheung MCM, Isobe H (2014) Flux emergence (theory). Living Rev Sol Phys 11:3. https://doi.org/10.12942/lrsp-2014-3
Cheung MCM, Schüssler M, Moreno-Insertis F (2007) Magnetic flux emergence in granular convection: radiative MHD simulations and observational signatures. A&A 467:703–719. https://doi.org/10.1051/0004-6361:20077048. arXiv:astro-ph/0702666
Cheung MCM, Schüssler M, Tarbell TD, Title AM (2008) Solar surface emerging flux regions: a comparative study of radiative MHD modeling and Hinode SOT observations. ApJ 687:1373–1387. https://doi.org/10.1086/591245. arXiv:0810.5723
Cheung MCM, Rempel M, Title AM, Schüssler M (2010) Simulation of the formation of a solar active region. ApJ 720:233–244. https://doi.org/10.1088/0004-637X/720/1/233. arXiv:1006.4117
Cheung MCM, Boerner P, Schrijver CJ, Testa P, Chen F, Peter H, Malanushenko A (2015) Thermal diagnostics with the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory: a validated method for differential emission measure inversions. ApJ 807:143. https://doi.org/10.1088/0004-637X/807/2/143. arXiv:1504.03258
Chintzoglou G, Zhang J (2013) Reconstructing the subsurface three-dimensional magnetic structure of a solar active region using SDO/HMI observations. ApJ 764:L3. https://doi.org/10.1088/2041-8205/764/1/L3. arXiv:1301.4651
Choudhuri AR, Gilman PA (1987) The influence of the Coriolis force on flux tubes rising through the solar convection zone. ApJ 316:788–800. https://doi.org/10.1086/165243
Cliver EW, Dietrich WF (2013) The 1859 space weather event revisited: limits of extreme activity. J Space Weather Space Clim 3:A31. https://doi.org/10.1051/swsc/2013053
Cliver EW, Svalgaard L (2004) The 1859 solar–terrestrial disturbance and the current limits of extreme space weather activity. Sol Phys 224:407–422. https://doi.org/10.1007/s11207-005-4980-z
Cliver EW, Petrie GJD, Ling AG (2012) Abrupt changes of the photospheric magnetic field in active regions and the impulsive phase of solar flares. ApJ 756:144. https://doi.org/10.1088/0004-637X/756/2/144
Colak T, Qahwaji R (2009) Automated solar activity prediction: a hybrid computer platform using machine learning and solar imaging for automated prediction of solar flares. Space Weather 7:S06001. https://doi.org/10.1029/2008SW000401
Cortie AL (1901) On the types of sun-spot disturbances. ApJ 13:260. https://doi.org/10.1086/140816
Crosley MK, Osten RA (2018a) Constraining stellar coronal mass ejections through multi-wavelength analysis of the active M dwarf EQ Peg. ApJ 856:39. https://doi.org/10.3847/1538-4357/aaaec2. arXiv:1802.03440
Crosley MK, Osten RA (2018b) Low-frequency radio transients on the active M dwarf EQ Peg and the search for coronal mass ejections. ApJ 862:113. https://doi.org/10.3847/1538-4357/aacf02
Culhane JL, Harra LK, James AM, Al-Janabi K, Bradley LJ, Chaudry RA, Rees K, Tandy JA, Thomas P, Whillock MCR, Winter B, Doschek GA, Korendyke CM, Brown CM, Myers S, Mariska J, Seely J, Lang J, Kent BJ, Shaughnessy BM, Young PR, Simnett GM, Castelli CM, Mahmoud S, Mapson-Menard H, Probyn BJ, Thomas RJ, Davila J, Dere K, Windt D, Shea J, Hagood R, Moye R, Hara H, Watanabe T, Matsuzaki K, Kosugi T, Hansteen V, Wikstol Ø (2007) The EUV Imaging Spectrometer for Hinode. Sol Phys 243:19–61. https://doi.org/10.1007/s11207-007-0293-1
Curto JJ, Castell J, Del Moral F (2016) Sfe: waiting for the big one. J Space Weather Space Clim 6:A23. https://doi.org/10.1051/swsc/2016018
Dalmasse K, Pariat E, Valori G, Démoulin P, Green LM (2013) First observational application of a connectivity-based helicity flux density. A&A 555:L6. https://doi.org/10.1051/0004-6361/201321999. arXiv:1307.2838
Dalmasse K, Pariat E, Démoulin P, Aulanier G (2014) Photospheric injection of magnetic helicity: connectivity-based flux density method. Sol Phys 289:107–136. https://doi.org/10.1007/s11207-013-0326-4. arXiv:1307.2829
Dalmasse K, Aulanier G, Démoulin P, Kliem B, Török T, Pariat E (2015) The origin of net electric currents in solar active regions. ApJ 810:17. https://doi.org/10.1088/0004-637X/810/1/17. arXiv:1507.05060
De Pontieu B, Title AM, Lemen JR, Kushner GD, Akin DJ, Allard B, Berger T, Boerner P, Cheung M, Chou C, Drake JF, Duncan DW, Freeland S, Heyman GF, Hoffman C, Hurlburt NE, Lindgren RW, Mathur D, Rehse R, Sabolish D, Seguin R, Schrijver CJ, Tarbell TD, Wülser JP, Wolfson CJ, Yanari C, Mudge J, Nguyen-Phuc N, Timmons R, van Bezooijen R, Weingrod I, Brookner R, Butcher G, Dougherty B, Eder J, Knagenhjelm V, Larsen S, Mansir D, Phan L, Boyle P, Cheimets PN, DeLuca EE, Golub L, Gates R, Hertz E, McKillop S, Park S, Perry T, Podgorski WA, Reeves K, Saar S, Testa P, Tian H, Weber M, Dunn C, Eccles S, Jaeggli SA, Kankelborg CC, Mashburn K, Pust N, Springer L, Carvalho R, Kleint L, Marmie J, Mazmanian E, Pereira TMD, Sawyer S, Strong J, Worden SP, Carlsson M, Hansteen VH, Leenaarts J, Wiesmann M, Aloise J, Chu KC, Bush RI, Scherrer PH, Brekke P, Martinez-Sykora J, Lites BW, McIntosh SW, Uitenbroek H, Okamoto TJ, Gummin MA, Auker G, Jerram P, Pool P, Waltham N (2014) The Interface Region Imaging Spectrograph (IRIS). Sol Phys 289:2733–2779. https://doi.org/10.1007/s11207-014-0485-y. arXiv:1401.2491
Del Zanna G, Mason HE (2018) Solar UV and X-ray spectral diagnostics. Living Rev Sol Phys 15:5. https://doi.org/10.1007/s41116-018-0015-3. arXiv:1809.01618
Démoulin P (2007) Recent theoretical and observational developments in magnetic helicity studies. Adv Space Res 39:1674–1693. https://doi.org/10.1016/j.asr.2006.12.037
Démoulin P, Pariat E (2009) Modelling and observations of photospheric magnetic helicity. Adv Space Res 43:1013–1031. https://doi.org/10.1016/j.asr.2008.12.004
Demoulin P, Henoux JC, Priest ER, Mandrini CH (1996) Quasi-separatrix layers in solar flares. I. Method. A&A 308:643–655
Démoulin P, Mandrini CH, van Driel-Gesztelyi L, Thompson BJ, Plunkett S, Kovári Z, Aulanier G, Young A (2002) What is the source of the magnetic helicity shed by CMEs? The long-term helicity budget of AR 7978. A&A 382:650–665. https://doi.org/10.1051/0004-6361:20011634
Deng N, Liu C, Yang G, Wang H, Denker C (2005) Rapid penumbral decay associated with an X2.3 flare in NOAA active region 9026. ApJ 623:1195–1201. https://doi.org/10.1086/428821
Deng N, Xu Y, Yang G, Cao W, Liu C, Rimmele TR, Wang H, Denker C (2006) Multiwavelength study of flow fields in flaring super active region NOAA 10486. ApJ 644:1278–1291. https://doi.org/10.1086/503600
DeRosa ML, Barnes G (2018) Does nearby open flux affect the eruptivity of solar active regions? ApJ 861:131. https://doi.org/10.3847/1538-4357/aac77a. arXiv:1802.01199
DeRosa ML, Schrijver CJ, Barnes G, Leka KD, Lites BW, Aschwanden MJ, Amari T, Canou A, McTiernan JM, Régnier S, Thalmann JK, Valori G, Wheatland MS, Wiegelmann T, Cheung MCM, Conlon PA, Fuhrmann M, Inhester B, Tadesse T (2009) A critical assessment of nonlinear force-free field modeling of the solar corona for active region 10953. ApJ 696:1780–1791. https://doi.org/10.1088/0004-637X/696/2/1780. arXiv:0902.1007
DeRosa ML, Wheatland MS, Leka KD, Barnes G, Amari T, Canou A, Gilchrist SA, Thalmann JK, Valori G, Wiegelmann T, Schrijver CJ, Malanushenko A, Sun X, Régnier S (2015) The influence of spatial resolution on nonlinear force-free modeling. ApJ 811:107. https://doi.org/10.1088/0004-637X/811/2/107. arXiv:1508.05455
DeVore CR, Antiochos SK (2000) Dynamical formation and stability of helical prominence magnetic fields. ApJ 539:954–963. https://doi.org/10.1086/309275
DeVore CR, Antiochos SK (2008) Homologous confined filament eruptions via magnetic breakout. ApJ 680:740–756. https://doi.org/10.1086/588011
Dodson HW, Hedeman ER (1949) The frequency and positions of flares within three active sunspot areas. ApJ 110:242. https://doi.org/10.1086/145200
Dodson HW, Hedeman ER (1970) Major H\(\alpha \) flares in centers of activity with very small or no spots. Sol Phys 13:401–419. https://doi.org/10.1007/BF00153560
Drake JJ, Cohen O, Garraffo C, Kashyap V (2016) Stellar flares and the dark energy of CMEs. In: Kosovichev AG, Hawley SL, Heinzel P (eds) Solar and stellar flares and their effects on planets, IAU Symposium, vol 320. Cambridge University Press, Cambridge, pp 196–201. https://doi.org/10.1017/S1743921316000260. arXiv:1610.05185
D'Silva S, Choudhuri AR (1993) A theoretical model for tilts of bipolar magnetic regions. A&A 272:621
Ellerman F (1917) Solar hydrogen "bombs". ApJ 46:298. https://doi.org/10.1086/142366
Ellison MA (1946) Visual and spectrographic observations of a great solar flare, 1946 July 25. MNRAS 106:500. https://doi.org/10.1093/mnras/106.6.500
Elsasser WM (1956) Hydromagnetic dynamo theory. Rev Mod Phys 28:135–163. https://doi.org/10.1103/RevModPhys.28.135
Falconer DA, Moore RL, Gary GA (2002) Correlation of the coronal mass ejection productivity of solar active regions with measures of their global nonpotentiality from vector magnetograms: baseline results. ApJ 569:1016–1025. https://doi.org/10.1086/339161
Falconer DA, Moore RL, Gary GA (2006) Magnetic causes of solar coronal mass ejections: dominance of the free magnetic energy over the magnetic twist alone. ApJ 644:1258–1272. https://doi.org/10.1086/503699
Fan Y (2001a) Nonlinear growth of the three-dimensional undular instability of a horizontal magnetic layer and the formation of arching flux tubes. ApJ 546:509–527. https://doi.org/10.1086/318222
Fan Y (2001b) The emergence of a twisted \(\Omega \)-tube into the solar atmosphere. ApJ 554:L111–L114. https://doi.org/10.1086/320935
Fan Y (2008) The three-dimensional evolution of buoyant magnetic flux tubes in a model solar convective envelope. ApJ 676:680–697. https://doi.org/10.1086/527317
Fan Y (2009a) Magnetic fields in the solar convection zone. Living Rev Sol Phys 6:4. https://doi.org/10.12942/lrsp-2009-4
Fan Y (2009b) The emergence of a twisted flux tube into the solar atmosphere: sunspot rotations and the formation of a coronal flux rope. ApJ 697:1529–1542. https://doi.org/10.1088/0004-637X/697/2/1529. arXiv:0903.1288
Fan Y, Gibson SE (2007) Onset of coronal mass ejections due to loss of confinement of coronal flux ropes. ApJ 668:1232–1245. https://doi.org/10.1086/521335
Fan Y, Fisher GH, Deluca EE (1993) The origin of morphological asymmetries in bipolar active regions. ApJ 405:390–401. https://doi.org/10.1086/172370
Fan Y, Zweibel EG, Lantz SR (1998a) Two-dimensional simulations of buoyantly rising, interacting magnetic flux tubes. ApJ 493:480–493. https://doi.org/10.1086/305122
Fan Y, Zweibel EG, Linton MG, Fisher GH (1998b) The rise of kink-unstable magnetic flux tubes in the solar convection zone. ApJ 505:L59–L63. https://doi.org/10.1086/311597
Fan Y, Zweibel EG, Linton MG, Fisher GH (1999) The rise of kink-unstable magnetic flux tubes and the origin of \(\delta \)-configuration sunspots. ApJ 521:460–477. https://doi.org/10.1086/307533
Fan Y, Abbett WP, Fisher GH (2003) The dynamic evolution of twisted magnetic flux tubes in a three-dimensional convecting flow. I. Uniformly buoyant horizontal tubes. ApJ 582:1206–1219. https://doi.org/10.1086/344798
Fang F, Fan Y (2015) \(\delta \)-Sunspot formation in simulation of active-region-scale flux emergence. ApJ 806:79. https://doi.org/10.1088/0004-637X/806/1/79. arXiv:1504.04393
Fang F, Manchester W, Abbett WP, van der Holst B (2010) Simulation of flux emergence from the convection zone to the corona. ApJ 714:1649–1657. https://doi.org/10.1088/0004-637X/714/2/1649. arXiv:1003.6118
Fang F, Manchester W IV, Abbett WP, van der Holst B (2012a) Buildup of magnetic shear and free energy during flux emergence and cancellation. ApJ 754:15. https://doi.org/10.1088/0004-637X/754/1/15. arXiv:1205.3764
Fang F, Manchester W IV, Abbett WP, van der Holst B (2012b) Dynamic coupling of convective flows and magnetic field during flux emergence. ApJ 745:37. https://doi.org/10.1088/0004-637X/745/1/37. arXiv:1111.1679
Finn JM, Antonsen TM Jr (1985) Magnetic helicity: what is it, and what is it good for? Comments Plasma Phys Contr Fusion 9:111–126
Fisher GH, Fan Y, Longcope DW, Linton MG, Pevtsov AA (2000) The solar dynamo and emerging flux (invited review). Sol Phys 192:119–139. https://doi.org/10.1023/A:1005286516009
Fisher GH, Bercik DJ, Welsch BT, Hudson HS (2012) Global forces in eruptive solar flares: the Lorentz force acting on the solar atmosphere and the solar interior. Sol Phys 277:59–76. https://doi.org/10.1007/s11207-011-9907-2. arXiv:1006.5247
Fletcher L, Dennis BR, Hudson HS, Krucker S, Phillips K, Veronig A, Battaglia M, Bone L, Caspi A, Chen Q, Gallagher P, Grigis PT, Ji H, Liu W, Milligan RO, Temmer M (2011) An observational overview of solar flares. Space Sci Rev 159:19–106. https://doi.org/10.1007/s11214-010-9701-8. arXiv:1109.5932
Forbes TG, Malherbe JM (1986) A shock condensation mechanism for loop prominences. ApJ 302:L67–L70. https://doi.org/10.1086/184639
Forbes TG, Priest ER (1984) Numerical simulation of reconnection in an emerging magnetic flux region. Sol Phys 94:315–340. https://doi.org/10.1007/BF00151321
Forbes TG, Linker JA, Chen J, Cid C, Kóta J, Lee MA, Mann G, Mikić Z, Potgieter MS, Schmidt JM, Siscoe GL, Vainio R, Antiochos SK, Riley P (2006) CME theory and models. Space Sci Rev 123:251–302. https://doi.org/10.1007/s11214-006-9019-8
Forbush SE (1946) Three unusual cosmic-ray increases possibly due to charged particles from the Sun. Phys Rev 70:771–772. https://doi.org/10.1103/PhysRev.70.771
Gaizauskas V, Harvey KL, Harvey JW, Zwaan C (1983) Large-scale patterns formed by solar active regions during the ascending phase of cycle 21. ApJ 265:1056–1065. https://doi.org/10.1086/160747
Gaizauskas V, Harvey KL, Proulx M (1994) Interactions between nested sunspots. 1: The formation and breakup of a delta-type sunspot. ApJ 422:883–898. https://doi.org/10.1086/173780
Gallagher PT, Moon YJ, Wang H (2002) Active-region monitoring and flare forecasting. I. Data processing and first results. Sol Phys 209:171–183. https://doi.org/10.1023/A:1020950221179
Gao Y, Zhao J, Zhang H (2012) Analysis on correlations between subsurface kinetic helicity and photospheric current helicity in active regions. ApJ 761:L9. https://doi.org/10.1088/2041-8205/761/1/L9. arXiv:1211.2278
Gao Y, Zhao J, Zhang H (2014) A study of connections between solar flares and subsurface flow fields of active regions. Sol Phys 289:493–502. https://doi.org/10.1007/s11207-013-0274-z
Georgoulis MK (2005) Turbulence in the solar atmosphere: manifestations and diagnostics via solar image processing. Sol Phys 228:5–27. https://doi.org/10.1007/s11207-005-2513-4. arXiv:astro-ph/0511449
Georgoulis MK (2012) Are solar active regions with major flares more fractal, multifractal, or turbulent than others? Sol Phys 276:161–181. https://doi.org/10.1007/s11207-010-9705-2. arXiv:1101.0547
Georgoulis MK, Rust DM, Bernasconi PN, Schmieder B (2002) Statistics, morphology, and energetics of Ellerman bombs. ApJ 575:506–528. https://doi.org/10.1086/341195
Georgoulis MK, Titov VS, Mikić Z (2012) Non-neutralized electric current patterns in solar active regions: origin of the shear-generating Lorentz force. ApJ 761:61. https://doi.org/10.1088/0004-637X/761/1/61. arXiv:1210.2919
Gibson SE, Fan Y, Török T, Kliem B (2006) The evolving sigmoid: evidence for magnetic flux ropes in the corona before, during, and after CMEs. Space Sci Rev 124:131–144. https://doi.org/10.1007/s11214-006-9101-2
Gibson SE, Vourlidas A, Hassler DM, Rachmeler LA, Thompson MJ, Newmark J, Velli M, Title A, McIntosh SW (2018) Solar physics from unconventional viewpoints. Front Astron Space Sci 5:32. https://doi.org/10.3389/fspas.2018.00032. arXiv:1805.09452
Giovanelli RG (1939) The relations between eruptions and sunspots. ApJ 89:555. https://doi.org/10.1086/144081
Gizon L, Birch AC (2005) Local helioseismology. Living Rev Sol Phys 2:6. https://doi.org/10.12942/lrsp-2005-6
Gold T, Hoyle F (1960) On the origin of solar flares. MNRAS 120:89. https://doi.org/10.1093/mnras/120.2.89
Golub L, Deluca E, Austin G, Bookbinder J, Caldwell D, Cheimets P, Cirtain J, Cosmo M, Reid P, Sette A, Weber M, Sakao T, Kano R, Shibasaki K, Hara H, Tsuneta S, Kumagai K, Tamura T, Shimojo M, McCracken J, Carpenter J, Haight H, Siler R, Wright E, Tucker J, Rutledge H, Barbera M, Peres G, Varisco S (2007) The X-Ray Telescope (XRT) for the Hinode mission. Sol Phys 243:63–86. https://doi.org/10.1007/s11207-007-0182-1
Gömöry P, Balthasar H, Kuckein C, Koza J, Veronig AM, González Manrique SJ, Kučera A, Schwartz P, Hanslmeier A (2017) Flare-induced changes of the photospheric magnetic field in a \(\delta \)-spot deduced from ground-based observations. A&A 602:A60. https://doi.org/10.1051/0004-6361/201730644. arXiv:1704.06089
Gough DO (1969) The anelastic approximation for thermal convection. J Atmos Sci 26:448–456. https://doi.org/10.1175/1520-0469(1969)026<0448:TAAFTC>2.0.CO;2
Greatrix GR (1963) On the statistical relations between flare intensity and sunspots. MNRAS 126:123. https://doi.org/10.1093/mnras/126.2.123
Green LM, Kliem B (2009) Flux rope formation preceding coronal mass ejection onset. ApJ 700:L83–L87. https://doi.org/10.1088/0004-637X/700/2/L83. arXiv:0906.4794
Green LM, Lópezfuentes MC, Mandrini CH, Démoulin P, Van Driel-Gesztelyi L, Culhane JL (2002) The magnetic helicity budget of a CME-prolific active region. Sol Phys 208:43–68. https://doi.org/10.1023/A:1019658520033
Green LM, Kliem B, Török T, van Driel-Gesztelyi L, Attrill GDR (2007) Transient coronal sigmoids and rotating erupting flux ropes. Sol Phys 246:365–391. https://doi.org/10.1007/s11207-007-9061-z
Green LM, Kliem B, Wallace AJ (2011) Photospheric flux cancellation and associated flux rope formation and eruption. A&A 526:A2. https://doi.org/10.1051/0004-6361/201015146. arXiv:1011.1227
Guglielmino SL, Zuccarello F, Romano P, Cristaldi A, Ermolli I, Criscuoli S, Falco M, Zuccarello FP (2016) A multi-instrument analysis of a C4.1 flare occurring in a \(\delta \) sunspot. ApJ 819:157. https://doi.org/10.3847/0004-637X/819/2/157
Guo J, Lin J, Deng Y (2014) The dependence of flares on the magnetic classification of the source regions in solar cycles 22–23. MNRAS 441:2208–2211. https://doi.org/10.1093/mnras/stu695
Gurnett DA (1995) Heliospheric radio emissions. Space Sci Rev 72:243–254. https://doi.org/10.1007/BF00768787
Haber DA, Hindman BW, Toomre J, Bogart RS, Larsen RM, Hill F (2002) Evolving submerged meridional circulation cells within the upper convection zone revealed by ring-diagram analysis. ApJ 570:855–864. https://doi.org/10.1086/339631
Hagyard MJ (1990) The significance of vector magnetic field measurements. Mem Soc Astron Ital 61:337–357
Hagyard MJ, Smith JB Jr, Teuber D, West EA (1984) A quantitative study relating observed shear in photospheric magnetic fields to repeated flaring. Sol Phys 91:115–126. https://doi.org/10.1007/BF00213618
Hagyard MJ, Venkatakrishnan P, Smith JB Jr (1990) Nonpotential magnetic fields at sites of gamma-ray flares. ApJS 73:159–163. https://doi.org/10.1086/191447
Hale GE (1908) On the probable existence of a magnetic field in sun-spots. ApJ 28:315. https://doi.org/10.1086/141602
Hale GE, Nicholson SB (1925) The law of sun-spot polarity. ApJ 62:270. https://doi.org/10.1086/142933
Hale GE, Nicholson SB (1938) Magnetic observations of sunspots, 1917–1924. Carnegie Institution of Washington, Washington, DC
Hale GE, Ellerman F, Nicholson SB, Joy AH (1919) The magnetic polarity of sun-spots. ApJ 49:153. https://doi.org/10.1086/142452
Handy BN, Acton LW, Kankelborg CC, Wolfson CJ, Akin DJ, Bruner ME, Caravalho R, Catura RC, Chevalier R, Duncan DW, Edwards CG, Feinstein CN, Freeland SL, Friedlaender FM, Hoffmann CH, Hurlburt NE, Jurcevich BK, Katz NL, Kelly GA, Lemen JR, Levay M, Lindgren RW, Mathur DP, Meyer SB, Morrison SJ, Morrison MD, Nightingale RW, Pope TP, Rehse RA, Schrijver CJ, Shine RA, Shing L, Strong KT, Tarbell TD, Title AM, Torgerson DD, Golub L, Bookbinder JA, Caldwell D, Cheimets PN, Davis WN, Deluca EE, McMullen RA, Warren HP, Amato D, Fisher R, Maldonado H, Parkinson C (1999) The Transition Region and Coronal Explorer. Sol Phys 187:229–260. https://doi.org/10.1023/A:1005166902804
Hannah IG, Kontar EP (2013) Multi-thermal dynamics and energetics of a coronal mass ejection in the low solar atmosphere. A&A 553:A10. https://doi.org/10.1051/0004-6361/201219727. arXiv:1212.5529
Hansteen VH, Archontis V, Pereira TMD, Carlsson M, Rouppe van der Voort L, Leenaarts J (2017) Bombs and flares at the surface and lower atmosphere of the Sun. ApJ 839:22. https://doi.org/10.3847/1538-4357/aa6844. arXiv:1704.02872
Harra LK, Matthews SA, Culhane JL (2001) Nonthermal velocity evolution in the precursor phase of a solar flare. ApJ 549:L245–L248. https://doi.org/10.1086/319163
Harra LK, Williams DR, Wallace AJ, Magara T, Hara H, Tsuneta S, Sterling AC, Doschek GA (2009) Coronal nonthermal velocity following helicity injection before an X-class flare. ApJ 691:L99–L102. https://doi.org/10.1088/0004-637X/691/2/L99
Hartlep T, Kosovichev AG, Zhao J, Mansour NN (2011) Signatures of emerging subsurface structures in acoustic power maps of the Sun. Sol Phys 268:321–327. https://doi.org/10.1007/s11207-010-9544-1. arXiv:1003.4305
Harvey KL, Harvey JW (1976) A study of the magnetic and velocity fields in an active region. Sol Phys 47:233–246. https://doi.org/10.1007/BF00152261
Harvey KL, Martin SF (1973) Ephemeral active regions. Sol Phys 32:389–402. https://doi.org/10.1007/BF00154951
Hathaway DH (2015) The solar cycle. Living Rev Sol Phys 12:4. https://doi.org/10.1007/lrsp-2015-4. arXiv:1502.07020
Hayakawa H, Ebihara Y, Hand DP, Hayakawa S, Kumar S, Mukherjee S, Veenadhari B (2018) Low-latitude aurorae during the extreme space weather events in 1859. ApJ 869:57. https://doi.org/10.3847/1538-4357/aae47c. arXiv:1811.02786
Hayashi K, Feng X, Xiong M, Jiang C (2018) An MHD simulation of solar active region 11158 driven with a time-dependent electric field determined from HMI vector magnetic field measurement data. ApJ 855:11. https://doi.org/10.3847/1538-4357/aaacd8
Heyvaerts J, Priest ER, Rust DM (1977) An emerging flux model for the solar flare phenomenon. ApJ 216:123–137. https://doi.org/10.1086/155453
Hirayama T (1974) Theoretical model of flares and prominences. I: Evaporating flare model. Sol Phys 34:323–338. https://doi.org/10.1007/BF00153671
Hodgson R (1859) On a curious appearance seen in the Sun. MNRAS 20:15–16. https://doi.org/10.1093/mnras/20.1.15
Holder ZA, Canfield RC, McMullen RA, Nandy D, Howard RF, Pevtsov AA (2004) On the tilt and twist of solar active regions. ApJ 611:1149–1155. https://doi.org/10.1086/422247
Hong J, Ding MD, Li Y, Carlsson M (2018) Non-LTE calculations of the Fe I 6173 Å line in a flaring atmosphere. ApJ 857:L2. https://doi.org/10.3847/2041-8213/aab9aa. arXiv:1803.09912
Hood AW, Priest ER (1980) Magnetic instability of coronal arcades as the origin of two-ribbon flares. Sol Phys 66:113–134. https://doi.org/10.1007/BF00150523
Hood AW, Priest ER (1981) Critical conditions for magnetic instabilities in force-free coronal loops. Geophys Astrophys Fluid Dyn 17:297–318. https://doi.org/10.1080/03091928108243687
Hood AW, Archontis V, Galsgaard K, Moreno-Insertis F (2009) The emergence of toroidal flux tubes from beneath the solar photosphere. A&A 503:999–1011. https://doi.org/10.1051/0004-6361/200912189
Hotta H, Rempel M, Yokoyama T (2014) High-resolution calculations of the solar global convection with the reduced speed of sound technique. I. The structure of the convection and the magnetic field without the rotation. ApJ 786:24. https://doi.org/10.1088/0004-637X/786/1/24. arXiv:1402.5008
Huang X, Wang H, Xu L, Liu J, Li R, Dai X (2018) Deep learning based solar flare forecasting model. I. Results for line-of-sight magnetograms. ApJ 856:7. https://doi.org/10.3847/1538-4357/aaae00
Hudson HS (2000) Implosions in coronal transients. ApJ 531:L75–L77. https://doi.org/10.1086/312516
Hudson HS, Lemen JR, St Cyr OC, Sterling AC, Webb DF (1998) X-ray coronal changes during halo CMEs. Geophys Res Lett 25:2481–2484. https://doi.org/10.1029/98GL01303
Hudson HS, Fisher GH, Welsch BT (2008) Flare energy and magnetic field variations. In: Howe R, Komm RW, Balasubramaniam KS, Petrie GJD (eds) Subsurface and atmospheric influences on solar activity, ASP Conference Series, vol 383. Astronomical Society of the Pacific, San Francisco, p 221
Ilonidis S, Zhao J, Kosovichev A (2011) Detection of emerging sunspot regions in the solar interior. Science 333:993. https://doi.org/10.1126/science.1206253
Ilonidis S, Zhao J, Hartlep T (2013) Helioseismic investigation of emerging magnetic flux in the solar convection zone. ApJ 777:138. https://doi.org/10.1088/0004-637X/777/2/138
Imada S, Bamba Y, Kusano K (2014) Coronal behavior before the large flare onset. PASJ 66:S17. https://doi.org/10.1093/pasj/psu092. arXiv:1408.2585
Inoue S (2016) Magnetohydrodynamics modeling of coronal magnetic field and solar eruptions based on the photospheric magnetic field. Prog Earth Planet Sci 3:19. https://doi.org/10.1186/s40645-016-0084-7
Inoue S, Hayashi K, Magara T, Choe GS, Park YD (2014) Magnetohydrodynamic simulation of the X2.2 solar flare on 2011 February 15. I. Comparison with the observations. ApJ 788:182. https://doi.org/10.1088/0004-637X/788/2/182. arXiv:1404.3257
Inoue S, Hayashi K, Magara T, Choe GS, Park YD (2015) Magnetohydrodynamic simulation of the X2.2 solar flare on 2011 February 15. II. Dynamics connecting the solar flare and the coronal mass ejection. ApJ 803:73. https://doi.org/10.1088/0004-637X/803/2/73. arXiv:1501.07663
Inoue S, Hayashi K, Kusano K (2016) Structure and stability of magnetic fields in solar active region 12192 based on the nonlinear force-free field modeling. ApJ 818:168. https://doi.org/10.3847/0004-637X/818/2/168. arXiv:1601.00791
Inoue S, Kusano K, Büchner J, Skála J (2018a) Formation and dynamics of a solar eruptive flux tube. Nature Commun 9:174. https://doi.org/10.1038/s41467-017-02616-8
Inoue S, Shiota D, Bamba Y, Park SH (2018b) Magnetohydrodynamic modeling of a solar eruption associated with an X9.3 flare observed in the active region 12673. ApJ 867:83. https://doi.org/10.3847/1538-4357/aae079. arXiv:1809.02309
Ishiguro N, Kusano K (2017) Double arc instability in the solar corona. ApJ 843:101. https://doi.org/10.3847/1538-4357/aa799b. arXiv:1706.06112
Ishii TT, Kurokawa H, Takeuchi TT (1998) Emergence of a twisted magnetic flux bundle as a source of strong flare activity. ApJ 499:898–904. https://doi.org/10.1086/305669. arXiv:astro-ph/9708208
Ishii TT, Kurokawa H, Takeuchi TT (2000) Emergence of twisted magnetic-flux bundles and flare activity in a large active region, NOAA 4201. PASJ 52:337. https://doi.org/10.1093/pasj/52.2.337
Isobe H, Miyagoshi T, Shibata K, Yokoyama T (2005) Filamentary structure on the Sun from the magnetic Rayleigh–Taylor instability. Nature 434:478–481. https://doi.org/10.1038/nature03399
Isobe H, Tripathi D, Archontis V (2007) Ellerman bombs and jets associated with resistive flux emergence. ApJ 657:L53–L56. https://doi.org/10.1086/512969
Isobe H, Proctor MRE, Weiss NO (2008) Convection-driven emergence of small-scale magnetic fields and their role in coronal heating and solar wind acceleration. ApJ 679:L57. https://doi.org/10.1086/589150
Jaeggli SA (2016) Multi-wavelength study of a delta-spot. I. A region of very strong, horizontal magnetic field. ApJ 818:81. https://doi.org/10.3847/0004-637X/818/1/81. arXiv:1512.08463
Jaeggli SA, Norton AA (2016) The magnetic classification of solar active regions 1992–2015. ApJ 820:L11. https://doi.org/10.3847/2041-8205/820/1/L11. arXiv:1603.02552
Janvier M, Aulanier G, Pariat E, Démoulin P (2013) The standard flare model in three dimensions. III. Slip-running reconnection properties. A&A 555:A77. https://doi.org/10.1051/0004-6361/201321164. arXiv:1305.4053
Janvier M, Aulanier G, Démoulin P (2015) From coronal observations to MHD simulations, the building blocks for 3D models of solar flares (invited review). Sol Phys 290:3425–3456. https://doi.org/10.1007/s11207-015-0710-3. arXiv:1505.05299
Jensen JM, Duvall TL Jr, Jacobsen BH, Christensen-Dalsgaard J (2001) Imaging an emerging active region with helioseismic tomography. ApJ 553:L193–L196. https://doi.org/10.1086/320677
Ji H, Huang G, Wang H, Zhou T, Li Y, Zhang Y, Song M (2006) Converging motion of H\(\alpha \) conjugate kernels: the signature of fast relaxation of a sheared magnetic field. ApJ 636:L173–L174. https://doi.org/10.1086/500203
Jiang C, Feng X, Wu ST, Hu Q (2013) Magnetohydrodynamic simulation of a sigmoid eruption of active region 11283. ApJ 771:L30. https://doi.org/10.1088/2041-8205/771/2/L30. arXiv:1306.1009
Jiang C, Wu ST, Feng X, Hu Q (2016a) Data-driven magnetohydrodynamic modelling of a flux-emerging active region leading to solar eruption. Nature Commun 7:11522. https://doi.org/10.1038/ncomms11522
Jiang C, Wu ST, Yurchyshyn V, Wang H, Feng X, Hu Q (2016b) How did a major confined flare occur in super solar active region 12192? ApJ 828:62. https://doi.org/10.3847/0004-637X/828/1/62. arXiv:1606.09334
Jing J, Song H, Abramenko V, Tan C, Wang H (2006) The statistical relationship between the photospheric magnetic parameters and the flare productivity of active regions. ApJ 644:1273–1277. https://doi.org/10.1086/503895
Jing J, Wiegelmann T, Suematsu Y, Kubo M, Wang H (2008) Changes of magnetic structure in three dimensions associated with the X3.4 flare of 2006 December 13. ApJ 676:L81. https://doi.org/10.1086/587058
Jing J, Park SH, Liu C, Lee J, Wiegelmann T, Xu Y, Deng N, Wang H (2012) Evolution of relative magnetic helicity and current helicity in NOAA active region 11158. ApJ 752:L9. https://doi.org/10.1088/2041-8205/752/1/L9
Jing J, Liu C, Lee J, Wang S, Wiegelmann T, Xu Y, Wang H (2014) Evolution of a magnetic flux rope and its overlying arcade based on nonlinear force-free field extrapolations. ApJ 784:L13. https://doi.org/10.1088/2041-8205/784/1/L13
Jing J, Liu C, Lee J, Ji H, Liu N, Xu Y, Wang H (2018) Statistical analysis of torus and kink instabilities in solar eruptions. ApJ 864:138. https://doi.org/10.3847/1538-4357/aad6e4. arXiv:1808.08924
Johnstone BM, Petrie GJD, Sudol JJ (2012) Abrupt longitudinal magnetic field changes and ultraviolet emissions accompanying solar flares. ApJ 760:29. https://doi.org/10.1088/0004-637X/760/1/29. arXiv:1211.2212
Jonas E, Bobra M, Shankar V, Todd Hoeksema J, Recht B (2018) Flare prediction using photospheric and coronal image data. Sol Phys 293:48. https://doi.org/10.1007/s11207-018-1258-9. arXiv:1708.01323
Jouve L, Brun AS (2009) Three-dimensional nonlinear evolution of a magnetic flux tube in a spherical shell: influence of turbulent convection and associated mean flows. ApJ 701:1300–1322. https://doi.org/10.1088/0004-637X/701/2/1300. arXiv:0907.2131
Jouve L, Brun AS, Aulanier G (2013) Global dynamics of subsurface solar active regions. ApJ 762:4. https://doi.org/10.1088/0004-637X/762/1/4. arXiv:1211.7251
Jouve L, Brun AS, Aulanier G (2018) Interactions of twisted \(\Omega \)-loops in a model solar convection zone. ApJ 857:83. https://doi.org/10.3847/1538-4357/aab5b6. arXiv:1803.04709
Kaiser ML, Kucera TA, Davila JM, St Cyr OC, Guhathakurta M, Christian E (2008) The STEREO mission: an introduction. Space Sci Rev 136:5–16. https://doi.org/10.1007/s11214-007-9277-0
Kaisig M, Tajima T, Shibata K, Nozawa S, Matsumoto R (1990) Nonlinear excitation of magnetic undular instability by convective motion. ApJ 358:698–709. https://doi.org/10.1086/169024
Kálmán B (2001) Submergence of magnetic flux in interaction of sunspot groups. A&A 371:731–737. https://doi.org/10.1051/0004-6361:20010429
Kaneko T, Yokoyama T (2017) Reconnection–condensation model for solar prominence formation. ApJ 845:12. https://doi.org/10.3847/1538-4357/aa7d59. arXiv:1706.10008
Kappenman JG (2006) Great geomagnetic storms and extreme impulsive geomagnetic field disturbance events: an analysis of observational evidence including the great storm of May 1921. Adv Space Res 38:188–199. https://doi.org/10.1016/j.asr.2005.08.055
Kawabata Y, Inoue S, Shimizu T (2017) Non-potential field formation in the X-shaped quadrupole magnetic field configuration. ApJ 842:106. https://doi.org/10.3847/1538-4357/aa71a0. arXiv:1705.02560
Kawabata Y, Iida Y, Doi T, Akiyama S, Yashiro S, Shimizu T (2018) Statistical relation between solar flares and coronal mass ejections with respect to sigmoidal structures in active regions. ApJ 869:99. https://doi.org/10.3847/1538-4357/aaebfc. arXiv:1810.10808
Kawaguchi I, Kitai R (1976) The velocity field associated with the birth of sunspots. Sol Phys 46:125–135. https://doi.org/10.1007/BF00157559
Keil SL, Balasubramaniam KS, Bernasconi P, Smaldone LA, Cauzzi G (1994) Observations of active region dynamics: preflare flows and field observations. In: Balasubramaniam KS, Simon GW (eds) Solar active region evolution: comparing models with observations, ASP Conference Series, vol 68. Astronomical Society of the Pacific, San Francisco, p 265
Khlystova AI, Sokoloff DD (2009) Toroidal magnetic field of the Sun from data on Hale-rule-violating sunspot groups. Astron Rep 53:281–285. https://doi.org/10.1134/S106377290903010X
Kholikov S (2013) A search for helioseismic signature of emerging active regions. Sol Phys 287:229–237. https://doi.org/10.1007/s11207-013-0321-9
Kleczek J (1953) Relations between flares and sunspots. Bull Astron Inst Czech 4:9
Klein LW, Burlaga LF (1982) Interplanetary magnetic clouds at 1 AU. J Geophys Res 87:613–624. https://doi.org/10.1029/JA087iA02p00613
Kliem B, Török T (2006) Torus instability. Phys Rev Lett 96(25):255002. https://doi.org/10.1103/PhysRevLett.96.255002. arXiv:physics/0605217
Kliem B, Su YN, van Ballegooijen AA, DeLuca EE (2013) Magnetohydrodynamic modeling of the solar eruption on 2010 April 8. ApJ 779:129. https://doi.org/10.1088/0004-637X/779/2/129. arXiv:1304.6981
Klimchuk JA, Sturrock PA (1992) Three-dimensional force-free magnetic fields and flare energy buildup. ApJ 385:344–353. https://doi.org/10.1086/170943
Knizhnik KJ, Linton MG, DeVore CR (2018) The role of twist in kinked flux rope emergence and delta-spot formation. ApJ 864:89. https://doi.org/10.3847/1538-4357/aad68c. arXiv:1808.05562
Kolmogorov A (1941) The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. Dokl Akad Nauk SSSR 30:301–305
Komm R, Hill F (2009) Solar flares and solar subphotospheric vorticity. J Geophys Res 114:A06105. https://doi.org/10.1029/2008JA013977
Komm R, Morita S, Howe R, Hill F (2008) Emerging active regions studied with ring-diagram analysis. ApJ 672:1254–1265. https://doi.org/10.1086/523998
Komm R, Howe R, Hill F (2009) Emerging and decaying magnetic flux and subsurface flows. Sol Phys 258:13–30. https://doi.org/10.1007/s11207-009-9398-6
Komm R, Ferguson R, Hill F, Barnes G, Leka KD (2011a) Subsurface vorticity of flaring versus flare-quiet active regions. Sol Phys 268:389–406. https://doi.org/10.1007/s11207-010-9552-1
Komm R, Howe R, Hill F (2011b) Subsurface velocity of emerging and decaying active regions. Sol Phys 268:407–428. https://doi.org/10.1007/s11207-010-9692-3
Komm R, Howe R, Hill F (2012) Vorticity of subsurface flows of emerging and decaying active regions. Sol Phys 277:205–226. https://doi.org/10.1007/s11207-011-9920-5
Kontogiannis I, Georgoulis MK, Park SH, Guerra JA (2017) Non-neutralized electric currents in solar active regions and flare productivity. Sol Phys 292:159. https://doi.org/10.1007/s11207-017-1185-1. arXiv:1708.07087
Kopp RA, Pneuman GW (1976) Magnetic reconnection in the corona and the loop prominence phenomenon. Sol Phys 50:85–98. https://doi.org/10.1007/BF00206193
Korsós MB, Chatterjee P, Erdélyi R (2018) Applying the weighted horizontal magnetic gradient method to a simulated flaring active region. ApJ 857:103. https://doi.org/10.3847/1538-4357/aab891. arXiv:1804.10351
Kosovichev AG (2009) Photospheric and subphotospheric dynamics of emerging magnetic flux. Space Sci Rev 144:175–195. https://doi.org/10.1007/s11214-009-9487-8. arXiv:0901.0035
Kosovichev AG, Duvall TL Jr (2008) Local helioseismology and magnetic flux emergence. In: Howe R, Komm RW, Balasubramaniam KS, Petrie GJD (eds) Subsurface and atmospheric influences on solar activity, ASP Conference Series, vol 383. Astronomical Society of the Pacific, San Francisco, p 59
Kosovichev AG, Zharkova VV (2001) Magnetic energy release and transients in the solar flare of 2000 July 14. ApJ 550:L105–L108. https://doi.org/10.1086/319484
Kosovichev AG, Zhao J, Ilonidis S (2018) Local helioseismology of emerging active regions: a case study. In: Rozelot JP, Babayev E (eds) Variability of the Sun and Sun-like stars: from asteroseismology to space weather. EDP Sciences, Les Ulis, pp 15–38
Kosugi T, Matsuzaki K, Sakao T, Shimizu T, Sone Y, Tachikawa S, Hashimoto T, Minesugi K, Ohnishi A, Yamada T, Tsuneta S, Hara H, Ichimoto K, Suematsu Y, Shimojo M, Watanabe T, Shimada S, Davis JM, Hill LD, Owens JK, Title AM, Culhane JL, Harra LK, Doschek GA, Golub L (2007) The Hinode (Solar-B) mission: an overview. Sol Phys 243:3–17. https://doi.org/10.1007/s11207-007-9014-6
Krall KR, Smith JB Jr, Hagyard MJ, West EA, Cummings NP (1982) Vector magnetic field evolution, energy storage, and associated photospheric velocity shear within a flare-productive active region. Sol Phys 79:59–75. https://doi.org/10.1007/BF00146973
Kruskal MD, Johnson JL, Gottlieb MB, Goldman LM (1958) Hydromagnetic instability in a stellarator. Phys Fluids 1:421–429. https://doi.org/10.1063/1.1724359
Kubo M, Yokoyama T, Katsukawa Y, Lites B, Tsuneta S, Suematsu Y, Ichimoto K, Shimizu T, Nagata S, Tarbell TD, Shine RA, Title AM, Elmore D (2007) Hinode observations of a vector magnetic field change associated with a flare on 2006 December 13. PASJ 59:S779–S784. https://doi.org/10.1093/pasj/59.sp3.S779. arXiv:0709.2397
Künzel H (1960) Die Flare-Häufigkeit in Fleckengruppen unterschiedlicher Klasse und magnetischer Struktur. Astron Nachr 285:271. https://doi.org/10.1002/asna.19592850516
Künzel H (1965) Zur Klassifikation von Sonnenfleckengruppen. Astron Nachr 288:177
Kurokawa H (1987) Two distinct morphological types of magnetic shear development and their relation to flares. Sol Phys 113:259–263. https://doi.org/10.1007/BF00147706
Kurokawa H (1991) Optical observations of flare-productive flux emergence. In: Uchida Y, Canfield RC, Watanabe T, Hiei E (eds) Flare physics in solar activity maximum 22, Lecture Notes in Physics, vol 387. Springer, Berlin, p 39. https://doi.org/10.1007/BFb0032613
Kurokawa H, Wang T, Ishii TT (2002) Emergence and drastic breakdown of a twisted flux rope to trigger strong solar flares in NOAA active region 9026. ApJ 572:598–608. https://doi.org/10.1086/340305
Kusano K, Maeshiro T, Yokoyama T, Sakurai T (2002) Measurement of magnetic helicity injection and free energy loading into the solar corona. ApJ 577:501–512. https://doi.org/10.1086/342171
Kusano K, Maeshiro T, Yokoyama T, Sakurai T (2004) The trigger mechanism of solar flares in a coronal arcade with reversed magnetic shear. ApJ 610:537–549.
https://doi.org/10.1086/421547 ADSCrossRefGoogle Scholar Kusano K, Bamba Y, Yamamoto TT, Iida Y, Toriumi S, Asai A (2012) Magnetic field structures triggering solar flares and coronal mass ejections. ApJ 760:31. https://doi.org/10.1088/0004-637X/760/1/31. arXiv:1210.0598 ADSCrossRefGoogle Scholar LaBonte BJ, Georgoulis MK, Rust DM (2007) Survey of magnetic helicity injection in regions producing X-class flares. ApJ 671:955–963. https://doi.org/10.1086/522682 ADSCrossRefGoogle Scholar Leake JE, Linton MG, Schuck PW (2017) Testing the accuracy of data-driven MHD simulations of active region evolution. ApJ 838:113. https://doi.org/10.3847/1538-4357/aa6578. arXiv:1702.06808 ADSCrossRefGoogle Scholar Lee K, Moon YJ, Lee JY, Lee KS, Na H (2012) Solar flare occurrence rate and probability in terms of the sunspot classification supplemented with sunspot area and its changes. Sol Phys 281:639–650. https://doi.org/10.1007/s11207-012-0091-9 ADSCrossRefGoogle Scholar Leitzinger M, Odert P, Greimel R, Korhonen H, Guenther EW, Hanslmeier A, Lammer H, Khodachenko ML (2014) A search for flares and mass ejections on young late-type stars in the open cluster Blanco-1. MNRAS 443:898–910. https://doi.org/10.1093/mnras/stu1161. arXiv:1406.2734 ADSCrossRefGoogle Scholar Leka KD, Barnes G (2003) Photospheric magnetic field properties of flaring versus flare-quiet active regions. II. Discriminant analysis. ApJ 595:1296–1306. https://doi.org/10.1086/377512 ADSCrossRefGoogle Scholar Leka KD, Barnes G (2007) Photospheric magnetic field properties of flaring versus flare-quiet active regions. IV. A statistically significant sample. ApJ 656:1173–1186. https://doi.org/10.1086/510282 ADSCrossRefGoogle Scholar Leka KD, Canfield RC, McClymont AN, van Driel-Gesztelyi L (1996) Evidence for current-carrying emerging flux. ApJ 462:547. https://doi.org/10.1086/177171 ADSCrossRefGoogle Scholar Leka KD, Barnes G, Birch AC, Gonzalez-Hernandez I, Dunn T, Javornik B, Braun DC (2013) Helioseismology of pre-emerging active regions. I. Overview, data, and target selection criteria. ApJ 762:130. https://doi.org/10.1088/0004-637X/762/2/130. arXiv:1303.1433 ADSCrossRefGoogle Scholar Lemen JR, Title AM, Akin DJ, Boerner PF, Chou C, Drake JF, Duncan DW, Edwards CG, Friedlaender FM, Heyman GF, Hurlburt NE, Katz NL, Kushner GD, Levay M, Lindgren RW, Mathur DP, McFeaters EL, Mitchell S, Rehse RA, Schrijver CJ, Springer LA, Stern RA, Tarbell TD, Wuelser JP, Wolfson CJ, Yanari C, Bookbinder JA, Cheimets PN, Caldwell D, Deluca EE, Gates R, Golub L, Park S, Podgorski WA, Bush RI, Scherrer PH, Gummin MA, Smith P, Auker G, Jerram P, Pool P, Soufli R, Windt DL, Beardsley S, Clapp M, Lang J, Waltham N (2012) The Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory (SDO). Sol Phys 275:17–40. https://doi.org/10.1007/s11207-011-9776-8 ADSCrossRefGoogle Scholar Li H, Schmieder B, Song MT, Bommier V (2007) Interaction of magnetic field systems leading to an X1.7 flare due to large-scale flux tube emergence. A&A 475:1081–1091. https://doi.org/10.1051/0004-6361:20077500 ADSCrossRefGoogle Scholar Li Y, Jing J, Tan C, Wang H (2009) The change of magnetic inclination angles associated with the X3.4 flare on December 13, 2006. Sci China Phys Mech Astron 52:1702–1706. https://doi.org/10.1007/s11433-009-0238-3 ADSCrossRefGoogle Scholar Li Y, Jing J, Fan Y, Wang H (2011) Comparison between observation and simulation of magnetic field changes associated with flares. ApJ 727:L19. 
https://doi.org/10.1088/2041-8205/727/1/L19 ADSCrossRefGoogle Scholar Lim EK, Chae J, Jing J, Wang H, Wiegelmann T (2010) The formation of a magnetic channel by the emergence of current-carrying magnetic fields. ApJ 719:403–414. https://doi.org/10.1088/0004-637X/719/1/403. arXiv:1009.0420 ADSCrossRefGoogle Scholar Lin CH (2014) A statistical study of the subsurface structure and eruptivity of solar active regions. Ap&SS 352:361–371. https://doi.org/10.1007/s10509-014-1931-x. arXiv:1512.07007 ADSCrossRefGoogle Scholar Lin RP, Dennis BR, Hurford GJ, Smith DM, Zehnder A, Harvey PR, Curtis DW, Pankow D, Turin P, Bester M, Csillaghy A, Lewis M, Madden N, van Beek HF, Appleby M, Raudorf T, McTiernan J, Ramaty R, Schmahl E, Schwartz R, Krucker S, Abiad R, Quinn T, Berg P, Hashii M, Sterling R, Jackson R, Pratt R, Campbell RD, Malone D, Landis D, Barrington-Leigh CP, Slassi-Sennou S, Cork C, Clark D, Amato D, Orwig L, Boyle R, Banks IS, Shirey K, Tolbert AK, Zarro D, Snow F, Thomsen K, Henneck R, McHedlishvili A, Ming P, Fivian M, Jordan J, Wanner R, Crubb J, Preble J, Matranga M, Benz A, Hudson H, Canfield RC, Holman GD, Crannell C, Kosugi T, Emslie AG, Vilmer N, Brown JC, Johns-Krull C, Aschwanden M, Metcalf T, Conway A (2002) The Reuven Ramaty high-energy solar spectroscopic imager (RHESSI). Sol Phys 210:3–32. https://doi.org/10.1023/A:1022428818870 ADSCrossRefGoogle Scholar Linton MG (2006) Reconnection of nonidentical flux tubes. J Geophys Res 111:A12S09. https://doi.org/10.1029/2006JA011891 CrossRefGoogle Scholar Linton MG, Antiochos SK (2005) Magnetic flux tube reconnection: tunneling versus slingshot. ApJ 625:506–521. https://doi.org/10.1086/429585. arXiv:astro-ph/0501473 ADSCrossRefGoogle Scholar Linton MG, Longcope DW, Fisher GH (1996) The helical kink instability of isolated, twisted magnetic flux tubes. ApJ 469:954. https://doi.org/10.1086/177842 ADSCrossRefGoogle Scholar Linton MG, Dahlburg RB, Fisher GH, Longcope DW (1998) Nonlinear evolution of kink-unstable magnetic flux tubes and solar \(\delta \)-spot active regions. ApJ 507:404–416. https://doi.org/10.1086/306299 ADSCrossRefGoogle Scholar Linton MG, Fisher GH, Dahlburg RB, Fan Y (1999) Relationship of the multimode kink instability to \(\delta \)-spot formation. ApJ 522:1190–1205. https://doi.org/10.1086/307678 ADSCrossRefGoogle Scholar Linton MG, Dahlburg RB, Antiochos SK (2001) Reconnection of twisted flux tubes as a function of contact angle. ApJ 553:905–921. https://doi.org/10.1086/320974 ADSCrossRefGoogle Scholar Lites BW, Low BC, Martinez Pillet V, Seagraves P, Skumanich A, Frank ZA, Shine RA, Tsuneta S (1995) The possible ascent of a closed magnetic system through the photosphere. ApJ 446:877. https://doi.org/10.1086/175845 ADSCrossRefGoogle Scholar Liu J, Zhang H (2006) The magnetic field, horizontal motion and helicity in a fast emerging flux region which eventually forms a delta spot. Sol Phys 234:21–40. https://doi.org/10.1007/s11207-006-2091-0 ADSCrossRefGoogle Scholar Liu C, Deng N, Liu Y, Falconer D, Goode PR, Denker C, Wang H (2005) Rapid change of \(\delta \) spot structure associated with seven major flares. ApJ 622:722–736. https://doi.org/10.1086/427868 ADSCrossRefGoogle Scholar Liu C, Lee J, Gary DE, Wang H (2007a) The ribbon-like hard X-ray emission in a sigmoidal solar active region. ApJ 658:L127–L130. https://doi.org/10.1086/513739. 
arXiv:astro-ph/0702326 ADSCrossRefGoogle Scholar Liu C, Lee J, Yurchyshyn V, Deng N, Ks Cho, Karlický M, Wang H (2007b) The eruption from a sigmoidal solar active region on 2005 May 13. ApJ 669:1372–1381. https://doi.org/10.1086/521644. arXiv:0707.2240 ADSCrossRefGoogle Scholar Liu C, Deng N, Liu R, Lee J, Wiegelmann T, Jing J, Xu Y, Wang S, Wang H (2012) Rapid changes of photospheric magnetic field after tether-cutting reconnection and magnetic implosion. ApJ 745:L4. https://doi.org/10.1088/2041-8205/745/1/L4. arXiv:1112.3598 ADSCrossRefGoogle Scholar Liu C, Deng N, Lee J, Wiegelmann T, Moore RL, Wang H (2013) Evidence for solar tether-cutting magnetic reconnection from coronal field extrapolations. ApJ 778:L36. https://doi.org/10.1088/2041-8205/778/2/L36. arXiv:1310.5098 ADSCrossRefGoogle Scholar Liu C, Deng N, Lee J, Wiegelmann T, Jiang C, Dennis BR, Su Y, Donea A, Wang H (2014) Three-dimensional magnetic restructuring in two homologous solar flares in the seismically active NOAA AR 11283. ApJ 795:128. https://doi.org/10.1088/0004-637X/795/2/128. arXiv:1409.6391 ADSCrossRefGoogle Scholar Liu C, Xu Y, Cao W, Deng N, Lee J, Hudson HS, Gary DE, Wang J, Jing J, Wang H (2016a) Flare differentially rotates sunspot on Sun's surface. Nature Commun 7:13104. https://doi.org/10.1038/ncomms13104. arXiv:1610.02969 ADSCrossRefGoogle Scholar Liu R, Chen J, Wang Y, Liu K (2016b) Investigating energetic X-shaped flares on the outskirts of a solar active region. Sci Rep 6:34021. https://doi.org/10.1038/srep34021. arXiv:1609.02713 ADSCrossRefGoogle Scholar Liu C, Deng N, Wang JTL, Wang H (2017a) Predicting solar flares using SDO/HMI vector magnetic data products and the random forest algorithm. ApJ 843:104. https://doi.org/10.3847/1538-4357/aa789b. arXiv:1706.02422 ADSCrossRefGoogle Scholar Liu Y, Sun X, Török T, Titov VS, Leake JE (2017b) Electric-current neutralization, magnetic shear, and eruptive activity in solar active regions. ApJ 846:L6. https://doi.org/10.3847/2041-8213/aa861e. arXiv:1708.04411 ADSCrossRefGoogle Scholar Livingston W, Harvey JW, Malanushenko OV, Webster L (2006) Sunspots with the strongest magnetic fields. Sol Phys 239:41–68. https://doi.org/10.1007/s11207-006-0265-4 ADSCrossRefGoogle Scholar Longcope DW, Forbes TG (2014) Breakout and tether-cutting eruption models are both catastrophic (sometimes). Sol Phys 289:2091–2122. https://doi.org/10.1007/s11207-013-0464-8. arXiv:1312.4435 ADSCrossRefGoogle Scholar Longcope DW, Klapper I (1997) Dynamics of a thin twisted flux tube. ApJ 488:443–453. https://doi.org/10.1086/304680 ADSCrossRefGoogle Scholar Longcope DW, Welsch BT (2000) A model for the emergence of a twisted magnetic flux tube. ApJ 545:1089–1100. https://doi.org/10.1086/317846 ADSCrossRefGoogle Scholar Longcope DW, Fisher GH, Arendt S (1996) The evolution and fragmentation of rising magnetic flux tubes. ApJ 464:999. https://doi.org/10.1086/177387 ADSCrossRefGoogle Scholar Longcope DW, Fisher GH, Pevtsov AA (1998) Flux-tube twist resulting from helical turbulence: the \(\Sigma \)-effect. ApJ 507:417–432. https://doi.org/10.1086/306312 ADSCrossRefGoogle Scholar Longcope D, Linton M, Pevtsov A, Fisher G, Klapper I (1999) Twisted flux tubes and how they get that way, Geophysical Monograph, vol 111. American Geophysical Union, Washington, DC. https://doi.org/10.1029/GM111p0093 CrossRefGoogle Scholar López Fuentes MC, Demoulin P, Mandrini CH, van Driel-Gesztelyi L (2000) The counterkink rotation of a non-Hale active region. ApJ 544:540–549. https://doi.org/10.1086/317180. 
arXiv:1412.1456 ADSCrossRefGoogle Scholar López Fuentes MC, Démoulin P, Mandrini CH, Pevtsov AA, van Driel-Gesztelyi L (2003) Magnetic twist and writhe of active regions. On the origin of deformed flux tubes. A&A 397:305–318. https://doi.org/10.1051/0004-6361:20021487. arXiv:1411.5626 ADSCrossRefGoogle Scholar Low BC (1996) Solar activity and the corona. Sol Phys 167:217–265. https://doi.org/10.1007/BF00146338 ADSCrossRefGoogle Scholar Low BC (2002) Magnetic coupling between the corona and the solar dynamo. In: Sawaya-Lacoste H (ed) SOLMAG 2002. Proceedings of the magnetic coupling of the solar atmosphere Euroconference, vol 505. ESA Special Publication, Paris, pp 35–39Google Scholar Lu ET, Hamilton RJ (1991) Avalanches and the distribution of solar flares. ApJ 380:L89–L92. https://doi.org/10.1086/186180 ADSCrossRefGoogle Scholar Lu Y, Wang J, Wang H (1993) Shear angle of magnetic fields. Sol Phys 148:119–132. https://doi.org/10.1007/BF00675538 ADSCrossRefGoogle Scholar Lundstedt H, Persson T, Andersson V (2015) The extreme solar storm of May 1921: observations and a complex topological model. Ann Geophys 33:109–116. https://doi.org/10.5194/angeo-33-109-2015 ADSCrossRefGoogle Scholar Luoni ML, Démoulin P, Mandrini CH, van Driel-Gesztelyi L (2011) Twisted flux tube emergence evidenced in longitudinal magnetograms: magnetic tongues. Sol Phys 270:45–74. https://doi.org/10.1007/s11207-011-9731-8 ADSCrossRefGoogle Scholar Mackay DH, van Ballegooijen AA (2006) Models of the large-scale corona. I. Formation, evolution, and liftoff of magnetic flux ropes. ApJ 641:577–589. https://doi.org/10.1086/500425 ADSCrossRefGoogle Scholar Mackay DH, Karpen JT, Ballester JL, Schmieder B, Aulanier G (2010) Physics of solar prominences: II—magnetic structure and dynamics. Space Sci Rev 151:333–399. https://doi.org/10.1007/s11214-010-9628-0. arXiv:1001.1635 ADSCrossRefGoogle Scholar Maehara H, Shibayama T, Notsu S, Notsu Y, Nagao T, Kusaba S, Honda S, Nogami D, Shibata K (2012) Superflares on solar-type stars. Nature 485:478–481. https://doi.org/10.1038/nature11063 ADSCrossRefGoogle Scholar Magara T (2001) Dynamics of emerging flux tubes in the Sun. ApJ 549:608–628. https://doi.org/10.1086/319073 ADSCrossRefGoogle Scholar Magara T (2006) Dynamic and topological features of photospheric and coronal activities produced by flux emergence in the Sun. ApJ 653:1499–1509. https://doi.org/10.1086/508926 ADSCrossRefGoogle Scholar Magara T, Longcope DW (2001) Sigmoid structure of an emerging flux tube. ApJ 559:L55–L59. https://doi.org/10.1086/323635 ADSCrossRefGoogle Scholar Magara T, Longcope DW (2003) Injection of magnetic energy and magnetic helicity into the solar atmosphere by an emerging magnetic flux tube. ApJ 586:630–649. https://doi.org/10.1086/367611 ADSCrossRefGoogle Scholar Magara T, Tsuneta S (2008) Hinode's observational result on the saturation of magnetic helicity injected into the solar atmosphere and its relation to the occurrence of a solar flare. PASJ 60:1181–1189. https://doi.org/10.1093/pasj/60.5.1181 ADSCrossRefGoogle Scholar Manchester W IV (2001) The role of nonlinear Alfvén waves in shear formation during solar magnetic flux emergence. ApJ 547:503–519. https://doi.org/10.1086/318342 ADSCrossRefGoogle Scholar Manchester W IV, Gombosi T, DeZeeuw D, Fan Y (2004) Eruption of a buoyantly emerging magnetic flux rope. ApJ 610:588–596. https://doi.org/10.1086/421516 ADSCrossRefGoogle Scholar Mandelbrot BB (1983) The fractal geometry of nature, rev. and enlarged edn. W.H. 
Freeman, New YorkGoogle Scholar Mandrini CH, Schmieder B, Démoulin P, Guo Y, Cristiani GD (2014) Topological analysis of emerging bipole clusters producing violent solar events. Sol Phys 289:2041–2071. https://doi.org/10.1007/s11207-013-0458-6. arXiv:1312.3359 ADSCrossRefGoogle Scholar Martens PC, Zwaan C (2001) Origin and evolution of filament–prominence systems. ApJ 558:872–887. https://doi.org/10.1086/322279 ADSCrossRefGoogle Scholar Martínez-Sykora J, Hansteen V, Carlsson M (2008) Twisted flux tube emergence from the convection zone to the corona. ApJ 679:871–888. https://doi.org/10.1086/587028. arXiv:0712.3854 ADSCrossRefGoogle Scholar Martínez-Sykora J, Hansteen V, Carlsson M (2009) Twisted flux tube emergence from the convection zone to the corona. II. Later states. ApJ 702:129–140. https://doi.org/10.1088/0004-637X/702/1/129. arXiv:0906.5464 ADSCrossRefGoogle Scholar Martres MJ, Michard R, Soru-Iscovici I, Tsap TT (1968) Étude de la localisation des éruptions dans la structure magnétique évolutive des régions actives solaires. Sol Phys 5:187–206. https://doi.org/10.1007/BF00147965 ADSCrossRefGoogle Scholar Marubashi K (1986) Structure of the interplanetary magnetic clouds and their solar origins. Adv Space Res 6:335–338. https://doi.org/10.1016/0273-1177(86)90172-9 ADSCrossRefGoogle Scholar Marubashi K (1989) The space weather forecast program. Space Sci Rev 51:197–214. https://doi.org/10.1007/BF00226275 ADSCrossRefGoogle Scholar Mason D, Komm R, Hill F, Howe R, Haber D, Hindman BW (2006) Flares, magnetic fields, and subsurface vorticity: a survey of GONG and MDI data. ApJ 645:1543–1553. https://doi.org/10.1086/503761 ADSCrossRefGoogle Scholar Masuda S, Kosugi T, Hara H, Tsuneta S, Ogawara Y (1994) A loop-top hard X-ray source in a compact solar flare as evidence for magnetic reconnection. Nature 371:495–497. https://doi.org/10.1038/371495a0 ADSCrossRefGoogle Scholar Matsumoto R, Shibata K (1992) Three-dimensional MHD simulation of the Parker instability in galactic gas disks and the solar atmosphere. PASJ 44:167–175ADSGoogle Scholar Matsumoto R, Tajima T, Shibata K, Kaisig M (1993) Three-dimensional magnetohydrodynamics of the emerging magnetic flux in the solar atmosphere. ApJ 414:357–371. https://doi.org/10.1086/173082 ADSCrossRefGoogle Scholar Matsumoto R, Tajima T, Chou W, Okubo A, Shibata K (1998) Formation of a kinked alignment of solar active regions. ApJ 493:L43–L46. https://doi.org/10.1086/311116 ADSCrossRefGoogle Scholar Matthews PC, Hughes DW, Proctor MRE (1995) Magnetic buoyancy, vorticity, and three-dimensional flux-tube formation. ApJ 448:938. https://doi.org/10.1086/176022 ADSCrossRefGoogle Scholar Mayfield EB, Lawrence JK (1985) The correlation of solar flare production with magnetic energy in active regions. Sol Phys 96:293–305. https://doi.org/10.1007/BF00149685 ADSCrossRefGoogle Scholar McAteer RTJ, Gallagher PT, Ireland J (2005) Statistics of active region complexity: a large-scale fractal dimension survey. ApJ 631:628–635. https://doi.org/10.1086/432412 ADSCrossRefGoogle Scholar McAteer RTJ, Gallagher PT, Conlon PA (2010) Turbulence, complexity, and solar flares. Adv Space Res 45:1067–1074. https://doi.org/10.1016/j.asr.2009.08.026. arXiv:0909.5636 ADSCrossRefGoogle Scholar McClintock BH, Norton AA, Li J (2014) Re-examining sunspot tilt angle to include anti-Hale statistics. ApJ 797:130. https://doi.org/10.1088/0004-637X/797/2/130. 
arXiv:1412.5094 ADSCrossRefGoogle Scholar McCloskey AE, Gallagher PT, Bloomfield DS (2016) Flaring rates and the evolution of sunspot group McIntosh classifications. Sol Phys 291:1711–1738. https://doi.org/10.1007/s11207-016-0933-y. arXiv:1607.00903 ADSCrossRefGoogle Scholar McIntosh PS (1990) The classification of sunspot groups. Sol Phys 125:251–267. https://doi.org/10.1007/BF00158405 ADSCrossRefGoogle Scholar Melrose DB (1991) Neutralized and unneutralized current patterns in the solar corona. ApJ 381:306–312. https://doi.org/10.1086/170652 ADSCrossRefGoogle Scholar Melrose DB (1995) Current paths in the corona and energy release in solar flares. ApJ 451:391. https://doi.org/10.1086/176228 ADSCrossRefGoogle Scholar Melrose DB (1996) Reply to comments by E. N. Parker. ApJ 471:497. https://doi.org/10.1086/177985 ADSCrossRefGoogle Scholar Meunier N, Kosovichev A (2003) Fast photospheric flows and magnetic fields in a flaring active region. A&A 412:541–553. https://doi.org/10.1051/0004-6361:20031435 ADSCrossRefGoogle Scholar Min S, Chae J (2009) The rotating sunspot in AR 10930. Sol Phys 258:203–217. https://doi.org/10.1007/s11207-009-9425-7 ADSCrossRefGoogle Scholar Mitra D, Brandenburg A, Kleeorin N, Rogachevskii I (2014) Intense bipolar structures from stratified helical dynamos. MNRAS 445:761–769. https://doi.org/10.1093/mnras/stu1755. arXiv:1404.3194 ADSCrossRefGoogle Scholar Miyagoshi T, Yokoyama T (2003) Magnetohydrodynamic numerical simulations of solar X-ray jets based on the magnetic reconnection model that includes chromospheric evaporation. ApJ 593:L133–L136. https://doi.org/10.1086/378215 ADSCrossRefGoogle Scholar Moffatt HK, Ricca RL (1992) Helicity and the Calugareanu invariant. Proc R Soc London, Ser A 439:411–429. https://doi.org/10.1098/rspa.1992.0159 ADSCrossRefzbMATHGoogle Scholar Moon YJ, Chae J, Choe GS, Wang H, Park YD, Yun HS, Yurchyshyn V, Goode PR (2002a) Flare activity and magnetic helicity injection by photospheric horizontal motions. ApJ 574:1066–1073. https://doi.org/10.1086/340975 ADSCrossRefGoogle Scholar Moon YJ, Chae J, Wang H, Choe GS, Park YD (2002b) Impulsive variations of the magnetic helicity change rate associated with eruptive flares. ApJ 580:528–537. https://doi.org/10.1086/343130 ADSCrossRefGoogle Scholar Moore RL, Labonte BJ (1980) The filament eruption in the 3B flare of July 29, 1973: onset and magnetic field configuration. In: Dryer M, Tandberg-Hanssen E (eds) Solar and interplanetary dynamics, IAU symposium, vol 91. D. Reidel, Dordrecht, pp 207–210. https://doi.org/10.1007/978-94-009-9100-2_32 CrossRefGoogle Scholar Moore RL, Sterling AC (2006) Initiation of coronal mass ejections, Geophysical Monograph, vol 165. American Geophysical Union, Washington, DC. https://doi.org/10.1029/165GM07 CrossRefGoogle Scholar Moore RL, Sterling AC, Hudson HS, Lemen JR (2001) Onset of the magnetic explosion in solar flares and coronal mass ejections. ApJ 552:833–848. https://doi.org/10.1086/320559 ADSCrossRefGoogle Scholar Moraitis K, Tziotziou K, Georgoulis MK, Archontis V (2014) Validation and benchmarking of a practical free magnetic energy and relative magnetic helicity budget calculation in solar magnetic structures. Sol Phys 289:4453–4480. https://doi.org/10.1007/s11207-014-0590-y. arXiv:1406.5381 ADSCrossRefGoogle Scholar Moreno-Insertis F, Emonet T (1996) The rise of twisted magnetic tubes in a stratified medium. ApJ 472:L53. 
https://doi.org/10.1086/310360 ADSCrossRefGoogle Scholar Moreno-Insertis F, Galsgaard K (2013) Plasma jets and eruptions in solar coronal holes: a three-dimensional flux emergence experiment. ApJ 771:20. https://doi.org/10.1088/0004-637X/771/1/20. arXiv:1305.2201 ADSCrossRefGoogle Scholar Moreno-Insertis F, Galsgaard K, Ugarte-Urra I (2008) Jets in coronal holes: hinode observations and three-dimensional computer modeling. ApJ 673:L211. https://doi.org/10.1086/527560. arXiv:0712.1059 ADSCrossRefGoogle Scholar Moreton GE, Severny AB (1968) Magnetic fields and flares in the region CMP 20 September 1963. Sol Phys 3:282–297. https://doi.org/10.1007/BF00155163 ADSCrossRefGoogle Scholar Morin J, Donati JF, Petit P, Delfosse X, Forveille T, Albert L, Aurière M, Cabanac R, Dintrans B, Fares R, Gastine T, Jardine MM, Lignières F, Paletou F, Ramirez Velez JC, Théado S (2008) Large-scale magnetic topologies of mid M dwarfs. MNRAS 390:567–581. https://doi.org/10.1111/j.1365-2966.2008.13809.x. arXiv:0808.1423 ADSCrossRefGoogle Scholar Morita S, McIntosh SW (2005) Genesis of AR NOAA10314. In: Sankarasubramanian K, Penn M, Pevtsov A (eds) Large-scale structures and their role in solar activity, ASP Conference Series, vol 346. Astronomical Society of the Pacific, San Francisco, p 317Google Scholar Möstl C, Rollett T, Frahm RA, Liu YD, Long DM, Colaninno RC, Reiss MA, Temmer M, Farrugia CJ, Posner A, Dumbović M, Janvier M, Démoulin P, Boakes P, Devos A, Kraaikamp E, Mays ML, Vršnak B (2015) Strong coronal channelling and interplanetary evolution of a solar storm up to Earth and Mars. Nature Commun 6:7135. https://doi.org/10.1038/ncomms8135. arXiv:1506.02842 CrossRefGoogle Scholar Mouschovias TC, Poland AI (1978) Expansion and broadening of coronal loop transients: a theoretical explanation. ApJ 220:675–682. https://doi.org/10.1086/155951 ADSCrossRefGoogle Scholar Muhamad J, Kusano K, Inoue S, Shiota D (2017) Magnetohydrodynamic simulations for studying solar flare trigger mechanism. ApJ 842:86. https://doi.org/10.3847/1538-4357/aa750e. arXiv:1706.07153 ADSCrossRefGoogle Scholar Muranushi T, Shibayama T, Muranushi YH, Isobe H, Nemoto S, Komazaki K, Shibata K (2015) UFCORIN: a fully automated predictor of solar flares in GOES X-ray flux. Space Weather 13:778–796. https://doi.org/10.1002/2015SW001257. arXiv:1507.08011 ADSCrossRefGoogle Scholar Murray MJ, Hood AW (2007) Simple emergence structures from complex magnetic fields. A&A 470:709–719. https://doi.org/10.1051/0004-6361:20077251 ADSCrossRefzbMATHGoogle Scholar Murray MJ, Hood AW, Moreno-Insertis F, Galsgaard K, Archontis V (2006) 3D simulations identifying the effects of varying the twist and field strength of an emerging flux tube. A&A 460:909–923. https://doi.org/10.1051/0004-6361:20065950 ADSCrossRefGoogle Scholar Murray MJ, van Driel-Gesztelyi L, Baker D (2009) Simulations of emerging flux in a coronal hole: oscillatory reconnection. A&A 494:329–337. https://doi.org/10.1051/0004-6361:200810406 ADSCrossRefzbMATH
CommonCrawl
Thangka Image Inpainting Algorithm Based on Wavelet Transform and Structural Constraints

Fan Yao*

Abstract: The thangka image inpainting method based on wavelet transform is not ideal for contour curves when the high frequency information is repaired. To solve this problem, a new image inpainting algorithm is proposed based on edge structural constraints and wavelet transform coefficients. First, a damaged thangka image is decomposed into low frequency subgraphs and high frequency subgraphs with different resolutions using the wavelet transform. Then, the improved fast marching method is used to repair the low frequency subgraphs, which represent the structural information of the image. At the same time, for the high frequency subgraphs, which represent the textural information of the image, the extracted and repaired edge contour information is used to constrain structure inpainting. Finally, the texture part is repaired using texture synthesis based on the wavelet coefficient characteristics of each subgraph. The improved method is compared with three existing methods and is found to be superior in inpainting accuracy, especially for contour curves. The experimental results show that the hierarchical method combined with structural constraints has a good effect on the edge damage of thangka images.

Keywords: Image Inpainting, Structural Constraint, Texture Synthesis, Wavelet Transform

1. Introduction

Image inpainting methods can be divided into structure inpainting and texture feature inpainting methods [1-3]. Texture feature inpainting methods can be further classified into decomposition-based [4] and texture-based synthesis [5]. In the image wavelet decomposition model, the damaged image is decomposed in a specific way into structural and textural information. A method based on variation and partial differential equations is used to repair the structural information, while texture synthesis is used to repair the textural information; the subgraph reconstruction is then completed. In actual simulation experiments, the texture subgraphs contain little image energy, so when the textural information is repaired, the texture synthesis algorithm can easily cause false matching and a sharp deterioration in the repair effect, and the visual effect of the final reconstructed image is undesirable.

Regarding image decomposition inpainting, many scholars have done a lot of improvement research. Chen et al. [6] used the TV decomposition model to decompose images into structure and texture subgraphs, and then repaired the two parts according to their characteristics: the Poisson equation was used for the structure subgraph and texture block synthesis for the texture subgraph, but the inpainting effect was not ideal for large-scale damage. Wang et al. [7] presented a new technique for image inpainting in which the damaged area was roughly repaired by a fast marching inpainting method in the spatial domain. Zhang and Dai [8] decomposed the image into a texture subgraph and a structure subgraph based on wavelet decomposition, then reconstructed each of these parts separately with the CDD algorithm and an improved texture synthesis algorithm, obtaining a good repair effect. Deshmukh and Mukherji [9] separated the damaged image into two principal components which encompass image texture and color.
He et al. [10] presented a fast texture image completion algorithm based on dependencies. Xiao and Zhang [11] proposed a fast inpainting algorithm for texture images that determines the filling order of blocks in the wavelet domain and fills the inpainting blocks using texture synthesis, but it could not keep the edge information of the texture subgraphs connected. In order to reduce mismatches and overcome the defect that texture takes precedence over edges when repairing high frequency subgraphs, this paper makes full use of the advantages of the image decomposition restoration algorithm and the coefficient characteristics of the wavelet transform domain. By introducing an edge change factor and improving the matching block search method, an image inpainting algorithm combining wavelet transform with texture synthesis is discussed in detail.

The rest of this paper is organized as follows. The proposed method is described in Sections 2-4. In Section 5, some experimental results are presented. Finally, the conclusion is drawn in Section 6.

2. Our Proposed Method

At present, the basic principle of image inpainting methods that use the wavelet transform is as follows [12]: first, the image is decomposed into structure and texture through the wavelet transform; then an appropriate method is used to repair each part according to its information characteristics. Because the wavelet coefficient energy is relatively concentrated in the low frequency subgraphs, coefficient changes there have a relatively large influence on the reconstructed image, whereas the wavelet coefficient energy is scattered in the high frequency subgraphs; that is to say, the coefficient matrix is sparse. Therefore, different inpainting methods are put forward according to the different matrix characteristics.

Based on the above analysis, the principle of the image restoration algorithm based on wavelet transform and texture synthesis was as follows. The wavelet transform was adopted to decompose the damaged image into low frequency subgraphs and high frequency subgraphs with different resolutions. Considering that the low frequency component represented the structural part of the image while the high frequency component reflected the change of the image edges, the low frequency and high frequency positions had a certain correspondence. The low frequency subgraphs were restored by the improved fast marching algorithm. According to the coefficient characteristics, the high frequency subgraphs were restored by the texture synthesis algorithm. Finally, the subgraphs after inpainting were reconstructed. The thangka image inpainting algorithm that combined wavelet transform with structural constraints was shown in Fig. 1. The algorithm implementation steps were as follows:

(1) A J-level wavelet transform of the original thangka image was performed. (Note that the higher the wavelet transform level was, the more concentrated the low frequency subgraphs were, and restoration errors in the low frequency subgraphs would then have a great impact on the subsequent restoration. Therefore, wavelet transform levels of J=2-3 were proposed.)

(2) Subgraph inpainting. For the low frequency subgraphs, the improved FMM was used. For the high frequency subgraphs, first the edge contour was extracted and underwent curve fitting; then the improved texture synthesis algorithm was used to restore the edge and texture regions successively.

(3) A one-level wavelet reconstruction was carried out, and then step (2) was repeated until the completely restored image was obtained.

Fig. 1. The flow chart of the proposed inpainting algorithm.
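As a minimal sketch of steps (1) and (3), the two-level Haar decomposition and the reconstruction can be expressed with the PyWavelets library (an illustration only; the paper's experiments were run in MATLAB, and the random array below stands in for a damaged thangka image):

```python
import numpy as np
import pywt

# Stand-in for a damaged grayscale thangka image (any 2-D float array works).
img = np.random.rand(256, 256)

# Step (1): J-level decomposition, J = 2 as recommended above.
coeffs = pywt.wavedec2(img, wavelet='haar', level=2)
cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = coeffs

# Step (2) would repair cA2 with the improved FMM and the detail
# subbands (cH*, cV*, cD*) with edge-constrained texture synthesis.

# Step (3): reconstruct the image from the (repaired) subgraphs.
restored = pywt.waverec2(coeffs, wavelet='haar')
assert restored.shape == img.shape
```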
3. Low Frequency Subgraph Inpainting

Telea [13] proposed a fast marching method (FMM) image inpainting algorithm in 2004. Based on the arrival time function $T(x,y)$, the algorithm constructs a curve evolution, and the FMM is used to advance the restoration from the edge. Although the FMM inpainting algorithm can quickly restore a damaged image with a good restoration effect, its weighting function of three factors has certain limitations, and some irrelevant information is introduced into the restoration process, which undermines the quality of the restoration. To overcome the concentration of gradient information in the neighborhood, the main texture direction information was introduced and the inpainting template size was adaptively selected in this paper. The inpainting process was conducted according to the main texture direction in the neighborhood, so as to better maintain the texture and edge directions.

In order to solve for the arrival time function, the normal velocity was introduced, with $\beta(x,y)$ as the normal velocity at the point $(x,y)$. Because the restoration always proceeded from the edge towards the interior of the restoration area, $\beta(x,y)$ was always larger than zero [7]. In the process of curve evolution, the curve reached each point at most once. Then, $T(x,y)$ could be obtained through the Eikonal equation $|\nabla T|\beta=1$:

$$\left\{\left[\max\left(D_{x}^{-}T_{ij},\,-D_{x}^{+}T_{ij},\,0\right)\right]^{2}+\left[\max\left(D_{y}^{-}T_{ij},\,-D_{y}^{+}T_{ij},\,0\right)\right]^{2}\right\}^{1/2}=\frac{1}{\beta_{ij}} \tag{1}$$

In Eq. (1), $\beta_{ij}=1$; the finite difference scheme of $D_{x}^{-}T_{ij}$ was $D_{x}^{-}T_{ij}=T_{i,j}-T_{i-1,j}$, and the finite difference scheme of $D_{x}^{+}T_{ij}$ was $D_{x}^{+}T_{ij}=T_{i+1,j}-T_{i,j}$. Similarly, $D_{y}^{-}T_{ij}$ and $D_{y}^{+}T_{ij}$ could be obtained. In this way, the value $T$ of each pixel to the edge could be calculated by Eq. (1).

Fig. 2. Improved FMM principle.

To correctly select the restoration order, a mathematical model should be set up for each pixel. As shown in Fig. 2, the point $p$ was the point to be restored close to $\partial\Omega$, and $B_{\varepsilon}(p)$ was a small region around $p$. Every known pixel $q$ in this neighborhood had an impact on $p$, through the first-order approximation

$$I_{q}(p)=I(q)+\nabla I(q)(p-q) \tag{2}$$

$I_{q}(p)$ was the estimate contributed by a known pixel $q$ in the neighborhood of $p$, and a normalized weighting function $\omega(p,q)$ should be constructed to measure the effect of each pixel $q$ on $p$, so as to estimate the inpainting value of the point $p$:

$$I(p)=\frac{\sum_{q\in B_{\varepsilon}(p)}\omega(p,q)\left[I(q)+\nabla I(q)(p-q)\right]}{\sum_{q\in B_{\varepsilon}(p)}\omega(p,q)} \tag{3}$$
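For one pixel, the upwind update that solves Eq. (1) can be sketched as follows (a standard fast-marching update with $\beta_{ij}=1$ fixed by default; border handling and the KNOWN/BAND/INSIDE bookkeeping described below are omitted):

```python
import numpy as np

def eikonal_update(T, i, j, beta=1.0):
    """Solve Eq. (1) for T[i, j] from its four neighbours (upwind scheme)."""
    tx = min(T[i - 1, j], T[i + 1, j])  # smallest arrival time in x
    ty = min(T[i, j - 1], T[i, j + 1])  # smallest arrival time in y
    rhs = 1.0 / beta
    if abs(tx - ty) >= rhs:
        # only the smaller direction contributes
        return min(tx, ty) + rhs
    # both directions contribute: solve (T - tx)^2 + (T - ty)^2 = rhs^2
    return 0.5 * (tx + ty + np.sqrt(2.0 * rhs ** 2 - (tx - ty) ** 2))
```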
First, the gradient value and direction of all pixels (outside the edge) were calculated to determine the main texture direction in the domain:

$$m(x,y)=\sqrt{[I(x+1,y)-I(x-1,y)]^{2}+[I(x,y+1)-I(x,y-1)]^{2}} \tag{4}$$

$$\theta(x,y)=\arctan\{[I(x,y+1)-I(x,y-1)]/[I(x+1,y)-I(x-1,y)]\} \tag{5}$$

The gradient direction range of $360^{\circ}$ was divided into 36 directions using Eq. (5), and the accumulative gradient value of each direction was calculated. The direction with the maximum accumulative value was taken as the main texture direction $N_{p}$ in the domain. Given that the isochromatic line was perpendicular to the gradient direction, the isochromatic line direction in the domain was $N_{p}^{\perp}$. The weighted points whose directions were consistent with the main texture direction were selected from the domain. Going from the point $p$ to a known point $q$ along $N_{p}^{\perp}$, the distance between the two points was $d(p,q)=\sqrt{(x_{p}-x_{q})^{2}+(y_{p}-y_{q})^{2}}$, and the vector was $\overrightarrow{pq}=(x_{p}-x_{q},\,y_{p}-y_{q})$. When the isochromatic line direction was $N_{p}^{\perp}$, $D(p,q)$ was defined to satisfy the following condition, under which $q$ could be added to the collection of weighted points:

$$D(p,q)=\left|\frac{\overrightarrow{pq}}{d(p,q)}\cdot N_{p}^{\perp}\right|<\lambda \tag{6}$$

In Eq. (6), if $D(p,q)$ was smaller than the threshold $\lambda$, $q$ was a weighted point. $\lambda$ could not be set too small; otherwise there would be no weighted point in the main texture direction. According to the experiments, the threshold $\lambda$ was generally taken as 0.1, and the window size could be adaptively expanded according to the number of weighted points. Because the direction of $q$ was determined, in order to avoid double-counting, the weighting function $\omega(p,q)$ was modified as:

$$\omega(p,q)=dst(p,q)\cdot lev(p,q) \tag{7}$$

Finally, the pixel value was calculated through Eqs. (3) and (7).

The improved FMM image restoration algorithm was implemented as follows:

Pre-processing: three matrices were extracted from the known image, namely the distance matrix T of all known pixels to the edge, the known pixel matrix I, and the flag bit matrix f. The points in the image were marked as the following three sets:

KNOWN: the known pixels. The corresponding positions in T and I were known; f was recorded as 0.

INSIDE: the unknown pixels to be restored in the region. The corresponding positions in T and I were unknown; f was recorded as 255.

BAND: the pixel points on the damaged edge. The corresponding position in I was known; T was recorded as 0 and updated as the edge advanced during the restoration process; f was recorded as 1.

Initialization: in the matrix T, the values of the points marked KNOWN and BAND were 0, namely T=0. The points of INSIDE were marked with a large number, namely $T=10^{6}$. The values 0, 1, and 255 were set in the matrix f according to the above sets. Finally, the coordinates of the points marked BAND were placed in the edge stack.

Inpainting: the values T of the points in the stack were sorted by numerical size. The point $P_{\min}$ with the minimum value was selected, marked as KNOWN, and deleted from the stack. Its four neighborhood points were traversed. When a neighbor was an INSIDE point, it was marked as BAND and its pixel value was calculated by Eq. (3). If a neighbor was a BAND point, it was retained in BAND, and its value T in the neighborhood was updated using Eq. (1). When the edge stack was not empty, the next point was selected; when the edge stack was empty, namely there was no unknown pixel left, the image was completely restored and the loop ended.
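The marching loop above can be condensed into the following skeleton, reusing eikonal_update from the sketch earlier. This is a simplified reading of the steps just listed, assuming the damaged region stays at least two pixels away from the image border; `inpaint_value` is a hypothetical callback standing in for the weighted estimate of Eqs. (3) and (7), and the re-sorting of existing BAND entries is left out:

```python
import heapq
import numpy as np

KNOWN, BAND, INSIDE = 0, 1, 255

def fmm_inpaint(I, mask, inpaint_value):
    """Skeleton of the FMM marching loop; `mask` is True on damaged pixels."""
    h, w = mask.shape
    f = np.where(mask, INSIDE, KNOWN).astype(np.int32)
    T = np.where(mask, 1e6, 0.0)
    band = []
    # Initial BAND: known pixels touching the damaged region (the front).
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if not mask[i, j] and mask[i - 1:i + 2, j - 1:j + 2].any():
                f[i, j] = BAND
                heapq.heappush(band, (0.0, i, j))
    while band:
        _, i, j = heapq.heappop(band)   # point with minimal arrival time
        f[i, j] = KNOWN
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if f[ni, nj] == INSIDE:
                I[ni, nj] = inpaint_value(I, T, f, ni, nj)  # Eq. (3)
                f[ni, nj] = BAND
                T[ni, nj] = eikonal_update(T, ni, nj)       # Eq. (1)
                heapq.heappush(band, (T[ni, nj], ni, nj))
    return I
```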
4. High Frequency Subgraph Inpainting

4.1 Edge Contour Information Extraction

The precondition of contour restoration was edge information extraction. The high frequency subgraphs reflected the edge information of the image, so the edge contour information could be extracted directly from the high frequency subgraphs. Through observation, it was found that high frequency coefficients with large absolute values represented the contour lines of the image. Therefore, a threshold value could be set to judge whether a coefficient represented edge information. The high frequency components in different directions represented the edge changes in the corresponding directions of the original thangka image, and the position information of the three high frequency subgraphs on the same layer corresponded to each other. Therefore, the whole edge contour information could be combined after the edge extraction of the components in different directions.

The binary matrix of edge contour information was denoted as B, whose size was the same as the subgraphs of this layer. The points on the edge contour line were denoted as "1", while the points in other areas were denoted as "0". The threshold value $\sigma$ was set in order to determine whether each coefficient in the high frequency subgraphs represented edge information. If the absolute value of a high frequency coefficient was larger than $\sigma$, the value of the corresponding position in the binary image was 1; otherwise it was 0. A binary image representing the edge contour of this layer could thus be obtained. The value at coordinates $(m,n)$ in the binary image B was defined as:

$$B_{j}(m,n)=\begin{cases}1, & |W_{j}^{b}|\geq\sigma\\ 0, & |W_{j}^{b}|<\sigma\end{cases}\qquad b\in(LH_{j},HL_{j},HH_{j}) \tag{8}$$

$|W_{j}^{b}|$ was the absolute value of the coefficients in the layer-$j$ high frequency subgraphs $(LH_{j},HL_{j},HH_{j})$. $\sigma$ was the threshold, whose size could be determined according to the strength information of the actual damaged edges to ensure the extraction of the edges of the damaged area.
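A direct transcription of Eq. (8), assuming the three directional detail subgraphs of one level are available as arrays (for instance, the per-level detail triple returned by pywt.wavedec2):

```python
import numpy as np

def edge_map(details, sigma):
    """Binary edge-contour matrix B of Eq. (8) for one decomposition level."""
    lh, hl, hh = details  # directional high frequency subgraphs of layer j
    # A point is an edge point when any directional coefficient is strong.
    b = (np.abs(lh) >= sigma) | (np.abs(hl) >= sigma) | (np.abs(hh) >= sigma)
    return b.astype(np.uint8)
```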
4.2 Edge Contour Information Inpainting

After the extraction of the edge contour information, the edge contour information was restored. The specific implementation process was as follows. After the edge information was extracted from the contour of the defects (as shown in Fig. 3(b)), the intersections of the contour lines and the boundary of the area to be restored were collected as $P=\{p_{0},p_{1},\ldots,p_{i},\ldots,p_{n}\}$ ($0\leq i\leq n$), as shown in Fig. 3(c), and the contour line through $p_{i}$ was $C_{i}$. In order to match these intersections in an orderly way, two rules were established: (1) the connection between two matched points must pass through the area to be restored; (2) no two connection lines may intersect. It was assumed that two curves were selected for matching only if the directions of the two curves to be fitted were nearly opposite. To meet the first rule, P was divided into two subsets $P_{1}$ and $P_{2}$, as shown in Fig. 3(d). The curve directions of the points were almost identical within the same subset and almost inverse between different subsets. Based on the above hypothesis, the angles between every two curves to be restored could be calculated and the minimum of these angles identified: P was divided into two subsets, and the direction of each point on its curve C was fixed in the collection P.

In the binary image B, along the curve $C_{i}$ starting from $p_{i}$, the points $p_{i,j}$ ($j=1,2,3,4$) marked as "1" were found in the 8-neighborhood, with $p_{i,1}=p_{i}$. If $\vec{p}_{i,j}=(x(p_{i,j}),y(p_{i,j}))$ was the coordinate vector of $p_{i,j}$, the direction of the curve $C_{i}$ was defined as:

$$\vec{S}_{p_{i}}=0.5(\vec{p}_{i,1}-\vec{p}_{i,2})+0.35(\vec{p}_{i,2}-\vec{p}_{i,3})+0.15(\vec{p}_{i,3}-\vec{p}_{i,4}) \tag{9}$$

In Eq. (9), $\vec{S}_{p_{i}}$ was the direction vector of a point on the curve, and this vector was then normalized:

$$\vec{S}_{p_{i}}^{\prime}=\frac{\vec{S}_{p_{i}}}{|\vec{S}_{p_{i}}|} \tag{10}$$

The unit direction vectors of all points in the collection P could be calculated in this way. Any point $p_{i}$ was put in the subset $P_{1}$, and the cosine of the vector angle between the point $p_{i}$ and each other point $p_{m}$ was calculated:

$$\cos\theta=\frac{\langle\vec{S}_{p_{i}}^{\prime},\vec{S}_{p_{m}}^{\prime}\rangle}{|\vec{S}_{p_{i}}^{\prime}||\vec{S}_{p_{m}}^{\prime}|} \tag{11}$$

The resulting cosine values were sorted from large to small, and the points were then assigned to subsets as follows.

(1) When the number of points on both sides was equal, n was an even number. The element number of the subset $P_{1}$ was equal to n/2; the remaining points were put into the subset $P_{2}$, and the classification of P was completed.

(2) When the number of points on both sides was unequal, n was an odd number (as shown in Fig. 3). When the number of elements of the subset $P_{1}$ was equal to (n-1)/2, the arrangement of the points was as follows:

$$p_{m}\in\begin{cases}P_{1}, & \cos\theta\geq 0\\ P_{2}, & \cos\theta<0\end{cases} \tag{12}$$

Finally, the rest of the points were placed in the subset $P_{2}$, and the classification of the collection was completed.

Fig. 3. Restored edge information: (a) original image, (b) the restored edge information, (c) intersections of the repaired edges, (d) direction vector of each point, and (e) edge information of this algorithm.
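Eqs. (9)-(11) amount to a weighted finite difference along the first four traced contour pixels; a small sketch (the contour tracing itself is assumed already done):

```python
import numpy as np

def curve_direction(p1, p2, p3, p4):
    """Unit direction vector of a contour at its intersection point,
    Eqs. (9)-(10); p1..p4 are the first four traced contour pixels."""
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    s = 0.5 * (p1 - p2) + 0.35 * (p2 - p3) + 0.15 * (p3 - p4)  # Eq. (9)
    return s / np.linalg.norm(s)                               # Eq. (10)

def direction_cosine(s_a, s_b):
    """Cosine of the angle between two unit direction vectors, Eq. (11)."""
    return float(np.dot(s_a, s_b))
```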
In order to meet the second rule, the shortest Euclidean distance among all point pairs from the two subsets was identified, and the two nearest points were used in a quadratic polynomial curve fitting. The points on the fitted curve were marked as "1" in B; then the two points were removed from the subsets $P_{1}$ and $P_{2}$. All points were fitted in the same way, as shown in Fig. 3(e). In the fitting process, the following settings were used for different situations:

(1) When n was an even number, $n(P_{1})=n(P_{2})$, and every two points were fitted until all points were connected.

(2) When n was an odd number, $n(P_{1})\neq n(P_{2})$, e.g., $n(P_{1})=n(P_{2})+1$ (other situations were handled in a similar way). As shown in Fig. 4, the midpoint $p_{1}$ and its nearest point $p_{2}$ in the subset $P_{1}$ were taken, and the point $p_{5}$ with the shortest distance to $p_{1}$ was located in the corresponding subset $P_{2}$. If $\|\vec{p}_{1,2}-\vec{p}_{2,2}\|<\|\vec{p}_{1,1}-\vec{p}_{2,1}\|$, the two points were in opposite directions. Then $p_{1}$ and $p_{5}$ were connected, and $p_{2}$ was retained in the subset $P_{1}$. Otherwise, $p_{1}$ and $p_{5}$ as well as $p_{2}$ and $p_{5}$ were connected at the same time.

Fig. 4. Matching n+1 points to n points.

4.3 Edges Inpainting

When the edge contour information of the wavelet coefficients was restored, it could be used as a restoration constraint condition for the subgraphs of this layer. The edge information of the subgraph enjoyed better smoothness due to the use of the defined priority of the wavelet coefficients. The inpainting should be carried out in accordance with the direction of the edge, and the restoration effect was better than that of traditional isophote-driven methods [14-17]. However, the improvement in priority alone was not enough to realize restoration along the actual edge. In order to further optimize the restoration of the edge part, the inpainting order should be optimized on the basis of texture synthesis. The binary image $B^{\prime}$ of the restored edge contour information was used as a constraint condition: when a point of the region to be repaired was marked as 1 in $B^{\prime}$, the texture block centered on that point was synthesized with priority. In this way, the restoration followed the curve matching along the edge contour information. The edge information in the high frequency subgraphs was repaired with priority over the other parts, to overcome the problem of disconnected edges.

4.4 Texture Inpainting

In the priority function, the edge change factor E(p) was introduced, and the degree of confidence $C^{\prime}(p)$ was improved. The synthesis process could then be carried out along the strongest edge direction, so as to prevent the texture part from being synthesized before the edges. The priority factor was redefined as:

$$P(p)=C^{\prime}(p)\cdot D(p)+E(p) \tag{13}$$

$$D(p)=\frac{|\nabla I_{p}^{\perp}\cdot n_{p}|}{\alpha} \tag{14}$$

$\nabla I_{p}^{\perp}$ was the isophote at the point $p$; $n_{p}$ was the unit normal vector of the boundary line at the point $p$; $\alpha$ was a normalization factor. E(p) was the edge change factor, whose size reflected the intensity of the edge changes to be repaired in different directions and regions. The absolute value of a high frequency coefficient reflected the edge intensity in a given direction, so the edge change factor E(p) was defined as:

$$E(p)=\frac{\max\left\{\sum_{\Psi_{p}\cap\Phi}\|W_{LH}^{j}\|,\ \sum_{\Psi_{p}\cap\Phi}\|W_{HL}^{j}\|,\ \sum_{\Psi_{p}\cap\Phi}\|W_{HH}^{j}\|\right\}}{\alpha} \tag{15}$$

In layer j, $\sum_{\Psi_{p}\cap\Phi}\|W^{j}\|$ represented the sum of the absolute values of the known high frequency coefficients, and $\alpha$ was the normalization factor, taken as 255. Since the different high frequency subgraphs represented the edge properties in different directions, the confidence $C^{\prime}(p)$ was defined as:

$$C^{\prime}(p)=\frac{\sum_{\Psi_{p}\cap\Phi}N_{b}}{|\Psi_{p}|},\qquad b\in(LH,HL,HH) \tag{16}$$

$\sum_{\Psi_{p}\cap\Phi}N_{b}$ was the number of effective pixels in the different directions, and $|\Psi_{p}|$ was the number of pixels of the block to be repaired.
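Put together, the priority of one candidate block can be computed as below. This is a sketch under the assumption that the block mask and the three coefficient patches have been cut out already; all names are illustrative, and the confidence term is simplified to the fraction of known pixels:

```python
import numpy as np

def patch_priority(known_mask, lh, hl, hh, isophote, normal, alpha=255.0):
    """Priority P(p) of Eqs. (13)-(16) for one block Psi_p.

    known_mask: boolean array, True on the known pixels of the block;
    lh, hl, hh: high frequency coefficient patches at the same position;
    isophote, normal: 2-vectors at the block centre.
    """
    conf = known_mask.sum() / known_mask.size            # Eq. (16), simplified
    data = abs(float(np.dot(isophote, normal))) / alpha  # Eq. (14)
    edge = max(np.abs(lh[known_mask]).sum(),             # Eq. (15): strongest
               np.abs(hl[known_mask]).sum(),             # directional edge
               np.abs(hh[known_mask]).sum()) / alpha     # energy
    return conf * data + edge                            # Eq. (13)
```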
In the search for a matching block, a prediction difference value was introduced from the very beginning to reduce the defects caused by poor matching in the restoration process. The higher-layer subgraph was used to constrain the inpainting of the lower-layer subgraph, and thus to better constrain the restoration sequence and the search for the best matching block. The inpainting order depended not only on the size of the priority, but also on whether the matching block was optimal. In terms of the restoration order of the subgraphs, the low frequency subgraph on the highest layer was repaired first, the high frequency subgraphs of that layer next, and the high frequency subgraphs of the lower layers last.

5. Experiment and Analysis

In order to verify the effectiveness of the designed method, a contrast experiment was conducted using the traditional Criminisi algorithm [18], two improved inpainting algorithms [19,20], and the proposed method. The hardware environment of this experiment was an Intel Core i7-8550U, 1.99 GHz, with 8 GB of internal storage, and the software environment was MATLAB 2014a under the Windows 7 operating system. The Haar wavelet was applied in this paper, with wavelet decomposition level j=2.

5.1 Image Inpainting

To illustrate the effectiveness of these algorithms on thangka image inpainting, thangka images with rich texture features and complex structural information were selected for testing. Figs. 5(a), 6(a), and 7(a) demonstrated the original images, and Figs. 5(b), 6(b), and 7(b) displayed the artificially broken images. Figs. 5(c), 6(c), and 7(c) were the experimental results based on the traditional Criminisi algorithm. As could be seen from the inpainting results, the texture surrounding the damaged area diffused into and eventually filled the damaged area, and a relatively messy texture could be clearly seen. Figs. 5(d), 6(d), and 7(d) were the experimental results based on [19], and Figs. 5(e), 6(e), and 7(e) were the experimental results based on [20].

Fig. 5. Inpainting results for image_1: (a) original image, (b) broken image, (c) algorithm of [18], (d) algorithm of [19], (e) algorithm of [20], and (f) the proposed algorithm.

As could be seen from the inpainting results, both the edge closing degree and the smoothness of the latter two methods were obviously superior to those of the Criminisi algorithm, and their inpainting effect could also meet human visual needs. To some extent, the phenomenon of structure faults caused by error propagation could be overcome, but the smoothness of the structure was hardly maintained. Figs. 5(f), 6(f), and 7(f) were the inpainting results based on the method designed in this paper. Its visual effect outperformed the above algorithms, and there was no obvious block matching error in the inpainting results. The restored area was uniformly distributed, the edges were smoother than with the above algorithms, and there was no texture extension phenomenon in the strong edge areas.

Table 1 showed the peak signal-to-noise ratio (PSNR) values and consumed time of the four algorithms in the experiment. As shown in Table 1, the PSNR of this algorithm increased by 1-3 dB compared with the other algorithms. However, the consumed time of this algorithm was longer than those of [19] and [20]. Although the area to be repaired "shrank" after wavelet decomposition, the added edge change factor and the improved search method took a long time when the high frequency subgraphs were repaired; as a result, this algorithm consumed more time.

Table 1. PSNR (dB) and consumed time (s) of four algorithms

Image    Size      [18] Time   [18] PSNR   [19] Time   [19] PSNR   [20] Time   [20] PSNR   Proposed Time   Proposed PSNR
Fig. 5   682×570   76.8459     36.4581     75.42897    37.4529     63.3889     37.7219     81.4596         39.1562
Fig. 6   400×350   61.0259     36.4893     59.6489     37.2772     51.6972     37.4293     62.0348         39.1579
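The PSNR values reported in Table 1 (and in Fig. 8 below) follow the usual definition; for completeness:

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = original.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```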
5 682×570 76.8459 36.4581 75.42897 37.4529 63.3889 37.7219 81.4596 39.1562 Fig. 6 400×350 61.0259 36.4893 59.6489 37.2772 51.6972 37.4293 62.0348 39.1579 5.2 Effects of Different Base Wavelets on Inpainting Fig. 8 indicated the restoration effect on the damaged thangka image (Fig. 8(b)) of Haar wavelet (a support length of 1) and biorthogonal Bior4.4 wavelet (a support length of 9) with the wavelet decomposition level at 2 and 4, respectively. Judging from Fig. 8, the restoration effect of Haar wavelet outperformed that of Bior4.4 wavelet, and the restoration effect of level-2 wavelet decomposition was better than that of level-4 wavelet decomposition. This was because when these data were set to zero during the restoration process in the damaged area, an artificial boundary was generated around the restoration area. When the wavelet was decomposed, the artificial boundary was caused by extension effect, which was related to the support length of base wavelet. The shorter the support length of base wavelet was, the smaller the elongation effect became, and the more accurate the restoration of the damaged area became. Therefore, in order to minimize the extension effect of the artificial boundary, the support length of base wavelet should be as short as possible. In addition, the wavelet decomposition series J was greater, and the low frequency energy was more concentrated. The low frequency subgraph restoration error would have great influence on the quality of the reconstructed image. The decomposition level J was preferably set at the level of 2 or 3, and they could respectively provide a subgraph of 1/4 or 1/8 of the original image. The decomposition level could basically meet the requirements of most restoration images. Comparison of inpainting effects: (a) original image, (b) broken image, (c) level-2 decomposition of Harr (PSNR=39.2567 dB), (d) level-2 decomposition of Bior4.4 (PSNR=38.8153 dB), (e) level-4 de¬composition of Harr (PSNR=37.9457 dB), and (f) level-4 decomposition of Bior4.4 (PSNR=39.0243 dB). 5.3 Damaged Image Inpainting In order to verify the effectiveness of this algorithm, the repair experiments of some actually damaged thangka images were carried out on computer. As shown in Figs. 9 and 10, Figs. 9(a) and 10(a) were actually damaged thangka images, and Figs. 9(b) and 10(b) were the images of damage marks. Figs. 9(c) and 10(c) were the inpainting results using literature [18]. Mismatches resulted in defects in the inpainting results. Figs. 9(d) and 10(d) were the restoration results using literature [19]. There were bulges and fractures in the restoration results. Figs. 9(d) and 10(e) were the restoration results using literature [20]. Although the damaged edges could be connected, the smooth area was blurry. Figs. 9(f) and 10(f) were the restoration results using the method presented in this paper. The results based on this algorithm displayed original visual effect. This algorithm enjoyed good restoration effect in both smooth and edge parts. Because Figs. 9 and 10 were both actually damaged images, their PSNR values could not be calculated. Objective judgment could only rely on the inpainting time of algorithms, and the repair time of the three algorithms was listed in Table 2. Inpainting results for image_4: (a) original image, (b) marked image, (c) results by [ 18], (d) results by [ 19], (e) results by [ 20], and (f) results by the proposed algorithm. 
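The experiments above fix the Haar wavelet with decomposition level j=2. A minimal sketch of producing the layered subgraphs in that configuration and listing them in the repair order described earlier (PyWavelets is an assumption here; the paper's experiments used MATLAB):

```python
import numpy as np
import pywt

def decompose_for_inpainting(image, wavelet="haar", level=2):
    """Split an image into the subgraphs the algorithm repairs in order:
    the low-frequency subgraph of the highest layer first, then the
    high-frequency subgraphs, layer by layer down to the sublayer."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    order = [("low-frequency, highest layer", coeffs[0])]
    # coeffs[1:] holds the detail subgraphs, coarsest (highest) layer first,
    # matching the repair order described in the text.
    for layer, (lh, hl, hh) in enumerate(coeffs[1:], start=1):
        order += [(f"LH, layer {layer}", lh), (f"HL, layer {layer}", hl),
                  (f"HH, layer {layer}", hh)]
    return order

img = np.random.rand(128, 128)  # stand-in for a thangka image
for name, band in decompose_for_inpainting(img):
    print(name, band.shape)
```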
It can be seen from Table 2 that the restoration time of this method was slightly longer than that of the other algorithms. The reason was that, when the high-frequency subgraphs were restored, the consumed time increased due to the added curve fitting, the edge change factor, and the improved search method.

Table 2. Inpainting time (s) of the four algorithms

Image     Size      [18]      [19]      [20]      Proposed
Fig. 9    654×845   176.83    179.45    178.56    220.87
Fig. 10   750×948   196.87    208.45    198.47    286.31

6. Conclusion

When damaged edges are restored, it is difficult for traditional algorithms to preserve both the structural integrity of an image and a good visual effect. In this paper, an inpainting algorithm combining wavelet transform with structure constraints has been proposed for thangka images. Thangka images were decomposed into subgraphs with different resolutions and components by wavelet transform, and the edge contour information was then extracted and restored. Moreover, the coefficient relations of the different components were organically combined with texture synthesis, and the restored edge contour information was used to constrain the repair of the high-frequency subgraph structure. By introducing the edge change factor and improving the matching-block search method, layered and classified repair was adopted to enhance the ability to restore edges and textures. Experimental results showed that this algorithm can effectively restore damaged images with strong edges and rich texture, with a restoration effect more in line with the visual perception of the human eye. However, the algorithm takes longer to restore images, and its restoration of images with large damaged texture regions is not ideal. Therefore, how to improve the restoration accuracy of the low-frequency subgraphs is still worth further discussion.

Acknowledgement. This work was supported by the Scientific Research Planning Project of the Shaanxi Province Educational Commission, China (No. 19JK0891), the Major Cultivation Project of Xizang Minzu University, China (No. 19MDZ03), and the Natural Science Foundation of China (No. 62062061).

Fan Yao is working as an Associate Professor in the Department of Information Engineering, Xizang Minzu University, Xian Yang, China. Her research interests include the digital protection of Tibetan cultural heritage and image processing.

References

1 S. T. Pohlmann, E. F. Harkness, C. J. Taylor, S. M. Astley, "Evaluation of Kinect 3D sensor for healthcare imaging," Journal of Medical and Biological Engineering, vol. 36, no. 6, pp. 857-870, 2016.
2 M. Imtiyaz, A. Kumar, S. Sreenivasulu, "Inpainting an image based on enhanced resolution," Journal of Computer Technology & Applications, vol. 6, no. 1, pp. 23-26, 2015.
3 S. Feng, R. Murray-Smith, A. Ramsay, "Position stabilisation and lag reduction with Gaussian processes in sensor fusion system for user performance improvement," International Journal of Machine Learning and Cybernetics, vol. 8, no. 4, pp. 1167-1184, 2017. doi: 10.1007/s13042-015-0488-5
4 F. Li, T. Zeng, "A new algorithm framework for image inpainting in transform domain," SIAM Journal on Imaging Sciences, vol. 9, no. 1, pp. 24-51, 2016. doi: 10.1137/15M1015169
5 F. Chen, T. Hu, L. Zuo, Z. Peng, G. Jiang, M. Yu, "Depth map inpainting via sparse distortion model," Digital Signal Processing, vol. 58, pp. 93-101, 2016. doi: 10.1016/j.dsp.2016.07.019
6 W. Chen, H. Yue, J. Wang, X. Wu, "An improved edge detection algorithm for depth map inpainting," Optics and Lasers in Engineering, vol. 55, pp. 69-77, 2014.
7 F. Wang, D. Liang, N. Wang, Z. Cheng, J. Tang, "A new method for image inpainting using wavelets," in Proceedings of 2011 International Conference on Multimedia Technology, Hangzhou, China, 2011, pp. 201-204.
8 H. Zhang, S. Dai, "Image inpainting based on wavelet decomposition," Procedia Engineering, vol. 29, pp. 3674-3678, 2012.
9 A. S. Deshmukh, P. Mukherji, "Image inpainting using multiresolution wavelet transform analysis," in Proceedings of 2012 International Conference on Communication, Information & Computing Technology (ICCICT), Mumbai, India, 2012, pp. 1-6.
10 K. He, R. Liang, T. Zhang, "Fast texture image completion algorithm based on dependencies between wavelet coefficients," Journal of Tianjin University, vol. 43, no. 12, pp. 1093-1097, 2010.
11 Z. Xiao, W. Zhang, "Wavelet-domain fast inpainting algorithm for texture image," Chinese Journal of Scientific Instrument, vol. 29, no. 7, pp. 1422-1425, 2008.
12 M. Xiao, G. Li, Y. Tan, J. Qin, "Image completion using similarity analysis and transformation," International Journal of Multimedia and Ubiquitous Engineering, vol. 10, no. 4, pp. 193-204, 2015.
13 A. Telea, "An image inpainting technique based on the fast marching method," Journal of Graphics Tools, vol. 9, no. 1, pp. 23-34, 2004. doi: 10.1080/10867651.2004.10487596
14 P. Buyssens, M. Daisy, D. Tschumperle, O. Lezoray, "Exemplar-based inpainting: technical review and new heuristics for better geometric reconstructions," IEEE Transactions on Image Processing, vol. 24, no. 6, pp. 1809-1824, 2015. doi: 10.1109/TIP.2015.2411437
15 L. Cai, T. Kim, "Context-driven hybrid image inpainting," IET Image Processing, vol. 9, no. 10, pp. 866-873, 2015. doi: 10.1049/iet-ipr.2015.0184
16 J. Patel, T. K. Sarode, "Exemplar based image inpainting with reduced search region," International Journal of Computer Applications, vol. 92, no. 12, pp. 27-33, 2014.
17 L. Tang, Y. Tan, Z. Fang, C. Xiang, S. Chen, "An improved Criminisi image inpainting algorithm based on structure component and information entropy," Journal of Optoelectronics · Laser, vol. 28, no. 1, pp. 108-116, 2017.
18 H. M. Patel, H. L. Desai, "A review on design, implementation and performance analysis of the image inpainting technique based on TV model," International Journal of Engineering Development and Research (IJEDR), vol. 2, no. 1, pp. 191-195, 2014.
19 H. Zhang, S. Dai, "Image inpainting based on wavelet decomposition," Procedia Engineering, vol. 29, pp. 3674-3678, 2012.
20 B. Wang, L. Hu, J. Cao, R. Xue, G. Liu, "Image restoration based on sparse-optimal strategy in wavelet domain," Acta Electronica Sinica, vol. 44, no. 3, pp. 600-606, 2016.

Received: May 31, 2019; Revision received: May 11, 2020; Revision received: June 11, 2020; Accepted: June 14, 2020
Published (Print): October 31, 2020; Published (Electronic): October 31, 2020

Corresponding Author: Fan Yao ([email protected]), College of Information Engineering, Xizang Minzu University, Xian Yang, Shaanxi, China
Qualitative analysis of a generalized Nosé-Hoover oscillator

Qianqian Han 1 and Xiao-Song Yang 2,3

1. School of Mathematics and Statistics, North China University of Water Resources and Electric Power, Zhengzhou, Henan 450046, China
2. School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
3. Hubei Key Laboratory of Engineering Modeling and Science Computing, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China

* Corresponding author: Xiao-Song Yang

Received November 2019; Revised July 2020; Published November 2020
Fund Project: The first author is supported by NSFC grant 51979116

In this paper, we analyze the qualitative dynamics of a generalized Nosé-Hoover oscillator with two parameters varying within a certain scope. We show that if a solution of this oscillator does not tend to the invariant manifold $ \{(x,y,z)\in \mathbb R^3 \mid x = 0, y = 0\} $, it must pass through the plane $ z = 0 $ infinitely many times. In particular, every invariant set of this oscillator must intersect the plane $ z = 0 $. In addition, we show that if a solution is quasiperiodic, it must pass through at least five quadrants of $ \mathbb R^3 $.

Keywords: Quasiperiodic orbit, qualitative dynamics, generalized Nosé-Hoover oscillator, invariant set, nonlinear ordinary differential equation.

Mathematics Subject Classification: Primary: 34C15, 34C27; Secondary: 37C55.

Citation: Qianqian Han, Xiao-Song Yang. Qualitative analysis of a generalized Nosé-Hoover oscillator. Discrete & Continuous Dynamical Systems - B. doi: 10.3934/dcdsb.2020346
Figure 1. The grid is part of $ S_{1} $ and the shadow is part of $ S_{2} $.
Figure 2. The shadow is the projection of the region $ I $ on the plane $ z = 0 $.
Figure 3. $ A_{1}\rightarrow A_{2} $ means there are solutions from $ A_{1} $ to $ A_{2} $; $ B_{1}\dashrightarrow A_{2} $ means there are solutions from $ B_{1} $ to $ A_{2} $ that intersect the $ X $-axis or the $ Y $-axis.
Figure 4. From right to left: $ l_{10} $ and $ l_{20} $.
Figure 5. From right to left: $ l_{01} $, $ l_{02} $, $ l_{03} $, and $ l_{04} $.
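The claim that non-degenerate solutions must repeatedly cross the plane z = 0 is easy to probe numerically. The sketch below integrates the classic Nosé-Hoover oscillator and counts crossings; the generalized two-parameter vector field studied in the paper is not reproduced here, so this is only an illustration of the idea:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classic Nosé-Hoover oscillator; the generalized version analyzed in the
# paper adds two parameters to this vector field, which are not shown here.
def nose_hoover(t, s):
    x, y, z = s
    return [y, -x - z * y, y * y - 1.0]

# Event: crossings of the plane z = 0, which the abstract claims occur
# infinitely often for solutions not tending to {x = 0, y = 0}.
crossing = lambda t, s: s[2]

sol = solve_ivp(nose_hoover, (0.0, 200.0), [0.0, 5.0, 0.0],
                events=crossing, rtol=1e-9, atol=1e-12)
print(f"z = 0 crossings on [0, 200]: {len(sol.t_events[0])}")
```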
Technical Synergies and Trade-Offs Between Abatement of Global and Local Air Pollution

Jorge Bonilla1, Jessica Coria2 & Thomas Sterner2

Environmental and Resource Economics volume 70, pages 191–221 (2018)

In this paper, we explore the synergies and tradeoffs between abatement of global and local pollution. We build a unique dataset of Swedish combined heat and power plants with detailed boiler-level data for 2001–2009 covering not only production and inputs but also emissions of \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\). Both pollutants are regulated by strict policies in Sweden. \(\hbox {CO}_{2}\) is subject to the European Union Emission Trading Scheme and Swedish carbon taxes; \(\hbox {NO}_{\mathrm{x}}\)—as a precursor of acid rain and eutrophication—is regulated by a heavy fee. Using a quadratic directional output distance function, we characterize changes in technical efficiency as well as patterns of substitutability in response to these policies. The fact that generating units face a trade-off between the pollutants indicates the need for policy coordination.

Climate policy is affected by multiple decision makers at local, national, and international levels. Usually these decision makers are not fully coordinated in terms of goals and methods, and the existence of several layers of governance may encourage strategic behavior from powerful local actors trying to enhance their own positions (Caillaud et al. 1996). Multi-level climate policy governance is also related to the governance of local air pollutants, since production processes often emit several air pollutants simultaneously and most emission control measures affect more than one pollutant. Environmental policies aiming at reducing \(\hbox {CO}_{2}\) emissions might therefore create spillovers, i.e., reductions or increases in emissions of other pollutants as firms change or modify their production processes in response to climate policy. For example, a common strategy to reduce \(\hbox {CO}_{2}\) emissions is switching the fuel mix from fuel oil to biofuels, which are counted as having zero carbon. However, although such a transition may make \(\hbox {CO}_{2}\) emissions fall dramatically, biofuels often imply an increase in emissions of nitrogen oxides (\(\hbox {NO}_{\mathrm{x}}\)), particulate matter (PM), carbon monoxide (CO), and volatile organic compounds (VOC) (Brännlund and Kriström 2001). Spillovers from climate policy have important implications for policy design since they affect the cost of climate regulations. Theory shows that one critical factor determining whether increased stringency of climate policies leads to increased emissions of local pollutants is the elasticity of substitution (e.g., Ambec and Coria 2013). If pollutants are substitutes, \(\hbox {CO}_{2}\) emissions will be reduced at the expense of increased emissions of local pollutants. If they are complements, climate policies might lead to ancillary benefits since local pollutants will then be reduced alongside \(\hbox {CO}_{2}\) emissions. There is also the effect of technological development, which generally decreases the emissions of all pollutants. Therefore, emissions of pollutants that are substitutes to \(\hbox {CO}_{2}\) can still fall with climate policy if technological development outweighs the substitution effect.
In this paper, we characterize changes in the relative performance of Swedish combined heat and power plants with respect to \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions in response to variations in the level of stringency of \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) policies and multiple layers of regulation. In particular, we study the effects of the interaction between the European Union Emissions Trading System (EU ETS) and the Swedish \(\hbox {CO}_{2}\) tax and refundable charge on \(\hbox {NO}_{\mathrm{x}}\) emissions, with the goal of determining whether multi-level climate policy governance has generated ancillary benefits or costs in terms of \(\hbox {NO}_{\mathrm{x}}\) emissions. To this end, we built a unique dataset of Swedish combined heat and power plants for the period 2001–2009 consisting of detailed boiler-level data on not only production and inputs but also \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions. We estimate a quadratic directional output distance function to study and compare patterns of technical progress, substitution between \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\), and shadow prices of these pollutants in the period 2001–2004 (before introduction of the EU ETS) and 2005–2009 (post-implementation). To the best of our knowledge, this is the first study analyzing and quantifying the effects of the multi-level governance of climate change policy and its interaction with other national policy instruments aimed at reducing local pollutants. However, some previous studies have employed directional output distance functions to analyze the shadow cost of environmental regulations (see e.g., Färe et al. 2005; Marklund and Samakovlis 2007; Wei et al. 2013; Du et al. 2015a, b) and the technological non-separability and substitutability among air and water pollutants (see e.g., Murty et al. 2007; Kumar and Managi 2011; Färe et al. 2012). Notably, Agee et al. (2014) estimate a multiple-input, multiple-output directional output distance function to analyze the technological non-separability in the control of \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions of the U.S. electric utilities' sector and its implications for the design of a \(\hbox {CO}_{2}\) emissions cap and trade system. They find that controlling one pollutant also controls the other, and hence there are ancillary benefits from policies targeted at reducing one of them. Their results are consistent with those of Burtraw et al. (2003), who employ an electricity market equilibrium model to simulate changes in emissions resulting from different climate policy scenarios and find considerable (health-related) ancillary benefits due to reduced \(\hbox {NO}_{\mathrm{x}}\) emissions. Our work differs from previous studies in two important ways. First, we focus on policy-induced substitutability across pollutants and the changes in relative shadow prices of emissions introduced by environmental multi-level governance. Second, we analyze the case of Sweden, which has long been at the forefront of \(\hbox {CO}_{2}\) emission reduction and is one of the few countries whose present level of emissions is below the level recorded in 1990. This accomplishment is mainly explained by the remarkable expansion of biofuel use, which has the potential negative side effect of increasing \(\hbox {NO}_{\mathrm{x}}\) emissions.
Hence, compared with the U.S., where switching from high-carbon fuels (like coal and oil) to reduced-carbon ones such as natural gas is still an option, in Sweden most emission reductions have already been undertaken. Our results indicate that there are limits to the positive spillovers (i.e., ancillary benefits) from climate policy as countries approach the goal of a carbon-free economy. Furthermore, the existence of spillovers indicates the need for better coordination across policymakers at different levels of governance, and policy coordination becomes even more important under substitutability, since the unintended costs of climate policy on local pollution can make it less acceptable to the public and policymakers. This is particularly the case since the benefits from reduced climate change mostly accrue in the long term and on a global scale, while any ancillary costs of climate policy would tend to accrue in the near term, affecting the countries undertaking mitigation action. This paper is organized as follows. Section 2 briefly describes the climate and \(\hbox {NO}_{\mathrm{x}}\) policy in Sweden and the changes in the relative price of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) over the period 2001–2009. Section 3 presents the theoretical and empirical framework of the joint production of heat and power, \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions. Section 4 discusses the data and empirical results and analyzes the sensitivity of our results to different directional vectors. Section 5 discusses some policy implications and Sect. 6 concludes the paper.

Climate and \(\hbox {NO}_{\mathrm{x}}\) Policy for Combined Heat and Power Plants in Sweden: Carbon Taxation, EU ETS, and the Refundable \(\hbox {NO}_{\mathrm{x}}\) Charge

Combined heat and power (hereinafter CHP) is the simultaneous generation of useful heat and power from a single fuel or energy source at or close to the point of use. As in all combustion processes, \(\hbox {CO}_{2}\) emissions produced by CHP depend mainly on the carbon content of the fuel, while \(\hbox {NO}_{\mathrm{x}}\) emissions and the interactions between measures to control \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions vary according to the design characteristics of the individual plants. For instance, compression-ignition engines generally operate with lower air-to-fuel ratios and higher combustion temperatures, which results in higher \(\hbox {NO}_{\mathrm{x}}\) emissions per unit of power generated. Alternatively, the use of spark-ignition gas engines operated on natural gas would enable CHP to reduce \(\hbox {NO}_{\mathrm{x}}\) emissions, albeit with a small increase in \(\hbox {CO}_{2}\) emissions. Moreover, \(\hbox {NO}_{\mathrm{x}}\) is produced largely from an unintended chemical reaction between nitrogen and oxygen in the combustion chamber. The process is quite non-linear in temperature and other parameters of the combustion process, which implies that there is a large scope for \(\hbox {NO}_{\mathrm{x}}\) reduction through various technical measures. For example, it is possible to reduce \(\hbox {NO}_{\mathrm{x}}\) emissions by operating at lower engine efficiency or through investment in post-combustion technologies (PCTs) that clean up \(\hbox {NO}_{\mathrm{x}}\) once it has been formed. However, such technologies usually require energy and thus will increase \(\hbox {CO}_{2}\) emissions (at least relative to output).
It is also possible to invest in combustion technologies (CTs) involving the optimal control of combustion parameters (temperature, pressure, stoichiometry, flame stability and homogeneity, and flue gas residence time) to inhibit the formation of thermal and prompt \(\hbox {NO}_{\mathrm{x}}\). The adoption of these technologies clearly depends on the investment costs, which have been shown to be boiler and plant specific and to vary with boiler capacity (Linn 2008). Moreover, some technologies are not commercially available below certain size thresholds (Sterner and Turnheim 2009). Since most emission control measures employed by CHP plants affect both \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions, one should observe some changes in the pollutants' substitutability in response to variations in the level of stringency of \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) policies and multiple layers of regulation. Therefore, in order to develop a hypothesis regarding the relative incentives to reduce \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions, respectively, we start by briefly describing climate and \(\hbox {NO}_{\mathrm{x}}\) policies in Sweden and the evolution of the relative cost of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) per unit of output over the period 2001–2009 (for a detailed description of Swedish environmental policy see SEPA 2007). In 1991, Sweden introduced a carbon tax that is directly connected to the carbon content of the fuel. Initially, the tax was equivalent to 25 €/metric ton of \(\hbox {CO}_{2}\). After increasing steadily over the last decade, it currently corresponds to 105 €/ton. Since the tax is very high and Sweden is a small open economy, there has been concern for the competitiveness of some energy-intensive industries. Thus, a number of deductions and exemptions have been created in those sectors that are open to competition, and a series of reduced tax rates have been introduced. In the case of the heat and power sector, the carbon tax varies according to the type of generation, i.e., whether the generating unit is a CHP boiler or an only-heat boiler. From 2005 to June 2008, the carbon tax and the EU ETS overlapped. At that point, the tax was essentially replaced with the EU ETS, and CHP plants were granted a tax reduction of 85%. Since the price of \(\hbox {CO}_{2}\) allowances is much lower than the Swedish tax level, this harmonization with the EU has actually implied a sizeable fall in the price of carbon emissions for most CHP plants. The tax reform of 1991 introduced not only a carbon tax but also other taxes, including a high fee on \(\hbox {NO}_{\mathrm{x}}\). The fee was initially confined to \(\hbox {NO}_{\mathrm{x}}\) emissions from electricity and heat-producing boilers, stationary combustion engines, and gas turbines with a useful energy production of at least 50 gigawatt hours (GWh) per year (approx. 182 boilers). However, because of its effectiveness in reducing emissions and simultaneously falling monitoring costs, in 1996 the charge system was extended to include all boilers producing at least 40 GWh of useful energy per year, and in 1997 the limit was again lowered to 25 GWh. The total fees are refunded to the participating plants in proportion to their production of useful energy.
Hence, the system encourages plants to reduce \(\hbox {NO}_{\mathrm{x}}\) emissions per unit of energy to the largest possible extent, since plants with lower emissions relative to energy output are net receivers of the refund. The fee was originally set at 4.3 €/kg, which is an extremely high level compared to other countries. The Swedish \(\hbox {NO}_{\mathrm{x}}\) fee has been evaluated extensively (see e.g., Höglund 2005; Sterner and Höglund 2006; Sterner and Turnheim 2009; Bonilla et al. 2015). It has been shown to be very effective in lowering emissions. Empirical findings suggest that extensive emission reductions have taken place due to learning and technological development in abatement. Nevertheless, emissions fell mostly in the early years; the decrease has continued since then, but at a reduced pace. Hence, as the impact of the charge seemed to diminish, in 2008 the Swedish government decided to raise the fee to 5.3 €/kg to foster further adoption of more effective treatment techniques (SEPA 2003, 2007, 2009). How have the regulations described above affected the relative cost of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) emissions? Assessing the overall picture is not altogether easy. Although large industrial plants in the energy sector can to some extent adjust their technology in response to short-run price variations (e.g., through fuel switching), many features of their design take a decade to build and are adapted to expected price trends over a longer time horizon (the ability to switch fuels may well be one such feature). Furthermore, the carbon tax paid and the allowances used depend on the type of fuel being burned, which is endogenous to the stringency of \(\hbox {CO}_{2}\) policies in previous years. To provide an indication of the relative stringency of \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) policies, we compute the relative cost of \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions per unit of output for the CHP plants in our data (see Fig. 1). The cost of \(\hbox {CO}_{2}\) emissions is calculated as the sum of the \(\hbox {CO}_{2}\) tax plus the EU ETS price, while the cost of \(\hbox {NO}_{\mathrm{x}}\) emissions corresponds to the fee on \(\hbox {NO}_{\mathrm{x}}\). As shown in Fig. 1, it seems clear that over the period 2001–2009, policy signals in Sweden told power companies to avoid fossil fuels. The cost of emitting \(\hbox {CO}_{2}\) is much higher than the cost of emitting \(\hbox {NO}_{\mathrm{x}}\). For example, in 2003, an average CHP plant emitted 0.082 tons of \(\hbox {CO}_{2}\) and 0.248 kg of \(\hbox {NO}_{\mathrm{x}}\) to produce 1 MWh of useful energy. Given the magnitude of the carbon tax and the \(\hbox {NO}_{\mathrm{x}}\) fee at that time, this implied costs of 6.116 and 0.936 €/MWh for \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions, respectively. That is, the cost of \(\hbox {CO}_{2}\) emissions per unit of output was over six times the cost of \(\hbox {NO}_{\mathrm{x}}\) emissions. The variation observed in Fig. 1 suggests that \(\hbox {CO}_{2}\) policy did become less stringent (relative to \(\hbox {NO}_{\mathrm{x}})\) due to the carbon tax phase-out. Indeed, the reduction began already in 2004, when CHP plants were granted a significant carbon tax reduction. Furthermore, Sweden increased the fee on \(\hbox {NO}_{\mathrm{x}}\) for all regulated boilers in 2008, adding to the effect of the reduced carbon tax on the relative cost of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\).
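A back-of-envelope check of the 2003 example above; the implied per-unit prices in the last two lines are our own derivation from the reported figures, not values stated in the text:

```python
# 2003 example: an average CHP boiler emitted 0.082 tCO2 and 0.248 kg NOx
# per MWh of useful energy, at reported costs of 6.116 and 0.936 EUR/MWh.
co2_rate, nox_rate = 0.082, 0.248      # tCO2/MWh, kg NOx/MWh
co2_cost, nox_cost = 6.116, 0.936      # EUR/MWh

print(co2_cost / nox_cost)   # ~6.5: "over six times" the NOx cost
print(co2_cost / co2_rate)   # implied CO2 price, ~74.6 EUR/t
print(nox_cost / nox_rate)   # implied NOx fee, ~3.8 EUR/kg (the nominal
                             # 4.3 EUR/kg fee varies with the SEK exchange rate)
```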
A clear hypothesis to be derived from the analysis above is that firms should direct most abatement efforts to reducing \(\hbox {CO}_{2}\) emissions, as the economic effect of \(\hbox {CO}_{2}\) regulations on firms' profitability (taking into account both abatement costs and abatement benefits through reduced pollution payments) is much higher than that of \(\hbox {NO}_{\mathrm{x}}\) regulations. Moreover, the variations in the relative cost of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) emissions should have induced some variations in the relative \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) abatement efforts if generating units were to minimize the cost of compliance with environmental regulations. The magnitude and direction of the changes in the optimal mix of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) emissions would depend, however, on a series of factors such as technological development and whether \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) are substitutes or complements in abatement. For instance, in the absence of technological development, one would expect emissions of \(\hbox {NO}_{\mathrm{x}}\) from CHPs to decrease relatively more in 2005–2009 than in 2001–2004 if the pollutants are substitutes, while the reverse holds if they are complements. On the other hand, the high relative cost of reducing \(\hbox {CO}_{2}\) emissions should also have triggered technological fixes and fuel switching aiming to reduce them. Hence, given the relative stringency of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) regulations, we would expect technological efforts to be overall biased towards \(\hbox {CO}_{2}\) emission reductions. In the next sections, we use a quadratic directional output distance function to derive the relative shadow prices of emissions for each generating unit and analyze the changes in technical efficiency and abatement efforts induced by the regulatory changes, but first we describe the estimation strategy.

Fig. 1 Relative cost of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) emissions per MWh for CHP plants. Note: The cost of \(\hbox {CO}_{2}\) emissions is calculated as the sum of the carbon tax plus the \(\hbox {CO}_{2}\) EU ETS price (mean of forward contracts 2007–2013). We compute the relative cost of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) emissions per MWh per boiler and average the values across boilers.

Estimation Strategy

One of the main objectives of this paper is to quantitatively characterize the degree of substitutability or complementarity between reductions of \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions. The technical efficiency literature provides a variety of methods to evaluate the performance of generating units, including nonparametric and parametric approaches. Since substitutability is associated with the curvature along the output possibilities set (see Färe et al.
2005), in our approach we employ a parametric directional output distance function that is twice differentiable to derive estimates of the elasticities of substitution between output and pollutants and between pollutants. We also estimate technical efficiencies and absolute and relative shadow prices of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\). In the past, researchers have first estimated a production function frontier and then the distances of individual plants from the frontier. The function used here seeks the simultaneous expansion of good outputs and contraction of bad outputs, which is very suitable to our case. Modeling the technology in this manner allows for the adoption of abatement measures in order to reduce the bad outputs (emissions) and still increase, or hold constant, the production of heat and power. Following Färe et al. (2005), we treat emissions as bad or undesirable outputs generated in the boilers' combustion process and jointly model the production of heat and power and emissions. Let the set of output possibilities \(P(x)=\left\{ {({y,b}){:}\,x \hbox { can produce}\,({y,b})} \right\} \) represent all feasible input–output possibilities of boilers that jointly generate heat and power (y) and a vector of emissions \(b=( {b_1, b_2 })\) (where \(b_1 \) represents emissions of \(\hbox {CO}_{2}\) and \(b_2 \) represents emissions of \(\hbox {NO}_{\mathrm{x}})\), using an input vector \(x=( {x_1, x_2, x_3})\) containing installed capacity, fuel consumption (as input energy), and labor, respectively. We assume that inputs are strongly disposable, which implies that the output set is not shrinking if the inputs are expanding. Furthermore, we assume null-jointness, implying that no good output is produced without a positive amount of at least one of the bad outputs. Moreover, all outputs are assumed to be jointly weakly disposable, implying that bad outputs are not freely disposable and cannot be reduced without affecting the production of good outputs, i.e., without diverting inputs into pollution reduction. Finally, we consider that the good output is strongly disposable, implying that if an observed good and bad output vector is feasible, then any output vector with less of the good output is also feasible (i.e., good outputs are freely disposable since they have a nonnegative value). The directional output distance function is characterized as

$$\vec {D}_{o}({x,y,b;g}) = \max \left\{ {\gamma {:}\,\left( {y + \gamma {g_y},{b_1} - \gamma {g_{{b_1}}},{b_2} - \gamma {g_{{b_2}}}} \right) \in P(x)} \right\}. \qquad (1)$$

Equation (1) is a functional representation of the technology that is consistent with P(x) and its associated properties. The solution \(\gamma ^{*}=\vec {D}_{o}(x,y,b;g)\) corresponds to the maximum expansion and contraction of good and bad outputs, respectively. The directional vector \(g=\left( {g_y, g_{b_1 }, g_{b_2 } } \right) \) specifies in which direction the good and bad outputs are scaled so as to reach the boundary of the output set at \(\left( {b_1 -\gamma ^{*}g_{b_1 }, b_2 -\gamma ^{*}g_{b_2 }, y+\gamma ^{*}g_y } \right) \). The directional output distance function has several properties, listed below (see Färe et al.
2005):

1. \(\vec {D}_o ({x,y,b;g})\ge 0\) if and only if \(\left( {y, b} \right) \) is an element of P(x)
2. \(\vec {D}_o \left( {x,y^{\prime },b;g} \right) \ge \vec {D}_o ({x,y,b;g})\) for \(\left( {y^{\prime }, b} \right) \le \left( {y, b} \right) \in P(x)\)
3. \(\vec {D}_o \left( {x,y,b^{\prime };g} \right) \ge \vec {D}_o ({x,y,b;g})\) for \(\left( {y, b^{\prime }} \right) \ge \left( {y, b} \right) \in P(x)\)
4. \(\vec {D}_o \left( {x^{\prime },y,b;g} \right) \ge \vec {D}_o ({x,y,b;g})\) for \(x^{\prime }\ge x\in P(x)\)
5. \(\vec {D}_o ({x,y,b;g})\) is concave in \(\left( {y, b} \right) \in P(x)\)
6. \(\vec {D}_o \left( {x,\lambda y,\lambda b;g} \right) \ge 0\) for \(\left( {y, b} \right) \in P(x)\) and \(0\le \lambda \le 1\)
7. If \(\vec {D}_o \left( {x,y,0;g} \right) <0\) for \(y>0\), then \(\left( {y, 0} \right) \notin P(x)\)
8. \(\vec {D}_o \left( {x,y+\rho g_y,b_1 -\rho g_{b_1 } ,b_2 -\rho g_{b_2 } ;g} \right) =\vec {D}_o ({x,y,b;g})-\rho, \quad \rho \in \mathfrak {R}\)

Property (1) points out that \(\vec {D}_o ({x,y,b;g})\) is non-negative for feasible output vectors. Thus, the function takes the value of zero for generating units with efficient output vectors on the boundary of P(x) and positive values for generating units operating inefficiently below the boundary. Higher values of \(\vec {D}_o ({x,y,b;g})\) indicate higher inefficiency. \(\vec {D}_o ({x,y,b;g})\) also satisfies monotonicity: (2) indicates that it is non-increasing in the good output, (3) states that it is non-decreasing in undesirable outputs, and (4) points out that it is non-decreasing in inputs. Property (5) indicates that the output set frontier is concave in good and bad outputs. \(\vec {D}_o ({x,y,b;g})\) also satisfies (6), weak disposability of good and bad outputs, and (7), null jointness. Additionally, the directional output distance function satisfies the translation property. Under this property, which corresponds to expression (8), inefficiency decreases by the amount \(\rho \) if bad outputs are contracted by \(\rho g_{b_1}\) and \(\rho g_{b_2}\) and the good output is expanded by \(\rho g_y\). We specify our directional output distance function with a quadratic form to ensure that the frontier is a twice differentiable function. For a generating unit k operating in period t, the directional output distance function corresponds to

$$\begin{aligned}&\vec {D}_{o}^{t}\left( x_{k}^{t},y_{k}^{t},b_{k}^{t};g \right) \\&\quad =\alpha + \sum _{n=1}^3 {\alpha _{n}x_{nk}^{t}} +\beta _{1}y_{k}^{t}+ \sum _{i=1}^2 {\theta _{i}b_{ik}^{t}} +\frac{1}{2} \sum _{n=1}^3 \sum _{n^{\prime }=1}^3 {\alpha _{nn^{\prime }}x_{nk}^{t}x_{n^{\prime }k}^{t}} +\frac{1}{2}\beta _{2}\left[ y_{k}^{t} \right] ^{2}\\&\qquad +\,\frac{1}{2} \sum _{i=1}^2 \sum _{i^{\prime }=1}^2 {\theta _{ii^{\prime }}b_{ik}^{t}b_{i^{\prime }k}^{t}}+ \sum _{n=1}^3 \sum _{i=1}^2 {\eta _{ni}x_{nk}^{t}b_{ik}^{t}} +\sum _{i=1}^2 {\mu _{i}y_{k}^{t}b_{ik}^{t}}+\sum _{n=1}^3 {\delta _{n}x_{nk}^{t}y_{k}^{t}} +\tau _{t}+\varsigma _{f}, \qquad (2)\end{aligned}$$

where \(k=1,2,\ldots , K\); \(t=1,2,\ldots , T\); \(\tau _t \) are fixed effects per year; and \(\varsigma _f \) are fixed effects per firm.
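To make the quadratic specification concrete, a minimal sketch of evaluating Eq. (2) for one observation; coefficient names and container layout are assumptions, and the fixed effects are omitted:

```python
import numpy as np

def quadratic_ddf(x, y, b, p):
    """Evaluate the quadratic directional output distance function of Eq. (2)
    for one observation. x: array of 3 inputs, y: good output, b: array of
    2 bad outputs, p: dict of estimated coefficients (illustrative names)."""
    return (p["alpha0"]
            + p["alpha"] @ x                 # linear input terms
            + p["beta1"] * y
            + p["theta"] @ b                 # linear bad-output terms
            + 0.5 * x @ p["alpha2"] @ x      # quadratic input terms (3x3)
            + 0.5 * p["beta2"] * y**2
            + 0.5 * b @ p["theta2"] @ b      # quadratic bad-output terms (2x2)
            + x @ p["eta"] @ b               # input x bad-output cross terms (3x2)
            + y * (p["mu"] @ b)              # good x bad-output cross terms
            + y * (p["delta"] @ x))          # input x good-output cross terms
```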
In order to estimate the directional output distance function in (2), we minimize the sum of the deviations of the estimated distance function from the efficient value of zero, subject to the constraints (1)–(9) below. That is, \(\min \sum _{t=1}^T \sum _{k=1}^K \left[ \vec {D}_{o}^{t}\left( x_{k}^{t},y_{k}^{t},b_{k}^{t};g \right) -0 \right] \), subject to:

1. \(\vec {D}_{o}^{t}\left( x_{k}^{t},y_{k}^{t},b_{k}^{t};g \right) \ge 0,\ \forall k,t\)
2. \(\partial \vec {D}_{o}^{t}\left( x_{k}^{t},y_{k}^{t},b_{k}^{t};g \right) /\partial b_{i}=\theta _{i}+\theta _{ii}b_{ik}^{t}+\frac{1}{2}\left( \theta _{ii^{\prime }}+\theta _{i^{\prime }i} \right) b_{i^{\prime }k}^{t}+\sum _{n=1}^3 \eta _{ni}x_{nk}^{t}+\mu _{i}y_{k}^{t}\ge 0,\ \forall i,k,t\)
3. \(\partial \vec {D}_{o}^{t}\left( x_{k}^{t},y_{k}^{t},b_{k}^{t};g \right) /\partial y=\beta _{1}+\beta _{2}y_{k}^{t}+\sum _{i=1}^2 \mu _{i}b_{ik}^{t}+\sum _{n=1}^3 \delta _{n}x_{nk}^{t}\le 0,\ \forall k,t\)
4. \(\partial \vec {D}_{o}^{t}\left( \overline{x},\overline{y},\overline{b};g \right) /\partial x_{n}=\alpha _{n}+\alpha _{nn}x_{nk}^{t}+\frac{1}{2}\sum _{n^{\prime }\ne n} \left( \alpha _{nn^{\prime }}+\alpha _{n^{\prime }n} \right) x_{n^{\prime }k}^{t}+\sum _{i=1}^2 \eta _{ni}b_{ik}^{t}+\delta _{n}y_{k}^{t}\ge 0,\ n=1,2,3\)
5. \(\partial ^{2}\vec {D}_{o}^{t}\left( x_{k}^{t},y_{k}^{t},b_{k}^{t};g \right) /\partial y^{2}=\beta _{2}\le 0\) and \(\partial ^{2}\vec {D}_{o}^{t}\left( x_{k}^{t},y_{k}^{t},b_{k}^{t};g \right) /\partial b_{i}^{2}=\theta _{ii}\le 0,\ \forall i,k,t\)
6. \(\partial ^{2}\vec {D}_{o}^{t}\left( x_{k}^{t},y_{k}^{t},b_{k}^{t};g \right) /\partial b_{i}\partial y=\mu _{i}\le 0,\ \forall i,k,t\)
7. \(\vec {D}_{o}^{t}\left( x_{k}^{t},y_{k}^{t},0;g \right) <0,\ \forall k,t\)
8. \(\beta _{1}g_{y}-\sum _{i=1}^2 \theta _{i}g_{b_{i}}=-1\), \(\beta _{2}g_{y}-\sum _{i=1}^2 \mu _{i}g_{b_{i}}=0\), \(\mu _{i}g_{y}-\sum _{i^{\prime }=1}^2 \theta _{ii^{\prime }}g_{b_{i^{\prime }}}=0\), and \(\delta _{n}g_{y}-\sum _{i=1}^2 \eta _{ni}g_{b_{i}}=0,\ \forall i,n\)
9. \(\alpha _{nn^{\prime }}=\alpha _{n^{\prime }n}\) with \(n,n^{\prime }=1,2,3\) and \(n\ne n^{\prime }\); \(\theta _{ii^{\prime }}=\theta _{i^{\prime }i}\) for \(i,i^{\prime }=1,2\) and \(i\ne i^{\prime }\)

In this parametric specification we have also imposed, through expression (9), cross-output and cross-input symmetry conditions. In Tables 5 and 6 in the "Appendix" we assess the monotonicity conditions, the null-jointness property, and the concavity of the output set frontier.
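Because Eq. (2) is linear in its coefficients, the minimization above is a linear program in the parameters. A minimal sketch under that observation; the matrices and their construction are assumptions, not the authors' Matlab code:

```python
import numpy as np
from scipy.optimize import linprog

# One regressor row z_kt per observation (inputs, outputs, squares, cross
# terms) stacks into Z, so the objective is min_theta sum_kt z_kt @ theta.
# Feasibility D >= 0 per observation enters as -Z theta <= 0; monotonicity
# restrictions are built analogously, and the translation and symmetry
# restrictions are equalities (A_eq, b_eq).
def estimate_ddf(Z, A_mono, b_mono, A_eq, b_eq):
    c = Z.sum(axis=0)                            # sum of distances over k, t
    A_ub = np.vstack([-Z, A_mono])               # D >= 0 plus monotonicity
    b_ub = np.concatenate([np.zeros(Z.shape[0]), b_mono])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * Z.shape[1], method="highs")
    return res.x                                 # estimated coefficients
```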
Shadow Prices

When pollutants are considered as bad outputs, it is usual to interpret the values of the directional output distance function as a measure of combined environmental and technical efficiency. The reason for this is that, given the level of inputs, the function not only allows increases in the good output but also accounts for environmental efficiency, as it enables simultaneous decreases in pollutants (see Färe et al. 2005). The directional output distance function approach allows us, however, not only to account for technical and environmental efficiency, but also to calculate the shadow prices of \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions and the elasticity of substitution between these pollutants. Indeed, since the directional output distance function \(\vec {D}_{o}\left( x,y,b;g \right) \) describes the technology, the revenue function can be written as

$$R(x,p,q)=\max _{y, b_{1}, b_{2}}\left\{ py-q_{1}b_{1}-q_{2}b_{2}{:}\,\vec {D}_{o}\left( x,y,b;g \right) \ge 0 \right\},$$

where \(q=\left( {q_1, q_2 } \right) \) denotes the emissions price vector containing the \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) shadow prices, respectively, and p denotes the output price. Chambers et al. (1998) show that the Lagrange multiplier associated with the maximization of revenues corresponds to \(\lambda =pg_y +q_1 g_{b_1 } +q_2 g_{b_2 } \). Hence, the revenue function can be characterized as

$$R(x,p,q)=\max _{y, b_{1}, b_{2}}\left\{ py-q_{1}b_{1}-q_{2}b_{2}+\left( pg_{y}+q_{1}g_{b_{1}}+q_{2}g_{b_{2}} \right) \vec {D}_{o}\left( x,y,b;g \right) \right\}.$$

The corresponding first-order conditions with respect to y, \(b_1\), and \(b_2\) are

$$\left( pg_y +q_1 g_{b_1 } +q_2 g_{b_2 } \right) D_y = -p, \qquad (3)$$

$$\left( pg_y +q_1 g_{b_1 } +q_2 g_{b_2 } \right) D_{b_1 } = q_1, \qquad (4)$$

$$\left( pg_y +q_1 g_{b_1 } +q_2 g_{b_2 } \right) D_{b_2 } = q_2, \qquad (5)$$

where \(D_y\) and \(D_{b_i}\) are the first-order derivatives of \(\vec {D}_{o}\left( x,y,b;g \right) \) with respect to the good output and pollutant \(b_i \), respectively. The ratio between Eqs. (4) and (5) provides us with the relative shadow price of pollution, which represents the trade-off between the two pollutants, i.e., the shadow marginal rate of transformation:

$$\frac{q_1 }{q_2 }=\frac{D_{b_1 } }{D_{b_2 } }\ge 0. \qquad (6)$$

Moreover, we can obtain the absolute shadow prices of pollution (i.e., the marginal loss in CHP production necessary to reduce \(\hbox {CO}_{2}\) or \(\hbox {NO}_{\mathrm{x}}\) emissions) from the ratios between Eqs. (4) and (3) and between Eqs. (5) and (3):

$$q_i =-p\frac{D_{b_i } }{D_y }. \qquad (7)$$
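Given the gradient of the estimated distance function at an observation, Eqs. (6) and (7) reduce to two lines of arithmetic. A minimal sketch, with argument names of our own choosing:

```python
import numpy as np

def shadow_prices(grad_b, grad_y, p_output):
    """grad_b = (D_b1, D_b2), grad_y = D_y, p_output = output price p.
    Returns the absolute shadow prices (q1, q2) from Eq. (7) and the
    relative shadow price q1/q2 from Eq. (6)."""
    q = -p_output * np.asarray(grad_b) / grad_y   # Eq. (7): q_i = -p D_bi / D_y
    rel = grad_b[0] / grad_b[1]                   # Eq. (6): q1/q2 = D_b1 / D_b2
    return q, rel
```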
Morishima Elasticities of Substitution

The Morishima elasticity of substitution provides us with information on how much the relative shadow prices of outputs change in response to changes in emission intensities (see Blackorby and Russell 1981, 1989). In the context of the technology described by the directional output distance function, the indirect Morishima elasticity of substitution between pollutants \(b_1 \) and \(b_2 \) can be expressed as (see Kumar and Managi 2011)

$$M_{b_1 b_2 } =\frac{\partial \ln \left( {q_1 /q_2 } \right) }{\partial \ln \left( {b_2 /b_1 } \right) }=b_2^{*}\left( {\frac{D_{b_1 b_2 } }{D_{b_1 } }-\frac{D_{b_2 b_2 }}{D_{b_2 } }} \right), \qquad (8)$$

and between pollutant \(b_i \) and the good output as

$$M_{b_i y} =\frac{\partial \ln \left( {q_i /p} \right) }{\partial \ln \left( {y/b_i } \right) }=y^{*}\left( {\frac{D_{b_i y} }{D_{b_i } }-\frac{D_{{yy}} }{D_y }} \right), \qquad (9)$$

where \(D_{b_1 b_2 }\), \(D_{b_2 b_2 } \), \(D_{b_i y} \), and \(D_{{yy}} \) are second-order derivatives of the directional output distance function, \(y^{*}=y+\vec {D}_{o}\left( x,y,b;g \right) g_{y}\), and \(b_{i}^{*}=b_{i}-\vec {D}_{o}\left( x,y,b;g \right) g_{b_{i}}\). If \(M_{b_1 b_2 } >0\), then \(b_1 \) and \(b_2 \) are Morishima substitutes. That is, the pollutants are substitutes if the emission intensity \((b_2 /b_1)\) increases when the relative shadow price \(\left( {q_2 /q_1 } \right) \) decreases; emission reductions in \(b_1 \) are then accompanied by increased emissions of \(b_2\). Conversely, \(b_1 \) and \(b_2\) are complements when \(M_{b_1 b_2}<0\). Note that \(M_{b_i y} \le 0\) since \(\vec {D}_{o}\left( x,y,b;g \right) \) satisfies the monotonicity and concavity properties. Moreover, in terms of the quadratic directional distance function we can write the elasticity \(M_{b_1 b_2 } \) as

$$M_{b_1 b_2 }=b_2^{*}\left( \frac{\theta _{12} }{\theta _1 +\theta _{11} b_{1k}^t +\theta _{12} b_{2k}^t +\sum _{n=1}^3 \eta _{n1} x_{{nk}}^t +\mu _1 y_k^t }-\frac{\theta _{22} }{\theta _2 +\theta _{22} b_{2k}^t +\theta _{21} b_{1k}^t +\sum _{n=1}^3 \eta _{n2} x_{{nk}}^t +\mu _2 y_k^t } \right),$$

where the sign of \(M_{b_1 b_2 } =b_2^{*}\left[ {\frac{?}{+}-\frac{-}{+}} \right] \) depends critically on the sign and magnitude of \(\theta _{12} \). In particular, for \(b_2^{*}>0\), it holds that \(b_1 \) and \(b_2 \) are Morishima substitutes if \(\theta _{12} >0\), or if \(\theta _{12} <0\) and \(\left| {\frac{\theta _{12} }{\theta _1 +\theta _{11} b_{1k}^t +\theta _{12} b_{2k}^t +\sum _{n=1}^3 \eta _{n1} x_{{nk}}^t +\mu _1 y_k^t }} \right| <\left| {\frac{\theta _{22} }{\theta _2 +\theta _{22} b_{2k}^t +\theta _{21} b_{1k}^t +\sum _{n=1}^3 \eta _{n2} x_{{nk}}^t +\mu _2 y_k^t }} \right| \). By analogy, the elasticity \(M_{b_2 b_1 } \) corresponds to

$$M_{b_2 b_1 }=b_1^{*}\left( \frac{\theta _{21} }{\theta _2 +\theta _{22} b_{2k}^t +\theta _{21} b_{1k}^t +\sum _{n=1}^3 \eta _{n2} x_{{nk}}^t +\mu _2 y_k^t }-\frac{\theta _{11} }{\theta _1 +\theta _{11} b_{1k}^t +\theta _{12} b_{2k}^t +\sum _{n=1}^3 \eta _{n1} x_{{nk}}^t +\mu _1 y_k^t } \right).$$

The Morishima elasticities of substitution show that a percentage change in the price ratio \(q_1 /q_2 \) (motivated, for instance, by an increase in \(q_1 )\) has two effects on the quantity ratio: the first term shows the effect on \(b_1\), while the second term shows the effect on \(b_2 \). Therefore, despite the fact that the cross-output symmetry conditions ensure that \(\theta _{12} =\theta _{21} \), the Morishima elasticities are inherently asymmetric (\(M_{b_1 b_2 } \ne M_{b_2 b_1 } )\), since each represents the difference between two elasticities: a cross-price elasticity and an own-price elasticity. In terms of the analysis, the asymmetric substitutability tells us which pollutant is easier to substitute for the other for a fixed amount of output. Note that a lower value of the Morishima elasticity of substitution indicates greater substitution possibilities between pollutants. The intuition is that, to generate the same change in the emission intensity \((b_2 /b_1 )\), a smaller change in the prices \(\left( {q_1 /q_2 } \right) \) is required when the pollutants are close substitutes. Likewise, lower pollution-good output elasticities in absolute value indicate greater substitution possibilities.
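The elasticity in Eq. (8) is a one-line computation once the first- and second-order derivatives of the estimated function are available. A minimal sketch, with hypothetical argument names:

```python
def morishima(b2_star, D_b1, D_b2, D_b1b2, D_b2b2):
    """Morishima elasticity M_{b1 b2} of Eq. (8): the cross effect on b1
    minus the own-price effect on b2. A positive value indicates that the
    two pollutants are Morishima substitutes."""
    return b2_star * (D_b1b2 / D_b1 - D_b2b2 / D_b2)
```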
We estimate the directional output distance function using a deterministic method, i.e., parametric linear programming (PLP), which allows us to impose parametric restrictions that follow from the underlying technology, such as monotonicity in good or bad outputs. We follow Aigner and Chu's (1968) procedure of minimizing the sum of the distances between the frontier technology and the actual observations of the generating units in each period. Hence, the method chooses the parameters that make the generating units as efficient as possible, subject to the set of restrictions associated with the technology properties already described (see Färe et al. 2001, 2005, 2006). Note that the choice of directional vector \(g=\left( {g_y, g_{b_1 }, g_{b_2 } } \right) \) affects the magnitude of the first- and second-order derivatives of the directional distance function through the constraints in (8). As in Färe et al. (2006), our choice of directional vector is \(g=({1,1,1})\), i.e., the component of the good output and the components of the two pollutants are equal to one, making the model parsimonious. In Sect. 5.5 we develop a sensitivity analysis estimating the directional distance function in Eq. (2) for two additional direction sets. We derive estimates of the coefficients for pre- (2001–2004) and post- (2005–2009) EU ETS implementation. We wrote the code to solve the optimization problem in Matlab. In order to avoid convergence problems in the algorithm, all variables are expressed in normalized values, i.e., each output and input is divided by its own sample mean. Year and firm fixed effects (\(\tau _t \) and \(\varsigma _f )\) are estimated using a set of yearly and firm dummy variables. These variables take the value of one if the observation belongs to year t or firm f, respectively, and zero otherwise. In the case of the yearly dummies, the reference year corresponds to the first year of each period of analysis (e.g., 2001 for pre-EU ETS and 2005 for post-EU ETS). Regarding the firm dummies, we arbitrarily chose a firm as the reference firm for both periods. All the dummy variables (except those representing the reference cases) are included in the objective function of the minimization problem to carry out the estimations. Using the estimated coefficients of the directional output distance function, we compute the technical and environmental efficiencies and the Morishima elasticities of substitution between pollutants and between pollutants and output according to Eqs. (8) and (9), respectively. The relative and absolute shadow prices are obtained by applying Eqs. (6) and (7). To identify changes in efficiencies, elasticities, and relative shadow prices before and after the implementation of the EU ETS, we compare the density functions of these measures between periods. To this end, we employ kernel-based methods to statistically test the difference between distributions. Our tests are conducted by computing the nonparametric \(T_{{n}}\)-statistic of Li et al. (2009), which assesses the equality between two density functions. Let f(x) and g(x) denote the density functions of a random variable x. We test the null hypothesis that \(f(x)=g(x)\) against the alternative hypothesis that \(f(x)\ne g(x)\). Following Hayfield and Racine (2008, 2011), we implement this procedure in the software R with 500 bootstrap repetitions and estimate the \(T_{{n}}\)-statistic using a standard normal kernel. The empirical p values of the consistent density equality test are computed after bootstrapping.
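The logic of the bootstrap density-equality test can be sketched in a few lines. The following illustrates the idea behind the Li et al. statistic with a simple integrated-squared-difference version; it is not the R np-package implementation the authors used:

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_equality_pvalue(x, y, n_boot=500, seed=0):
    """Bootstrap test of H0: f = g via the integrated squared difference
    of two kernel density estimates (an illustration of the idea only)."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), 512)

    def stat(a, b):
        return np.trapz((gaussian_kde(a)(grid) - gaussian_kde(b)(grid))**2, grid)

    t_obs = stat(x, y)
    pooled = np.concatenate([x, y])   # resample under the null f = g
    t_boot = [stat(rng.choice(pooled, size=len(x), replace=True),
                   rng.choice(pooled, size=len(y), replace=True))
              for _ in range(n_boot)]
    return np.mean(np.array(t_boot) >= t_obs)   # empirical p value
```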
We focus on CHP plants since approximately 75% of the plants in the heat and power sector belong to this group. Moreover, CHP plants have been promoted within the European Union as an effective means to increase overall energy efficiency (EU Directive 2004/8/EC).Footnote 16 In Sweden approximately 30–50% of the total input energy of a CHP is converted to electricity and the rest to heat (Svensk Fjärrvärme 2011). Though we would have liked to disaggregate production into heat and power, the information available at the boiler level only allows us to analyze the joint production. Hence, our measure of good output is the amount of useful energy (MWh) commercially sold. This is the sum of electrical energy and process heat in those cases where this heat is sold (generally for district heatingFootnote 17) or used in industrial processes. The two undesirable outputs, \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions, are expressed in metric tons, and, as stated above, the inputs consist of installed capacity (MW),Footnote 18 fuel consumption (MWh),Footnote 19 and labor (number of employees).

\(\hbox {NO}_{\mathrm{x}}\) emissions and useful energy are taken from the Swedish Environmental Protection Agency's (SEPA) \(\hbox {NO}_{\mathrm{x}}\) charge database. These two variables are measured and reported to the SEPA directly by the generating units, along with information about energy fuel shares, installed capacity, and the available \(\hbox {NO}_{\mathrm{x}}\) combustion and post-combustion technologies (CTs and PCTs, respectively), which makes this the most detailed longitudinal database at the boiler level of the Swedish heat and power sector. Installed capacity is used as a proxy for capital in physical units. With regard to labor, the data at company level were gathered from Retriever Bolagsinfo. For multi-unit plants, we allocated labor to generating units according to their generating capacity ratio.Footnote 20 Although we have access to \(\hbox {CO}_{2}\) values from the SEPA's EU ETS database, their aggregation at installation level prevents us from recovering the emissions for each boiler. Instead, \(\hbox {CO}_{2}\) emissions are estimated from the boilers' energy fuel shares and emission factors per fuel type; from the fuel shares we can recover the total input energy, i.e., the amount of fuel consumed per boiler. Our dataset comprises a wide range of fuel types, i.e., gas, oil, coal, peat, biofuel, and waste, and we use emission factors for each fuel classificationFootnote 21 (a toy version of this computation is sketched after this passage). However, this method only considers emissions derived from fuel use for combustion and excludes emissions coming from raw materials,Footnote 22 which, unlike in other industries, are not significant in the case of heat and power generation.Footnote 23

We focus on boilers that operate every year. This group of generating units represents the operation of the sector under normal or standard conditions, i.e., we exclude boilers that may only be switched on under certain circumstances (e.g., as backup during episodes of very cold winters). Two boilers for which information on fuel shares is missing are dropped. One boiler that uses a combination of mixed refinery gas and gas converted during the process is also excluded due to the complexity of the fuel and its extremely high emissions.
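The \(\hbox {CO}_{2}\) reconstruction described above reduces to a weighted sum of input energy across fuels. A toy sketch in R, where the emission factors are purely illustrative stand-ins rather than SEPA's official, sub-classified values:

## Illustrative emission factors (tons CO2 per MWh of input energy);
## placeholder numbers, not the SEPA (2009) factors.
emission_factor <- c(gas = 0.20, oil = 0.27, coal = 0.33,
                     peat = 0.38, biofuel = 0.00, waste = 0.09)

## CO2 (tons) for one boiler-year: input energy (MWh) times the energy
## share of each fuel times that fuel's emission factor, summed over fuels.
co2_tons <- function(input_energy_mwh, fuel_shares) {
  stopifnot(isTRUE(all.equal(sum(fuel_shares), 1)))
  sum(input_energy_mwh * fuel_shares * emission_factor[names(fuel_shares)])
}

co2_tons(50000, c(biofuel = 0.85, oil = 0.10, waste = 0.05))  # example boiler

With sub-classified fuels, the same computation simply runs over a finer set of factors.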
Finally, our sample consists of a panel of 82 boilers distributed across 35 firms (738 observations, of which 328 are for the period 2001–2004 and 410 for 2005–2009). Descriptive statistics of the variables are presented in Table 1. As we can see, there is large variation in emission rates among boilers. \(\hbox {CO}_{2}\) emission rates vary from zero to 428.5 tons/GWh, while \(\hbox {NO}_{\mathrm{x}}\) emission rates vary more than 20-fold, from 0.03 to 0.68 tons/GWh. This reflects differences within the sector in fuel mix, fuel usage, boiler size, and availability of \(\hbox {NO}_{\mathrm{x}}\) abatement technologies.

Table 1 Descriptive statistics

In this section, we describe and analyze the results of our estimates of technical efficiencies, elasticities of substitution, and shadow prices.

Technical Efficiency and Technical Progress

The estimated coefficients of the quadratic directional output distance function are shown in Table 2, and Tables 5 and 6 in the "Appendix" verify that the coefficients are consistent with the monotonicity, concavity, and null-jointness constraints.

Table 2 Parameter estimates of the quadratic directional output distance function using the parametric linear programming approach \((g=1,1,1)\)

Stringent environmental regulations have a positive effect not only on environmental quality but possibly also on firms' absolute performance if they induce the development of new technologies and a more efficient use of resources. Compliance costs due to stricter environmental regulations make environmentally friendlier technological development relatively less costly. This should be represented by an outward shift of the production possibility curve or an inward shift in the input coefficient space, meaning that with a given set of resources it is possible to produce more goods and services without worsening environmental quality (Xepapadeas and de Zeeuw 1999). Our results for the Swedish CHP plants are a case in point and indicate the existence of significant technical progress. We compute the frontier for the years 2001 and 2009 (the initial and final years in our sample) using our PLP coefficient estimates. The frontiers are obtained for a generating unit using the mean values of inputs in both periods. The normalized values of inputs of this generating unit are substituted into the directional output distance function, leading to an expression that only depends on the good and bad outputs. We solve this equation for the values of the good output that make the directional output distance function equal to zero (i.e., no technical inefficiency). The frontiers are plotted over a grid of values of both pollutants. As shown in Fig. 2, technological progress drives a significant movement of the frontier towards reduced emissions of both pollutants, though, as expected, the overall reduction is biased towards \(\hbox {CO}_{2}\) emission reduction. The movement of the frontier is consistent with the fact that the amount of heat and power generation per unit of emissions increases for both pollutants between periods, but the efficiency increase is greater in the case of \(\hbox {CO}_{2}\).
We could thus say that the technical change is "carbon saving" in much the same way as technical change has traditionally been characterized as "labor saving."Footnote 24 On average, \(\hbox {CO}_{2}\) emissions per GWh by CHP plants decreased by 15% between the periods (from 81.4 ton/GWh in 2001–2004 to 69.7 ton/GWh in 2005–2009) and by approximately 5% for \(\hbox {NO}_{\mathrm{x}}\) (from 0.245 to 0.232 ton/GWh).

Fig. 2 Frontiers in 2001 and 2009 (computed at mean input values; plotted in original units)

Thus far we have analyzed technical progress at the frontier. We are also interested in the performance of all firms, which is conveniently measured by studying technical efficiency relative to each respective frontier. We find that, out of the 82 boilers in the sample, 45 operate at least once on the frontier during 2001–2004 and 44 at least once during 2005–2009. Thirty-three boilers operate on the frontier in at least one year of each of the two periods. The estimations of technical and environmental efficiency yield mean inefficiency values of 14.86 and 17.68% for the pre- and post-introduction periods of the EU ETS, respectively.Footnote 25 This indicates that during 2001–2004, boilers on average could have expanded heat and power generation by 35.56 GWh (i.e., 239.3 \(\times \) 14.86%) and contracted \(\hbox {CO}_{2}\) emissions by 3730 tons (25,098 \(\times \) 14.86%) and \(\hbox {NO}_{\mathrm{x}}\) emissions by 6.94 tons (46.7 \(\times \) 14.86%) had they adopted the best practice of the frontier generating units. Similarly, in 2005–2009 boilers on average could have increased their production by 42.91 GWh and decreased \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions by 3638 and 7.51 tons, respectively, had they operated efficiently.Footnote 26 Given these results, the reduction in emissions and the increase in heat and power generation that could have been achieved by following the practice of the most efficient generating units are of considerable magnitude. The bootstrapped Li et al. (2009) \(T_{\mathrm{n}}\)-statistic allows us to compare the inefficiency distributions between periods; we conclude that the boilers are statistically equally efficient in the two periods at the 10% significance level.

If we compare inefficiencies by fuel type, we find that the highest technical and environmental efficiencies are reached by boilers using mainly biofuel, while the highest inefficiencies are found for boilers burning mainly fossil fuels. For instance, average inefficiencies for boilers with biofuel shares exceeding 80% are 9.64% for 2001–2004 and 13.60% for 2005–2009, whereas the corresponding numbers for boilers with fossil fuel shares exceeding 80% are 36.37 and 33.42%, respectively. In the case of fossil fuels, the bootstrapped Li et al. (2009) test indicates that the differences between periods are not statistically significant, while in the case of biofuels the test indicates that inefficiencies tend to increase in the period 2005–2009 (p value \(=\) 0.056).
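The density comparisons in this section all rely on the bootstrapped Li et al. (2009) \(T_{{n}}\)-statistic, which the R package np (Hayfield and Racine 2008, 2011) provides as npdeneqtest. A minimal sketch with synthetic stand-ins for the inefficiency scores (the real scores come from the PLP estimates; the sample sizes mirror the 328 and 410 boiler-year observations):

library(np)

set.seed(42)
ineff_pre  <- data.frame(x = rbeta(328, 2, 10))  # synthetic 2001-2004 scores
ineff_post <- data.frame(x = rbeta(410, 2, 9))   # synthetic 2005-2009 scores

## Consistent density equality test with 500 bootstrap repetitions,
## as described in the estimation section.
test <- npdeneqtest(ineff_pre, ineff_post, boot.num = 500)
test  # prints the Tn statistic and its bootstrap p value

A small bootstrap p value would reject equality of the two distributions; for the full-sample inefficiency scores, the comparison above could not reject equality at the 10% level.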
When it comes to the availability of \(\hbox {NO}_{\mathrm{x}}\) abatement technologies, we find that the inefficiencies are larger for boilers that use only PCTs (22.27% in 2001–2004 vs. 27.37% in 2005–2009) than for boilers that use only CTs (11.11% in 2001–2004 vs. 15.21% in 2005–2009). As discussed in Sect. 2, PCTs usually require energy, which might lead to increased \(\hbox {CO}_{2}\) emissions. Moreover, as shown in Table 3, we find a significant correlation between the type of \(\hbox {NO}_{\mathrm{x}}\) technology installed and boiler size. Indeed, if we classify boilers according to size, we observe that the incidence of CTs (whose investment cost is lower) is greater among small boilers, whereas PCTs, which are more expensive, are mostly adopted by large boilers (alone or in combination with CTs).Footnote 27 The incidence of biofuel use is also greater among small boilers. Moreover, these boilers tend to minimize the use of fossil fuels to the largest possible extent by using alternative fuels such as peat and waste, which were exempt from \(\hbox {CO}_{2}\) taxation at certain times during the period 2001–2009. Overall, these findings suggest that smaller boilers substitute pollutants by means of the choice of fuel, while larger boilers substitute by adopting \(\hbox {NO}_{\mathrm{x}}\)-reducing technologies. This reflects the fact that the production process of large boilers is relatively more inflexible and, as discussed in the following section, less sensitive to variations in the relative price of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\).

Table 3 \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions, fuel use, and availability of abatement technology by boiler size

Fig. 3 Kernel distributions of a \(\hbox {CO}_{2}\)-output and b \(\hbox {NO}_{\mathrm{x}}\)-output elasticities (elasticities below −8.5 not shown)

Output-\(\hbox {CO}_{2}\) and Output-\(\hbox {NO}_{\mathrm{x}}\) Elasticities

With regard to the pollution-good output elasticities, Fig. 3a, b illustrate the kernel distributions for the two periods (and report the mean values, standard deviations, and p value of the nonparametric test). As expected, the pollution-good output elasticities are negative, implying that the emission intensity decreases when the relative shadow price of emissions to output increases. The mean \(\hbox {CO}_{2}\)-output elasticity changed from −0.420 to −0.176 between the periods. That is, substitution increased, since lower pollution-good output elasticities in absolute value indicate greater substitution possibilities. Using the Li et al. (2009) test, we reject the null hypothesis of equality between the density functions of the \(\hbox {CO}_{2}\)-output elasticities pre- and post-introduction of the EU ETS. An explanation for the increased substitution is technological development. Indeed, the results in the previous section indicate that several technical measures were implemented in order to reduce \(\hbox {CO}_{2}\) emissions. The trend towards phasing out fossil fuels in Sweden has been quite stable over the sampled period, and most firms in the sector have already switched to "carbon-free" fuels.
For instance, during the period 2005–2009 the fraction of boilers with biofuel shares greater than or equal to 80% was approximately 63%, while the fraction of boilers using mainly fossil fuels was around 11% (see Table 3). As discussed before, there are clear differences between small and large boilers: the elasticities indicate greater substitution among small boilers (e.g., a \(\hbox {CO}_{2}\)-output elasticity of −0.243 and −0.076 for the first and second period, versus −0.634 and −0.309 for large boilers). Consequently, the \(\hbox {CO}_{2}\) emission intensity of small boilers is lower than that of large boilers.

The mean absolute value of the \(\hbox {NO}_{\mathrm{x}}\)-output elasticity also decreased between the pre- and post-EU ETS periods (from −1.823 to −0.676), indicating greater substitution. The test that compares the density functions of the elasticities before and after the introduction of the EU ETS indicates a statistically significant difference between the two distributions at the 1% significance level. This result might be explained by the fact that the fraction of generating units without any \(\hbox {NO}_{\mathrm{x}}\) abatement measure declined between the periods from 21 to 17% and that the simultaneous adoption of more than one \(\hbox {NO}_{\mathrm{x}}\) reduction technology increased from 19 to 27% (see Table 3). The decrease in the absolute value of these elasticities for the period 2005–2009 suggests that the easiest emission reductions have already been undertaken; further reductions of \(\hbox {NO}_{\mathrm{x}}\) per unit of output will therefore require much higher charges on \(\hbox {NO}_{\mathrm{x}}\) emissions. \(\hbox {NO}_{\mathrm{x}}\) emission reductions have certainly been achieved since the \(\hbox {NO}_{\mathrm{x}}\) charge was implemented, yet the slow pace of these reductions relative to \(\hbox {CO}_{2}\) during the last decade also reflects an inclination to favor \(\hbox {CO}_{2}\) over \(\hbox {NO}_{\mathrm{x}}\) emission reductions. When it comes to the \(\hbox {NO}_{\mathrm{x}}\)-output elasticity by size, our results indicate greater substitution among small boilers than among large boilers: for small boilers, the \(\hbox {NO}_{\mathrm{x}}\)-output elasticity is −0.564 and −0.360 for the first and second period, respectively, versus −3.348 and −1.100 for large boilers.

Fig. 4 Kernel distributions of a \(\hbox {CO}_{2}\)–\(\hbox {NO}_{\mathrm{x}}\) elasticities (values above 6 not shown) and b \(\hbox {NO}_{\mathrm{x}}\)–\(\hbox {CO}_{2}\) elasticities (values above 0.6 not shown)

\(\hbox {CO}_{2}\)–\(\hbox {NO}_{\mathrm{x}}\) Substitution

One main purpose of this study is to assess the existence of substitutability between \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\). Our results indicate that \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) are substitutes. As in the case of the pollution-good output elasticities, lower absolute values of the Morishima elasticities of substitution indicate greater substitution possibilities between pollutants. Hence, substitution increased during the period 2005–2009. For instance, the mean estimates of the Morishima \(\hbox {CO}_{2}\)–\(\hbox {NO}_{\mathrm{x}}\) elasticity correspond to 1.706 for 2001–2004 and 0.472 for 2005–2009.
The mean estimates of the corresponding \(\hbox {NO}_{\mathrm{x}}\)–\(\hbox {CO}_{2}\) elasticity are 0.446 for 2001–2004 and 0.004 for 2005–2009. The asymmetry of the elasticities indicates that it is easier to substitute \(\hbox {CO}_{2}\) with \(\hbox {NO}_{\mathrm{x}}\) than \(\hbox {NO}_{\mathrm{x}}\) with \(\hbox {CO}_{2}\). In other words, firms are much more sensitive to variations in \(\hbox {CO}_{2}\) prices and therefore more willing to decrease \(\hbox {CO}_{2}\) emissions at the expense of increasing \(\hbox {NO}_{\mathrm{x}}\) emissions than the other way around. Clearly, this is linked to the fact that \(\hbox {CO}_{2}\) emissions have a much higher opportunity cost than \(\hbox {NO}_{x}\) emissions. As mentioned before, one policy implication that can be derived from this is that, given the high opportunity costs of \(\hbox {CO}_{2}\) emissions, the \(\hbox {NO}_{\mathrm{x}}\) fee would have to be raised well above its current value in order to achieve large \(\hbox {NO}_{\mathrm{x}}\) emission reductions. An alternative implication has to do with the choice of policy instruments, as the effects of climate policy on \(\hbox {NO}_{\mathrm{x}}\) emissions depend crucially on the environmental regulation of \(\hbox {NO}_{\mathrm{x}}\). In particular, \(\hbox {NO}_{\mathrm{x}}\) emissions change in response to climate policy because \(\hbox {NO}_{\mathrm{x}}\) is subject to price regulation. If \(\hbox {NO}_{\mathrm{x}}\) were instead subject to, for instance, an emissions cap-and-trade system, then climate policy would not change \(\hbox {NO}_{\mathrm{x}}\) emissions but would change the price of permits in the \(\hbox {NO}_{\mathrm{x}}\) market.

Kernel distributions of the elasticities between pollutants are depicted in Fig. 4a, b. Using the bootstrapped Li et al. (2009) \(T_{{n}}\)-statistics, we tested the null hypothesis of equal density functions between the pre- and post-implementation periods of the EU ETS. We reject this null hypothesis; the \(\hbox {CO}_{2}\)–\(\hbox {NO}_{\mathrm{x}}\) elasticity of substitution shifts towards the left-hand side of the distribution, indicating greater substitution in 2005–2009 than in 2001–2004. In the case of the \(\hbox {NO}_{\mathrm{x}}\)–\(\hbox {CO}_{2}\) elasticity, we also observe that for many generating units the estimates are concentrated around zero. Furthermore, \(\hbox {CO}_{2}\)–\(\hbox {NO}_{\mathrm{x}}\) substitution has increased because most firms have implemented technical measures that allow this substitution to take place (they have in fact undertaken a complete transition to biofuels). The asymmetric substitution is also observed when classifying by boiler size. Among small boilers, it is easier to substitute \(\hbox {NO}_{\mathrm{x}}\) for \(\hbox {CO}_{2}\) than \(\hbox {CO}_{2}\) for \(\hbox {NO}_{\mathrm{x}}\); e.g., the value of the \(\hbox {CO}_{2}\)–\(\hbox {NO}_{\mathrm{x}}\) elasticity for small boilers is 0.699 and 0.340 for the first and second period, respectively, versus 0.172 and 0.001 for the \(\hbox {NO}_{\mathrm{x}}\)–\(\hbox {CO}_{2}\) elasticity. Similar evidence is found for large boilers: the value of the \(\hbox {CO}_{2}\)–\(\hbox {NO}_{\mathrm{x}}\) elasticity is 2.926 and 0.648 for the first and second period, versus 0.777 and 0.008 for the \(\hbox {NO}_{\mathrm{x}}\)–\(\hbox {CO}_{2}\) elasticity.
Moreover, in line with the results described in the previous section, for both the \(\hbox {CO}_{2}\)–\(\hbox {NO}_{\mathrm{x}}\) and the \(\hbox {NO}_{\mathrm{x}}\)–\(\hbox {CO}_{2}\) elasticity we observe that substitution is greater among small boilers than among large boilers, which might be explained by the more inflexible production process of large boilers.

Finally, we compute the absolute and relative shadow prices. For the absolute shadow prices, we follow Färe et al.'s (2005) approach and assume that the price of heat and power is known (and equal to its shadow price). We proxy the price of heat and power by the annual price of district heating reported by the Swedish Energy Agency. The average price corresponds to 56.6 €/MWh for the period 2001–2004 and 61.7 €/MWh for the period 2005–2009.Footnote 28 Based on our estimates of the derivatives \(D_y \) and \(D_{b_i } \), the shadow prices of \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) for the period 2001–2004 correspond to 5.33 and 80,136 €/ton, respectively. The shadow prices for the period 2005–2009 are significantly higher (which supports the claim that the easiest emission reductions have already been undertaken), at 19.23 and 129,556 €/ton, respectively. Furthermore, there is a very small (though statistically significant) increase in the relative shadow price of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\), from a mean of 0.00018 in 2001–2004 to a mean of 0.00019 in 2005–2009Footnote 29 (Fig. 5). The sum of the \(\hbox {CO}_{2}\) tax (net of deductions) and the EU ETS price corresponded to 52 €/ton in 2001–2004 and 33.4 €/ton in 2005–2009, while the \(\hbox {NO}_{\mathrm{x}}\) charge corresponded to 3766.5 and 4143.1 €/ton, respectively. Hence, in terms of €/ton, the regulatory relative price of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) decreased from a mean of 0.014 to a mean of 0.008. Overall, these estimates suggest that the relative shadow price of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) emissions is lower than the regulatory relative price. The comparison of absolute prices indicates that, relative to the prices set by the regulations, \(\hbox {CO}_{2}\) has been cheaper to reduce, while the reverse holds for \(\hbox {NO}_{\mathrm{x}}\). This is consistent with our previous claim that in order to achieve further \(\hbox {NO}_{\mathrm{x}}\) reductions, the \(\hbox {NO}_{\mathrm{x}}\) fee has to be increased substantially.

The Choice of the Directional Vector

One important advantage of the directional distance function is the possibility to analyze efficiency by increasing good outputs while simultaneously decreasing bad outputs. However, the wide range of possible directions allows for a great deal of subjectivity regarding the importance assigned to the production of the good output and the abatement of the bad outputs. So far we have assumed that pollution reduction is an equally important target as the increase of good output, and that \(\hbox {CO}_{2}\) abatement is as important as \(\hbox {NO}_{\mathrm{x}}\) abatement. Nevertheless, as discussed by Agee et al. (2014), increasing output might be regarded as more important than reducing emissions. Furthermore, in line with the discussion in Sect. 2, since the transition to a carbon-free economy is a key environmental goal of the Swedish government, it is reasonable to assume that \(\hbox {CO}_{2}\) abatement is regarded as more important than \(\hbox {NO}_{\mathrm{x}}\) abatement.
Hence, in this section we develop a sensitivity analysis, estimating the elasticities of substitution and the efficiencies for two additional direction sets, namely (1.2, 1, 1) and (1, 1, 0.8).Footnote 30 The first vector represents the case where the increase in good output is 20% more important than a decrease in pollution; the second represents the case where reducing \(\hbox {CO}_{2}\) emissions is as important as increasing output and 25% more important than a decrease in \(\hbox {NO}_{\mathrm{x}}\) emissions.

Fig. 5 Kernel distribution of relative shadow prices of \(\hbox {CO}_{2} /\hbox {NO}_{\mathrm{x}}\) (estimates above 0.0032 not shown)

Table 4 Sensitivity to the directional vector

Table 4 shows that our results are generally robust to the choice of the direction vector. For instance, \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) are substitutes regardless of the directional vector, and \(\hbox {NO}_{\mathrm{x}}\)–\(\hbox {CO}_{2}\) substitution increases post-EU ETS implementation. Moreover, when it comes to inefficiency, the differences between periods remain statistically insignificant, though (interestingly) the level of the inefficiencies tends to be lower when there is a greater focus on good output instead of pollution, i.e., under the directional vector (1.2, 1, 1) rather than vectors (1, 1, 1) and (1, 1, 0.8). The magnitude of the absolute shadow prices of \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) for the period 2005–2009 tends to be lower under vectors (1.2, 1, 1) and (1, 1, 0.8) than under vector (1, 1, 1). Regarding relative prices, under the directional vector (1, 1, 0.8) the relative shadow price of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) is statistically lower for the period 2005–2009 than for the period 2001–2004. Note that for this vector, \(\hbox {NO}_{\mathrm{x}}\) abatement is less important than \(\hbox {CO}_{2}\) abatement; the reduction in the relative shadow price is consistent with the reduction in the relative price of the regulation.

As a robustness check, we also estimated the shadow prices and elasticities of substitution for the same set of directional vectors using stochastic frontier analysis (SFA) and corrected ordinary least squares (COLS). In line with the PLP results, SFA and COLS showed that \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) are substitutes. The results are not qualitatively sensitive to the estimation technique; however, for observations satisfying the technology properties, SFA and COLS yielded higher elasticities and shadow prices (in absolute values) than the PLP method (Du et al. 2015a found similar results for SFA). A disadvantage of the SFA and COLS methods is that around 74–79% of the observations simultaneously violated monotonicity in good and undesirable outputs. In contrast, our PLP results fully satisfy the properties of the underlying technology throughout the sample. Finally, for comparison purposes we also estimated inefficiencies using DEA. The inefficiencies were much lower in magnitude than those obtained with PLP, indicating that the output set of the quadratic specification contains the piecewise linear DEA output possibility set (see Färe et al. 2005).Footnote 31

Synergies and Trade-Offs Between Reductions of \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) Emissions

Has the overlapping of climate policies brought about increased \(\hbox {NO}_{\mathrm{x}}\) emissions?
Since the pollutants are substitutes, we do find a tendency in this direction. However, \(\hbox {NO}_{\mathrm{x}}\) emissions have not actually increased in practice, as technological development has led to reduced emissions of both pollutants. Furthermore, contrary to what one may have expected, the regulatory relative price of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) emissions has decreased, since harmonization with the EU actually implied a sizeable fall in the price of carbon emissions for most CHP plants, in addition to the simultaneous increase in the local \(\hbox {NO}_{\mathrm{x}}\) charge in 2008. That is to say, in relative terms \(\hbox {CO}_{2}\) policies in Sweden have become less stringent after the introduction of the EU ETS due to the variations in the levels of the local policies. A natural question that arises is what would have happened if the carbon tax had not been phased out. In such a case, the regulatory relative price (\(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\)) per ton during the period 2005–2009 would have been 2.625 times higher (i.e., 0.029 instead of 0.008). Based on our estimates of the \(\hbox {CO}_{2}\)–\(\hbox {NO}_{\mathrm{x}}\) elasticity for the period 2001–2004, we calculate that the (\(\hbox {NO}_{\mathrm{x}}/\hbox {CO}_{2}\)) emissions intensity would have increased from 0.186% in 2001–2004 to 0.520% in 2005–2009. In reality, however, it increased much less than predicted (to 0.207%) due to the combined effect of technological progress and the sharp decrease in the regulatory relative price of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\).

As mentioned in the introduction, some previous studies have found ancillary benefits from reductions in greenhouse gases in the U.S. For instance, Burtraw et al. (2003) simulate the effects of moderate carbon taxes for electricity production on reductions of \(\hbox {NO}_{\mathrm{x}}\) emissions beyond the requirements of the 1990 Clean Air Act Amendments. Their results indicate that health-related ancillary benefits appear significant relative to the costs of those reductions and should play an important role in the debate regarding near-term policies to address the threat of climate change. Similar evidence is provided by Agee et al. (2014), who report that the marginal effect of reducing \(\hbox {NO}_{\mathrm{x}}\) on \(\hbox {CO}_{2}\) emissions is positive, roughly of equal absolute magnitude, and explained by the adoption of more efficient production and pollution control technologies. How do we reconcile their findings with ours? We believe that the differences are explained by several factors, including significant differences in emission intensity as well as in the type and stringency of the regulations in place. For instance, information provided by Mekaroonreung and Johnson (2012) for a sample of 336 U.S. bituminous coal-burning electricity plants over the period 2000–2008 indicates average \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emission intensities of 922.56 and 1.790 tons/GWh, respectively. By contrast, the average \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emission intensities in our sample were 74.87 and 0.238 tons/GWh. As discussed in Sect. 2, the low emission intensity of Swedish CHP plants is the result of very ambitious national policies with extremely high stringency levels compared with those in the U.S.
Stringent policies have yielded significant reductions in emissions, putting Swedish plants in a situation where there is no slack or low-hanging fruit that would increase general efficiency and thus cut emissions of both pollutants. Instead, as indicated by our results, if further \(\hbox {CO}_{2}\) emission reductions are to be achieved, we would expect hard trade-offs in the form of increases in \(\hbox {NO}_{\mathrm{x}}\) emission intensities. Finally, there is a difference when it comes to the type of \(\hbox {NO}_{\mathrm{x}}\) regulations in place. In the U.S., \(\hbox {NO}_{\mathrm{x}}\) regulations generally take the form of tradable permits, while Sweden has chosen to control \(\hbox {NO}_{\mathrm{x}}\) emissions through a refundable fee. As shown by Ambec and Coria (2013), under price regulation firms modify their abatement levels of both pollutants in response to increased stringency of climate policy, whereas if \(\hbox {NO}_{\mathrm{x}}\) were regulated through quantity regulations we would only observe an increase in the marginal cost of reducing \(\hbox {NO}_{\mathrm{x}}\), without any effect on the level of emissions.

The implementation of environmental policies to reduce greenhouse gas (GHG) emissions not only has a global impact, but can also bring local co-benefits (or costs) by reducing (increasing) other air pollutants due to complementarity or substitution. These interactions have clear implications for policy design, as many European countries are committed to reaching the Kyoto obligations and there are currently multiple policies in place aiming to reduce \(\hbox {CO}_{2}\) emissions. The question is what ancillary benefits (costs) we can expect from pursuing GHG reduction policies and local air pollution policies simultaneously. We explore this question formally by analyzing the patterns of substitution between \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) in the heat and power sector in Sweden induced by the interaction of national and international environmental policies. We model the pollution technology of generating units in the Swedish heat and power sector as a non-separable production process where \(\hbox {CO}_{2}\), \(\hbox {NO}_{\mathrm{x}}\), and production are treated as joint outputs. We use a directional output distance function that accounts for the simultaneous expansion of good outputs and contraction of bad outputs, which is a fair representation of the problem that many regulated firms face. We choose a quadratic representation of the technology and subsequently derive estimates of the elasticities between \(\hbox {CO}_{2}\), \(\hbox {NO}_{\mathrm{x}}\), and heat and power generation. Through this method, we characterize changes in the relative performance of Swedish combined heat and power (CHP) plants with respect to \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emissions in response to variations in the stringency of \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) policies and multiple layers of regulation. To evaluate the changes in elasticities and relative shadow prices, we compare the probability distributions between the periods 2001–2004 (pre-introduction of the EU ETS) and 2005–2009 (post-introduction) by means of a kernel consistent density test. Our results indicate that there are important interactions between the abatement efforts for \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\).
Indeed, we find that in the combined heat and power generation sector, \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) are substitutes, implying that regulatory efforts that limit emissions of one pollutant might have the unintended consequence of increasing emissions of the other. Overall, the degree of substitution between these two pollutants for CHP plants increases after the introduction of the EU ETS, in response to technological development and regulatory changes that led to a reduced relative price of \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\). Our results also indicate that \(\hbox {CO}_{2}\) is more price-sensitive than \(\hbox {NO}_{\mathrm{x}}\), which means that if the regulator is to encourage a large reduction in \(\hbox {NO}_{\mathrm{x}}\) emissions, the charge must be increased to a much higher level than the present one. This is also confirmed by our estimates of the shadow price of \(\hbox {NO}_{\mathrm{x}}\), which is much higher than the current value of the \(\hbox {NO}_{\mathrm{x}}\) fee. It is likewise consistent with the fact that technological development has been biased towards \(\hbox {CO}_{2}\) emission reductions, implying that it has become easier to reduce \(\hbox {CO}_{2}\) emissions than \(\hbox {NO}_{\mathrm{x}}\) emissions per unit of output. Finally, our results are robust to the choice of directional vector.

The fact that generating units respond to variations in the relative prices of emissions by changing the intensity of their abatement efforts suggests that there is a need for policy coordination to avoid unintended effects of one policy instrument on the emissions of other pollutants. This is particularly important if climate and local air pollutants are substitutes in abatement, since the unintended costs of climate policy in the form of local pollution can make it less acceptable to the public and policymakers. This is an area where much research is needed, since we are experiencing a polycentric approach to climate change, with mitigation and adaptation activities undertaken by multiple policy actors at a range of different levels. A caveat of our analysis is that our dataset only includes generating units of the heat and power sector, and industrial sectors might differ significantly in terms of the relative burden of local and global air pollution regulations. Thus, even though the analysis of variations in technical complementarity/substitutability across industries is beyond the scope of this study, we suggest it as a relevant area for further research.

Notes

PCTs consist of flue gas treatments designed to clean up \(\hbox {NO}_{\mathrm{x}}\) once it has been formed, usually through conversion to benign chemical species. Examples include (1) SCR (selective catalytic reduction), which is rather costly to install but achieves highly efficient reduction levels, and (2) SNCR (selective non-catalytic reduction, based on the injection of chemicals such as ammonia, urea, or sodium bicarbonate), which is less costly than SCR in both capital and operating costs but also less effective.

The adoption of \(\hbox {CO}_{2}\) and \(\hbox {NO}_{\mathrm{x}}\) emission control technologies does not necessarily imply a change in the relative technical efficiency of the generating units, because the efficiency scores are computed relative to the performance of a best-practice frontier; but the fact that multiple regulations are in play motivates us to also explore this as an empirical question in Sect. 5.1.

This is by all accounts a very high carbon tax.
To put it in context, carbon dioxide permits on U.S. markets such as the Regional Greenhouse Gas Initiative (RGGI) and Chicago trade at around 4 €/ton; the EU ETS has varied around a mean of 15–20 €/ton; and France has tried to introduce a carbon tax of 17 €/ton, but failed because of fears that such a level would be detrimental to the economy.

"Useful energy" is the output variable of the \(\hbox {NO}_{\mathrm{x}}\) charge system that measures the total energy production from the heterogeneous group of regulated industries (heat and power, waste, wood, pulp and paper, chemical, metal, and food industry). For CHP plants and heat-only plants, "useful energy" is the energy sold in the form of heat or electricity. For other industries, "useful energy" is the energy in the form of steam, hot water, or electricity produced in the boiler and used in production processes or heating of factory buildings (see Sterner and Höglund 2006 for details).

With the exception of 0.7% that is kept for administration costs.

The refund varies from year to year, but in recent years it has been around 0.9 €/megawatt hour (MWh) of useful energy, while the average \(\hbox {NO}_{\mathrm{x}}\) emissions per unit of energy have been 0.23 kg of \(\hbox {NO}_{\mathrm{x}}\) per MWh. This corresponds to 4300 €/ton, which can be compared with the Taxe Parafiscale in France of 40.85 €/ton and the Norwegian fee of 525 €/ton.

As described by Färe et al. (2016), in a given period the opportunity costs of abatement depend on the choice of inputs and the technology available at that point in time. Figure 1 plots the relative cost of \(\hbox {CO}_{2}\)/\(\hbox {NO}_{\mathrm{x}}\) emissions per MWh. We calculated the relative cost per boiler per year (as each boiler uses a different combination of inputs and technology, which determines the opportunity cost) and then averaged across boilers.

Specifically, CHP plants about to be regulated by the EU ETS were only required to pay 21% of the \(\hbox {CO}_{2}\) tax.

We acknowledge that, for measuring the effect of variations in the \(\hbox {CO}_{2}/\hbox {NO}_{\mathrm{x}}\) price ratio, the periods 2001–2003 and 2004–2009 could also be of interest. Nevertheless, the objective of this paper is to analyze the effect of multi-governance and overlapping policies, and we therefore focus on the comparison of the periods 2001–2004 and 2005–2009.

An advantage of a nonparametric method such as Data Envelopment Analysis (DEA) is that it does not restrict the directional distance function to any particular functional form (Førsund et al. 2007; Hjalmarsson et al. 1996). However, as discussed by Førsund and Hjalmarsson (1983), a piecewise linear frontier is not differentiable and is therefore not suited for computing elasticities of substitution between pollutants. We chose the generalized quadratic function as it also has additional suitable properties from the econometric and economic points of view. Besides including both first- and second-order terms (which allows twice differentiability), this specification is linear in the parameters, and it is the only specification that yields a second-order Taylor series approximation to arbitrary functions consistent with the translation property of the directional distance function (Chambers et al. 2013; Färe et al. 2010; Hailu and Chambers 2012).
For a detailed explanation of the shadow price approach in the context of the output distance function, see Färe et al. (1993). For applications in the case of directional output distance functions, see Färe et al. (2001, 2005).

\({ sign}\,\,M_{b_i y} =y^{*}\left[ {\frac{-}{+}-\frac{-}{-}} \right] \le 0\) for \(y^{*}>0\).

Note that the choice of a unit vector g is equivalent to setting the direction equal to the sample average of inputs, good, and bad outputs when the data are normalized by the sample mean.

Given that the regressions are estimated with an intercept, our model included 34 firm dummies. The purpose of including those variables is to control for unobserved factors at the firm level that are constant over time (either 2001–2004 or 2005–2009) and may affect technical and environmental efficiency.

The \(T_{{n}}\)-statistic of Li et al. (2009) can be used more broadly to test the equality of distributions with mixed and continuous data; the test of equality of two density functions is a particular case. Unlike the T-statistic of Li (1996, 1999), the T-statistic of Li et al. (2009) is not sensitive to the ordering of the data.

A high-efficiency CHP can use 10% less fuel than that required to produce the same quantities of heat and electricity separately (Swedish Energy Agency 2009).

District heating is an energy network that supplies and distributes heat to homes and other facilities. Heat is produced by a group of boilers in a central plant burning a range of different fuels and is then distributed through pipelines to customers.

Although our installed-capacity variable does not account directly for the capital invested to abate \(\hbox {NO}_{\mathrm{x}}\) emissions, it partially and indirectly captures the likelihood of carrying out abatement investments, since PCTs and CTs are strongly dependent on boiler size.

A vast literature reports the heat content of fuel in Btu; however, for simplicity we have expressed fuel input in MWh, since the \(\hbox {NO}_{\mathrm{x}}\) charge system uses MWh as its fundamental unit of data analysis.

Assuming that labor is proportional to generating capacity is common in the literature (see, e.g., Färe et al. 2005; Agee et al. 2014). The motivation for including labor as an input in the production process is that the implementation of \(\hbox {NO}_{\mathrm{x}}\) abatement measures requires qualified human capital. Moreover, as pointed out by Sterner and Turnheim (2009), direct real-time measurement of \(\hbox {NO}_{\mathrm{x}}\) emissions is very important, as simple rules of thumb are not useful, and not even the engineers themselves know the emission levels unless they are continuously monitored.

Each type of fuel has sub-classifications. For instance, gas may include natural gas, LPG, and biogas; oil may include no. 1 and no. 5 fuel oil as well as bio-oil; and biofuel may include several kinds of residues from the forest and other types of biomass. Specific emission factors for every sub-classification have been considered in the estimations (see emission factors in SEPA 2009).

Raw materials in this context are primary substances or goods, other than fuels, used as feedstocks in the production process to be transformed into industrial products, e.g., chemicals or metals. We acknowledge that the carbon content in those substances may not be negligible.
For comparison purposes, we also estimated \(\hbox {CO}_{2}\) emissions using the total output per boiler, adjusting it by boiler efficiency to obtain the input energy and distributing it across fuels by means of the energy fuel shares. Another check involved comparing the sum of the estimated emissions per installation with the corresponding aggregated emissions in the SEPA's EU ETS database for installations where such aggregation was possible. In both cases, our estimations were of a similar order of magnitude.

By "carbon saving" we mean "reducing net system emissions of \(\hbox {CO}_{2}\)". We assume that the biomass is sustainably grown and leads to no (or low) \(\hbox {CO}_{2}\) emissions. Thus, reducing carbon emissions is here largely synonymous with "fossil-fuel saving" or "biomass using".

The firm fixed-effect coefficients were in general different from zero; i.e., on average, the CHP units belonging to specific firms tend to be much closer to, or farther from, the best-practice frontier than the CHP units of the reference firm. We also analyzed the sensitivity of our estimations to the heterogeneity in the number of plants per firm. In our dataset we have firms with one, two, or seven plants. We estimated Eq. (2) adding two dummy variables: a dummy that takes the value of one for firms with seven plants (and zero otherwise), and a dummy that takes the value of one for firms with one plant (and zero otherwise); the group of firms with two plants is the reference case. We then compared the density functions of the inefficiency scores of the models with and without the dummy variables for the number of plants per firm. For both periods (2001–2004 and 2005–2009) we were not able to reject, at the 10% significance level, the null hypothesis that the inefficiency-score density functions are equal. This suggests that controlling for the number of plants per firm does not have a statistically significant impact on the results.

Size is proxied by installed capacity. Generating units with installed capacity below the median are classified as small boilers, and boilers with installed capacity above the median are classified as large boilers.

These prices ranged from 52.6 to 61.0 €/MWh during the period 2001–2004 and from 61.0 to 62.2 €/MWh in 2005–2009. They are converted to 2009 constant euros using the Swedish Harmonised Index of Consumer Prices (HICP) and an exchange rate of 10.62 SEK/€.

Note that the mean relative shadow price differs from the quotient between the means of the absolute shadow prices. The first measure corresponds to \(\overline{\left( \frac{q_1 }{q_2 }\right) }=\frac{1}{KT}{\sum }_{t=1}^T {\sum }_{k=1}^K \left( {\frac{D_{b_1, k}^t }{D_{b_2, k}^t }} \right) \), while the second is given by \(\frac{\bar{q}_1}{\bar{q}_2}=\frac{{\sum }_{t=1}^T {\sum }_{k=1}^K p_t \left( {D_{b_1, k}^t /D_{y,k}^t } \right) }{{\sum }_{t=1}^T {\sum }_{k=1}^K p_t \left( {D_{b_2, k}^t /D_{y,k}^t } \right) }\).

Some studies have applied optimization methods to endogenously determine optimal directions; see, e.g., Peyrache and Daraio (2012) and Färe et al. (2013). However, this is outside the scope of this study.

SFA, COLS, and DEA results can be obtained from the authors on request. Regarding the firm fixed-effect coefficients, in the SFA and COLS models we tested the null hypothesis that those coefficients are jointly equal to zero.
We found that this hypothesis is rejected at the 1% significance level, indicating that there are differences in technical efficiency across firms.

References

Agee MD, Atkinson SE, Crocker TD, Williams JW (2014) Non-separable pollution control: implications for a \(\text{ CO }_{2}\) emissions cap and trade system. Resour Energy Econ 36:64–82
Aigner DJ, Chu SJ (1968) On estimating the industry production function. Am Econ Rev 58:826–839
Ambec S, Coria J (2013) Prices vs. quantities with multiple pollutants. J Environ Econ Manag 66:123–140
Blackorby C, Russell RR (1981) The Morishima elasticity of substitution; symmetry, constancy, separability, and its relationship to the Hicks and Allen elasticities. Rev Econ Stud 48:147–158
Blackorby C, Russell RR (1989) Will the real elasticity of substitution please stand up? (A comparison of the Allen/Uzawa and Morishima elasticities). Am Econ Rev 79:882–888
Bonilla J, Coria J, Mohlin K, Sterner T (2015) Refunded emission payments and diffusion of NOx abatement technologies in Sweden. Ecol Econ 116:132–145
Brännlund R, Kriström B (2001) Too hot to handle? Benefits and costs of stimulating the use of biofuels in the Swedish heating sector. Resour Energy Econ 23:343–358
Burtraw D, Krupnick A, Palmer K, Paul A, Toman M, Bloyd C (2003) Ancillary benefits of reduced air pollution in the U.S. from moderate greenhouse gas mitigation policies in the electricity sector. J Environ Econ Manag 45:650–673
Caillaud B, Jullien B, Picard P (1996) Hierarchical organization and incentives. Eur Econ Rev 40:687–695
Chambers RG, Chung Y, Färe R (1998) Profit, directional distance functions, and Nerlovian efficiency. J Optim Theory Appl 98:351–364
Chambers R, Färe R, Grosskopf S, Vardanyan M (2013) Generalized quadratic revenue functions. J Econ 173(1):11–21
Du L, Hanley A, Wei C (2015a) Marginal abatement costs of carbon dioxide emissions in China: a parametric analysis. Environ Resour Econ 61:191–216
Du L, Hanley A, Wei C (2015b) Estimating the marginal abatement cost curve of \(\text{ CO }_{2}\) emissions in China: provincial panel data analysis. Energy Econ 48:217–229
Färe R, Grosskopf S, Lovell CAK, Yaisawarng S (1993) Derivation of shadow prices for undesirable outputs: a distance function approach. Rev Econ Stat 75:374–380
Färe R, Grosskopf S, Noh DW, Weber W (2001) Shadow prices of Missouri public conservation land. Public Finance Rev 29:444–460
Färe R, Grosskopf S, Noh DW, Weber W (2005) Characteristics of a polluting technology: theory and practice. J Econ 126:469–492
Färe R, Grosskopf S, Weber W (2006) Shadow prices and pollution costs in U.S. agriculture. Ecol Econ 56:89–103
Färe R, Martins-Filho C, Vardanyan M (2010) On functional form representation of multi-output production technologies. J Prod Anal 33:81–96
Färe R, Grosskopf S, Pasurka CA, Weber W (2012) Substitutability among undesirable outputs. Appl Econ 44:39–47
Färe R, Grosskopf S, Whittaker G (2013) Directional output distance functions: endogenous directions based on exogenous normalization constraints. J Prod Anal 40:267–269
Färe R, Grosskopf S, Pasurka C (2016) Technical change and pollution abatement costs. Eur J Oper Res 248:715–724
Førsund F, Hjalmarsson L (1983) Technical progress and structural change in the Swedish cement industry 1955–1979. Econometrica 51(5):1449–1467
Førsund F, Hjalmarsson L, Krivonozhko VE, Utkin OB (2007) Calculation of scale elasticities in DEA models: direct and indirect approaches. J Prod Anal 28:45–56
Hailu A, Chambers R (2012) A Luenberger soil-quality indicator. J Prod Anal 38(2):145–154
Hayfield T, Racine JS (2008) Nonparametric econometrics: the np package. J Stat Softw 27:1–32. www.jstatsoft.org/v27/i05/ (accessed 27 Nov 2011)
Hayfield T, Racine JS (2011) Nonparametric kernel smoothing methods for mixed data types: package 'np'. http://cran.r-project.org/web/packages/np/ (accessed 27 Nov 2011)
Hjalmarsson L, Kumbhakar SC, Heshmati A (1996) DEA, DFA and SFA: a comparison. J Prod Anal 7:303–327
Höglund L (2005) Abatement costs in response to the Swedish charge on nitrogen oxide emissions. J Environ Econ Manag 50:102–120
Kumar S, Managi S (2011) Non-separability and substitutability among water pollutants: evidence from India. Environ Dev Econ 16:709–733
Li Q (1996) Nonparametric testing of closeness between two unknown distribution functions. Econ Rev 15:261–274
Li Q (1999) Nonparametric testing the similarity of two unknown density functions: local power and bootstrap analysis. Nonparametric Stat 11:189–213
Li Q, Maasoumi E, Racine JS (2009) A nonparametric test for equality of distributions with mixed categorical and continuous data. J Econ 148:186–200
Linn J (2008) Technological modifications in the nitrogen oxides tradable permit program. Energy J 29:153–176
Marklund PO, Samakovlis E (2007) What is driving the EU burden-sharing agreement: efficiency or equity? J Environ Manag 85:317–329
Mekaroonreung M, Johnson AL (2012) Estimating the shadow prices of SO2 and \(\text{ NO }_{{\rm x}}\) for U.S. coal power plants: a convex nonparametric least squares approach. Energy Econ 34:723–732
Murty MN, Kumar S, Dhavala KK (2007) Measuring environmental efficiency of industry: a case study of thermal power generation in India. Environ Resour Econ 38:31–50
Peyrache A, Daraio C (2012) Empirical tools to assess the sensitivity of directional distance functions to direction selection. Appl Econ 44:933–943
SEPA (2003) Kväveoxidavgiften ett effektivt styrmedel: Utvärdering av \(\text{ NO }_{{\rm x}}\)-avgiften. Rapport 5335. http://www.naturvardsverket.se/Documents/publikationer/620-5335-3.pdf
SEPA (2007) Economic instruments in environmental policy. Report 5678. http://www.energimyndigheten.se/Global/Engelska/News/620-5678-6_webb.pdf
SEPA (2009) Naturvårdsverkets föreskrifter och allmänna råd om utsläppsrätter för koldioxid. NFS 2007:5, konsoliderad med NFS 2009:6. Stockholm
Sterner T, Höglund L (2006) Refunded emission payments theory, distribution of costs, and Swedish experience of \(\text{ NO }_{{\rm x}}\) abatement. Ecol Econ 57:93–106
Sterner T, Turnheim B (2009) Innovation and diffusion of environmental technology: industrial \(\text{ NO }_{{\rm x}}\) abatement in Sweden under refunded emission payments. Ecol Econ 68:2996–3006
Svensk Fjärrvärme (2011) Combined heat and power. www.svenskfjarrvarme.se (accessed 5 Oct 2011)
Swedish Energy Agency (2009) Energy in Sweden 2009. Statens energimyndighet, Stockholm
Wei C, Löschel A, Liu B (2013) An empirical analysis of the \(\text{ CO }_{2}\) shadow price in Chinese thermal power enterprises. Energy Econ 40:22–31
Xepapadeas A, de Zeeuw A (1999) Environmental policy and competitiveness: the Porter hypothesis and the composition of capital. J Environ Econ Manag 37:165–182

Acknowledgements

Jorge Bonilla gratefully acknowledges financial support from the Swedish International Development Cooperation Agency (SIDA) through the Environmental Economics Unit of the University of Gothenburg, and from the Universidad de los Andes, Colombia.
Jessica Coria and Thomas Sterner gratefully acknowledge financial support from Göteborg Energi and the Swedish Energy Agency (Energimyndigheten). This paper is dedicated to the memory of our dear colleague Lennart Hjalmarsson, who gave advice on an early version and unfortunately passed away over the course of this project. We thank Finn Førsund and Rolf Färe for useful comments and suggestions.

Author information

Jorge Bonilla, Department of Economics, Universidad de los Andes, Bogotá, Colombia
Jessica Coria and Thomas Sterner, Department of Economics, University of Gothenburg, P.O. Box 640, 405 30 Gothenburg, Sweden
Correspondence to Jessica Coria.

Appendix

See Tables 5 and 6.
Table 5 Assessment of monotonicity conditions and null-jointness property of the directional output distance function using the parametric linear programming approach
Table 6 Assessment of concavity property of the directional output distance function using the parametric linear programming approach

Cite this article: Bonilla, J., Coria, J. & Sterner, T. Technical Synergies and Trade-Offs Between Abatement of Global and Local Air Pollution. Environ Resource Econ 70, 191–221 (2018). https://doi.org/10.1007/s10640-017-0117-8
Issue Date: May 2018
Keywords: Shadow pricing; Directional distance function; Local pollution; Policy interactions
IACR News Updates on the COVID-19 situation are on the Announcement channel. Here you can see all recent updates to the IACR webpage. These updates are also available: via Weibo Filter news by All news Announcement Election Award Crypto Eurocrypt Asiacrypt CHES FSE PKC TCC Real World Crypto Journal of Cryptology ePrint report Job posting Event calendar Schools Threshold Signatures in the Multiverse Leemon Baird, Sanjam Garg, Abhishek Jain, Pratyay Mukherjee, Rohit Sinha, Mingyuan Wang, Yinuo Zhang ePrint Report We introduce a new notion of {\em multiverse threshold signatures} (MTS). In an MTS scheme, multiple universes -- each defined by a set of (possibly overlapping) signers, their weights, and a specific security threshold -- can co-exist. A universe can be (adaptively) created via a non-interactive asynchronous setup. Crucially, each party in the multiverse holds constant-sized keys and releases compact signatures with size and computation time both independent of the number of universes. Given sufficient partial signatures over a message from the members of a specific universe, an aggregator can produce a short aggregate signature relative to that universe. We construct an MTS scheme building on BLS signatures. Our scheme is practical, and can be used to reduce bandwidth complexity and computational costs in decentralized oracle networks. As an example data point, consider a multiverse containing 2000 nodes and 100 universes (parameters inspired by Chainlink's use in the wild) each of which contains arbitrarily large subsets of nodes and arbitrary thresholds. Each node computes and outputs 1 group element as its partial signature; the aggregator performs under 0.7 seconds of work for each aggregate signature, and the final signature of size 192 bytes takes 6.4 ms (or 198K EVM gas units) to verify. For this setting, prior approaches when used to construct MTS, yield schemes that have one of the following drawbacks: (i) partial signatures that are 97$\times$ larger, (ii) have aggregation times 311$\times$ worse, or (iii) have signature size 39$\times$ and verification gas costs 3.38$\times$ larger. We also provide an open-source implementation and a detailed evaluation. Post-Quantum Secure Deterministic Wallet: Stateless, Hot/Cold Setting, and More Secure Mingxing Hu Since the invention of Bitcoin, cryptocurrencies have gained huge popularity. Crypto wallet, as the tool to store and manage the cryptographic keys, is the primary entrance for the public to access cryptocurrency funds. Deterministic wallet is an advanced wallet mecha- nism that has been proposed to achieve some appealing virtues, such as low-maintenance, easy backup and recovery, supporting functionalities required by cryptocurrencies, and so on. However, the existing deter- ministic wallet schemes especially in the quantum world still have a long way to be practical. The first barrier is how to build a deterministic wallet scheme without relying on the state, i.e., stateless. The stateful deterministic wallet scheme must internally maintain and keep refreshing synchronously a parameter named state which makes the implementa- tion in practice become more complex. And once one of the states is leaked, thereafter the security notion of unlinkability is cannot be guar- anteed (referred to as the weak security notion of forward unlinkability). The second barrier is how to derive the session secret keys from the master secret key in one-way. 
Previous works also have security shortfalls: they suffer a fatal vulnerability in which, when a minor fault happens (say, one derived key is compromised somehow), the damage is not limited to the leaked derived key; instead, it spreads to the master key and the whole system collapses. The third barrier is how to build a post-quantum secure deterministic wallet scheme supporting the hot/cold setting. This matters because nearly all popular cryptocurrencies rely on hardness assumptions that can be broken by quantum adversaries, and the hot/cold setting is a widely adopted method to effectively reduce the exposure of secret keys and hence improve the security of the system. The last barrier is how to build a deterministic wallet scheme with the standard security notion of unforgeability; previous works are based on a weaker, nonstandard unforgeability notion, in which the adversary is only allowed to query and forge signatures w.r.t. public keys that were assigned by the challenger. In this work, we present a new deterministic wallet scheme for the quantum world that is stateless, supports the hot/cold setting, satisfies stronger security notions, and is more efficient. In particular, we reformalize the syntax and security models for deterministic wallets, capturing the functionality and security requirements (including full unlinkability and standard unforgeability) imposed by practice in cryptocurrency. We then propose a deterministic wallet construction and prove its security in the quantum random oracle model. Finally, we show our wallet scheme is more practicable by analyzing an instantiation based on the signature scheme Falcon.

Key-and-Signature Compact Multi-Signatures: A Compiler with Realizations
Shaoquan Jiang, Dima Alhadidi, Hamid Fazli Khojir
ePrint Report
A multi-signature is a protocol in which a set of signers jointly sign a message so that the final signature is significantly shorter than the concatenation of the individual signatures. Recently, multi-signatures have found applications in blockchains, where several users want to jointly authorize a payment. In this setting there is no centralized authority, so the protocol could suffer from a rogue-key attack, in which the attacker generates his own keys arbitrarily. Further, to minimize storage on the blockchain, it is desirable that the aggregated public key and the aggregated signature both be as short as possible. In this paper, we give a compiler that converts a kind of identification (ID) scheme (which we call a linear ID) into a multi-signature so that both the aggregated public key and the aggregated signature have a size independent of the number of signers. Our compiler is provably secure. The advantage of our result is that we reduce a multi-party problem to a weakly secure two-party problem. We realize our compiler with two ID schemes. The first is Schnorr ID. The second is a new lattice-based ID scheme, which via our compiler gives the first regular lattice-based multi-signature scheme that is key-and-signature compact and requires no restart during the signing process.

Silph: A Framework for Scalable and Accurate Generation of Hybrid MPC Protocols
Edward Chen, Jinhao Zhu, Alex Ozdemir, Riad S. Wahby, Fraser Brown, Wenting Zheng
ePrint Report
Many applications in finance and healthcare need access to data from multiple organizations.
While these organizations can benefit from computing on their joint datasets, they often cannot share data with each other due to regulatory constraints and business competition. One way mutually distrusting parties can collaborate without sharing their data in the clear is to use secure multiparty computation (MPC). However, MPC's performance presents a serious obstacle for adoption, as it is difficult for users who lack expertise in advanced cryptography to optimize. In this paper, we present Silph, a framework that can automatically compile a program written in a high-level language to an optimized, hybrid MPC protocol that mixes multiple MPC primitives securely and efficiently. Compared to prior works, our compilation speed is improved by up to 30000×. On various database analytics and machine learning workloads, the MPC protocols generated by Silph match or outperform prior work by up to 3.6×.

Oil and Vinegar: Modern Parameters and Implementations
Ward Beullens, Ming-Shing Chen, Shih-Hao Hung, Matthias J. Kannwischer, Bo-Yuan Peng, Cheng-Jhih Shih, Bo-Yin Yang
ePrint Report
Two multivariate digital signature schemes, Rainbow and GeMSS, made it into the third round of the NIST PQC competition. However, neither made its way to becoming a standard, due to devastating attacks (in one case by Beullens, in the other by Tao, Petzoldt, and Ding). How should multivariate cryptography recover from this blow? We propose that, rather than trying to fix Rainbow and HFEv- by introducing countermeasures, the better approach is to return to the classical Oil and Vinegar scheme. We show that, if parametrized appropriately, Oil and Vinegar still provides competitive performance compared to the new NIST standards by most measures (except for key size). At NIST security level 1, this results in either 128-byte signatures with 44 kB public keys or 96-byte signatures with 67 kB public keys. We revamp the state of the art of Oil and Vinegar implementations for Intel/AMD AVX2, the Arm Cortex-M4 microprocessor, the Xilinx Artix-7 FPGA, and the Armv8-A microarchitecture with the Neon vector instruction set.

SCALLOP: scaling the CSI-FiSh
Luca De Feo, Tako Boris Fouotsa, Péter Kutas, Antonin Leroux, Simon-Philipp Merz, Lorenz Panny, Benjamin Wesolowski
ePrint Report
We present SCALLOP: SCALable isogeny action based on Oriented supersingular curves with Prime conductor, a new group action based on isogenies of supersingular curves. Similarly to CSIDH and OSIDH, we use the group action of an imaginary quadratic order's class group on the set of oriented supersingular curves. Compared to CSIDH, the main benefit of our construction is that it is easy to compute the class-group structure; this data is required to uniquely represent -- and efficiently act by -- arbitrary group elements, which is a requirement in, e.g., the CSI-FiSh signature scheme by Beullens, Kleinjung and Vercauteren. The index-calculus algorithm used in CSI-FiSh to compute the class-group structure has complexity L(1/2), ruling out class groups much larger than CSIDH-512, a limitation that is particularly problematic in light of the ongoing debate regarding the quantum security of cryptographic group actions. Hoping to solve this issue, we consider the class group of a quadratic order of large prime conductor inside an imaginary quadratic field of small discriminant. This family of quadratic orders lets us easily determine the size of the class group and, by carefully choosing the conductor, even exercise significant control over it -- in particular supporting highly smooth choices.
Although evaluating the resulting group action still has subexponential asymptotic complexity, a careful choice of parameters leads to a practical speedup that we demonstrate in practice for a security level equivalent to CSIDH-1024, a parameter currently firmly out of reach of index-calculus-based methods. However, our implementation takes 35 seconds (resp. 12.5 minutes) for a single group-action evaluation at a CSIDH-512-equivalent (resp. CSIDH-1024-equivalent) security level, showing that, while feasible, the SCALLOP group action does not achieve realistically usable performance yet.

DY Fuzzing: Formal Dolev-Yao Models Meet Protocol Fuzz Testing
Max Ammann, Lucca Hirschi, Steve Kremer
ePrint Report
Critical and widely used cryptographic protocols have repeatedly been found to contain flaws in their design and their implementation. A prominent class of such vulnerabilities is logical attacks, i.e., attacks that solely exploit flawed protocol logic. Automated formal verification methods, based on the Dolev-Yao (DY) attacker, excel at finding such flaws but operate only on abstract specification models. Fully automated verification of existing protocol implementations is today still out of reach. This leaves open whether widely used protocol implementations are secure. Unfortunately, this blind spot hides numerous attacks, notably recent logical attacks on widely used TLS implementations introduced by implementation bugs. We answer this challenge by proposing a novel and effective technique that we call DY model-guided fuzzing, which precludes logical attacks against protocol implementations. The main idea is to consider as possible test cases the set of abstract DY executions of the DY attacker, and to use a mutation-based fuzzer to explore this set. The DY fuzzer concretizes each abstract execution to test it on the program under test. This approach enables reasoning at a more structural and security-related level of messages (e.g., decrypt a message and re-encrypt it with a different key), as opposed to random bit-level modifications that are much less likely to produce relevant logical adversarial behaviors. We implement a full-fledged and modular DY protocol fuzzer. We demonstrate its effectiveness by fuzzing three popular TLS implementations, resulting in the discovery of four novel vulnerabilities.

Quantum Annealing for Subset Product and Noisy Subset Product
Trey Li
ePrint Report
In recent works of Li, the noisy subset product problem (also known as subset product with errors) was invented and applied to cryptography. To better understand its hardness, we give a quantum annealing algorithm for it, the first algorithm for the problem. We also give the first quantum annealing algorithm for the subset product problem. The efficiency of both algorithms relies on the fundamental efficiency of quantum annealing. At the end, we give two lattice algorithms for both problems via solving the closest vector problem. The complexities of the lattice algorithms depend on the complexities of solving the closest vector problem in two special lattices. They are efficient when the special closest vector problems fall into the regime of bounded distance decoding problems that can be efficiently solved using existing methods based on the LLL algorithm or Babai's nearest plane algorithm.

An analysis of a scheme proposed for electronic voting systems
Nicu Neculache, Vlad-Andrei Petcu, Emil Simion
ePrint Report
Voting mechanisms allow elections to be carried out through a democratic process.
Any voting scheme must ensure, preferably in an efficient way, a series of safety measures such as confidentiality, integrity and anonymity. Since the 1980s, the concept of electronic voting has become more and more of interest, being an advantageous or even necessary alternative for the organization of secure elections. In this paper, we give an overview of e-voting mechanisms together with the security features they must fulfill. Then we focus on the blind signature paradigm, specifically on the Pairing Free Identity-Based Blind Signature Scheme with Message Recovery (PF-IDBS-MR). Our goal is to give a better understanding of the PF-IDBS-MR scheme by offering an adaptation of the standard voting protocol's phases. More importantly, we analyze whether the general security requirements and the recommendations proposed by the Council of Europe are met by the scheme.

On the Incoercibility of Digital Signatures
Ashley Fraser, Lydia Garms, Elizabeth A. Quaglia
ePrint Report
We introduce incoercible digital signature schemes, a variant of a standard digital signature. Incoercible signatures enable signers, when coerced to produce a signature for a message chosen by an attacker, to generate fake signatures that are indistinguishable from real signatures, even if the signer is compelled to reveal their full history (including their secret signing keys and any randomness used to produce keys/signatures) to the attacker. Additionally, we introduce an authenticator that can detect fake signatures, which ensures that coercion is identified. We present a formal security model for incoercible signature schemes that comprises an established definition of unforgeability and captures new notions of weak receipt-freeness, strong receipt-freeness and coercion-resistance. We demonstrate that an incoercible signature scheme can be viewed as a transformation of any generic signature scheme. Indeed, we present two incoercible signature scheme constructions that are built from a standard signature scheme and a sender-deniable encryption scheme. We prove that our first construction satisfies coercion-resistance and our second satisfies strong receipt-freeness. We conclude by presenting an extension to our security model: we show that it can be extended to the designated verifier signature setting in an intuitive way, as the designated verifier can assume the role of the authenticator and detect coercion during the verification process.

?3? : Privacy-Preserving Path Validation System for Multi-Authority Sliced Networks
Weizhao Jin, Erik Kline, T. K. Satish Kumar, Lincoln Thurlow, Srivatsan Ravi
ePrint Report
In practical operational networks, it is essential to validate path integrity, especially when untrusted intermediate nodes are drawn from numerous network infrastructures operated by several network authorities. Current solutions often reveal the entire path to all parties involved, which may expose the network structures to malicious intermediate attackers. Additionally, no prior work provides a systematic approach combining the complete lifecycle of packet delivery, i.e., path slicing, path validation and path rerouting, leaving these highly intertwined modules completely separated. In this work, we present a decentralized privacy-preserving path validation system ?3? that integrates our novel path validation protocol with an efficient path slicing algorithm and a malice-resilient path rerouting mechanism.
Specifically, leveraging Non-Interactive Zero-Knowledge proofs, our path validation protocol XOR-Hash-NIZK protects packet delivery against information leakage about multi-hop paths and, potentially, the underlying network infrastructures. We implemented and evaluated our system on a state-of-the-art 5G Dispersed Computing Testbed simulating a multi-authority network. Our results show that, while preserving the privacy of paths and nodes and enhancing the security of the network service, our system optimizes the performance trade-off between network service quality and security/privacy.

Putting the Online Phase on a Diet: Covert Security from Short MACs
Sebastian Faust, Carmit Hazay, David Kretzler, Benjamin Schlosser
ePrint Report
An important research direction in secure multi-party computation (MPC) is to improve the efficiency of the protocol. One idea that has recently received attention is to consider a slightly weaker security model than full malicious security -- the so-called setting of $\textit{covert security}$. In covert security, the adversary may cheat but is detected with a certain probability. Several works in covert security consider the offline/online approach, where during a costly offline phase correlated randomness is computed, which is consumed in a fast online phase. State-of-the-art protocols focus on improving efficiency by using a covert offline phase but ignore the online phase. In particular, the online phase is usually assumed to guarantee security against malicious adversaries. In this work, we take a fresh look at the offline/online paradigm in the covert security setting. Our main insight is that by weakening the security of the online phase from malicious to covert, we can gain significant efficiency improvements during the offline phase. Concretely, we demonstrate our technique by applying it to the online phase of the well-known TinyOT protocol (Nielsen et al., CRYPTO '12). The main observation is that by reducing the MAC length in the online phase of TinyOT to $t$ bits, we can guarantee covert security with a detection probability of $1 - \frac{1}{2^t}$. Since the computation carried out by the offline phase depends on the MAC length, shorter MACs result in a more efficient offline phase and thus speed up the overall computation. Our evaluation shows that our approach reduces the communication complexity of the offline protocol by at least 35% for a detection rate of up to $\frac{7}{8}$.
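The relationship between the online MAC length $t$ and the covert detection probability $1 - 1/2^t$ quoted in this abstract is easy to check numerically. Below is a minimal sketch of ours (an illustration, not the authors' code): a cheating online adversary escapes detection only by guessing a $t$-bit MAC.

```python
from fractions import Fraction

def detection_probability(t: int) -> Fraction:
    """Covert detection probability for t-bit online MACs: 1 - 1/2^t."""
    return 1 - Fraction(1, 2 ** t)

for t in (1, 2, 3, 8):
    p = detection_probability(t)
    print(f"t = {t:2d} bits -> detection probability {p} ({float(p):.4f})")

# t = 3 reproduces the 7/8 detection rate quoted in the abstract.
assert detection_probability(3) == Fraction(7, 8)
```

Shrinking $t$ therefore trades a bounded loss in detection probability for a proportionally cheaper offline phase, which is exactly the "diet" in the paper's title.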
A proof of the Scholz conjecture on addition chains
Theophilus Agama
ePrint Report
Applying the pothole method to the factors of numbers of the form $2^n-1$, we prove the inequality $$\iota(2^n-1)\leq n-1+\iota(n)$$ where $\iota(n)$ denotes the length of the shortest addition chain producing $n$.

A Practical Template Attack on CRYSTALS-Dilithium
Alexandre Berzati, Andersson Calle Viera, Maya Chartouni, Steven Madec, Damien Vergnaud, David Vigilant
ePrint Report
This paper presents a new profiling side-channel attack on the signature scheme CRYSTALS-Dilithium, which has been selected by NIST as the new primary standard for quantum-safe digital signatures. The algorithm has a constant-time implementation with consideration for side-channel resilience. However, it does not protect against attacks that exploit intermediate data leakage. We exploit such a leakage on a vector generated during the signing process, whose costly protection by masking is a matter of debate. We design a template attack that enables us to efficiently predict whether a given coefficient in one coordinate of this vector is zero or not. Once this value has been completely reconstructed, one can recover, using linear algebra methods, a part of the secret key that is sufficient to produce universal forgeries. While our paper discusses the theoretical attack path in depth, it also demonstrates the validity of the assumption regarding the required leakage model through practical experiments with the reference implementation on an ARM Cortex-M4.

Implementing and Benchmarking Word-Wise Homomorphic Encryption Schemes on GPU
Hao Yang, Shiyu Shen, Wangchen Dai, Lu Zhou, Zhe Liu, Yunlei Zhao
ePrint Report
Homomorphic encryption (HE) is one of the most promising techniques for privacy-preserving computations, especially the word-wise HE schemes that allow batched computations over ciphertexts. However, the high computational overhead hinders the deployment of HE in real-world applications. GPUs are often used to accelerate execution in such scenarios, yet a comparison of the performance of different HE schemes on the same GPU platform has been absent. In this work, we implement three word-wise HE schemes, BGV, BFV, and CKKS, on GPU, with both theoretical and engineering optimizations. We optimize the hybrid key-switching technique, reducing the computational and memory overhead of this procedure. We explore several kernel-fusing strategies to reuse data, which reduces memory accesses and IO latency and improves the overall performance. By comparing with state-of-the-art works, we demonstrate the effectiveness of our implementation. Meanwhile, we present a framework that finely integrates our implementations of the three schemes, covering almost all scheme functions and homomorphic operations. We optimize the management of pre-computation, RNS bases, and memory in the framework to provide efficient and low-latency data access and transfer. Based on this framework, we provide a thorough benchmark of the three schemes, which can serve as a reference for scheme selection and implementation in constructing privacy-preserving applications.

On-Line/Off-Line DCR-based Homomorphic Encryption and Applications
Marc Joye
ePrint Report
On-line/off-line encryption schemes enable the fast encryption of a message from a pre-computed coupon. The paradigm was originally put forward in the case of digital signatures. This work introduces a compact public-key additively homomorphic encryption scheme. The scheme is semantically secure under the decisional composite residuosity (DCR) assumption. Compared to the Paillier cryptosystem, it merely requires one or two integer additions in the on-line phase and no increase in the ciphertext size. This work also introduces a compact on-line/off-line trapdoor commitment scheme featuring the same fast on-line phase. Finally, applications to chameleon signatures are presented.
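To see what the on-line/off-line coupon paradigm buys, here is a toy Python sketch using the classical Paillier cryptosystem under the same DCR assumption (this is not Joye's new scheme, whose on-line phase is cheaper still): the expensive exponentiation $r^n \bmod n^2$ is precomputed as a coupon, leaving a single modular multiplication on-line. The parameters are illustrative and far too small to be secure.

```python
import math
import random

# Toy primes -- illustration only, hopelessly insecure at this size.
p, q = 1009, 1013
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)        # requires Python 3.9+

def offline_coupon() -> int:
    """Offline phase: perform the expensive exponentiation r^n mod n^2 in advance."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(r, n, n2)

def online_encrypt(m: int, coupon: int) -> int:
    """Online phase: with g = 1 + n, g^m = 1 + m*n (mod n^2), so one multiply suffices."""
    return (1 + m * n) % n2 * coupon % n2

def decrypt(c: int) -> int:
    u = pow(c, lam, n2)             # removes the r^n mask, leaving (1+n)^(m*lam)
    return (u - 1) // n * pow(lam, -1, n) % n

coupon = offline_coupon()
assert decrypt(online_encrypt(42, coupon)) == 42
```

The design point the abstract highlights is that even this on-line multiplication can be replaced by one or two integer additions, with no growth in ciphertext size.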
Side-Channel Resistant Implementation Using Arbiter PUF
Raja Adhithan RadhaKrishnan
ePrint Report
The goals of cryptography are achieved using mathematically strong crypto-algorithms, which are adopted for securing data and communication. Even though the algorithms are mathematically secure, their implementations may be vulnerable to side-channel attacks such as timing and power analysis attacks. One of the effective countermeasures against such attacks is Threshold Implementation (TI). However, realizing TI in a crypto-device introduces hardware complexity, so it may not be suitable for resource-constrained devices. Therefore, there is a need for efficient and effective countermeasure techniques for such devices. In this work, we propose a lightweight countermeasure using an Arbiter Physical Unclonable Function (A-PUF) to obfuscate intermediate values in the register for rolled and unrolled implementations of the Advanced Encryption Standard (AES). The countermeasure is realized in a rolled (iterative) implementation of AES on a 65nm Field Programmable Gate Array (FPGA). We analyze the security strength and area of the obfuscated AES using A-PUF and compare it with conventional (rolled) AES and with a masked TI of AES. Further, we illustrate the effectiveness of pre-charge and neutralizing countermeasures in strengthening the side-channel resistance, and we discuss the complexity of mounting side-channel and modeling attacks on obfuscated AES using A-PUF.

Cognitive Cryptography using behavioral features from linguistic-biometric data
Jose Contreras
ePrint Report
This study presents a proof of concept for a cognitive-based authentication system that uses an individual's writing style as a unique identifier to grant access to a system. A machine learning SVM model was trained on stylometric features to distinguish between texts generated by each user. The stylometric feature vector was then used as an input to a key derivation function to generate a unique key for each user. The experimental results showed that the developed system achieved up to 87.42\% accuracy in classifying texts as written by each user, and the generated keys were found to be secure and unique. We explore the intersection between natural intelligence, cognitive science, and cryptography, intending to develop a cognitive cryptography system. The proposed system utilizes behavioral features from linguistic-biometric data to detect and classify users through stylometry. This information is then used to generate a cryptographic key for authentication, providing a new level of security in access control. The field of cognitive cryptography is relatively new and has yet to be fully explored, making this research particularly relevant and essential. Through our study, we aim to contribute to understanding the potential of cognitive cryptography and its potential applications in securing access to sensitive information.
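The abstract above leaves the key derivation step unspecified, so the sketch below is only our guess at the general shape of such a pipeline; every function name and the quantization step are assumptions, not the paper's method. A production system would additionally need a fuzzy extractor or error-correcting layer, since two writing samples never yield bit-identical feature vectors.

```python
import hashlib
import json

def key_from_stylometry(features: dict, salt: bytes) -> bytes:
    """Derive a 256-bit key from a stylometric feature vector (illustrative only)."""
    # Quantize features so that small measurement noise maps to the same key.
    quantized = {name: round(value, 1) for name, value in sorted(features.items())}
    canonical = json.dumps(quantized, sort_keys=True).encode()
    return hashlib.pbkdf2_hmac("sha256", canonical, salt, 600_000)

# Hypothetical feature vector extracted from one user's writing sample.
sample = {"avg_sentence_len": 17.23, "type_token_ratio": 0.61, "comma_rate": 0.048}
print(key_from_stylometry(sample, salt=b"per-user-salt").hex())
```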
A note on machine learning applied in ransomware detection
Manuela Horduna, Simona-Maria Lăzărescu, Emil Simion
ePrint Report
Ransomware is malware that employs encryption to hold a victim's data hostage, causing irreparable loss and extorting money from individuals or business organizations. The occurrence of ransomware attacks has been increasing significantly, and as attackers invest more creativity and inventiveness into their threats, the struggle of fighting against ill-themed activities has become more difficult and even time- and energy-draining. Therefore, recent research tries to shed some light on combining machine learning with defense mechanisms for detecting this type of malware. Machine learning allows anti-ransomware systems to become more accurate at predicting the outcomes or behaviors of attacks and is widely used in advanced cybersecurity research. In this paper we analyze how machine learning can improve malware recognition in order to stand against critical security issues, giving a brief yet comprehensive overview of this thriving topic in order to facilitate future research. We also briefly present the most important ransomware events of 2022, providing details about the ransoms demanded.

Complete Knowledge: Preventing Encumbrance of Cryptographic Secrets
Mahimna Kelkar, Kushal Babel, Philip Daian, James Austgen, Vitalik Buterin, Ari Juels
ePrint Report
Most cryptographic protocols model a player's knowledge of secrets in a simple way. Informally, the player knows a secret in the sense that she can directly furnish it as a (private) input to a protocol, e.g., to digitally sign a message. The growing availability of Trusted Execution Environments (TEEs) and secure multiparty computation, however, undermines this model of knowledge. Such tools can encumber a secret sk and permit a chosen player to access sk conditionally, without actually knowing sk. By permitting selective access to sk by an adversary, encumbrance of secrets can enable vote-selling in cryptographic voting schemes, illegal sale of credentials for online services, and erosion of deniability in anonymous messaging systems. Unfortunately, existing proof-of-knowledge protocols fail to demonstrate that a secret is unencumbered. We therefore introduce and formalize a new notion called complete knowledge (CK). A proof (or argument) of CK shows that a prover does not just know a secret, but also has fully unencumbered knowledge, i.e., unrestricted ability to use the secret. We introduce two practical CK schemes that use special-purpose hardware, specifically TEEs and off-the-shelf mining ASICs. We prove the security of these schemes and explore their practical deployment with a complete, end-to-end prototype that supports both. We show how CK can address encumbrance attacks identified in previous work. Finally, we introduce two new applications enabled by CK that involve proving ownership of blockchain assets.
Polar equation to rectangular equation

A complex number in polar form converts directly to rectangular form; for example, 56 ∠ 27° ≈ 49.9 + 25.4j (worked in full below). Typical practice problems ask you to convert each polar equation to rectangular form:
1) r = cot θ csc θ
2) r = 2 cot θ csc θ
3) r = 4 cot θ csc θ
4) r = 2 sin θ
5) r = -2 cos θ - 2 sin θ
6) r = 2 cos θ + 2 sin θ

A polar point with a negative radius converts like any other: the polar point P(-1, 5π/4) is converted to rectangular coordinates with x = -1 · cos(5π/4) = √2/2 and y = -1 · sin(5π/4) = √2/2. We have learned how to convert rectangular coordinates to polar coordinates, and we have seen that the points are indeed the same.

The same pair of notations appears in signal processing, where standard equations convert the frequency domain between rectangular and polar notation; the two notations allow you to think of the DFT in two different ways. With rectangular notation, the DFT decomposes an N-point signal into N/2 + 1 cosine waves and N/2 + 1 sine waves, each with a specified amplitude.

A calculator can convert between polar and rectangular coordinates in either direction. The two grids correspond as follows: the pole of the polar grid is the origin of the rectangular (Cartesian) grid, and the polar axis is the positive x-axis. Example: to convert r = 5 to rectangular form, square both sides and substitute for r²: x² + y² = 25. Likewise, the graph of the polar equation r = 4 is a circle of radius 4 about the pole, with rectangular equation x² + y² = 16.
Let P be a point with polar coordinates (r, θ); its rectangular coordinates (x, y) are given by x = r cos θ and y = r sin θ, with r² = x² + y² and tan θ = y/x. The relationships that have been established between the polar and rectangular coordinate systems can also be used to convert a polar equation into rectangular form or a rectangular equation into polar form. Using the appropriate substitutions makes it possible to rewrite a polar equation as a rectangular equation, and then graph it in the rectangular plane. In calculus, you will sometimes need to convert from the rectangular form of an equation to its polar form, and vice versa, to facilitate some calculations.

Example: Convert the circle (x - a)² + y² = a² to polar form. We start by expanding and simplifying:
(x - a)² + y² = a²
x² - 2ax + a² + y² = a²
x² - 2ax + y² = 0
(x² + y²) - 2ax = 0
Substituting r² = x² + y² and x = r cos θ gives r² = 2ar cos θ, i.e., r = 2a cos θ.

Example: Convert the polar equation R = 4 sin t to rectangular form. We multiply both sides by R: R² = 4R sin t. We now use the relationship between polar and rectangular coordinates, R² = x² + y² and y = R sin t, to rewrite the equation as x² + y² = 4y, or x² + y² - 4y = 0: it is the equation of a circle.

Example: Convert (3, -1) to polar coordinates: r = √(3² + (-1)²) = √10 ≈ 3.16 and θ = arctan(-1/3) ≈ -18.4°.

Example: We have converted a complex number from polar form (using degrees) into rectangular form: 56 ∠ 27° = 56 cos 27° + (56 sin 27°)j ≈ 49.9 + 25.4j.

Further practice: convert r = sec θ to rectangular form (r cos θ = 1, so x = 1), and change the polar equation r = 5/(1 + cos θ) to rectangular form (clearing the fraction and squaring gives y² = 25 - 10x, a parabola).
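The two conversion directions above are easy to check numerically. Here is a minimal Python sketch of ours (an illustration, not any particular site's calculator) that mirrors the formulas x = r cos θ, y = r sin θ and r = √(x² + y²), θ = atan2(y, x):

```python
import math

def polar_to_rect(r, theta):
    """(r, theta) -> (x, y), with theta in radians."""
    return r * math.cos(theta), r * math.sin(theta)

def rect_to_polar(x, y):
    """(x, y) -> (r, theta); atan2 picks the correct quadrant, unlike arctan(y/x)."""
    return math.hypot(x, y), math.atan2(y, x)

# The point (3, -1) from the example: r = sqrt(10) ~ 3.16, theta ~ -18.43 degrees.
r, theta = rect_to_polar(3, -1)
print(round(r, 2), round(math.degrees(theta), 2))   # 3.16 -18.43

# Round-trip check, including a negative radius such as P(-1, 5*pi/4).
x, y = polar_to_rect(-1, 5 * math.pi / 4)
print(round(x, 3), round(y, 3))                     # 0.707 0.707
```

Using atan2 rather than arctan(y/x) is the same quadrant care that the worked examples take when choosing θ.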
Transforming equations between polar and rectangular forms means making the appropriate substitutions based on the available formulas, together with algebraic manipulations. Complete the steps below to find the Cartesian form of an equation by writing an equivalent equation each time. In general, to convert between polar and rectangular coordinates, use the following rules: x = r cos θ, y = r sin θ, r = (x² + y²)^(1/2).

What is the rectangular form of the polar equation r - r sin θ = 4? Substituting r = √(x² + y²) and r sin θ = y gives √(x² + y²) = 4 + y; squaring yields x² + y² = 16 + 8y + y², so x² = 16 + 8y and y = (1/8)x² - 2.

Lines are natural in rectangular form: for the horizontal line y = 5, the rectangular equation is already as simple as it gets. Graphing a polar equation is accomplished in pretty much the same manner as rectangular equations are graphed.

Finding intersection points works the same way after conversion. For the circle x² + y² = 2x, plug the x-values x = 0 and x = 2 back into the equation to find the y-coordinates of the intersection points: 0 + y² = 2(0) gives y = 0, and 4 + y² = 2(2) gives y = 0, so the rectangular coordinates of the points of intersection are (0, 0) and (2, 0).

Worksheet-style prompts in this material ask you to convert a polar equation to rectangular form and sketch its graph. When the terms r cos θ or r sin θ are not immediately present, each side may need to be multiplied by r (or r²) to obtain them. For parametric equations, you must first solve for the parameter in one equation.
The next two examples will demonstrate how this is done; if the equation contains an r², you must show the factoring.

Example: Convert r² = 6 sin θ cos θ to rectangular form. Multiply both sides by r²: (r²)² = 6 (r sin θ)(r cos θ), so (x² + y²)² = 6xy, which expands to x⁴ + 2x²y² + y⁴ = 6xy.

Example: Convert r = 5 to a rectangular equation: squaring gives r² = 25, i.e., x² + y² = 25, a circle of radius 5.

As mentioned in the earlier section, we graph polar equations on a rectangular coordinate system by rewriting them in rectangular form first, using the substitutions x = r cos θ and y = r sin θ; although we can certainly graph polar curves in the polar plane, it is still helpful to refer to our old rectangular grid for analysis. Equations of the form r = a cos θ (such as Q1: r = 2 cos θ) describe circles centered along the x-axis with diameter equal to a, so the radius must be a/2; a circle centered at (0, 15) with radius 15 likewise corresponds to the sine version, r = 30 sin θ. For parametric equations, eliminating the parameter can be as simple as solving one equation for it: from x = y + cos θ we get cos θ = x - y, so θ = cos⁻¹(x - y); eliminating the parameter between two such relations can yield a line such as y = x + 1. More generally, a curve defined by a quadratic relation between the variables x and y is one of three curves -- a) parabola, b) ellipse, c) hyperbola -- apart from other possibilities considered degenerate.

An online polar-to-rectangular calculator typically works in three steps. Step 1: enter the polar coordinate values in the respective input fields. Step 2: click the "Calculate Rectangular Coordinates" button. Step 3: the conversion is displayed in the output field.

Unlike rectangular form, which plots points in the complex plane directly, the polar form of a complex number is written in terms of its magnitude and angle. Polar forms of numbers can be converted into their rectangular equivalents by the formula: rectangular form = amplitude · cos(phase) + j · amplitude · sin(phase). Example: find the polar form of the complex number 7 - 5i.
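Python's standard cmath module implements exactly this amplitude/phase relationship, which gives a quick way to check the 7 - 5i and 56 ∠ 27° examples (a small sketch of ours, not part of any quoted solution):

```python
import cmath
import math

# Rectangular -> polar: 7 - 5i has modulus sqrt(74) ~ 8.60 and argument ~ -35.54 degrees.
r, phi = cmath.polar(7 - 5j)
print(round(r, 2), round(math.degrees(phi), 2))    # 8.6 -35.54

# Polar -> rectangular: 56 at 27 degrees ~ 49.9 + 25.4j.
z = cmath.rect(56, math.radians(27))
print(round(z.real, 1), round(z.imag, 1))          # 49.9 25.4
```

Note that cmath works in radians, so degree inputs must be converted first, which is the same calculator-mode pitfall the text warns about below.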
The rectangular coordinates are called Cartesian coordinates and take the form (x, y), whereas polar coordinates take the form (r, θ), where r is the distance from the origin to the point and θ is the angle from the positive x-axis to the point. In the polar coordinate system, each point P of the plane is determined by the length of its position vector r and the angle θ between it and the positive direction of the x-axis, where 0 < r < +∞ and 0 ≤ θ < 2π. The equations x = r cos θ and y = r sin θ derived above are known as the polar-to-rectangular formulas; these coordinates are directed horizontal and vertical distances along the x and y axes, as Khan Academy puts it. For example, y = 4x + 3 is a rectangular equation, while r = 3 sin θ is an equation in polar coordinates, since it involves the polar coordinates r and θ. Some complicated rectangular equations have much simpler polar equations.

Example: substitute for x and y in x² + y² = 16 and solve for r: r² = 16, so r must be 4, with no restriction on θ (consider all values of the angle). Likewise, x² + y² = 9 gives r = 3, and x² + y² = 64 (a circle of radius 8) is the rectangular form of r = 8. In the other direction, recall the earlier polar equation r = 4 sin θ: its rectangular form is x² + y² - 4y = 0, a circle with center (0, 2) and radius 2, and on the polar grid the rectangular coordinate (0, 2) translates to (2, π/2). Example: if point A is 3 units from the origin and point B is 4 units from the origin, their polar r-coordinates are simply 3 and 4.

Strategy for changing an equation in rectangular form to polar form: use the conversions r² = x² + y², x = r cos θ, and y = r sin θ to find a polar equation, then write the polar equation as r in terms of θ.

Worked forum question: convert sec θ = 2 to rectangular form ("My professor gave us a study guide with the solutions; I know the answer is going to be y² - 3x² = 0, but I don't know how to get to the answer he gave us"). Since sec θ = 2 means cos θ = 1/2 and cos θ = x/r, we get r = 2x; squaring gives x² + y² = 4x², hence y² - 3x² = 0.

Another worked question: convert the Cartesian equation x² - y² = 16 to a polar equation (a poster was stuck between two candidate answers). Substituting gives r² cos²θ - r² sin²θ = 16; by the double-angle formula cos 2θ = cos²θ - sin²θ, this is r² cos 2θ = 16, i.e., r² = 16/cos 2θ.

Practice problems: graph each polar equation and convert it to rectangular form.
11) θ = π/4
12) r = 2 sin θ
13) r = -6 sin θ
14) θ = 3π/4

Lines make a good illustration. The polar equation θ = π/6 describes a line through the pole: every point on it has the same second coordinate, π/6, the angle from the horizontal axis. Such a line has a Cartesian equation of the form y = mx + b, where m and b are constants (here b = 0 and m = tan(π/6)). At the other extreme, the polar equation of a circle of radius 1 centered at the pole is simply r = 1; θ doesn't appear because r is the same for all values of θ. The most common polar equations have the form r = f(θ), and sometimes it is more convenient to use polar equations: perhaps the nature of the graph is better described that way, or the equation is much simpler. Often, though, the conversion must be done piecemeal.

Both polar and rectangular forms of notation for a complex number can be related graphically in the form of a right triangle, with the hypotenuse representing the vector itself (polar form: hypotenuse length = magnitude; angle with respect to the horizontal side = angle), the horizontal side representing the rectangular "real" component, and the vertical side representing the imaginary component. In rectangular form, a phasor is divided into two components, written a + jb: the horizontal component, or real part (the x of the expression), and the vertical component, or imaginary part (the y of the expression). A polar-form vector is presented as Z = A ∠ ±θ, where Z is the complex number in polar form, A is the magnitude (modulus) of the vector, and θ is its angle (argument); this is all based on the fact that polar form takes the format amplitude ∠ phase. When evaluating such expressions numerically, you have to be careful that your calculator is set correctly to degrees (or radians, if required).

The process for converting parametric equations to a rectangular equation is commonly called eliminating the parameter: first solve for the parameter in one equation, then substitute the rectangular expression for the parameter into the other equation and simplify (study, for example, the parametric equations beginning x = 2t - 4).
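The line discussion above stops short of the general conversion, so here is the standard derivation (ordinary textbook algebra, not quoted from any of the sources above) of the polar form of an arbitrary line y = mx + b:

$$y = mx + b \;\Longrightarrow\; r\sin\theta = m\,r\cos\theta + b \;\Longrightarrow\; r\,(\sin\theta - m\cos\theta) = b \;\Longrightarrow\; r = \frac{b}{\sin\theta - m\cos\theta}.$$

For b = 0 the line passes through the pole and the equation collapses to the constant-angle form θ = arctan m, which is exactly why θ = π/6 is the line y = x tan(π/6).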
Why is wavenumber used in IR spectroscopy rather than wavelength?

In IR spectroscopy, the $x$-axis is used to represent wavenumber, in $\mathrm{cm^{-1}}$. Why is wavenumber, equal to $1/\lambda$, used in place of wavelength, which is simply $\lambda$? Sources I've already found explain why it was chosen rather than energy of waves, but the conversion from wavelength to wavenumber is never explained. Below are two relations from Wikipedia, which explain how it can be used in equations, but in all of these cases, $\lambda$ seems to be a choice that's easier to work with. What explanations are there, if anything other than "historical reasons", for why $1/\lambda$ is favored over $\lambda$?

A spectroscopic wavenumber $\tilde\nu$ can be converted into energy per photon $E$ via Planck's relation: $$E = hc\tilde\nu$$ It can also be converted into wavelength of light via $$\lambda = \frac{1}{n\tilde\nu}$$ where $n$ is the refractive index of the medium.
sqrtbottle

Comments:
– en.wikipedia.org/wiki/Wavenumber#In_spectroscopy – Wildcat May 29 '15 at 11:41
– Wavenumber is directly proportional to energy, so higher wavenumbers correspond to a higher energy by the same factor. (That doesn't explain why high wavenumbers are usually on the left, though.) That put aside, who still uses IR? – Jan May 29 '15 at 11:51
– In Fourier Transform IR, the interferometer modulates the light so that each incident 'color' has a unique frequency, typically in the relatively easily handled acoustic frequency range. Thus, the modulation frequencies are directly proportional to the wavenumber value of the light, and the inverse Fourier transform then does the decoding, i.e., tells you which peak, at whatever wavenumber, suffered absorbance. So it is not just historical reasons. – Ed V Jun 20 at 22:27

The choice to use wavenumbers for infrared spectroscopy (rather than wavelengths, frequencies, or energies) was probably made to provide a range that has both the appearance of width (so that the difference between two peaks is more meaningful) and spans a set of reasonable values that do not contain very large or very small numbers (which are hard to conceptualize). The goal is to be able to easily compare values. See the following comparison of units/values for the typical range of IR spectroscopy for organic compounds and some "example" values for the absorptions of common bond types:

absorption   cm⁻¹   m        µm    Hz        THz   J         kJ/mol   meV
high end     500    2.00E-5  20    1.5E+13   15    9.94E-21  5.98     62
C-O          1100   9.09E-6  9.09  3.3E+13   33    2.19E-20  13.2     136
C=C          1660   6.02E-6  6.02  5.0E+13   50    3.30E-20  19.9     206
C=O          1720   5.81E-6  5.81  5.2E+13   52    3.42E-20  20.6     213
C-H          3000   3.33E-6  3.33  9.0E+13   90    5.96E-20  35.9     372
O-H          3500   2.86E-6  2.86  1.05E+14  105   6.96E-20  41.9     434
low end      4000   2.50E-6  2.50  1.20E+14  120   7.95E-20  47.9     496

Let's compare especially the peaks for $\ce{C=C}$ and $\ce{C=O}$. These peaks are easily resolvable by all modern FTIR spectrometers, and there is room for peaks to be resolved between them. Only the values in wavenumbers give this sense of resolution intuitively. Modern spectrometers can resolve data to $1.0\ \text{cm}^{-1}$ or better. A resolution of $1.0\ \text{cm}^{-1}$ is equivalent to resolutions of $0.040\ \mu\text{m}$, $0.030\text{ THz}$, $0.012\text{ kJ/mol}$, and $0.12\text{ meV}$. Which of these resolutions is most intuitive to understand?
Ben Norris
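To make the table concrete, here is a small converter that reproduces its columns from a band position in cm⁻¹. A sketch only: the function name is ours, and the constants are standard CODATA values:

```python
# Convert an IR band position in wavenumbers (cm^-1) into the other units
# used in the table above. Constants are CODATA 2018 values.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e10     # speed of light, cm/s (note: cm, to match cm^-1)
N_A = 6.02214076e23   # Avogadro constant, 1/mol
e = 1.602176634e-19   # elementary charge, J per eV

def from_wavenumber(nu_cm):
    E = h * c * nu_cm                     # E = h c nu~, joules per photon
    return {
        "wavelength_um": 1e4 / nu_cm,     # 10^4 um in one cm
        "frequency_THz": c * nu_cm / 1e12,
        "energy_J": E,
        "energy_kJ_per_mol": E * N_A / 1e3,
        "energy_meV": E / e * 1e3,
    }

# The C=O row: ~5.81 um, ~51.6 THz, ~3.42e-20 J, ~20.6 kJ/mol, ~213 meV
print(from_wavenumber(1720))
```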
Comment:
– Well, of course you are falling victim to the exact value logical fallacy. If we were used to kJ/mol, it would intuitively immediately make sense to remember that we can resolve peaks around 0.01 kJ/mol apart. Also, you decidedly say modern spectrometers, but the wavenumber scale was popular way before such a resolution was achieved. Finally, of course we should change NMR scales to something more meaningful, as $\pu{0.05 ppm}$ makes absolutely no sense */sarcasm* – Jan Oct 20 '17 at 3:52

Not only in IR spectroscopy. Wavenumber is a unit of energy, and therefore you can directly deduce the difference in energy between states. In addition, humans like to think in acceptably small numbers (0.01 to 10,000). Wavenumber allows this for IR and conveniently supplements the eV unit in the range of small energy separations. Admittedly, the conversion factor of 8,065.73 cm⁻¹ per eV won't win a beauty contest.
ssavec

Comments:
– The same holds for λ as a reflection of energy, though, through E = hc/λ; it's just inverted (though I see your point and think this is a good answer). – sqrtbottle May 29 '15 at 12:38
– Practically, for us chemists the most important conversion factor is 11.96 J per mol = 1 reciprocal centimetre. – J. LS May 29 '15 at 13:41
– @Sqrtbottle: as an analogy, the temperature is also "wrong": we are accustomed to higher temperature meaning higher numerical value, but for statistical mechanics you would also like to have temperature expressed in energy units, therefore $\beta = \frac{1}{k_b T}$. – ssavec Jun 1 '15 at 5:53

Although the question is already a few years old, I hope that I can still add a little perspective to what has already been mentioned by others. First of all, the reason why $1/\lambda$ is preferred over $\lambda$ has been nicely addressed by Ben Norris and ssavec: it provides a scale that is linear in energy, so that you can directly compare the distance between lines and also the linewidths. But that being said, why not use other energy units? In fact, you can. You will often find spectra that have frequency units (MHz, GHz, THz, etc.) instead of wavenumbers, and that is totally fine. The reason why wavenumber is so popular - besides providing "reasonable" numbers - is partially historical. The first spectroscopic experiments were performed before the advent of quantum mechanics, and the relation between the energy and wavelength of light was not known until Planck and Einstein introduced it early in the 20th century. You probably have heard of the Balmer formula (1885) that predicts the emission lines of atomic hydrogen. The original formulation of the Balmer formula is $$\lambda = B\left(\frac{n^2}{n^2-2^2}\right) \text{ with } B=364.50682\,\text{nm}.$$ A few years later (1888) Johannes Rydberg tried to find a similar equation to predict the spectral lines of the alkali atoms (that, like hydrogen, have a single valence electron). Rydberg did not manage to find an expression similar to that of Balmer. While playing with the data, Rydberg realized that if he used the inverse of the wavelength it was much easier to spot a pattern in the data, and he derived the following formula that is now named after him: $$\frac{1}{\lambda}=R_\text{H}\left(\frac{1}{n_1^2}-\frac{1}{n_2^2}\right) \text{ with } R_\text{H}=1.09677583\times 10^{7}\,\text{m}^{-1}.$$ When Rydberg introduced the concept of wavenumber he was unaware of the fact that a wavenumber scale is proportional to an energy scale: he only used it as a mathematical manipulation to find a relation between the spectral lines of alkali atoms. Now that we know that the pattern in a spectrum is easier to interpret if you plot it on an energy scale, it does not matter what scale you choose, but many people still prefer the wavenumber scale.
Paul
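Paul's historical point is easy to check numerically. A minimal sketch (variable names ours) comparing Balmer's wavelength formula with Rydberg's wavenumber form for the visible hydrogen lines:

```python
# Check that Balmer's lambda formula and Rydberg's 1/lambda formula
# describe the same hydrogen lines (n -> 2 transitions).
B = 364.50682e-9    # Balmer's constant, metres
R_H = 1.09677583e7  # Rydberg constant for hydrogen, 1/m

for n in (3, 4, 5, 6):
    lam_balmer = B * n**2 / (n**2 - 2**2)
    lam_rydberg = 1.0 / (R_H * (1.0 / 2**2 - 1.0 / n**2))
    print(n, lam_balmer * 1e9, lam_rydberg * 1e9)  # nm; n=3 gives ~656.1 vs ~656.5
```

The two columns agree to about 0.05 percent; the small offset arises because Balmer's B corresponds to 4/R with the infinite-mass Rydberg constant rather than R_H.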
Reciprocal length (cm⁻¹) or wavenumber (cm⁻¹) used in vibrational spectroscopy is actually an intuitive concept in reciprocal space (used heavily in the diffraction world). From the literature: "Reciprocal length is used as a measure of energy. The frequency of a photon yields a certain photon energy, according to the Planck-Einstein relation. Therefore, as reciprocal length is a measure of frequency, it can also be used as a measure of energy. For example, the reciprocal centimetre, cm⁻¹, is an energy unit equaling the energy of a photon with 1 cm wavelength. That energy amounts to approximately 1.24×10⁻⁴ eV or 1.986×10⁻²³ J. The higher the number of inverse length units, the higher the energy."
Ashraf Khan
Christine D Wilson
Professor, Physics & Astronomy

Dr. Christine Wilson is a Distinguished University Professor in the Department of Physics and Astronomy at McMaster University. Dr. Wilson is the (Tier 1) Canada Research Chair in Extragalactic Star Formation and a Fellow of the Royal Society of Canada. Her research focuses on observational astronomy, specifically on gas and star formation in galaxies, and makes extensive use of new and archival data from the Atacama Large Millimeter/submillimeter Array. Current research includes determining how the large-scale environment affects the process of star formation inside galaxies and identifying the mechanisms that trigger intense bursts of star formation when two galaxies collide.

EMAIL: wilsoncd

Affiliations:
McMaster University, Faculty of Science
Member, Origins Institute, Faculty of Science
Professor, Physics & Astronomy, Faculty of Science
Canada Research Chair in Extragalactic Star Formation, McMaster University

Research areas: Cosmology and extragalactic astronomy (RA); Galactic astronomy (RA)
Teaching
Mechanics, PHYSICS 2E03, Instructor (2021)

Education
Ph.D. Astronomy, California Institute of Technology, 1990
B.Sc. Physics, High Distinction, University of Toronto, 1984
Chain email

Postby Rozer » Thu Feb 25, 2010 5:07 pm UTC

I got this chain email recently with this puzzle, and the answer is escaping me. At first I thought that each number was treated as a different number, but then realised that it wouldn't work. Then I thought it might be in something other than base 10, but I am not smart enough to work it out.

2 + 3 = 10
7 + 2 = 63
6 + 5 = 66
8 + 4 = 96
9 + 7 = ????

Re: Chain email

Postby JBJ » Thu Feb 25, 2010 5:11 pm UTC

It's a pattern.
(a + b) * a = answer
(2+3) * 2 = 5 * 2 = 10
(6+5) * 6 = 11 * 6 = 66
(9+7) * 9 = 16 * 9 = 144
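JBJ's reading is easy to sanity-check in code. A minimal sketch (the function name and test harness are ours, not the email's; the rows follow the puzzle as quoted above):

```python
def chain_plus(a, b):
    # JBJ's pattern: the email's "a + b" actually denotes (a + b) * a
    return (a + b) * a

# Each row of the puzzle should reproduce the value shown in the email.
for a, b, shown in [(2, 3, 10), (7, 2, 63), (6, 5, 66), (8, 4, 96)]:
    assert chain_plus(a, b) == shown

print(chain_plus(9, 7))  # 144
```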
Postby Briareos » Thu Feb 25, 2010 5:15 pm UTC

This has got to be the stupidest overloading of the '+' operator since I don't know when. We should all remember that "communicating badly and then acting smug when you're misunderstood is not cleverness."
EDIT: I should have remembered my usual mantra of "Is this worth cluttering my egosearch?"

Postby skeptical scientist » Thu Feb 25, 2010 5:23 pm UTC

Briareos wrote: This has got to be the stupidest overloading of the '+' operator since I don't know when. We should all remember that "communicating badly and then acting smug when you're misunderstood is not cleverness."

It's not communicating badly, since it's obvious from the statement of the puzzle that something is not being given its standard meaning. The task then is to figure out what is not being given its standard meaning, and what the new pattern is.

Postby kernelpanic » Fri Feb 26, 2010 12:15 am UTC

skeptical scientist wrote: It's not communicating badly, since it's obvious from the statement of the puzzle that something is not being given its standard meaning.

But why then use +? Why not, for example, |, or $, or ¬? It's given an alternate meaning not ever used. It's like saying this post has words, but they aren't in their standard meaning. The actual meaning is "The puppy jumped onto the 16:21 space elevator to station 5477/2, and bit the leg of the man in red".

Postby SWGlassPit » Fri Feb 26, 2010 1:31 am UTC

It's a little brain teaser puzzle. Why care so much about what particular symbol is used? That's somewhat missing the forest for the trees.

Postby skeptical scientist » Fri Feb 26, 2010 1:38 am UTC

kernelpanic wrote: But why then use +? Why not, for example, |, or $, or ¬? It's given an alternate meaning not ever used.

Yes, except here it is possible to figure out the meaning from context; in fact, that's the whole point of the puzzle. IMO, the difference between a valid puzzle and a 169-style puzzle is whether it is designed to be solved. A 169 puzzle is designed so that it won't be solved, and then the inventor can act smug when their audience fails an unfair test. This puzzle, on the other hand, is clearly meant to be solved, so the objection in comic 169 does not apply.

Postby PM 2Ring » Fri Feb 26, 2010 5:32 pm UTC

kernelpanic wrote: But why then use +? Why not, for example, |, or $, or ¬? It's given an alternate meaning not ever used.

Because it's a puzzle aimed at general readers, most of whom are probably more familiar with word-based puzzles, and are probably unfamiliar with a lot of mathematical notation. Sure, you could write it using standard f(u, v) style notation, but that might scare off the general readers. Doing it this way may screw with the minds of the more mathematically inclined, but the puzzle creator may consider that a good thing, in that it forces us to think outside the box.

Postby odulwa » Mon Mar 01, 2010 3:29 pm UTC

These problems have an infinite number of solutions, so you could guess a solution and solve a set of 4 equations, say a polynomial: [math]f(x,y)=\frac{1}{9}(512-13x-214y+41xy)[/math] In which case the answer will be: [math](9+7)=f(9,7)=\frac{1480}{9}[/math] Equally valid, less beautiful.
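odulwa's remark can be reproduced numerically: fit the bilinear polynomial through the four given rows and evaluate it at (9, 7). A sketch with NumPy (array layout ours):

```python
import numpy as np

# Fit f(x, y) = c0 + c1*x + c2*y + c3*x*y through the four puzzle rows.
pts = [(2, 3, 10), (7, 2, 63), (6, 5, 66), (8, 4, 96)]
A = np.array([[1, x, y, x * y] for x, y, _ in pts], dtype=float)
b = np.array([v for *_, v in pts], dtype=float)
c = np.linalg.solve(A, b)

print(c * 9)  # ~ [512, -13, -214, 41], matching odulwa's coefficients over 9
x, y = 9, 7
print(c @ np.array([1, x, y, x * y]))  # ~ 164.444..., i.e. 1480/9
```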
Postby mike-l » Mon Mar 01, 2010 4:48 pm UTC

kernelpanic wrote: But why then use +? Why not, for example, |, or $, or ¬? It's given an alternate meaning not ever used.

Actually, the + is a HUGE clue. If you look at what a+b is in each case, you might notice that a+b is always a factor of the answer, and then when you ask which factor, you solve the problem. I happened to get this email a few days ago, and I solved it in about 2 minutes. It probably would have taken 3 or 4 had all the ' + ' been ' * '.

odulwa wrote: These problems have an infinite number of solutions, so you could guess a solution and solve a set of 4 equations, say a polynomial:

This is why mathematics is an art. Yes, there are technically infinitely many solutions, but there is an 'obvious' elegant answer.

Postby skeptical scientist » Mon Mar 01, 2010 11:31 pm UTC

odulwa wrote: These problems have an infinite number of solutions... Equally valid, less beautiful.

Equally valid, but incorrect. Yes, these puzzles always have infinitely many "valid" solutions, but if they are well constructed, there will be exactly one solution which is obviously the right one once you find it, and all the other solutions will be obviously wrong. This puzzle is one of the well constructed ones.

Postby blakkcube » Tue Mar 02, 2010 12:24 am UTC

Let [imath]N[/imath] be the set of natural numbers and let [imath]\left( N, + \right)[/imath] be a magma with [math]+:{N} \times {N}\longrightarrow {N},\ \left( a,b \right) \mapsto a \cdot \left( a \oplus b \right)[/math] ([imath]\oplus[/imath] stands for the 'usual' +). With [imath]+[/imath] being defined like that it shouldn't be an issue. (Edit: I first wrote it is a semigroup, but since [imath]+[/imath] is not associative it's only a magma.)

Postby andrewxc » Tue Mar 02, 2010 3:26 am UTC

Just to anger / calm both of you, you could use the six-pointed star ([imath]*[/imath]), which in Fraleigh's Abstract Algebra means "binary operator"... And you could just say "this represents a function that maps these two numbers to the third, therefore I could name any number in the last set as the solution, and it is correct merely by definition," but as s.s. stated, that's not the point. Use Occam's Razor on this: the simplest explanation is typically the correct one. It really is much simpler to find the function which takes 2 & 3 and maps it to 10 by way of algebraic manipulation than to create any number of operators that create the functions you want them to. Besides, this was a chain email from someone who probably wasn't too fussy about proper notation.

Postby Dopefish » Tue Mar 02, 2010 3:58 am UTC

Wouldn't the simplest solution often be discarded as the 'trivial' solution though? Do any high-level mathematicians (or for that matter, anyone highly advanced in any field) actually start chain letters? I have yet to get any emails with conundrums pertaining to category theory (or some such thing; I don't speak high-level maths), although perhaps that's because it wouldn't propagate nearly as fast due to the majority deleting it as general nonsense when they got it.

Postby skeptical scientist » Tue Mar 02, 2010 7:33 am UTC

Dopefish wrote: Wouldn't the simplest solution often be discarded as the 'trivial' solution though?

Sometimes but not always. One generally uses "trivial" to refer to something that is either obvious or so simple it might be overlooked, or else something which is always a solution to a general type of problem, when one is interested in things which are only solutions to a particular instance.* I would not describe the simplest solution as being trivial in general (sometimes the simplest solution to a problem is still quite subtle) and I wouldn't describe the solution to this puzzle as being trivial. However, trivial solutions are still solutions, and are not discarded simply because they are trivial.

*For example, the differential equation f'' = -f has the "trivial" solution f = 0, but also the nontrivial solutions f = sin and f = cos. (In fact, any linear combination of sin and cos is a solution.) Here we call the zero solution "trivial" because it is always a solution to differential equations of this type (linear & homogeneous), but sin and cos are solutions specific to this particular differential equation, and not, say, the differential equation f'' = f + f'.
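skeptical scientist's footnote can be verified symbolically. A short SymPy sketch (names ours) that differentiates the candidates and then solves the ODE outright:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# The trivial solution f = 0 and nontrivial solutions of f'' = -f,
# including a linear combination of sin and cos.
for cand in (sp.Integer(0), sp.sin(x), sp.cos(x), 2*sp.sin(x) + 3*sp.cos(x)):
    assert sp.simplify(cand.diff(x, 2) + cand) == 0

# The general solution: f(x) = C1*sin(x) + C2*cos(x)
print(sp.dsolve(sp.Eq(f(x).diff(x, 2), -f(x))))
```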
The lecturer tried in vain to prove the statement for an hour, until his time was up. The next week he came back and announced that it was trivial.

Postby gmalivuk » Wed Mar 03, 2010 5:14 pm UTC

skeptical scientist wrote: Equally valid, but incorrect.

Yeah, the "I could justify any answer" argument doesn't hold much water. I mean, the same could be said of learning a language as a baby: you only get finite input, so logically there are an infinite number of possible generalizations. And yet, just about every child that has ever lived has managed to pick the single "right" generalization that results in speaking intelligible, grammatical sentences.

Postby Cleverbeans » Wed Mar 03, 2010 6:50 pm UTC

Torn Apart By Dingos wrote: which really ground my gears. Don't change the meaning of the equality sign!

Why not? Equivalence relations are all over the place, and it clutters things up to use a different symbol. Do you object to the use of "=" for natural, rational, real and complex numbers even though the relation changes? It's the most natural symbol to use when attempting to communicate the notion of "sameness" to a non-technical audience, just as "+" communicates the idea of a binary operation in the most straightforward manner.

Postby Torn Apart By Dingos » Wed Mar 03, 2010 7:45 pm UTC

I object to it in this case precisely because it's not an equivalence relation.

Postby Torn Apart By Dingos » Thu Mar 04, 2010 9:59 pm UTC

Cleverbeans wrote: So the objection is to the use of "=" as an assignment operator? It's used that way in most programming languages, and I think statements like "Solve for f(x) given x=5" are common enough to justify using it this way for a non-technical audience. If it's expressive and communicates the correct idea, I have no objection to the abuse of any common token for personal amusement.

No. You've misunderstood my example. The particular puzzle I had in mind (which I've seen posted on this forum, but I'm not going to try to search for it) basically asked you to find f(n), given that f(1)=4, f(2)=12, f(n-1)=3, which the OP wrote as n-1=3, n=? This is not an equivalence relation, and it's not assignment. Writing it this way doesn't help the reader; it is just confusing and perpetuates a bad understanding of the equality sign. To go off on a tangent, I don't like the use of = as an assignment operator, nor as a replacement for "is/are" (like "me = happy"). This doesn't make a difference though, because it's so commonplace I use it myself (the latter in the form of f(x)=O(x)).

Postby skeptical scientist » Thu Mar 04, 2010 11:41 pm UTC

Torn Apart By Dingos wrote: This is not an equivalence relation, and it's not assignment. Writing it this way doesn't help the reader; it is just confusing and perpetuates a bad understanding of the equality sign.

Seriously?
Do you think anyone is confused about what equality is because of an unusual use in a puzzle? Do you really think that it "perpetuates a bad understanding of the equality sign" that some guy used it in an abnormal way as part of a riddle? Because to me this seems a tad hyperbolic.

Postby SWGlassPit » Fri Mar 05, 2010 1:02 am UTC

Wow. Brain teaser puzzles. Serious business.

Postby Torn Apart By Dingos » Fri Mar 05, 2010 11:41 pm UTC

Some students have a horrible understanding of equality, and I think this is because they've been exposed to improper use of it too many times. Seeing it used incorrectly in faux-mathematical settings should only make things worse. I don't know why these semantic discussions always arise on this forum. To me, the notation is initially confusing, even though I easily understand its intent, and to non-mathematically trained people, it might actually be more intuitive. This does not change the fact that it is horrible notation and could've been written in a much better way.

Postby skeptical scientist » Sat Mar 06, 2010 5:01 am UTC

Well, I have no idea what you mean by students having a horrible understanding of equality. In my experience students understand equality just fine. The one thing I sometimes see is students using equals signs in places they shouldn't, like for the next step in an algorithm they've been taught, or other manipulations. Generally these are the manipulations they should be doing to solve the problem, but they want to use some symbol between steps, and aren't sure what to use, so they use equals signs. This is certainly a bad habit that I try to break students of, but I don't think it shows that they don't understand the concept of equality, but rather that they misuse the symbol as they have a limited mathematical vocabulary. Is that what you are referring to? In that case, I think the solution is to suggest some other more appropriate symbol (perhaps arrows, or short explanations using actual English words). In my experience, this usually solves the problem (because students really do understand equality!)

Postby Torn Apart By Dingos » Sat Mar 06, 2010 9:27 am UTC

Actually that was what I was referring to. Writing "a=b=2a=2b" (equality sign between equations) or "a=>2a/2" (arrow where there should be an equality sign). I've seen worse, but admittedly it's rare. You're right that it doesn't mean they don't understand equality, but I'd like to see the equality sign used correctly when it is used.

Postby kernelpanic » Sat Mar 06, 2010 1:55 pm UTC

Torn Apart By Dingos wrote: Actually that was what I was referring to. Writing "a=b=2a=2b" (equality sign between equations) or "a=>2a/2" (arrow where there should be an equality sign). I've seen worse, but admittedly it's rare. You're right that it doesn't mean they don't understand equality, but I'd like to see the equality sign used correctly when it is used.

That bothers me too. Could we be considered mathematical notation nazis? On a related note, I am now founding the mathematical notation national socialist party.
Stochastic modelling of infectious diseases for heterogeneous populations

Rui-Xing Ming1, Jiming Liu2, William K. W. Cheung2 & Xiang Wan3

Infectious Diseases of Poverty volume 5, Article number: 107 (2016)

The Erratum to this article has been published in Infectious Diseases of Poverty 2017 6:50

Infectious diseases such as SARS and H1N1 can significantly impact people's lives and cause severe social and economic damage. Recent outbreaks have stressed the urgency of effective research on the dynamics of infectious disease spread. However, it is difficult to predict when and where outbreaks may emerge and how infectious diseases spread, because many factors affect their transmission and some of them may be unknown. One feasible means to promptly detect an outbreak and track the progress of disease spread is to implement surveillance systems in regional or national health and medical centres. The accumulated surveillance data, including temporal, spatial, clinical, and demographic information, can be exploited to better understand and model the dynamics of infectious disease spread. The aim of this work is to develop and empirically evaluate a stochastic model that allows the investigation of transmission patterns of infectious diseases in heterogeneous populations. We test the proposed model on simulation data and apply it to the surveillance data from the 2009 H1N1 pandemic in Hong Kong. In the simulation experiment, our model achieves high accuracy in parameter estimation (less than 10.0 % mean absolute percentage error). In terms of the forward prediction of case incidence, the mean absolute percentage errors are 17.3 % for the simulation experiment and 20.0 % for the experiment on the real surveillance data. We propose a stochastic model to study the dynamics of infectious disease spread in heterogeneous populations from temporal-spatial surveillance data. The proposed model is evaluated using both simulated data and the real data from the 2009 H1N1 epidemic in Hong Kong and achieves acceptable prediction accuracy. We believe that our model can provide valuable insights for public health authorities to predict the effect of disease spread and analyse its underlying factors, and to guide new control efforts.

Infectious diseases remain a major cause of morbidity and mortality worldwide, triggering immeasurable loss in many societies. Most people may still have a fresh memory of the H1N1 outbreak in 2009, which brought pictures of empty streets and people wearing face masks and collectively caused at least 12,799 deaths according to the World Health Organization (WHO) report [1]. The H1N1 pandemic calls for research on accurately modelling the spread dynamics of an infectious disease, which offers a practically useful means for policy makers to evaluate the potential effects of intervention strategies [2–4]. Mathematical models of the spread of infectious diseases are an important tool for investigating and quantifying the spread dynamics, because direct experimental study on the spread of disease among humans is not ethical. Although the subjects involved in different epidemics may be different, many can be modelled by the popular Susceptible-Infected-Recovered (SIR) models [5–7], which study the spread of infectious diseases by tracking the number (S) of people susceptible to the disease, the number (I) of people infected with the disease, and the number (R) of people who have recovered from the disease.
Three assumptions are made: (1) the total population N=S(t)+I(t)+R(t) is fixed at any time t; (2) those who have recovered from the disease are forever immune; and (3) those who have not had the disease are equally susceptible, and the probability of their contracting the disease at time t is proportional to the product of S(t) and I(t). Based on these assumptions, the SIR model defines a set of three ordinary differential equations for S(t), I(t), and R(t): $$\begin{array}{@{}rcl@{}} {dS}/{dt} & = & - \beta S(t) I(t) \\ {dI}/{dt} & = & \beta S(t)I(t) - k I(t) \\ {dR}/{dt} & = & k I(t). \end{array} $$ Here, β≥0 is the effective transmission rate and k≥0 is the recovery rate. Because the SIR-based models are well presented in the literature, herein we omit a verbose introduction of these models. Readers with an interest in such a topic can find the details in [5–7]. The SIR-based models and their variants have proven to be quite useful in the study of the spread dynamics of infectious diseases [8–10]. In [11–13], the progression of disease spread is characterized by tracking the number of susceptibles \(S_{t}\) with a chain binomial model. The number of susceptible members \(S_{t+\triangle t}\) (\(\triangle t\) represents the infectious period of the disease and is always chosen to be 1/k) at time \(t+\triangle t\) is a binomial random variable that depends on \(S_{t}\) and \(I_{t}\alpha\): \(S_{t+\triangle t}\sim \mathrm{Bin}(S_{t},1-I_{t}\alpha)\), which provides a recursive relationship between \(S_{t+\triangle t}\) and \(S_{t}\) and produces a formal stochastic process. However, the power of these models is mainly limited to uniform and homogeneous populations or populations with infinite size and homogeneous interactions. In many cases, the actual spread of infectious diseases occurs in a diverse or dispersed population. To study the spread of infectious diseases in heterogeneous populations, people usually divide a population into subpopulations that differ from each other. Sub-populations can be determined on the basis of social, cultural, economic, demographic, and geographic factors. Next, besides the dynamics of the internal spread within a subpopulation, the transmission dynamics between subpopulations should also be considered in the study of epidemic spreading. Network-based epidemic modelling represents a popular approach for heterogeneous populations in which the nodes in the network correspond to sub-populations, and the links indicate the neighboring relationships. Many network-based models have been proposed, including patch models [14–16], distance-transmission models [17], and multi-group models [18, 19]. However, these models require knowledge of every individual (or host) and all relationships between individuals, which may not be achievable due to information privacy-related restrictions and the high cost of subject recruitment. To overcome the difficulties of collecting data, researchers have investigated several types of computer-generated networks in the context of disease spread in population-scale studies [20–24]. Grassberger first studied the dynamics of infectious diseases that propagate on regular networks using percolation theory [25]. Recent studies have revealed that many real-world networks, including social networks in which infectious diseases propagate, are either small-world [26] or scale-free [27] rather than regular or random, as previously thought [28].
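As a concrete illustration of the chain binomial recursion introduced above, the following is a minimal Python sketch; the parameter values are illustrative and not taken from any of the cited studies.

import numpy as np

rng = np.random.default_rng(0)

def chain_binomial(S0, I0, alpha, steps):
    # One step: each susceptible escapes infection with probability
    # 1 - I_t * alpha, so S_{t+dt} ~ Bin(S_t, 1 - I_t * alpha), and the
    # shortfall S_t - S_{t+dt} is the new generation of infecteds.
    S, I = S0, I0
    history = [(S, I)]
    for _ in range(steps):
        S_next = rng.binomial(S, max(0.0, 1.0 - I * alpha))
        S, I = S_next, S - S_next
        history.append((S, I))
    return history

for S, I in chain_binomial(S0=1000, I0=5, alpha=0.002, steps=10):
    print(S, I)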
Because the underlying structures of networks will influence the effect that the dynamics of epidemics will have on them, researchers, such as Pastor-Satorras and Vespignani, have made many contributions to critical value analysis of typical epidemics on different types of complex networks [23, 24, 29]. On the basis of the mean-field theory, they found that compared with homogeneous networks, scale-free networks are fragile to the invasion of infectious diseases, computer viruses, or any other type of negative epidemic. Epidemics have also been studied in various disciplines. Sociologists are concerned with the diffusion of rumors or innovation on social networks [30]; economists have studied viral marketing and recommendation strategies by considering both cascading dynamics and the network effects of vital nodes [31]; and computer scientists are interested in how some topics can quickly cascade in virtual blog spaces and in the trends of their propagation [32, 33]. Although network-based studies have contributed to the modelling of disease and/or information dynamics, some models make a strong assumption that the structures of underlying networks over which epidemics spread are known beforehand. In the real world, however, the structures of underlying diffusion networks are not known directly. Many others assume the availability of information about the interactions occurring between individuals [34–37], an assumption that is often not valid in the context of disease spread. What may be obtained is only the time at which particular sub-populations become infected, but not how they become infected, nor how they affect their neighboring areas. Moreover, the underlying structures of networks will greatly influence the dynamics of infectious disease spread. Since the emergence of the H1N1 influenza pandemic in April 2009, its underlying dynamics have been of great public health interest, and many approaches for its study have been proposed [14, 38–41]. Most of them are based on the classic SIR model. For example, Birrell et al. [40] provided an age structure-based compartmental model with a Bayesian synthesis of multiple evidence sources to reveal substantial changes in contact patterns throughout the epidemic. Besides the compartmental models, other mathematical models are also used to describe the transmission dynamics [3, 42–47]. The chain binomial model was used to calculate the household secondary attack rates to measure the transmissibility of the 2009 H1N1 influenza pandemic by Lessler et al. [44] and Klick et al. [45]. Yang et al. [46] constructed a model based on chains of infections and used the infection hazard function and survival function to study the 2009 H1N1 influenza pandemic. Ferguson et al. [3] and Cauchemez et al. [42, 43] incorporated other factors, such as household risk, within-school risk, and community risk, in the study of infection spread, and found that younger age groups (under 19 years old) were more susceptible than older age groups. Jin et al. [47] formulated an epidemic model of influenza A based on networks, calculated the basic reproduction number, and studied the effects of various immunization schemes. However, this work required that the individual contact pattern be provided. Nonetheless, none of the aforementioned approaches takes spatial heterogeneity into consideration in the study of disease spread. Recently, an outbreak of Ebola virus disease (EVD) swept across parts of West Africa from March 2014 to April 2015.
By June 10, 2015, WHO had reported 27,237 confirmed, probable, or suspected cases in three countries with 11,158 deaths [48]. This epidemic attracted extensive research attention to its spread dynamics [49–57] (see the review article [58] for further references). To name a few, Chowell et al. found that district-level Ebola virus disease outbreaks in West Africa follow polynomial-based growth in time instead of the exponential growth that describes the progress of many infectious disease epidemics [52]. Fisman et al. used a simple, two-parameter mathematical model to characterize epidemic growth patterns in the 2014 Ebola outbreak [53]. Webb et al. proposed a variant of the classic SIR model with three extra groups (incubating, contaminated, and isolated), which can provide more accurate predictions of future incidence [56]. Carroll et al. used a deep sequencing approach to gain insight into the evolution of the Ebola virus (EBOV) in Guinea from the ongoing West African outbreak. The viral sequence data can be combined with epidemiological information to retrospectively test the effectiveness of control measures, and provides an unprecedented window into the evolution of an ongoing outbreak of viral haemorrhagic fever [57].

To accurately predict when and where outbreaks will occur, a feasible means is to deploy manual or electronic surveillance systems through regional or national public health and medical organizations [59]. Most of the surveillance data accumulated from such systems contains temporal, spatial, clinical, and demographic information. For instance, Telehealth Ontario is a teletriage helpline that is available free to all Ontario residents, which allows those with suspected infections to connect with experts who can assess their symptoms. The records of such calls provide valuable information on who was possibly infected, where, by which type of disease, and at what time. In this paper, we address the problem of modelling disease spread dynamics in heterogeneous populations from temporal-spatial surveillance data. We analyse the role of heterogeneity in a stochastic epidemic model on a two-dimensional lattice. Within a particular sub-population, the speed of spread is controlled by a single parameter, the transmissibility of the pathogen between individuals. Between sub-populations, the transmissibility becomes a random variable drawn from a probability distribution. Our work differs from existing studies in some fundamental ways, in light of the unique nature of infectious disease diffusion dynamics. Our results have practical implications for the analysis of disease control strategies in realistic heterogeneous epidemic systems.

In this work, we propose a stochastic model to study the dynamics of infectious disease spread in heterogeneous populations from temporal-spatial surveillance data. We divide the whole population into m sub-populations on the basis of geographic regions. In the following, we use \(S_{i}(t)\), \(I_{i}(t)\), and \(R_{i}(t)\) to denote the number of susceptible, infected, and recovered people, respectively, at time t in region i, \(i=1,2,\cdots,m\) and \(t\in[0,T]\).

Stochastic model

Classic SIR-based modelling of infectious diseases assumes that the population is well-mixed. To take the role of heterogeneity into consideration, we use an alternative approach to model the dynamics of infectious disease spread. First, the classic SIR model (Eq. (1)) studies the change in the numbers of people in the three groups.
In reality, the change in the number of infected people is the major concern of society. Second, in many epidemics or pandemics such as H1N1 and SARS, the number of infected people \(I_{i}(t)\) is relatively small compared to the whole subpopulation \(S_{i}(t)\). Therefore, we may consider \(S_{i}(t)\) as a constant to simplify the modelling of the change in the number of infected people \(I_{i}(t)\), for which we propose the following stochastic differential equation: $$\begin{array}{@{}rcl@{}} \mathrm{d}I_{i}(t)=(\alpha+\delta_{i}I_{i}(t))\mathrm{d}t+\sigma_{i}\mathrm{d}B_{i}(t), \end{array} $$ where α is a parameter that measures the auto-recovery rate of one particular infectious disease, which is usually considered as a constant among sub-populations, \(\delta_{i}\) is the parameter that measures the different disease transmissibility in different subpopulations, \(\sigma_{i}>0\) is the diffusion parameter that measures the disease spread from neighbors, and \(B_{i}(t)\) is a standard Brownian motion. It is worth noting that we assume the parameter \(\delta_{i}\neq 0\) for technical purposes; the results in the case of \(\delta_{i}=0\) can be obtained by letting \(\delta_{i}\to 0\). Comparing our model in Eq. (2) with the classic model in Eq. (1), we can see that they both capture the situation in which the change in the number of infected people has a positive relationship with the total number of infected people, which means that the more infected people there are, the more people will get infected. There are two key differences between these two models: first, the key factor \((\beta S_{i}(t)-k)\) associated with the disease spread in Eq. (1) is replaced with a single parameter \(\delta_{i}\) in Eq. (2), which can be used to analyse the role of heterogeneity in the disease spread; and second, Eq. (2) takes the neighboring relationships into consideration to study the dynamics of the disease spread among different sub-populations. By Itô's formula, the solution of Eq. (2) is given by $$\begin{array}{@{}rcl@{}} &I_{i}(t)=I_{i}(0)e^{\delta_{i}t}+\frac{\alpha}{\delta_{i}}(e^{\delta_{i}t}-1)\\ &\quad+\sigma_{i} e^{\delta_{i}t}{\int_{0}^{t}}e^{-\delta_{i}s}\mathrm{d}B_{i}(s). \end{array} $$ Notice that for any fixed t, \({\int_{0}^{t}}e^{-\delta_{i}s}\mathrm{d}B_{i}(s)\) is a normal random variable with $$\begin{array}{@{}rcl@{}} E\left[\int_{0}^{t}e^{-\delta_{i}s}\mathrm{d}B_{i}(s)\right]=0, \\ Var\left[\int_{0}^{t}e^{-\delta_{i}s}\mathrm{d}B_{i}(s)\right]=\frac{1-e^{-2\delta_{i}t}}{2\delta_{i}}. \end{array} $$ Thus, for any fixed t, \(I_{i}(t)\) is a normal random variable with $$\begin{array}{@{}rcl@{}} E[I_{i}(t)]=I_{i}(0)e^{\delta_{i}t}+\frac{\alpha}{\delta_{i}}\left(e^{\delta_{i}t}-1\right) \end{array} $$ $$\begin{array}{@{}rcl@{}} Var[I_{i}(t)]=\frac{{\sigma_{i}^{2}}}{2\delta_{i}}\left(e^{2\delta_{i}t}-1\right). \end{array} $$ There are three cases of interest for the parameter α:

\(\alpha>-I_{i}(0)\delta_{i}\): In this case, \(E[I_{i}(t)]\) tends to infinity as t goes to infinity, which implies that all people in that region will be infected if the time is long enough.

\(\alpha=-I_{i}(0)\delta_{i}\): In this case, the pandemic or epidemic will reach a state of equilibrium.

\(\alpha<-I_{i}(0)\delta_{i}\): In this case, \(E[I_{i}(t)]\) will reach 0 at some time \(t = \hat{t}\) and go to negative infinity as t goes to infinity, which implies the pandemic or epidemic will end at time \(\hat{t}\).

To estimate the parameters in our proposed stochastic model from the surveillance data, we need to divide the interval [0,T] into n subintervals, \([t_{0},t_{1}]\), \([t_{1},t_{2}]\), \(\cdots\), \([t_{n-1},t_{n}]\), where \(0=t_{0}<t_{1}<t_{2}<\cdots<t_{n}=T\).
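Before turning to estimation, note that Eq. (2) can be simulated directly: over a small step the increment is Gaussian, which coincides with the discretized transition law used below (Eq. (7)). A minimal Python sketch follows; the parameter values are illustrative only, and the helper name simulate_sde is ours, not from the paper.

import numpy as np

rng = np.random.default_rng(1)

def simulate_sde(I0, alpha, delta, sigma, T=100, dt=1.0):
    # Euler-Maruyama for dI = (alpha + delta*I) dt + sigma dB; each step
    # draws the Gaussian increment N((alpha + delta*I) dt, sigma^2 dt)
    # that the paper assumes in its discretization.
    n = int(T / dt)
    I = np.empty(n + 1)
    I[0] = I0
    for k in range(n):
        I[k + 1] = I[k] + (alpha + delta * I[k]) * dt \
                   + sigma * rng.normal(0.0, np.sqrt(dt))
    return I

path = simulate_sde(I0=5.0, alpha=0.07, delta=0.05, sigma=0.05)
# Closed-form mean E[I(t)] = I(0) e^{delta t} + (alpha/delta)(e^{delta t} - 1)
mean_T = 5.0 * np.exp(0.05 * 100) + (0.07 / 0.05) * (np.exp(0.05 * 100) - 1.0)
print(path[-1], mean_T)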
Denote \(\triangle t(k)=t_{k+1}-t_{k}\), \(\triangle B_{i}(k)=B_{i}(t_{k+1})-B_{i}(t_{k})\), \(\triangle I_{i}(k)=I_{i}(t_{k+1})-I_{i}(t_{k})\), \(k=0,1,\cdots,n-1\). Then Eq. (2) is rewritten as $$\begin{array}{@{}rcl@{}} \triangle I_{i}(k)=(\alpha+\delta_{i}I_{i}(t_{k}))\triangle t(k)+\sigma_{i}\triangle B_{i}(k). \end{array} $$ It is easy to see that \(\triangle I_{i}(k)|I_{i}(t_{k})\sim N\big ((\alpha +\delta _{i}I_{i}(t_{k}))\triangle t(k),{\sigma _{i}^{2}}\triangle t(k)\big)\). Let \(\theta_{i}=(\alpha,\delta_{i},\sigma_{i})\). Then the transition density of the process \(\{I_{i}(t);t\geq 0\}\) is $$\begin{array}{@{}rcl@{}} &&p_{\theta_{i}}(s+t,y|s,x)\\ &=&\frac{1}{\sqrt{2\pi t{\sigma_{i}^{2}}}}\exp\left\{-\frac{(y-x-(\alpha+\delta_{i}x)t)^{2}}{2t{\sigma_{i}^{2}}}\right\}. \end{array} $$ Hence, the likelihood function is given by $$\begin{array}{@{}rcl@{}} &&f(\theta_{i}|I_{i})\triangleq f\left(\theta_{i}|I_{i}(t_{k}),0\leq k\leq n-1\right)\\&=&I_{i}(0)\left(\frac{1}{2{\pi\sigma_{i}^{2}}}\right)^{\frac{n}{2}}\prod\limits_{k=0}^{n-1}\frac{1}{\triangle t(k)} \\ &&\exp\left\{-\frac{(\triangle I_{i}(k)-(\alpha+\delta_{i}I_{i}(t_{k}))\triangle t(k))^{2}}{2\triangle t(k){\sigma_{i}^{2}}}\right\}.\\ \end{array} $$ Consequently, the log-likelihood function is $$\begin{array}{@{}rcl@{}} &&\log f(\theta_{i}|I_{i})\varpropto-\frac{n}{2}{\log\sigma_{i}^{2}}\\ &&-\sum\limits_{k=0}^{n-1}\frac{\left(\triangle I_{i}(k)-(\alpha+\delta_{i}I_{i}(t_{k}))\triangle t(k)\right)^{2}}{2\triangle t(k){\sigma_{i}^{2}}}. \end{array} $$ Define $$\begin{array}{@{}rcl@{}} u_{i1}&=&\frac{1}{T}\sum\limits_{k=0}^{n-1}I_{i}(t_{k}), \end{array} $$ $$\begin{array}{@{}rcl@{}} u_{i2}&=&\frac{1}{T}\sum\limits_{k=0}^{n-1}I_{i}(t_{k+1}), \end{array} $$ $$\begin{array}{@{}rcl@{}} u_{i11}&=&\frac{1}{T}\sum\limits_{k=0}^{n-1}{I_{i}^{2}}(t_{k}), \end{array} $$ $$\begin{array}{@{}rcl@{}} u_{i12}&=&\frac{1}{T}\sum\limits_{k=0}^{n-1}I_{i}(t_{k})I_{i}(t_{k+1}), \end{array} $$ $$\begin{array}{@{}rcl@{}} u_{i1\Delta}&=&\frac{1}{T}\sum\limits_{k=0}^{n-1}I_{i}(t_{k})\triangle t(k), \end{array} $$ $$\begin{array}{@{}rcl@{}} u_{i11\Delta}&=&\frac{1}{T}\sum\limits_{k=0}^{n-1}{I_{i}^{2}}(t_{k})\triangle t(k), \end{array} $$ $$\begin{array}{@{}rcl@{}} u_{i11\Delta^{-1}}&=&\frac{1}{T}\sum\limits_{k=0}^{n-1}{I_{i}^{2}}(t_{k})(\triangle t(k))^{-1}, \end{array} $$ $$\begin{array}{@{}rcl@{}} u_{i12\Delta^{-1}}&=&\frac{1}{T}\sum\limits_{k=0}^{n-1}I_{i}(t_{k})I_{i}(t_{k+1})(\triangle t(k))^{-1}, \end{array} $$ $$\begin{array}{@{}rcl@{}} u_{i22\Delta^{-1}}&=&\frac{1}{T}\sum\limits_{k=0}^{n-1}{I_{i}^{2}}(t_{k+1})(\triangle t(k))^{-1}. \end{array} $$ We have the estimator of \(\theta_{i}\) as follows: $$\begin{array}{@{}rcl@{}} \widehat{\delta}_{i}&=&\frac{u_{i12}-u_{i11}-u_{i2}u_{i1\Delta}+u_{i1}u_{i1\Delta}}{u_{i11\Delta}-u_{i1\Delta}^{2}}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \widehat{\alpha}&=&u_{i2}-u_{i1}-\widehat{\delta}_{i}u_{i1\Delta}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \widehat{{\sigma^{2}_{i}}}&=&Tn^{-1}\left\{u_{i22\Delta^{-1}}-2u_{i12\Delta^{-1}}+u_{i11\Delta^{-1}}\right.\\ && -(u_{i2}-u_{i1})^{2}+\left[({u_{i2}}-{u_{i1}})u_{i1\Delta}\right.\\ && \left.\left.-(u_{i12}-u_{i11})\right]\widehat{\delta}_{i}\right\}. \end{array} $$ It is obvious that \(\widehat{\alpha}\) is not a bona fide estimator of α, because only the information in \(\{I_{i}(t);0\leq t\leq T\}\) is used to estimate α. A good estimator should pool all the information \(\{I_{i}(t);0\leq t\leq T\}\) \((i=1,2,\cdots,m)\). There are two ways to find the pooled estimator.
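The estimators in Eqs. (20)–(22) are closed-form, so they can be transcribed directly into code. A sketch, assuming I is the array of observed counts \(I_{i}(t_{0}),\ldots,I_{i}(t_{n})\) for one region and t the corresponding observation times (the function name estimate_params is ours):

import numpy as np

def estimate_params(I, t):
    # Direct transcription of Eqs. (20)-(22); returns (delta_hat, alpha_hat,
    # sigma2_hat) for a single region from one observed path.
    T = t[-1] - t[0]
    n = len(I) - 1
    dt = np.diff(t)
    Ik, Ik1 = I[:-1], I[1:]
    u1 = Ik.sum() / T
    u2 = Ik1.sum() / T
    u11 = (Ik**2).sum() / T
    u12 = (Ik * Ik1).sum() / T
    u1d = (Ik * dt).sum() / T
    u11d = (Ik**2 * dt).sum() / T
    u11i = (Ik**2 / dt).sum() / T
    u12i = (Ik * Ik1 / dt).sum() / T
    u22i = (Ik1**2 / dt).sum() / T
    delta = (u12 - u11 - u2 * u1d + u1 * u1d) / (u11d - u1d**2)
    alpha = u2 - u1 - delta * u1d
    sigma2 = (T / n) * (u22i - 2 * u12i + u11i - (u2 - u1)**2
                        + ((u2 - u1) * u1d - (u12 - u11)) * delta)
    return delta, alpha, sigma2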
The first way is to approximate α by pooling all \(\widehat{\alpha}_{i}\) as follows: $$\begin{array}{@{}rcl@{}} \widehat{\alpha}&=&m^{-1}\sum\limits_{i=1}^{m}\widehat{\alpha_{i}}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \widehat{\alpha_{i}}&=& u_{i2}-u_{i1}-\widehat{\delta}_{i}u_{i1\Delta}. \end{array} $$ But the issue in Eq. (23) is that m must be very large in order to achieve an accurate estimate of α. In this work, we choose the second way, which is maximum likelihood estimation. To do so, we need to assume that the processes \(\{I_{i}(t);0\leq t\leq T\}\) \((i=1,2,\cdots,m)\) are mutually independent. Then the log-likelihood function of \(\{I_{i}(t);0\leq t\leq T\}\) \((i=1,2,\cdots,m)\) is given by $$\begin{array}{@{}rcl@{}} &&\sum\limits_{i=1}^{m}\log f(\alpha|I_{i})\varpropto\\ &&-\sum\limits_{i=1}^{m}\sum\limits_{k=0}^{n-1}\frac{(\triangle I_{i}(k)-(\alpha+\widehat{\delta}_{i}I_{i}(t_{k}))\triangle t(k))^{2}}{2\triangle t(k)\widehat{{\sigma_{i}^{2}}}}. \end{array} $$ The maximum likelihood estimate is $$\begin{array}{@{}rcl@{}} \widetilde{\alpha}=\sum\limits_{i=1}^{m}\omega_{i}\widehat{\alpha}_{i}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \omega_{j}&=&\frac{\widehat{{\sigma_{j}^{2}}}^{-1}}{\sum\limits_{i=1}^{m}\sum\limits_{k=0}^{n-1}\widehat{{\sigma_{i}^{2}}}^{-1}},\,\,j=1,2,\cdots,m, \end{array} $$ where \(\widehat{\alpha}_{i}\) is defined in Eq. (24).

In this section, we illustrate the performance of our proposed model using both simulated and real data.

Simulation study

In the simulation study, we examine the performance of our proposed model with respect to the accuracy of parameter estimation and the forward prediction of the case incidence. First, we generate data using various parameters by the following steps:

1. Set m=4 (the number of sub-populations) and T=100 (the number of time slots). These two numbers are chosen arbitrarily.

2. Randomly draw α from [0.05,0.09], \(\delta_{i}\) from [0.02,0.08], and \(\sigma_{i}\) from [0.02,0.08].

3. Initialize \(I_{i}(0)\), \(1\leq i\leq m\).

4. Simulate \(I_{i}(k+1)=I_{i}(k)+\triangle I_{i}(k)\), \(k=0,1,\cdots,T-1\) using Eq. (7) and \(\triangle I_{i}(k)|I_{i}(t_{k})\sim N\left ((\alpha +\delta _{i}I_{i}(t_{k}))\triangle t(k),{\sigma _{i}^{2}}\triangle t(k)\right)\).

Three parameters, α, \(\delta_{i}\), and \(\sigma_{i}\), in Eq. (2) will be estimated from the simulated data. We conduct 100 replicates by repeating Steps 2–4 and compare the estimated ones, \(\hat{\alpha}\), \(\hat{\delta_{i}}\), and \(\hat{\sigma_{i}}\), with the ground truth values in terms of the mean absolute percentage error (MAPE) defined as: $$\begin{array}{@{}rcl@{}} E_{\alpha} = \frac{1}{100}\sum\limits_{j=1}^{100}\left|\frac{\hat{\alpha}_{j} - \alpha_{j}}{\alpha_{j}}\right|, \end{array} $$ $$\begin{array}{@{}rcl@{}} E_{\delta} = \frac{1}{100*m}\sum\limits_{i=0}^{m-1}\sum\limits_{j=1}^{100}\left|\frac{\hat{\delta}_{ij} - \delta_{ij}}{\delta_{ij}}\right|, \end{array} $$ $$\begin{array}{@{}rcl@{}} E_{\sigma} = \frac{1}{100*m}\sum\limits_{i=0}^{m-1}\sum\limits_{j=1}^{100}\left|\frac{\hat{\sigma}_{ij} - \sigma_{ij}}{\sigma_{ij}}\right|. \end{array} $$ The mean absolute percentage errors (MAPEs) for \(E_{\alpha}\), \(E_{\delta}\), and \(E_{\sigma}\) are 10.0 %, 6.0 %, and 10.0 %, respectively. We plot the distribution of the estimated errors for 100 replicates for \(\hat{\alpha}\), \(\hat{\delta_{i}}\), and \(\hat{\sigma_{i}}\) in Fig. 1. From Fig. 1, we can see that both the estimates of \(\hat{\alpha}\) and \(\hat{\sigma_{i}}\) have small variations. The variation of the estimate of \(\hat{\delta_{i}}\) is slightly larger but is still acceptable and is due to the uncertainty embedded in the stochastic process.
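A sketch of one replicate of the experiment just described, combining the exact Gaussian increments of Eq. (7) with the estimate_params sketch given above. The parameter ranges follow Steps 1–2; everything else (initial value, seed) is illustrative.

import numpy as np

rng = np.random.default_rng(2)

def run_replicate(m=4, T=100):
    # Draw parameters, simulate each sub-population, re-estimate, and
    # return the per-replicate absolute percentage errors for delta and sigma.
    alpha = rng.uniform(0.05, 0.09)
    deltas = rng.uniform(0.02, 0.08, size=m)
    sigmas = rng.uniform(0.02, 0.08, size=m)
    t = np.arange(T + 1, dtype=float)
    err_d, err_s = [], []
    for d, s in zip(deltas, sigmas):
        I = np.empty(T + 1)
        I[0] = 5.0
        for k in range(T):
            I[k + 1] = I[k] + rng.normal(alpha + d * I[k], s)  # Eq. (7), dt = 1
        d_hat, a_hat, s2_hat = estimate_params(I, t)
        err_d.append(abs((d_hat - d) / d))
        err_s.append(abs((np.sqrt(max(s2_hat, 0.0)) - s) / s))
    return np.mean(err_d), np.mean(err_s)

# Averaging over 100 replicates approximates E_delta and E_sigma.
print(np.mean([run_replicate() for _ in range(100)], axis=0))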
We also use the estimated values of the parameters to generate the data and compare it with the simulated data generated using the ground truth values of the parameters. The correlation between them is 0.96. We randomly select one replicate and show the comparison results in Fig. 2. Basically, we can use the estimated parameters to accurately recover the ground truth data.

Fig. 1 The performance of parameter estimation. The mean absolute percentage errors for α, δ, and σ are 10.0, 6.0, and 10.0 %, respectively. α measures the auto-recovery rate of one particular infectious disease. δ measures the disease transmissibility within the population. σ measures the disease transmissibility between populations.

Fig. 2 The comparison of original data and estimated data for four regions. The x-axis represents the time in days. The y-axis represents the total number of confirmed cases. The original data is generated using the ground truth values of parameters, while the estimated data is generated with estimated values of parameters. The correlation between them is 0.96.

Next, we conduct an experiment to test the prediction accuracy of our model. Let us consider a sequence of data points \(I_{ij}(t)\) over a time interval [0,T] for the i-th subpopulation in the j-th replicate. We choose a time point s and use the data points \(I_{ij}[0],I_{ij}[1],\cdots,I_{ij}[s]\) as the training data and predict the data points \(I_{ij}(t)\) for \(s<t\leq T\). s=80 is chosen in this experiment. The MAPE of the prediction is defined as $$\begin{array}{@{}rcl@{}} E_{pre} = \frac{1}{100*20*m}\sum\limits_{j=1}^{100}\sum\limits_{i=0}^{m-1}\sum\limits_{t=81}^{100}\left|\frac{\hat{I}_{ij}[t] - I_{ij}[t]}{I_{ij}[t]}\right|. \end{array} $$ The MAPE of the prediction is 17.3 %, which indicates that our model can achieve around 82.7 % accuracy in terms of the prediction. Again, we randomly select one replicate and show the prediction results in Fig. 3.

Fig. 3 The prediction performance of our method using simulation data for four regions. The x-axis represents the time in days. The y-axis represents the total number of confirmed cases. The data for the first 80 days are used for training. The data for the last 20 days are used for testing. The mean absolute percentage error in testing is 17.3 %.

Real application

In the case study, we apply our model to the surveillance data from the 2009 H1N1 pandemic in Hong Kong. We acquired the time series data of the daily number of confirmed H1N1 cases with symptom onset from May 1, 2009 to May 23, 2010. The database includes 36,547 confirmed cases with demographic information on location, age, and sex along with the laboratory confirmation dates. The epidemic curve of confirmed H1N1 cases (see Fig. 4) reaches its peak at the end of September 2009, after which the intervention procedure comes into effect and the curve goes down. We use the data up to September 30, 2009, which comprises 27,898 cases (more than 2/3 of all cases). Hong Kong is geographically divided into 18 political areas (districts). Each district is considered as one sub-population in our proposed model. The time interval \(\triangle t(k)\) \((k=0,1,\cdots,n-1)\) of H1N1 is set to 1 day. The total number of days is 100.

Fig. 4 Daily H1N1 epidemic curve in Hong Kong from May 1, 2009 to May 23, 2010. The epidemic curve of confirmed H1N1 cases reaches its peak at the end of September 2009.
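The forward predictions used in both the simulation and the real-data experiments can be obtained from the closed-form mean in Eq. (5): conditioning on the last training point I(s), \(E[I(t)] = I(s)e^{\delta(t-s)}+(\alpha/\delta)(e^{\delta(t-s)}-1)\) for t > s. A minimal sketch, again relying on the estimate_params helper introduced above:

import numpy as np

def predict_mean(I_train, t_train, t_future):
    # Fit (delta, alpha) on the training segment, then forecast the
    # conditional mean E[I(t) | I(s)] for each future time in t_future.
    delta, alpha, _ = estimate_params(I_train, t_train)
    s, Is = t_train[-1], I_train[-1]
    h = np.asarray(t_future, dtype=float) - s
    return Is * np.exp(delta * h) + (alpha / delta) * (np.exp(delta * h) - 1.0)

def mape(pred, actual):
    # Mean absolute percentage error, as in the E_pre definition above.
    return float(np.mean(np.abs((pred - actual) / actual)))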
Figure 5 gives the effect of the different components for the 18 political areas in Hong Kong. From Fig. 5, we can see that the effect of δ, which measures the internal disease spread within each district, varies less than the effect of σ, which measures the external disease spread between districts. In general, the speed of internal disease spread is closely connected with the population density, and the external disease spread pattern is associated with the pattern of people's daily travel. It is well known that Hong Kong has one of the highest population densities in the world, and most districts are densely populated. However, it has a highly heterogeneous traffic pattern, with intensive traffic between districts every day. Therefore, the imported infections for each district account for a critical factor in the disease spread, whereas the internal effects only play a very small role in the progression of disease spread.

Fig. 5 The estimation of two factors in disease spread for 18 districts for the 2009 H1N1 pandemic in Hong Kong. δ measures the transmissibility of disease spread within each district. σ measures the transmissibility of disease spread from the neighbors of each region. This figure shows that the imported infections for each district account for a critical factor in the disease spread, while the internal effects only play a very small role in the progression of disease spread.

We also use the H1N1 data to test the prediction accuracy of our model. The MAPEs for all districts are shown in Fig. 6 and Table 1. The average prediction error is 20.0 %. We notice that the prediction error for the district "TSUEN WAN" is very high because the number of daily infections in this district changes suddenly during the epidemic period. Figure 7 shows the epidemic curves of the three regions with the lowest incidence rate. We can observe that between time slots 34 and 42, there is a sudden rise for the "TSUEN WAN" district. Such a change significantly affects the parameter estimation and thereby the prediction accuracy for the district "TSUEN WAN". Although the incidence rates of the other two districts are also low, their epidemic curves are relatively smooth in comparison with that of "TSUEN WAN", indicating that the prediction accuracies of these two districts are higher than that of the "TSUEN WAN" district.

Fig. 6 The prediction errors for 18 districts using the real H1N1 data. The mean absolute percentage error is 20.0 %.

Fig. 7 The epidemic curve of three districts with the lowest incidence rates. The x-axis represents the time starting from May 1, 2009 to May 23, 2010. Between time slots 34 and 42, there is a sudden rise for the "TSUEN WAN" district. Such a change significantly affects the parameter estimation and thereby the prediction accuracy for the district "TSUEN WAN".

Table 1 Prediction error for real data

Epidemic modelling offers a practical means for policy makers to evaluate the potential effects of intervention strategies. To do so, the accuracy of epidemic modelling with respect to the real-world disease transmission dynamics is essential and remains a challenging task due to the inaccessibility of many factors that affect the spread patterns of infectious diseases. In particular, heterogeneity should be taken into consideration when modelling the disease spread in non-random mixing populations.
Many methods have been proposed to deal with heterogeneity in the study of epidemic dynamics, mostly using network-based epidemic models in which nodes correspond to spatial locations with reported incidences over time, and the directional links indicate the probability of disease transmission from one node to another over time. However, it is very challenging to determine the network topology. Many studies have used a geographical topology, whereas others have used a mobility network inferred from the public transportation network or other sources. How to verify the inferred network topology is another challenging issue because the true epidemic network topology is unknown, and it may vary for different types of infectious diseases for the same population. Furthermore, the neighborhood effect estimation is non-trivial; it involves many parameters (a number polynomial in the number of nodes) and requires a large amount of data to avoid overfitting. Such data may not always be available for the inference of network topology. Therefore, in this work, we propose an alternative approach to investigate the spatial heterogeneity from temporal-spatial surveillance data without the inference of network topology. Our proposed model possesses several merits over previous works. First, it quantifies the role of heterogeneity in the analysis of the spread dynamics of infectious diseases in heterogeneous populations. Second, the parameter estimates can be computed very quickly. Therefore, the prediction and the corresponding intervention policies can be implemented without delay in an outbreak of infectious disease. We apply our model to both the simulated data and the real data from the 2009 H1N1 epidemic in Hong Kong and achieve acceptable prediction accuracy. Based on the study of disease diffusion, the model proposed in this work can be extended to study other propagation patterns such as the Internet and World Wide Web, through which individuals form multiple communities in which information can propagate in a manner similar to that of infectious disease. We believe that our work makes theoretical and empirical contributions in many areas. There are some limitations in our proposed stochastic model. First, it does not consider the epidemic network topology. However, how to infer such networks is another challenging task. To the best of our knowledge, the best way to do so is to use the contact data among some infected patients to verify the results, but such data are not always available and can be difficult to collect due to many issues (e.g., privacy). This issue may be addressed by using other types of data, such as daily commute data extracted from social networks. Second, our proposed model achieves a prediction accuracy of only around 80 %. We need to further improve it to allow its full use in real applications. Third, the proposed model is only suitable for the situation in which the susceptible population (or sub-population) maintains a relatively constant size and structure in a region. However, if the number of infected people in an epidemic is large or asymptomatic infection plays a central role (e.g., the malaria epidemic in Africa), the population factor should be taken into consideration in the model.
Moreover, for a highly spatially heterogeneous outbreak (e.g., the Ebola epidemic) in which cases may seem to disappear due to reduced transmission in one area while growth may continue or rise in new locales, our proposed model may have problems in capturing these opposite dynamics in different regions. Fourth, because the proposed model is based on the classic SIR model, it only works in the situation in which the number of infected people grows exponentially. We will investigate resolutions to these limitations in our future work. World Health Organization Pandemic (H1N1). 2009. http://www.who.int/csr/don/2010_01_08/en/index.html. Accessed 1 Aug 2010. Cohen J, Enserink M. As swine flu circles globe, scientists grapple with basic questions. Science. 2009; 324(5927):572–3. Ferguson NM, Cummings DA, Fraser C, Cajka JC, Cooley PC, Burke DS. Strategies for mitigating an influenza pandemic. Nature. 2006; 442(7101):448–52. Germann TC, Kadau K, Longini IM, Macken CA. Mitigation strategies for pandemic influenza in the United States. Proc Natl Acad Sci. 2006; 103(15):5935–940. Bailey NTJ, et al. The Mathematical Theory of Infectious Diseases and Its Applications. High Wycombe: Charles Griffin and Company Ltd; 1975. Anderson RM, May RM. Infectious Diseases of Humans vol. 1. Oxford: Oxford University Press; 1991. Heesterbeek J. Mathematical Epidemiology of Infectious Diseases: Model Building, Analysis and Interpretation. vol 5. New York City: Wiley; 2000. Li MY, Muldowney JS. Global stability for the SEIR model in epidemiology. Math Biosci. 1995; 125(2):155–64. Kuznetsov YA, Piccardi C. Bifurcation analysis of periodic SEIR and SIR epidemic models. J Math Biol. 1994; 32(2):109–21. Hethcote HW. Qualitative analyses of communicable disease models. Math Biosci. 1976; 28(3):335–56. Becker NG, Britton T. Statistical studies of infectious disease incidence. J R Stat Soc Series B Stat Methodol. 2002; 61(2):287–307. Ferrari MJ, Bjørnstad ON, Dobson AP. Estimation and inference of R0 of an infectious pathogen by a removal method. Math Biosci. 2005; 198(1):14–26. Allen L. An introduction to stochastic epidemic models. Math Epidemiol. 2008; 1945:81–130. Cooper BS, Pitman RJ, Edmunds WJ, Gay NJ, et al. Delaying the international spread of pandemic influenza. PLoS Med. 2006; 3(6):212. Hufnagel L, Brockmann D, Geisel T. Forecast and control of epidemics in a globalized world. Proc Natl Acad Sci U S A. 2004; 101(42):15124–15129. Hollingsworth TD, Ferguson NM, Anderson RM. Will travel restrictions control the international spread of pandemic influenza?Nat Med. 2006; 12(5):497–9. Keeling MJ, Woolhouse ME, Shaw DJ, Matthews L, Chase-Topping M, Haydon DT, Cornell SJ, Kappey J, Wilesmith J, Grenfell BT. Dynamics of the 2001 UK foot and mouth epidemic: stochastic dispersal in a heterogeneous landscape. Science. 2001; 294(5543):813–7. Ferguson NM, Cummings DA, Cauchemez S, Fraser C, Riley S, Meeyai A, Iamsirithaworn S, Burke DS. Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature. 2005; 437(7056):209–14. Longini IM, Nizam A, Xu S, Ungchusak K, Hanshaoworakul W, Cummings DA, Halloran ME. Containing pandemic influenza at the source. Science. 2005; 309(5737):1083–1087. Pastor-Satorras R, Vespignani A. Epidemic dynamics and endemic states in complex networks. Phys Rev E. 2001; 63(6):066117. Pastor-Satorras R, Vespignani A. Epidemics and immunization in scale-free networks. 2002. arXiv preprint cond-mat/0205260. Kuperman M, Abramson G. Small world effect in an epidemiological model. 
Phys Rev Lett. 2001; 86(13):2909–912. Newman ME, Jensen I, Ziff R. Percolation and epidemics in a two-dimensional small world. Phys Rev E. 2002; 65(2):021904. Boguná M, Pastor-Satorras R, Vespignani A. Absence of epidemic threshold in scale-free networks with degree correlations. Phys Rev Lett. 2003; 90(2):028701. Grassberger P. On the critical behavior of the general epidemic process and dynamical percolation. Math Biosci. 1983; 63(2):157–72. Watts DJ, Strogatz SH. Collective dynamics of small-world networks. Nature. 1998; 393(6684):440–2. Barabási AL, Albert R. Emergence of scaling in random networks. Science. 1999; 286(5439):509–12. Erdős P, Rényi A. On the evolution of random graphs. Publ Math Inst Hungar Acad Sci. 1960; 5:17–61. Pastor-Satorras R, Vespignani A. Epidemics and immunization in scale-free networks. In: Handbook of graphs and networks: from the genome to the internet. Hoboken: Wiley Online Library: 2005. p. 111–130. Rogers EM. Diffusion of Innovations. New York: Simon and Schuster; 2010. Leskovec J, Adamic LA, Huberman BA. The dynamics of viral marketing. ACM T Web. 2007; 1(1):5. Kumar R, Novak J, Raghavan P, Tomkins A. On the bursty evolution of Blogspace. World Wide Web. 2005; 8(2):159–78. Leskovec J, McGlohon M, Faloutsos C, Glance NS, Hurst M. Patterns of cascading behavior in large blog graphs. In: SDM, vol. 7. SIAM: 2007. p. 551–6. Salathé M, Kazandjieva M, Lee JW, Levis P, Feldman MW, Jones JH. A high-resolution human contact network for infectious disease transmission. Proc Natl Acad Sci. 2010; 107(51):22020–2025. Stehlé J, Voirin N, Barrat A, Cattuto C, Isella L, Pinton JF, Quaggiotto M, Van den Broeck W, Régis C, Lina B, et al. High-resolution measurements of face-to-face contact patterns in a primary school. PLoS ONE. 2011; 6(8):23176. Dickison M, Havlin S, Stanley HE. Epidemics on interconnected networks. Phys Rev E. 2012; 85(6):066109. Hancock K, Veguilla V, Lu X, Zhong W, Butler EN, Sun H, Liu F, Dong L, DeVos JR, Gargiullo PM, et al. Cross-reactive antibody responses to the 2009 pandemic H1N1 influenza virus. N Engl J Med. 2009; 361(20):1945–1952. Riley S, Fraser C, Donnelly CA, Ghani AC, Abu-Raddad LJ, Hedley AJ, Leung GM, Ho LM, Lam TH, Thach TQ, et al. Transmission dynamics of the etiological agent of SARS in Hong Kong: impact of public health interventions. Science. 2003; 300(5627):1961–1966. Mills CE, Robins JM, Lipsitch M. Transmissibility of 1918 pandemic influenza. Nature. 2004; 432(7019):904–6. Birrell PJ, Ketsetzis G, Gay NJ, Cooper BS, Presanis AM, Harris RJ, Charlett A, Zhang XS, White PJ, Pebody RG, et al. Bayesian modelling to unmask and predict influenza A/H1N1pdm dynamics in London. Proc Natl Acad Sci. 2011; 108(45):18238–18243. Towers S, Geisse KV, Zheng Y, Feng Z. Antiviral treatment for pandemic influenza: Assessing potential repercussions using a seasonally forced SIR model. J Theor Biol. 2011; 289:259–68. Cauchemez S, Valleron AJ, Boelle PY, Flahault A, Ferguson NM. Estimating the impact of school closure on influenza transmission from sentinel data. Nature. 2008; 452(7188):750–4. Cauchemez S, Donnelly CA, Reed C, Ghani AC, Fraser C, Kent CK, Finelli L, Ferguson NM. Household transmission of 2009 pandemic influenza A (H1N1) virus in the United States. N Engl J Med. 2009; 361(27):2619–627. Lessler J, Reich NG, Cummings DA. Outbreak of 2009 pandemic influenza A (H1N1) at a New York city school. N Engl J Med. 2009; 361(27):2628–636. Klick B, Nishiura H, Ng S, Fang VJ, Leung GM, Peiris JM, Cowling BJ.
Transmissibility of seasonal and pandemic influenza in a cohort of households in Hong Kong in 2009. Epidemiology (Cambridge). 2011; 22(6):793. Yang Y, Sugimoto JD, Halloran ME, Basta NE, Chao DL, Matrajt L, Potter G, Kenah E, Longini IM. The transmissibility and control of pandemic influenza A (H1N1) virus. Science. 2009; 326(5953):729–33. Jin Z, Zhang J, Song LP, Sun GQ, Kan J, Zhu H. Modelling and analysis of influenza A (H1N1) on networks. BMC Public Health. 2011; 11(Suppl 1):9. World Health Organization. Ebola Situation Reports. 2015. Available at: http://apps.who.int/ebola/en/current-situation/ebolasituation-report. Accessed 29 June 2015. Camacho A, Kucharski A, Aki-Sawyerr Y, White MA, Flasche S, Baguelin M, Pollington T, Carney JR, Glover R, Smout E, et al. Temporal changes in Ebola transmission in Sierra Leone and implications for control requirements: a real-time modelling study. PLoS Curr. 2015; 7:PMC4339317. Chowell D, Castillo-Chavez C, Krishna S, Qiu X, Anderson KS. Modelling the effect of early detection of Ebola. Lancet Infect Dis. 2015; 15(2):148–9. Chowell G, Hengartner NW, Castillo-Chavez C, Fenimore PW, Hyman J. The basic reproductive number of Ebola and the effects of public health measures: the cases of Congo and Uganda. J Theor Biol. 2004; 229(1):119–26. Chowell G, Viboud C, Hyman JM, Simonsen L. The Western Africa Ebola virus disease epidemic exhibits both global exponential and local polynomial growth rates. PLoS Curr. 2015. doi:10.1371/currents.outbreaks.8b55f4bad99ac5c5db3663e916803261. Fisman D, Khoo E, Tuite A. Early epidemic dynamics of the West African 2014 Ebola outbreak: estimates derived with a simple two-parameter model. PLoS Curr. 2014. doi:10.1371/currents.outbreaks.89c0d3783f36958d96ebbae97348d571. Gomes MF, y Piontti AP, Rossi L, Chao D, Longini I, Halloran ME, Vespignani A. Assessing the international spreading risk associated with the 2014 West African Ebola outbreak. PLoS Curr. 2014. doi:10.1371/currents.outbreaks.cd818f63d40e24aef769dda7df9e0da5. Althaus CL. Estimating the reproduction number of Ebola virus (EBOV) during the 2014 outbreak in West Africa. PLoS Curr. 2014. doi:10.1371/currents.outbreaks.91afb5e0f279e7f29e7056095255b288. Webb G, Browne C, Huo X, Seydi O, Seydi M, Magal P. A model of the 2014 Ebola epidemic in West Africa with contact tracing. PLoS Curr. 2014. doi:10.1371/currents.outbreaks.846b2a31ef37018b7d1126a9c8adf22a. Carroll MW, Matthews DA, Hiscox JA, Elmore MJ, Pollakis G, Rambaut A, Hewson R, García-Dorival I, Bore JA, Koundouno R, et al. Temporal and spatial analysis of the 2014-2015 Ebola virus outbreak in West Africa. Nature. 2015; 524(7563):97–101. Chowell G, Nishiura H. Transmission dynamics and control of Ebola virus disease (EVD): a review. BMC Med. 2014; 12(1):196. van Dijk A, Aramini J, Edge G, Moore KM. Real-time surveillance for respiratory disease outbreaks, Ontario, Canada. Emerg Infect Diseases. 2009; 15(5):799.

This work is supported by Hong Kong Baptist University Strategic Development Fund and Hong Kong General Research Grant HKBU12202114. RXM, JML, and XW conceived and designed the experiments. RXM implemented the software. JML and XW analysed the data. All authors were involved in the manuscript preparation. All authors read and approved the final manuscript.

School of Statistics & Mathematics, Zhejiang Gongshang University, Hangzhou, China
Rui-Xing Ming
Department of Computer Science, Hong Kong Baptist University, Kowloon Tong, Hong Kong
Jiming Liu & William K. W.
Cheung
Department of Computer Science & Institute of Theoretical and Computational Study, Hong Kong Baptist University, Kowloon Tong, Hong Kong
Xiang Wan

Correspondence to Xiang Wan.

An erratum to this article is available at http://dx.doi.org/10.1186/s40249-017-0269-3.

Ming, RX., Liu, J., Cheung, W.K.W. et al. Stochastic modelling of infectious diseases for heterogeneous populations. Infect Dis Poverty 5, 107 (2016). https://doi.org/10.1186/s40249-016-0199-5
August 2016, 9(4): 1039-1068. doi: 10.3934/dcdss.2016041

Piecewise smooth systems near a co-dimension 2 discontinuity manifold: Can one say what should happen?

Luca Dieci 1 and Cinzia Elia 2

1. School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, United States
2. Dipartimento di Matematica, University of Bari, I-70125, Bari, Italy

Received August 2015 Revised March 2016 Published August 2016

In this work we attempt to understand what behavior one should expect of a solution trajectory near the co-dimension 2 discontinuity manifold $\Sigma$ when $\Sigma$ is attractive, what to expect when $\Sigma$ ceases to be attractive (at generic exit points), and finally we also contrast and compare the behavior of some regularizations proposed in the literature, whereby the piecewise smooth system is replaced by a smooth differential system. Through analysis and experiments in $\mathbb{R}^3$ and $\mathbb{R}^4$, we will confirm some known facts and provide some important insight: (i) when $\Sigma$ is attractive, a solution trajectory remains near $\Sigma$, viz. sliding on $\Sigma$ is an appropriate idealization (though one cannot a priori decide which sliding vector field should be selected); (ii) when $\Sigma$ loses attractivity (at first order exit conditions), a typical solution trajectory leaves a neighborhood of $\Sigma$; (iii) there is no obvious way to regularize the system so that the regularized trajectory will remain near $\Sigma$ while $\Sigma$ is attractive, and so that it will be leaving (a neighborhood of) $\Sigma$ when $\Sigma$ loses attractivity. We reach the above conclusions by considering exclusively the given piecewise smooth system, without superimposing any assumption on what kind of dynamics near $\Sigma$ should have been taking place.

Keywords: Filippov convexification, co-dimension 2 discontinuity manifold, piecewise smooth systems, regularization, Runge-Kutta methods.

Mathematics Subject Classification: 34A36, 65P9.

Citation: Luca Dieci, Cinzia Elia. Piecewise smooth systems near a co-dimension 2 discontinuity manifold: Can one say what should happen?. Discrete & Continuous Dynamical Systems - S, 2016, 9 (4) : 1039-1068. doi: 10.3934/dcdss.2016041
Utkin, Sliding Modes and Their Application in Variable Structure Systems., MIR Publisher, (1978). Google Scholar V. I. Utkin, Sliding Mode in Control and Optimization,, Springer, (1992). doi: 10.1007/978-3-642-84379-2. Google Scholar Dingheng Pi. Limit cycles for regularized piecewise smooth systems with a switching manifold of codimension two. Discrete & Continuous Dynamical Systems - B, 2019, 24 (2) : 881-905. doi: 10.3934/dcdsb.2018211 Todd Young. Partially hyperbolic sets from a co-dimension one bifurcation. Discrete & Continuous Dynamical Systems - A, 1995, 1 (2) : 253-275. doi: 10.3934/dcds.1995.1.253 Sihong Shao, Huazhong Tang. Higher-order accurate Runge-Kutta discontinuous Galerkin methods for a nonlinear Dirac model. Discrete & Continuous Dynamical Systems - B, 2006, 6 (3) : 623-640. doi: 10.3934/dcdsb.2006.6.623 Alan Mackey, Theodore Kolokolnikov, Andrea L. Bertozzi. Two-species particle aggregation and stability of co-dimension one solutions. Discrete & Continuous Dynamical Systems - B, 2014, 19 (5) : 1411-1436. doi: 10.3934/dcdsb.2014.19.1411 Carles Bonet-Revés, Tere M-Seara. Regularization of sliding global bifurcations derived from the local fold singularity of Filippov systems. Discrete & Continuous Dynamical Systems - A, 2016, 36 (7) : 3545-3601. doi: 10.3934/dcds.2016.36.3545 Antonia Katzouraki, Tania Stathaki. Intelligent traffic control on internet-like topologies - integration of graph principles to the classic Runge--Kutta method. Conference Publications, 2009, 2009 (Special) : 404-415. doi: 10.3934/proc.2009.2009.404 Da Xu. Numerical solutions of viscoelastic bending wave equations with two term time kernels by Runge-Kutta convolution quadrature. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2389-2416. doi: 10.3934/dcdsb.2017122 Wenjuan Zhai, Bingzhen Chen. A fourth order implicit symmetric and symplectic exponentially fitted Runge-Kutta-Nyström method for solving oscillatory problems. Numerical Algebra, Control & Optimization, 2019, 9 (1) : 71-84. doi: 10.3934/naco.2019006 Yanli Han, Yan Gao. Determining the viability for hybrid control systems on a region with piecewise smooth boundary. Numerical Algebra, Control & Optimization, 2015, 5 (1) : 1-9. doi: 10.3934/naco.2015.5.1 Jihua Yang, Liqin Zhao. Limit cycle bifurcations for piecewise smooth integrable differential systems. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2417-2425. doi: 10.3934/dcdsb.2017123 D. J. W. Simpson, R. Kuske. Stochastically perturbed sliding motion in piecewise-smooth systems. Discrete & Continuous Dynamical Systems - B, 2014, 19 (9) : 2889-2913. doi: 10.3934/dcdsb.2014.19.2889 Kazuyuki Yagasaki. Application of the subharmonic Melnikov method to piecewise-smooth systems. Discrete & Continuous Dynamical Systems - A, 2013, 33 (5) : 2189-2209. doi: 10.3934/dcds.2013.33.2189 N. Chernov. Statistical properties of piecewise smooth hyperbolic systems in high dimensions. Discrete & Continuous Dynamical Systems - A, 1999, 5 (2) : 425-448. doi: 10.3934/dcds.1999.5.425 Hebai Chen, Jaume Llibre, Yilei Tang. Centers of discontinuous piecewise smooth quasi–homogeneous polynomial differential systems. Discrete & Continuous Dynamical Systems - B, 2019, 24 (12) : 6495-6509. doi: 10.3934/dcdsb.2019150 Shanshan Liu, Maoan Han. Bifurcation of limit cycles in a family of piecewise smooth systems via averaging theory. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 0-0. doi: 10.3934/dcdss.2020133 Sergey V. Bolotin, Piero Negrini. 
Global regularization for the $n$-center problem on a manifold. Discrete & Continuous Dynamical Systems - A, 2002, 8 (4) : 873-892. doi: 10.3934/dcds.2002.8.873 Lijun Wei, Xiang Zhang. Limit cycle bifurcations near generalized homoclinic loop in piecewise smooth differential systems. Discrete & Continuous Dynamical Systems - A, 2016, 36 (5) : 2803-2825. doi: 10.3934/dcds.2016.36.2803 Yilei Tang. Global dynamics and bifurcation of planar piecewise smooth quadratic quasi-homogeneous differential systems. Discrete & Continuous Dynamical Systems - A, 2018, 38 (4) : 2029-2046. doi: 10.3934/dcds.2018082 Yurong Li, Zhengdong Du. Applying battelli-fečkan's method to transversal heteroclinic bifurcation in piecewise smooth systems. Discrete & Continuous Dynamical Systems - B, 2019, 24 (11) : 6025-6052. doi: 10.3934/dcdsb.2019119 Viviane Baladi, Daniel Smania. Smooth deformations of piecewise expanding unimodal maps. Discrete & Continuous Dynamical Systems - A, 2009, 23 (3) : 685-703. doi: 10.3934/dcds.2009.23.685 Luca Dieci Cinzia Elia
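As a rough illustration of the regularization idea discussed in the abstract (this sketch is mine, not from the paper, and uses a codimension-1 switching surface rather than the paper's codimension-2 setting), one can replace the sign of the switching function by a steep tanh, in the spirit of the smooth regularizations the abstract compares, and integrate the smoothed system:

```python
# Minimal sketch (not from the paper): smooth regularization of a
# codimension-1 piecewise smooth system x' = f_plus(x) if h(x) > 0,
# x' = f_minus(x) if h(x) < 0. sign(h) is replaced by tanh(h/eps).
import numpy as np
from scipy.integrate import solve_ivp

def f_plus(x):   # vector field used where h(x) > 0 (illustrative choice)
    return np.array([1.0, -2.0])

def f_minus(x):  # vector field used where h(x) < 0 (illustrative choice)
    return np.array([1.0, 1.5])

def h(x):        # switching function; Sigma = {x : h(x) = 0}
    return x[1]

def rhs(t, x, eps=1e-2):
    s = np.tanh(h(x) / eps)   # smooth surrogate for sign(h(x))
    return 0.5 * ((1 + s) * f_plus(x) + (1 - s) * f_minus(x))

sol = solve_ivp(rhs, (0.0, 2.0), np.array([0.0, 0.4]), max_step=1e-2)
print(sol.y[:, -1])  # trajectory reaches Sigma and then slides along it
```

With these illustrative fields, $\Sigma = \{x_2 = 0\}$ is attractive from both sides, so the regularized trajectory is drawn to $\Sigma$ and then slides, ending near $(2, 0)$; it is exactly at loss of attractivity that, as the abstract notes, no regularization is guaranteed to reproduce the expected exit behavior.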
Recent questions tagged expansion

Find the value of $k$ if the constant term in the expansion of $\left(kx-\frac{1}{x^{2}}\right)^{6}$ is 240. (asked in Mathematics, Jan 13)
Find a formula in terms of $n$ for $\sum_{r=1}^{n}\left(3r^{2}+r-2\right)$. Simplify your answer fully.
Find the sum of the first three terms of $\sum_{r=1}^{n}\left(3r^{2}+r-2\right)$.
Using the geometric series $\frac{1}{1-x}=1+x+x^{2}+x^{3}+\ldots$, it follows that $\frac{1}{1+x^{3}}$ has the power series expansion ... (multiple-choice options truncated in the source)
The rational function $\frac{x^{5}+1}{\left(x^{2}-1\right)^{2}}$ has a partial fraction expansion of the form ... (options truncated in the source)
What is the series expansion of the natural logarithm?
What is a coefficient? (asked in Data Science & Statistics, Jan 3)
The binomial coefficient is defined as ... (asked in Data Science & Statistics, Dec 16, 2021)
(a) Find the first three terms, in ascending powers of $x$, of the binomial expansion of $(1-2x)^{5}$. (b) Find the first three terms ... (asked in Mathematics, Nov 9, 2021)
(a) Find the first four terms, in ascending powers of $x$, of the binomial expansion of $(2+kx)^{6}$, given that the coefficient of the $x^{3}$ term in the expansion is $-20$.
(a) Expand $(1+4x)^{8}$ in ascending powers of $x$, up to and including $x^{3}$, simplifying each coefficient in the expansion. (b) Showing your working, ...
(a) Fully expand $(p+q)^{5}$. The probability of Dave being late for school on any day is $0.1$. Using the last two terms of the binomial expansion, or otherwise, find the probability that Dave is late no more than one time in a school week.
(a) Find the first 3 terms, in ascending powers of $x$, of the binomial expansion of $\left(2-\frac{x}{8}\right)^{7}$. Given that the first two terms, in ascending powers of $x$, in the series expansion of $f(x)=(ax+b)\left(2-\frac{x}{8}\right)^{7}$ are 384 and $-104x$, ...
(a) Find the first 3 terms, in ascending powers of $x$, of the binomial expansion of $\left(2+\frac{x}{2}\right)^{6}$. (b) Use your expansion to find ...
The coefficient of linear expansion of steel is $1.2\times 10^{-5}\,/\,^{\circ}\mathrm{C}$. What is the increase in length of a steel girder that is 20 meters long at $10\,^{\circ}\mathrm{C}$ when its temperature is raised to $20\,^{\circ}\mathrm{C}$? (by Digitized Wooden, asked in Physics, Oct 5, 2021)
Use the binomial theorem to expand: $(a+b)^{n}=\binom{n}{0}a^{n}+\binom{n}{1}a^{n-1}b+\cdots$ (asked in Mathematics, May 10, 2021)
Find a Taylor expansion for the solution $x(t)=a_{0}+a_{1}t+a_{2}t^{2}+\cdots$ of the differential equation $\frac{dx}{dt}=t\cdot x$ with the given boundary condition. Can you determine the general $a_{n}$? (asked in Mathematics, May 9, 2021)
Tumi and Jason are helping their dad tile the bathroom floor. Their dad tells them to leave small gaps between the tiles. Why do they need to leave these small gaps? (by Joshua Mwanza, asked in Physics, Jun 2, 2020)
During expansion, the spaces between the particles get ........, and during contraction, the spaces between the particles get ........ . (asked in Chemistry, May 16, 2020)
Expand $\left(x^{1/2}-2y^{1/3}\right)^{4}$ using the binomial theorem.
Expand $\left(x^{2}+\dfrac{y}{2}\right)^{3}$ using the binomial theorem.
Expand $(3x+y)^{5}$ using the binomial theorem.
Find the first three terms in the binomial expansion of $(8-3x)^{\frac{1}{3}}$. Simplify each term.
Find the first four terms in the binomial expansion of $(1-y^{-1})^{17}$. Simplify each term.
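For the constant-term question at the top of this list, the answer is easy to check symbolically. The general term of $\left(kx-\frac{1}{x^2}\right)^6$ is $\binom{6}{r}(kx)^{6-r}(-x^{-2})^{r}$, which is constant when $6-3r=0$, i.e. $r=2$, giving $15k^{4}=240$ and $k=\pm 2$. A minimal sketch of mine, using sympy (variable names are my own):

```python
# Sketch: verify the constant term of (k*x - 1/x**2)**6 and solve for k.
# The r = 2 term gives 15*k**4, so 15*k**4 = 240 means k = +/- 2.
import sympy as sp

x = sp.symbols('x')
k = sp.symbols('k', positive=True)   # take the positive root for simplicity
expr = sp.expand((k*x - 1/x**2)**6)
const_term = expr.coeff(x, 0)        # coefficient of x**0
print(const_term)                    # 15*k**4
print(sp.solve(sp.Eq(const_term, 240), k))   # [2]
```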
Canadian Journal of Mathematics / Journal canadien de mathématiques, FirstView articles. Accepted manuscripts / Manuscrits acceptés. Award Winners / Lauréats des prix.

Cohomological invariants of root stacks and admissible double coverings. Cycles and subschemes; (co)homology theory. Andrea Di Lorenzo, Roberto Pirisi. We give a formula for the cohomological invariants of a root stack, which we apply to compute the cohomological invariants and the Brauer group of the compactification of the stacks of hyperelliptic curves given by admissible double coverings.

A $\mathcal {C}^k$-Seeley extension theorem for Bastiani's differential calculus. Calculus on manifolds; nonlinear operators. Maps and general types of spaces defined by maps. Measures, integration, derivative, holomorphy. We generalize a classical extension result by Seeley in the context of Bastiani's differential calculus to infinite dimensions. The construction follows Seeley's original approach, but is significantly more involved, as not only $C^k$-maps on (subsets of) half spaces are extended, but also continuous extensions of their differentials to some given piece of boundary of the domains under consideration. A further feature of the generalization is that we construct families of extension operators (instead of only one single extension operator) that fulfill certain compatibility (and continuity) conditions. Various applications are discussed as well.

Algorithms yield upper bounds in differential algebra. Computability and recursion theory; differential and difference algebra. Wei Li, Alexey Ovchinnikov, Gleb Pogudin, Thomas Scanlon. Consider an algorithm computing in a differential field with several commuting derivations such that the only operations it performs with the elements of the field are arithmetic operations, differentiation, and zero testing. We show that, if the algorithm is guaranteed to terminate on every input, then there is a computable upper bound for the size of the output of the algorithm in terms of the size of the input. We also generalize this to algorithms working with models of good enough theories (including, for example, difference fields). We then apply this to differential algebraic geometry to show that there exists a computable uniform upper bound for the number of components of any variety defined by a system of polynomial PDEs.
We then use this bound to show the existence of a computable uniform upper bound for the elimination problem in systems of polynomial PDEs with delays. Unitary representations of type B rational Cherednik algebras and crystal combinatorics Representation theory of rings and algebras Rings and algebras arising under various constructions Representation theory of groups Algebraic combinatorics Emily Norton We compare crystal combinatorics of the level $2$ Fock space with the classification of unitary irreducible representations of type B rational Cherednik algebras to study how unitarity behaves under parabolic restriction. We show that the crystal operators that remove boxes preserve the combinatorial conditions for unitarity, and that the parabolic restriction functors categorifying the crystals send irreducible unitary representations to unitary representations. Furthermore, we find the supports of the unitary representations. PDE comparison principles for Robin problems Elliptic equations and systems Jeffrey J. Langford We compare the solutions of two Poisson problems in a spherical shell with Robin boundary conditions, one with given data, and one where the data have been cap symmetrized. When the Robin parameters are nonnegative, we show that the solution to the symmetrized problem has larger convex means. Sending one of the Robin parameters to $+\infty $ , we obtain mixed Robin/Dirichlet comparison results in shells. We prove similar results on balls and prove a comparison principle on generalized cylinders with mixed Robin/Neumann boundary conditions. An arithmetic property of intertwining operators for p-adic groups Discontinuous groups and automorphic forms Lie groups A. Raghuram The main aim of this article is to show that normalised standard intertwining operator between induced representations of p-adic groups, at a very specific point of evaluation, has an arithmetic origin. This result has applications to Eisenstein cohomology and the special values of automorphic L-functions. Approximation in the Zygmund and Hölder classes on $\mathbb {R}^n$ Functions of several variables Eero Saksman, Odí Soler i Gibert We determine the distance (up to a multiplicative constant) in the Zygmund class $\Lambda _{\ast }(\mathbb {R}^n)$ to the subspace $\mathrm {J}_{}(\mathbf {bmo})(\mathbb {R}^n).$ The latter space is the image under the Bessel potential $J := (1-\Delta )^{{-1}/2}$ of the space $\mathbf {bmo}(\mathbb {R}^n)$ , which is a nonhomogeneous version of the classical $\mathrm {BMO}$ . Locally, $\mathrm {J}_{}(\mathbf {bmo})(\mathbb {R}^n)$ consists of functions that together with their first derivatives are in $\mathbf {bmo}(\mathbb {R}^n)$ . More generally, we consider the same question when the Zygmund class is replaced by the Hölder space $\Lambda _{s}(\mathbb {R}^n),$ with $0 < s \leq 1$ , and the corresponding subspace is $\mathrm {J}_{s}(\mathbf {bmo})(\mathbb {R}^n)$ , the image under $(1-\Delta )^{{-s}/2}$ of $\mathbf {bmo}(\mathbb {R}^n).$ One should note here that $\Lambda _{1}(\mathbb {R}^n) = \Lambda _{\ast }(\mathbb {R}^n).$ Such results were known earlier only for $n = s = 1$ with a proof that does not extend to the general case. Our results are expressed in terms of second differences. 
As a by-product of our wavelet-based proof, we also obtain the distance from $f \in \Lambda _{s}(\mathbb {R}^n)$ to $\mathrm {J}_{s}(\mathbf {bmo})(\mathbb {R}^n)$ in terms of the wavelet coefficients of $f.$ We additionally establish a third way to express this distance in terms of the size of the hyperbolic gradient of the harmonic extension of $f$ on the upper half-space $\mathbb {R}^{n+1}_+$.

Interpolation between noncommutative martingale Hardy and BMO spaces: the case $0<p<1$. Normed linear spaces and Banach spaces; Banach lattices. Topological linear spaces and related structures. Selfadjoint operator algebras. Narcisse Randrianantoanina. Published online by Cambridge University Press: 25 August 2021, pp. 1-45. Let $\mathcal {M}$ be a semifinite von Neumann algebra equipped with an increasing filtration $(\mathcal {M}_n)_{n\geq 1}$ of (semifinite) von Neumann subalgebras of $\mathcal {M}$. For $0<p <\infty $, let $\mathsf {h}_p^c(\mathcal {M})$ denote the noncommutative column conditioned martingale Hardy space and $\mathsf {bmo}^c(\mathcal {M})$ denote the column "little" martingale BMO space associated with the filtration $(\mathcal {M}_n)_{n\geq 1}$. We prove the following real interpolation identity: if $0<p <\infty $ and $0<\theta <1$, then for $1/r=(1-\theta )/p$,
$$ \big(\mathsf{h}_p^c(\mathcal{M}), \mathsf{bmo}^c(\mathcal{M})\big)_{\theta, r}=\mathsf{h}_{r}^c(\mathcal{M}), $$
with equivalent quasi norms. For the case of complex interpolation, we obtain that if $0<p<q<\infty $ and $0<\theta <1$, then for $1/r =(1-\theta )/p +\theta /q$,
$$ \big[\mathsf{h}_p^c(\mathcal{M}), \mathsf{h}_q^c(\mathcal{M})\big]_{\theta}=\mathsf{h}_{r}^c(\mathcal{M}). $$
These extend previously known results from $p\geq 1$ to the full range $0<p<\infty $. Other related spaces, such as spaces of adapted sequences and Junge's noncommutative conditioned $L_p$-spaces, are also shown to form an interpolation scale for the full range $0<p<\infty $ when either the real method or the complex method is used. Our method of proof is based on a new algebraic atomic decomposition for an Orlicz space version of Junge's noncommutative conditioned $L_p$-spaces. We apply these results to derive various inequalities for martingales in noncommutative symmetric quasi-Banach spaces.

Unique continuation properties for polyharmonic maps between Riemannian manifolds. Higher-dimensional theory. Variational problems in infinite-dimensional spaces. Global differential geometry. Volker Branding, Stefano Montaldo, Cezar Oniciuc, Andrea Ratto. Polyharmonic maps of order k (briefly, k-harmonic maps) are a natural generalization of harmonic and biharmonic maps. These maps are defined as the critical points of suitable higher-order functionals which extend the classical energy functional for maps between Riemannian manifolds. The main aim of this paper is to investigate the so-called unique continuation principle. More precisely, assuming that the domain is connected, we shall prove the following extensions of results known in the harmonic and biharmonic cases: (i) if a k-harmonic map is harmonic on an open subset, then it is harmonic everywhere; (ii) if two k-harmonic maps agree on an open subset, then they agree everywhere; and (iii) if, for a k-harmonic map to the n-dimensional sphere, an open subset of the domain is mapped into the equator, then the whole domain is mapped into the equator.
Wick polynomials in noncommutative probability: a group-theoretical approach General nonassociative rings Hopf algebras, quantum groups and related topics Kurusch Ebrahimi-Fard, Frédéric Patras, Nikolas Tapia, Lorenzo Zambotti Wick polynomials and Wick products are studied in the context of noncommutative probability theory. It is shown that free, Boolean, and conditionally free Wick polynomials can be defined and related through the action of the group of characters over a particular Hopf algebra. These results generalize our previous developments of a Hopf-algebraic approach to cumulants and Wick products in classical probability theory. Combinatorics of the geometry of Wilson loop diagrams II: Grassmann necklaces, dimensions, and denominators Quantum field theory; related classical field theories Susama Agarwala, Siân Fryer, Karen Yeats Wilson loop diagrams are an important tool in studying scattering amplitudes of SYM $N=4$ theory and are known by previous work to be associated to positroids. In this paper, we study the structure of the associated positroids, as well as the structure of the denominator of the integrand defined by each diagram. We give an algorithm to derive the Grassmann necklace of the associated positroid directly from the Wilson loop diagram, and a recursive proof that the dimension of these cells is thrice the number of propagators in the diagram. We also show that the ideal generated by the denominator in the integrand is the radical of the ideal generated by the product of Grassmann necklace minors. Nontame Morse–Smale flows and odd Chern–Weil theory General theory of differentiable manifolds Daniel Cibotaru, Wanderley Pereira Using a certain well-posed ODE problem introduced by Shilnikov in the sixties, Minervini proved the currential "fundamental Morse equation" of Harvey–Lawson but without the restrictive tameness condition for Morse gradient flows. Here, we construct local resolutions for the flow of a section of a fiber bundle endowed with a vertical vector field which is of Morse gradient type in every fiber in order to remove the tameness hypothesis from the currential homotopy formula proved by the first author. We apply this to produce currential deformations of odd degree closed forms naturally associated to any hermitian vector bundle endowed with a unitary endomorphism and metric compatible connection. A transgression formula involving smooth forms on a classifying space for odd K-theory is also given. Automorphisms and opposition in spherical buildings of exceptional type, I Nonlinear incidence geometry Structure and classification of infinite or finite groups Finite geometry and special incidence structures James Parkinson, Hendrik Van Maldeghem To each automorphism of a spherical building, there is a naturally associated opposition diagram, which encodes the types of the simplices of the building that are mapped onto opposite simplices. If no chamber (that is, no maximal simplex) of the building is mapped onto an opposite chamber, then the automorphism is called domestic. In this paper, we give the complete classification of domestic automorphisms of split spherical buildings of types $\mathsf {E}_6$ , $\mathsf {F}_4$ , and $\mathsf {G}_2$ . 
Moreover, for all split spherical buildings of exceptional type, we classify (i) the domestic homologies, (ii) the opposition diagrams arising from elements of the standard unipotent subgroup of the Chevalley group, and (iii) the automorphisms with opposition diagrams with at most two distinguished orbits encircled. Our results provide unexpected characterizations of long root elations and products of perpendicular long root elations in long root geometries, and analogues of the density theorem for connected linear algebraic groups in the setting of Chevalley groups over arbitrary fields. The epsilon constant conjecture for higher dimensional unramified twists of ${\mathbb Z}_p^r$ (1) Algebraic number theory: local and $p$-adic fields Werner Bley, Alessandro Cobbe Published online by Cambridge University Press: 29 June 2021, pp. 1-45 Let $N/K$ be a finite Galois extension of p-adic number fields, and let $\rho ^{\mathrm {nr}} \colon G_K \longrightarrow \mathrm {Gl}_r({{\mathbb Z}_{p}})$ be an r-dimensional unramified representation of the absolute Galois group $G_K$ , which is the restriction of an unramified representation $\rho ^{\mathrm {nr}}_{{{\mathbb Q}}_{p}} \colon G_{{\mathbb Q}_{p}} \longrightarrow \mathrm {Gl}_r({{\mathbb Z}_{p}})$ . In this paper, we consider the $\mathrm {Gal}(N/K)$ -equivariant local $\varepsilon $ -conjecture for the p-adic representation $T = \mathbb Z_p^r(1)(\rho ^{\mathrm {nr}})$ . For example, if A is an abelian variety of dimension r defined over ${{\mathbb Q}_{p}}$ with good ordinary reduction, then the Tate module $T = T_p\hat A$ associated to the formal group $\hat A$ of A is a p-adic representation of this form. We prove the conjecture for all tame extensions $N/K$ and a certain family of weakly and wildly ramified extensions $N/K$ . This generalizes previous work of Izychev and Venjakob in the tame case and of the authors in the weakly and wildly ramified case. A group-theoretic generalization of the p-adic local monodromy theorem Linear algebraic groups and related topics Shuyang Ye Let G be a connected reductive group over a p-adic number field F. We propose and study the notions of G- $\varphi $ -modules and G- $(\varphi ,\nabla )$ -modules over the Robba ring, which are exact faithful F-linear tensor functors from the category of G-representations on finite-dimensional F-vector spaces to the categories of $\varphi $ -modules and $(\varphi ,\nabla )$ -modules over the Robba ring, respectively, commuting with the respective fiber functors. We study Kedlaya's slope filtration theorem in this context, and show that G- $(\varphi ,\nabla )$ -modules over the Robba ring are "G-quasi-unipotent," which is a generalization of the p-adic local monodromy theorem proved independently by Y. André, K. S. Kedlaya, and Z. Mebkhout. Tree densities in sparse graph classes Tony Huynh, David R. Wood What is the maximum number of copies of a fixed forest T in an n-vertex graph in a graph class $\mathcal {G}$ as $n\to \infty $ ? We answer this question for a variety of sparse graph classes $\mathcal {G}$ . In particular, we show that the answer is $\Theta (n^{\alpha _{d}(T)})$ where $\alpha _{d}(T)$ is the size of the largest stable set in the subforest of T induced by the vertices of degree at most d, for some integer d that depends on $\mathcal {G}$ . 
For example, when $\mathcal {G}$ is the class of k-degenerate graphs then $d=k$ ; when $\mathcal {G}$ is the class of graphs containing no $K_{s,t}$ -minor ( $t\geqslant s$ ) then $d=s-1$ ; and when $\mathcal {G}$ is the class of k-planar graphs then $d=2$ . All these results are in fact consequences of a single lemma in terms of a finite set of excluded subgraphs. Proof of Laugwitz Conjecture and Landsberg Unicorn Conjecture for Minkowski norms with $SO(k)\times SO(n-k)$ -symmetry Local differential geometry General convexity Ming Xu, Vladimir S. Matveev For a smooth strongly convex Minkowski norm $F:\mathbb {R}^n \to \mathbb {R}_{\geq 0}$ , we study isometries of the Hessian metric corresponding to the function $E=\tfrac 12F^2$ . Under the additional assumption that F is invariant with respect to the standard action of $SO(k)\times SO(n-k)$ , we prove a conjecture of Laugwitz stated in 1965. Furthermore, we describe all isometries between such Hessian metrics, and prove Landsberg Unicorn Conjecture for Finsler manifolds of dimension $n\ge 3$ such that at every point the corresponding Minkowski norm has a linear $SO(k)\times SO(n-k)$ -symmetry. 1324- and 2143-avoiding Kazhdan–Lusztig immanants and k-positivity Basic linear algebra Sunita Chepuri, Melissa Sherman-Bennett Published online by Cambridge University Press: 14 May 2021, pp. 1-31 Immanants are functions on square matrices generalizing the determinant and permanent. Kazhdan–Lusztig immanants, which are indexed by permutations, involve $q=1$ specializations of Type A Kazhdan–Lusztig polynomials, and were defined by Rhoades and Skandera (2006, Journal of Algebra 304, 793–811). Using results of Haiman (1993, Journal of the American Mathematical Society 6, 569–595) and Stembridge (1991, Bulletin of the London Mathematical Society 23, 422–428), Rhoades and Skandera showed that Kazhdan–Lusztig immanants are nonnegative on matrices whose minors are nonnegative. We investigate which Kazhdan–Lusztig immanants are positive on k-positive matrices (matrices whose minors of size $k \times k$ and smaller are positive). The Kazhdan–Lusztig immanant indexed by v is positive on k-positive matrices if v avoids 1324 and 2143 and for all noninversions $i< j$ of v, either $j-i \leq k$ or $v_j-v_i \leq k$ . Our main tool is Lewis Carroll's identity. Involution pipe dreams Special varieties Zachary Hamaker, Eric Marberg, Brendan Pawlowski Involution Schubert polynomials represent cohomology classes of K-orbit closures in the complete flag variety, where K is the orthogonal or symplectic group. We show they also represent $\mathsf {T}$-equivariant cohomology classes of subvarieties defined by upper-left rank conditions in the spaces of symmetric or skew-symmetric matrices. This geometry implies that these polynomials are positive combinations of monomials in the variables $x_i + x_j$, and we give explicit formulas of this kind as sums over new objects called involution pipe dreams. Our formulas are analogues of the Billey–Jockusch–Stanley formula for Schubert polynomials. In Knutson and Miller's approach to matrix Schubert varieties, pipe dream formulas reflect Gröbner degenerations of the ideals of those varieties, and we conjecturally identify analogous degenerations in our setting. 
Oscillating multipliers on symmetric and locally symmetric spaces Harmonic analysis in several variables Effie Papageorgiou We prove $L^{p}$-boundedness of oscillating multipliers on symmetric spaces of noncompact type of arbitrary rank, as well as on a wide class of locally symmetric spaces.
Hexagonal Lattice 3D

A lattice is a periodic array of "dots" (lattice points) with infinite repetition; it is a mathematical abstraction used to describe the translational symmetry of a periodic structure, and it can be described in terms of a unit cell and lattice vectors. The three unit vectors a, b, c define a cell, known as the unit cell (shown as a shaded region in the source's figure). An ideal crystal is infinitely large (hence has no boundary surfaces), with an identical group of atoms (the basis, or motif) located at every lattice point, no more and no less; in reality we always deal with finite sizes, and a crystal is made up of a periodic arrangement of one or more atoms repeated at each lattice point. A lattice in the sense of a three-dimensional array of regularly spaced points coinciding with, e.g., the atom positions of a crystal is formalized by the Bravais lattice concept: when the discrete points are atoms, ions, or polymer strings of solid matter, it is used to define a crystalline arrangement and its (finite) frontiers, and a Bravais lattice fills space continuously and without gaps when its unit cell is repeated periodically along each lattice vector. Due to symmetry constraints there is only a finite number of Bravais lattices: five in two dimensions and 14 in three dimensions. All 3D crystals belong to one of the 14 Bravais lattices, arranged according to seven crystal systems (cubic, tetragonal, orthorhombic, monoclinic, triclinic, rhombohedral, and hexagonal). At surfaces the 3D symmetry is broken, and the 14 Bravais lattices of three dimensions are replaced by the 5 Bravais lattices of two dimensions: oblique, rectangular, centered rectangular, square, and hexagonal; the square, hexagonal, and triangular patterns are the two-dimensional lattice types of higher symmetry, being invariant under rotations of $2\pi/3$, $2\pi/6$, or $2\pi/4$, which all other ones cannot be. The centering types identify the locations of the lattice points in the unit cell as follows: primitive (P) means lattice points on the cell corners only (sometimes called simple), and a unit cell containing lattice points only at its corners is called a primitive (or simple) unit cell. The Wigner-Seitz (WS) cell about a lattice point is the region of space that is closer to that lattice point than to any other point.

For the hexagonal system the cell parameters satisfy $a = b \neq c$. In the ibrav=4 hexagonal convention (as used by plane-wave codes such as Quantum ESPRESSO), the primitive vectors of the direct lattice are
$$\mathbf{a}_1 = a\,(1,\,0,\,0),\qquad \mathbf{a}_2 = a\,\Big(-\tfrac{1}{2},\,\tfrac{\sqrt{3}}{2},\,0\Big),\qquad \mathbf{a}_3 = a\,\Big(0,\,0,\,\tfrac{c}{a}\Big),$$
while the reciprocal lattice vectors are
$$\mathbf{b}_1 = \tfrac{2\pi}{a}\Big(1,\,\tfrac{1}{\sqrt{3}},\,0\Big),\qquad \mathbf{b}_2 = \tfrac{2\pi}{a}\Big(0,\,\tfrac{2}{\sqrt{3}},\,0\Big),\qquad \mathbf{b}_3 = \tfrac{2\pi}{a}\Big(0,\,0,\,\tfrac{a}{c}\Big).$$
[Figure omitted: the Brillouin zone, drawn in the source with $c/a = 1.4$.]

A recurring practical question in this area: what is the best way in TikZ to draw a hexagonal structure in 3D, for example graphite crystals, honeycombs, or stable racks made of hexagonally aligned stiffeners? A nice 2D solution ("Drawing hexagons") already exists as a starting point.
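Assuming the ibrav=4 vectors just quoted (and the source's $c/a = 1.4$), a small numpy check of mine confirms the duality relation $\mathbf{b}_i\cdot\mathbf{a}_j = 2\pi\delta_{ij}$ and that the in-plane reciprocal vectors again form a hexagonal lattice:

```python
# Sketch: verify b_i . a_j = 2*pi*delta_ij for the hexagonal (ibrav=4) lattice.
import numpy as np

a, c = 1.0, 1.4                        # lattice constants (c/a = 1.4 as in the source)
A = np.array([[1.0, 0.0, 0.0],
              [-0.5, np.sqrt(3)/2, 0.0],
              [0.0, 0.0, c/a]]) * a    # rows: a1, a2, a3
B = 2*np.pi * np.linalg.inv(A).T       # rows: b1, b2, b3 (general construction)
print(np.round(B @ A.T / (2*np.pi), 12))  # identity matrix

# In-plane reciprocal vectors have equal length 4*pi/(sqrt(3)*a) and 60 degrees
# between them: another hexagonal lattice, rotated 30 degrees w.r.t. the direct one.
print(np.linalg.norm(B[0, :2]), np.linalg.norm(B[1, :2]))
```

Both in-plane reciprocal vectors come out with length $4\pi/(\sqrt{3}\,a)$ at 60 degrees to each other, which is precisely the 30-degree-rotated hexagonal lattice mentioned in the exercise further below.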
In two dimensions the closest packing of circles has hexagonal symmetry, with every disk touching six nearest neighbors. Q: What is the close-packed structure in 2D? A: The hexagonal lattice, in which three nearby points form an equilateral triangle. Gauss proved that the densest packing of circles in the plane is the hexagonal lattice of the bee's honeycomb (or of a bundle of pipes): its density $\pi/\sqrt{12}$ is the highest that can be attained. If the interactions between particles are mainly attractive, then close-packing usually leads to more energetically stable structures, which is why dense, ordered packings are so common.

A simple hexagonal Bravais lattice can be drawn in three dimensions or in two (cf. Fig. 11 of Ashcroft and Mermin); in the 2D picture, $a_1$ and $a_2$ are the lattice vectors and the dashed hexagon is the Wigner-Seitz primitive cell. Notice that one conventionally uses a rhombus (rather than a hexagon) to define the hexagonal unit cell because it is simpler. Hexagonal close packing, by contrast, is not itself a Bravais lattice: what the stacking formula expresses, with a parameter $m$ tracking the parity of the layer index $k$, is that HCP is the union of two interpenetrating simple hexagonal lattices, i.e., a simple hexagonal lattice with a two-atom basis. It is the presence of this basis that allows the smaller disclination angle $\pi/3$ not (isotropically) available to other, basis-free lattices.
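The densities just mentioned are one-liners to reproduce; a minimal sketch (2D hexagonal circle packing, and the 3D close-packed value quoted as APF = 0.74 further below):

```python
# Sketch: packing fractions for hexagonal circle packing and hcp/fcc spheres.
import math

eta_2d = math.pi / math.sqrt(12)   # hexagonal circle packing, ~0.9069 (Gauss)
eta_3d = math.pi / math.sqrt(18)   # hcp/fcc sphere packing, ~0.7405 (APF = 0.74)
print(f"2D hexagonal packing: {eta_2d:.4f}")
print(f"3D close packing:     {eta_3d:.4f}")
```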
To build the close-packed structures in 3D, we start with a hexagonal array of spheres (the blue "A" layer). We then place a second close-packed layer (the gold "B" layer) atop the first, so the spheres nestle into the left-pointing hollows of the first layer; in hexagonal close packing, the spheres of each row fit into the depressions between the adjacent spheres of the previous row. Repeating the stacking as A-B-A-B gives the hexagonal close packed (hcp) structure, whereas stacking three hexagonal layers shifted against each other as A-B-C-A-B-C gives the hexagonal view of the fcc lattice; consequently, the crystal looks the same from every equivalent lattice point. In hcp the coordination number is 12 and the atomic packing factor is APF = 0.74; in the hexagonal closest-packed structure $a = b = 2r$ and $c = 4\sqrt{2/3}\,r$ (so $c/a = \sqrt{8/3} \approx 1.633$), where $r$ is the atomic radius of the atom, and the basal plane of the unit cell coincides with the close-packed layers (Figure 6 of the source). The source's Figure 9 depicts the projection of a rhombohedral lattice on the basal plane: the axes x, y define the smallest hexagonal unit cell, the z axis being normal to the plane of the paper, and the hexagonal unit cell is primitive with all the lattice points at 000. Familiar hexagonal examples range from the hexagonal ice lattice to metals: many metals, including cobalt and zinc, have a hexagonal structure, and because slip in hcp metals is confined to the close-packed basal planes, hcp is usually discussed in terms of its slip systems. (Recall metallic bonding: the outer electrons from the original metal atoms are free to move around between the positive metal ions formed.)
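In line with the "two interpenetrating lattices" description above, one way to generate hcp sites, in a sketch of my own with the ideal ratio $c/a=\sqrt{8/3}$ assumed, is:

```python
# Sketch: hcp positions as two interpenetrating simple hexagonal lattices.
import numpy as np

a = 1.0
c = a * np.sqrt(8.0/3.0)          # ideal c/a ~ 1.633
a1 = np.array([a, 0.0, 0.0])
a2 = np.array([a/2, a*np.sqrt(3)/2, 0.0])
a3 = np.array([0.0, 0.0, c])
basis = [np.zeros(3),             # A-layer site
         (a1 + a2)/3 + a3/2]      # B-layer site, in the hollow of the A layer

pts = [i*a1 + j*a2 + k*a3 + b
       for i in range(3) for j in range(3) for k in range(2) for b in basis]
print(len(pts), "sites; c/a =", round(c/a, 4))
```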
Hexagonal lattices also appear in computing, graphics, and image processing. Another way to look at hexagonal grids is to see that there are three primary axes, unlike the two we have for square grids; any two of these axes are sufficient to uniquely identify each hexagon using a 2-tuple of integers. Equivalently, take a cube grid and slice out the diagonal plane at x + y + z = 0; the vertices adjacent to a vertex are called the neighbors of that vertex. In plotting, hexagonal binning tessellates the xy plane over the set (range(x), range(y)) by a regular grid of hexagons. With hexagonal pixels it is particularly important to realize that the surface or perimeter of the pixel (depending on whether one works in 2D or 3D) is different than in the case of a square pixel. Hexagonal sampling has a long history in imaging: Murphy and Gallagher (1982) showed that it can be used on Fourier and Fresnel digital holograms; spatiotemporal pixel jitter has been included in the scanning of time-varying imagery on a hexagonal grid (a method that requires tracking algorithms); and this line of work is of military and commercial interest because the technique can be used to enhance or better transmit infrared images. Simulation APIs mirror the same ideas: in HOOMD, create_lattice initializes the system with many copies of a unit cell (meshes are 2D arrays of lattice points, lattices are 3D arrays), and in crystal-analysis libraries an aligned_lattice is a rotated version of other_lattice that has the same lattice parameters but is aligned in the coordinate system of this lattice so that translational points match up in 3D.
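Following the cube-grid picture (integer triples on the plane x + y + z = 0), a small illustrative sketch of neighbor and distance computations, with names of my own choosing:

```python
# Sketch: cube coordinates for a hex grid live on the plane x + y + z = 0.
CUBE_DIRECTIONS = [(1, -1, 0), (1, 0, -1), (0, 1, -1),
                   (-1, 1, 0), (-1, 0, 1), (0, -1, 1)]

def neighbors(cell):
    """Six neighbors of a hex cell given in cube coordinates."""
    x, y, z = cell
    assert x + y + z == 0, "cube coordinates must satisfy x + y + z = 0"
    return [(x + dx, y + dy, z + dz) for dx, dy, dz in CUBE_DIRECTIONS]

def cube_distance(p, q):
    """Hex-grid distance is half the L1 distance in cube coordinates."""
    return sum(abs(u - v) for u, v in zip(p, q)) // 2

print(neighbors((0, 0, 0)))
print(cube_distance((0, 0, 0), (2, -1, -1)))   # 2
```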
Although it can be shown that the lattice Boltzmann equation is a finite difference form of the linearized continuous Boltzmann equation [17, 18], the RLBE can be presented as a self-contained mathematical object representing a dynamical system with a finite number of moments in discrete space and time. (In such treatments, atomic displacements away from the positions of a perfect lattice are not considered.)

The Reciprocal Lattice. Just as we define a real-space lattice in terms of real-space lattice vectors, we can define a reciprocal-space lattice in terms of reciprocal lattice vectors, writing the reciprocal lattice vector of the $(hkl)$ plane as $\mathbf{d}^{*}_{hkl} = h\,\mathbf{a}^{*} + k\,\mathbf{b}^{*} + l\,\mathbf{c}^{*}$. The real and reciprocal space lattice vectors form a biorthogonal set: $\mathbf{a}\cdot\mathbf{a}^{*} = 1$ while $\mathbf{a}\cdot\mathbf{b}^{*} = \mathbf{a}\cdot\mathbf{c}^{*} = 0$, and similarly for $\mathbf{b}^{*}$ and $\mathbf{c}^{*}$. Here $\mathbf{g}$ denotes a reciprocal lattice vector; in plane-wave expressions the phase involves the dot product $\mathbf{k}\cdot\mathbf{R}$, where $\mathbf{R} = u\mathbf{a} + v\mathbf{b} + w\mathbf{c}$ is a lattice vector between a pair of unit cells and $u$, $v$, $w$ are integers. In a 3D crystal lattice, $d$ is the spacing between crystallographic planes. The reciprocal lattice basis vectors span a vector space commonly referred to as reciprocal space or, in the context of quantum mechanics, k-space. Two standard exercises: show that the reciprocal lattice of the 3D hexagonal lattice is another hexagonal lattice rotated by 30 degrees with respect to the original (hint: identify a vector $\mathbf{q}$ perpendicular to a lattice plane and therefore perpendicular to all of the vectors in it); and, for free electron energies on a simple square lattice in two dimensions, show that the kinetic energy of a free electron at a corner of the first zone is higher than that of an electron at the midpoint of a side face of the zone by a factor of 2. This material also covers the construction of Brillouin zones in two dimensions; a related concept is the irreducible Brillouin zone (IBZ), the first Brillouin zone reduced by all of the symmetries in the point group of the lattice, and one reduces the wave-vector domain to the IBZ by observing the geometric symmetries (two-fold reflectional symmetry for rectangular and hexagonal lattices, and a rotational symmetry for others; the source sentence is truncated here). As background from the same notes: low resistivity means a conductor, high resistivity an insulator, and intermediate resistivity a semiconductor, whose conductivity lies between that of conductors and insulators; materials for IC devices are generally crystalline in structure.
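For the hexagonal system specifically, the interplanar spacing takes the standard form $1/d^{2} = \tfrac{4}{3}\,(h^{2}+hk+k^{2})/a^{2} + l^{2}/c^{2}$; a quick sketch (the graphite-like numbers are illustrative, not from the source):

```python
# Sketch: interplanar spacing d_hkl for a hexagonal lattice.
import math

def d_hexagonal(h, k, l, a, c):
    inv_d2 = (4.0/3.0) * (h*h + h*k + k*k) / a**2 + (l*l) / c**2
    return 1.0 / math.sqrt(inv_d2)

# Illustrative graphite-like constants: a = 2.46 Angstrom, c = 6.70 Angstrom.
print(round(d_hexagonal(0, 0, 2, 2.46, 6.70), 3))  # (0002) plane: c/2 = 3.35
print(round(d_hexagonal(1, 0, 0, 2.46, 6.70), 3))  # (100): a*sqrt(3)/2 = 2.13
```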
The 3D lattice structures formed in solution in the optical microscopy chamber were controllably dried in the chamber under ambient conditions, to maximally maintain the structure for SEM imaging. (In a related microscopy context, compared with a diffraction-limited, deconvolved 3D image acquired with a dithered lattice light sheet before PALM imaging, the resulting 3D PALM image reveals far more structural detail in the nuclear envelope, such as a twisted braid of lamin A on the inner surface.)

Additive manufacturing and lattice design. The 3DP method is a layer-by-layer approach for the fabrication of 3D objects; hence it is very easy to fabricate complex structures with internal features that cannot be manufactured by other fabrication processes, and 3D printing allows the creation of shapes and volumes never possible before, so materials might behave differently when presented in new shapes. Recent advancements in AM have enabled rapid production speeds, high spatial resolution, and strong materials [1]. Lattice structures are used to save weight without sacrificing the stability of parts, and such structures can also realize functions like shielding or guiding; as a case study, complex 3D lattice structures made by Micro Laser Sintering enable maximum functionality with minimum material use. Most commercial design tools have only recently begun to incorporate design capabilities for these structures, especially with analysis and optimization capability. In Creo, for instance, to create a 3D lattice one goes to the Model tab, clicks Engineering, and selects Lattice; the second cell type available in Creo 4.0 is the 3D lattice, whose structure is based on beams (a typical user difficulty is not the creation of the repeating shapes but the design of the single cell). One published comparison considered three lattice designs, a 3D Kagome structure, a 3D pyramidal structure, and a hexagonal diamond structure, for compression testing; the proposed designs were fabricated on an Objet 350 3D printer in a polypropylene-like photopolymer called Objet DurusWhite RGD430. Other work addresses cellular structure construction for variable-density hexagonal lattice structures, and a simple MATLAB Lattice Generator can automatically generate various lattice geometries directly to STL format. For a hexagonal cell, the geometry is defined by truss lengths l and h for the inclined and vertical walls respectively, wall thickness t, and inclined wall angle θ.

In slicer infill, at a high infill percentage Hexagonal is essentially the same as Linear, so the practical comparison is between Linear and Diagonal; tests show Diagonal is roughly 10% stronger than Linear. Intuitively, if you imagine a hexagonal or triangular lattice "holding up" an object, the alternating angles of the hexagonal and triangular lattices do more to distribute weight to the lower levels than the orthogonally confined square lattice would. A frequently requested slicer feature: instead of simply creating inner supports for hollow objects, which would only be useful during printing, allow a 3D hexagonal lattice mesh to be generated inside the hollow parts of printed objects. Mechanically, elasticity (unlike stress and strain) is an intrinsic property of a material, and a planar chiral lattice with Poisson's ratio -1 [2] has been developed. On the file-format side, the 3D Manufacturing Format (3MF) describes one or more 3D objects intended for output to a physical form, carried in the OPC part that contains a 3D model. Beyond engineering analysis, lattice designs offer great versatility for civil and industrial use, for steel security grilles, fencing, railing infill panels, gates, and window guards, wherever control of the opening size is needed.
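For the hexagonal cell parameterization just given (lengths $l$ and $h$, thickness $t$, angle $\theta$), a commonly used estimate of relative density is the Gibson-Ashby expression $\rho^{*}/\rho_{s} = (t/l)(h/l+2)\,/\,\big[2\cos\theta\,(h/l+\sin\theta)\big]$. This is a textbook honeycomb formula, not one stated in the source; the sketch below simply evaluates it:

```python
# Sketch: Gibson-Ashby relative density of a hexagonal honeycomb with
# inclined wall length l, vertical wall length h, thickness t, angle theta.
import math

def relative_density(t, l, h, theta_deg):
    th = math.radians(theta_deg)
    return (t/l) * (h/l + 2.0) / (2.0 * math.cos(th) * (h/l + math.sin(th)))

# Regular hexagon (h = l, theta = 30 deg): rho*/rho_s ~ 1.155 * t/l.
print(round(relative_density(t=0.1, l=1.0, h=1.0, theta_deg=30.0), 4))  # ~0.1155
```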
Each crystal system consists of a set of three axes in a particular geometrical arrangement: for cubic crystals the lattice parameter is identical along all three crystal axes, whereas for the hexagonal system a = b ≠ c, with a 2-fold symmetry axis corresponding to a rotation of 360/2 degrees as one of the possible symmetry elements; a space lattice is a network composed of an infinite three-dimensional array of points, and the unit cell is the repeating unit in a space lattice. Online tools exist to translate between Miller and Miller-Bravais indices, to calculate the angle between given directions, and to find the plane to which a lattice vector is normal, for both cubic and hexagonal crystal structures.

Hexagon geometry. A regular hexagon is a six-sided polygon where all of the sides are the same length $s$; a regular skew hexagon is vertex-transitive with equal edge lengths. A hexagonal prism is a prism composed of two hexagonal bases and six rectangular sides, and the formula for the volume of a hexagonal column is $V = \tfrac{3}{2}\sqrt{3}\,s^{2}h$; using such formulas (or the calculator provided in the source) its surface area and volume are computed quickly and easily. You'll find six-sided hexagons in honeycombs, in hardware, and even in natural basalt columns along the coast of Ireland; the cells of a beehive honeycomb are hexagonal for this reason and because the shape makes efficient use of space and building materials. In classical design a hexagon can be drawn out using Ad Triangulum (an equilateral triangle rotated within a circle), and craft applications abound: woodturned pattern rings whose 32 segments each carry a hexagonal "3D box" effect feature made from Sycamore, African Blackwood, and Goncalo Alves surrounded by Muhuhu; lattice Silicone Onlays rolled onto fondant or clay so that the mold cuts out and holds the lattice pattern within its structure; laser-cut stencil lattice ornaments and seamless hexagonal patterns for printing, engraving, and paper cutting; an extension tutorial on using SketchUp with Shape Bender to model a hexagonal lattice; and the recurring forum question of how to make a hexagonal grille.
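Wrapping the prism formula in a helper (surface area adds the two hexagonal caps to the six rectangular sides; a sketch of mine):

```python
# Sketch: volume and surface area of a regular hexagonal prism
# with side length s and height h.
import math

def hex_prism_volume(s, h):
    return 1.5 * math.sqrt(3) * s**2 * h       # V = (3/2)*sqrt(3)*s^2*h

def hex_prism_surface(s, h):
    caps = 2 * (1.5 * math.sqrt(3) * s**2)     # two hexagonal bases
    sides = 6 * s * h                          # six rectangles
    return caps + sides

print(round(hex_prism_volume(2.0, 5.0), 2))    # 51.96
print(round(hex_prism_surface(2.0, 5.0), 2))   # 80.78
```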
We show that beside a typical kagome band, the hexagonal W lattice also displays magnetism and large SOC, supporting room-temperature quantum anomalous Hall (QAH) state with an energy gap as large as 0. ) I will use letters like i to denote a site in the lattice. Hexagonal Close packed (hcp): In this type of packing, the spheres of molecules of a particular row in a particular dimension are in a position that they fit into depressions between adjacent spheres of the previous row. Hexagonal close packed (hcp) refers to layers of spheres packed so that spheres in alternating layers overlie one another. Hence, it is very easy to fabricate complex structures with complex internal features that cannot be manufactured by any other fabrication processes. A planar chiral lattice with Poisson's ratio -1 [2] was developed. Lattice Design offers great versatility for both civil and industrial use. Iin both of these lattices, the corners of the unit cells are centered on a lattice point. wikiHow is a "wiki," similar to Wikipedia, which means that many of our articles are co-written by multiple authors. Lattice ¾A periodic array of "dots" (or lattice points) with infinite repetition. A lattice in the sense of a 3-dimensional array of regularly spaced points coinciding with e. ¾A lattice can be described in terms of unit cell and lattice. This video deals with bravais lattice types and relation in radius and edge length for jee neet aiims students Sign up now to enroll in courses, follow best educators, interact with the community and track your progress. The reciprocal lattice basis vectors span a vector space that is commonly referred to as reciprocal space, or often in the context of quantum mechanics, k space. This Demonstration shows the characteristics of 3D Bravais lattices arranged according to seven crystal systems: cubic, tetragonal, orthorhombic, monoclinic, triclinic, rhombohedral and hexagonal. (a) how should I deal with the three axes in Python, considering that what I want is not equivalent to a 3D matrix, due to the constraints on the. It follows the fact that all hexagonal ferrites exhibit constant lattice parameter 'a' and variable parameter 'c' [8]. Isometric System, 2. There are few things to be aware of when using hexagonal lattice. The remaining 8 rings each have 16 segments for a total parts count of 481. We study the linear properties (band structure) of these lat- tices and consider the two most common types of optical. 3, G and H) reveals far more structural detail in the nuclear envelope, such as a twisted braid of lamin A on the inner surface (Fig. An equivalent quotient can be expressed for 3D as well: Q= 36πV 2 S3, where V is the volume and S is the surface of the polyhedron. A hexagonal prism is a prism composed of two hexagonal bases and six rectangular sides. The hexagonal crystal family consists of the 12 point groups such that at least one of their space groups has the hexagonal lattice as underlying lattice, and is the union of the hexagonal crystal system and the trigonal crystal system. It is the presence of this basis that allows the smaller disclination angle π/3 not (isotropically) available to other basis-free lattices. Hexagonal close-packed The closest packing of spheres in two dimensions has hexagonal symmetry where every sphere has six nearest neighbors. See the diagram. Physics of multiferroic hexagonal manganites RMnO3 Je-Geun Park Sungkyunkwan University KIAS 29 October 2005 What is multiferroic behavior? 
Renaissance of – A free PowerPoint PPT presentation (displayed as a Flash slide show) on PowerShow. In this expression, R is a lattice vector between a pair of unit cells: R =ua +vb+wc; u,v, and w are integers and the dot product k R. 5: common rafter pitch and 22:12 dormers into a rotated purlin header. There are 8 and 12 neighbors in the body centered and face centered cubic lattices correspondingly. The solution is to take the superposition of the Fourier transforms of two offset hexagonal lattices, with an appropriate modulation along the z-axis. We show that Diagonal is ~10% stronger than Linear. I eventually found the answer to this, so I'm posting it here in case other people find it helpful in the future. The geometry of each cell is defined by truss lengths l and h for the inclined and vertical walls respectively, wall thickness t, and inclined wall angle θ. a1 and a2 are the lattice vectors. Fbx Panel Lattice Grille - 3D Model. Background - hexagonal lattice structure similar to a honeycomb. An ideal crystal is infinite large (hence no boundary surfaces), with identical group of atoms (basis) located at every lattice points in space – no more, no less. See the diagram. Let Gc denote the shortest reciprocal Posted 5 years ago. Motion design. In this expression, R is a lattice vector between a pair of unit cells: R =ua +vb+wc; u,v, and w are integers and the dot product k R. Gauss proved that the densest packing of circles in the plane is the hexagonal lattice of the bee's honeycomb (or a bunch of bundled pipes). The coordination number is 12, APF = 0. Figure 2: Two dimensional lattice types of higher symmetry. Black cell honeycomb. Click here to buy a book, photographic periodic table poster, card deck, or 3D print based on the images you see here! Common Properties Abundance in Earth's Crust. 2D hexagonal lattice above by 10 degrees. Find Vector Seamless Black And White Hexagonal Grid Pattern. The band diagrams should be the same. The axes, x, y define the smallest hexagonal unit cell, the z axis being normal to the plane of the paper; the hexagonal unit cell is primitive with all the lattice points at 000. three lattice designs, such as 3D Kagome structure, 3D pyramidal structure and the hexagonal diamond structure, for compression testing. There are 14 different Bravais lattices in 3D. Instead of simply creating inner supports for hollow objects, which would only be useful during printing, it would be great if you could choose to create a 3D hexagonal lattice mesh inside the hollow parts of printed obj…. Show that the reciprocal lattice of the 3D hexagonal lattice is another hexagonal lattice rotated by 30 degrees with respect to the original. com/thesketchupesse. Due to symmetry constraints, there is a finite number of Bravais lattices, five in two dimensions, and 14 in three dimensions. aligned_lattice is a rotated version of other_lattice that has the same lattice parameters, but which is aligned in the coordinate system of this lattice so that translational points match up in 3D. 3D symmetry broken at surfaces => 14 bravais lattices in 3-Diminsions are replaced by 5 bravais lattices in 2 Dimensions 3D bravais lattices 2D Bravais lattices a 2 oblique rectangular centered rectangular a 2 a 2 a 2 Square a 2 Hexagonal 4 Determination of Miller Indices (fcc). Double click the lattice and in Lattice properties dialog box set the Azimuth angle to 10 degrees. Orthorhombic. 
Formula of crystallography: Local (point)symmetry + translational symmetry → spatial symmetry OR 32 Point groups + 14 Bravais lattice →230 space group. wikiHow is a "wiki," similar to Wikipedia, which means that many of our articles are co-written by multiple authors. A lattice in the sense of a 3-dimensional array of regularly spaced points coinciding with e. The hcp structure is very common for elemental metals, including:. , it is two-dimensional (2D). hexagonal Bravais lattice, and each of these gray atoms have a yellow atom pair to it. By filling the lattice with molecules, some of he symmetry elements might be destroyed and the symmetry is reduced. These periodic structures are intended for use with the metallic additive manufacturing technologies of Selective Laser Melting (SLM) or Electron Beam Melting (EBM), however could be applied to a number of other additive technologies that require the input of an STL file. Most environments have a definite up and down direction defined by the local gravity vector. To create a 3D lattice, follow these steps: On the Model tab, click Engineering and then select Lattice. The trigonal system is the tricky one, because its 25 space groups (143-167) belong either to the hexagonal (hP, 18 space groups) or the rhombohedral (hR, 7 space groups) Bravais lattice. This means that an atom or molecule located on this point in a real crystal lattice is shared with its neighboring cells. The hexagonal close packed cell is a derivative of the hexagonal Bravais lattice system (Figure 1) with the addition of an atom inside the unit cell at the coordinates (1 / 3, 2 / 3, 1 / 2). These periodic structures are intended for use with the metallic additive manufacturing technologies of Selective Laser Melting (SLM) or Electron Beam Melting (EBM), however could be applied to a number of other additive technologies that require the input of an STL file. Thus, these sites are much smaller than those in the square lattice. 2) using an unfolding method to make use of the hexagonal lattice symmetry. A related concept is that of the irreducible Brillouin zone, which is the first Brillouin zone reduced by all of the symmetries in the point group of the lattice (point group of the crystal). Iin both of these lattices, the corners of the unit cells are centered on a lattice point. We study hexagonal spin-channel ("triplet") density waves with commensurate M-point propagation vectors. The w-partition function is then defined to be the formal sum Z w = X π 3D diagram w(π). orthorhombic, tetragonal, trigonal, hexagonal, and cubic). The other six systems, in order of decreasing symmetry, are hexagonal, tetragonal, trigonal (also known as rhombohedral), orthorhombic, monoclinic and triclinic. Snapshot 1: This shows the primitive cubic system consisting of one lattice point at each corner of the cube. Constructing hexagonal lattice Although, on first sight, hexagonal lattice has little to do with Cartesian lattice it in fact can be shown that the two lattices are isomorphic i. Monoclinic. Each carbon covalently bonded to 3 neighboring carbon atoms in infinite interconnected hexagonal rings. 8(a), Callister 7e. Hexagonal close-packing corresponds to a ABAB stacking of such planes. Nonetheless, all the junctions of bubble walls are threefold, intersecting at angles that are close to 120 degrees. Draw the basal border for each hexagon facet: border. Schmitt Michael G. 
This molecular model has atoms arranged in 3 layers of 7-3-7 spheres to show the packing efficiency of HCP (hexagonal close packing) found in certain metals all for only $56. The equivalent reciprocal lattice in reciprocal space is defined by two reciprocal vectors, say and. Each crystal system can be further associated with between one and four lattices by adding to the primitive cell. construct 3D DNA origami packed on honeycomb-lattice geometry22 or square-lattice geometry23 also have been reported. The open lattice structure means that the material is strong without being too heavy, rather like the metal foams used for building aircraft. There's an elegant symmetry with these. This transition is not apparent in the magnetic susceptibility due to the frustration on the Mn triangular lattice and the dominating paramagnetic susceptibility of the Dy 3+ (S=9/2) spins. Crystal SymmetryCrystal Symmetry The external shape of a crystal reflects theThe external shape of a crystal reflects the presence or absence of translation-free syyymmetry elements in its unit cell. The 32 segments of the pattern ring each have a hexagonal '3D box' effect feature made from Sycamore, African Blackwood and Goncalo Alves surrounded by Muhuhu. Maniruzzaman —Abstract The design of Photonic Crystal Fiber (PCF) is very supple. - material has long range order. CLIP technology, like other forms of 3D printing, is able to create objects with complex geometries, which means it can be used to 3D print architectural lattice structures. A 3D distinct lattice spring model (DLSM) is proposed where matter is discretized into individual particles linked by springs. High quality Hexagon inspired T-Shirts, Posters, Mugs and more by independent artists and designers. construct 3D DNA origami packed on honeycomb-lattice geometry22 or square-lattice geometry23 also have been reported. This means that an atom or molecule located on this point in a real crystal lattice is shared with its neighboring cells. Just plug in the length of one of the sides and then solve the formula to find the area. • Each point in the reciprocal lattice represents. The triangular grid is dual to the hexagonal grid -- if you take a hexagonal grid, put a vertex in the center of each hexagonal tile, then form tile edges by connecting each vertex to its six neighboring vertices, you get a triangular grid. There are 209 hexagonal lattice suppliers, mainly located in Asia. 5: common rafter pitch and 22:12 dormers into a rotated purlin header. Orthorhombic. that of BN is hexagonal. square, hexagonal, or triangular patterns. A two dimension (2‐D) real lattice is defined by two unit cell vectors, say and inclined at an angle. Iin both of these lattices, the corners of the unit cells are centered on a lattice point. The Ionic Lattice Crystal Packing In an ionic solid, the ions are packed together into a repeating array called a crystal lattice. 5mm Triangular Pitch - 1mm Thickness. In this case, we use symmetric boundary conditions to reduce the computational volume and to consider only TE-like photonic crystal modes in the slab. A hexagonal closed-packed structure is built upon two simple hexagonal Bravais lattices. The space lattice points in a crystal are occupied by atoms.
Team:Valencia UPV/Modeling/diffusion

Pheromone Diffusion and Moths Response

Sexual communication among moths is accomplished chemically by the release of an "odor" into the air. This "odor" consists of sexual pheromones.

Figure 1. Female moth releasing sex pheromones and male moth.

Pheromones are molecules that easily diffuse in the air. During the diffusion process, the random movement of gas molecules transports the chemical away from its source [1]. Diffusion processes are complex, and modeling them analytically and with accuracy is difficult, even more so when the geometry is not simple. For this reason, we decided to consider a simplified model in which pheromone chemicals obey the heat diffusion equation.
Then, the equation is solved using the Euler numeric approximation in order to obtain the spatial and temporal distribution of the pheromone concentration.

Moths seem to respond to gradients of pheromone concentration, being attracted towards the source. Yet, there are other factors that lead moths to sexual pheromone sources, such as optomotor anemotaxis [2]. Moreover, increasing the pheromone concentration to unnaturally high levels may disrupt male orientation [3].

Using a modeling environment called Netlogo (https://ccl.northwestern.edu/netlogo/), we simulated the approximate behavior of moths during the pheromone dispersion process. This will help us to predict the moth response when they are also in the presence of Sexy Plant.

References

1. Sol I. Rubinow, Mathematical Problems in the Biological Sciences, chap. 9, SIAM, 1973.
2. J. N. Perry and C. Wall, A Mathematical Model for the Flight of Pea Moth to Pheromone Traps Through a Crop, Phil. Trans. R. Soc. Lond. B, vol. 306, no. 1125, pp. 19-48, 10 May 1984.
3. W. L. Roelofs and R. T. Carde, Responses of Lepidoptera to Synthetic Sex Pheromone Chemicals and Their Analogues, Annual Review of Entomology, vol. 22, pp. 377-405, 1977.

Diffusion Equation

Since pheromones are chemicals released into the air, we have to consider both the motion of the fluid and that of the particles suspended in it.

The motion of fluids can be described by the Navier-Stokes equations. But the typical nonlinearity of these equations when turbulences may exist in the air flow makes most problems difficult or impossible to solve. Thus, attending to the particles suspended in the fluid, a simpler effective option for modeling pheromone dispersion consists in assuming that pheromones behave diffusively.
That is, pheromones are molecules that can undergo a diffusion process in which the random movement of gas molecules transports the chemical away from its source [1].

There are two ways to introduce the notion of diffusion: either using a phenomenological approach starting with Fick's laws of diffusion and their mathematical consequences, or a physical and atomistic one, by considering the random walk of the diffusing particles [2].

In our case, we decided to model our diffusion process using Fick's laws. Thus, it is postulated that the flux goes from regions of high concentration to regions of low concentration, with a magnitude that is proportional to the concentration gradient. However, diffusion processes are complex, and modeling them analytically and with accuracy is difficult, even more when the geometry is not simple (e.g. consider the potential final distribution of our plants in the crop field). For this reason, we decided to consider a simplified model in which pheromone chemicals obey the heat diffusion equation.

Approximation

The diffusion equation is a partial differential equation that describes density dynamics in a material undergoing diffusion. It is also used to describe processes exhibiting diffusive-like behavior, as in our case. The equation is usually written as:

$$\frac{\partial \phi (r,t) }{\partial t} = \nabla \cdot [D(\phi,r) \nabla \phi(r,t)]$$

where $\phi(r, t)$ is the density of the diffusing material at location $r$ and time $t$, $D(\phi, r)$ is the collective diffusion coefficient for density $\phi$ at location $r$, and $\nabla$ represents the vector differential operator. If the diffusion coefficient does not depend on the density, then the equation is linear and $D$ is constant.
Thus, the equation reduces to the linear differential equation:

$$\frac{\partial \phi (r,t) }{\partial t} = D \nabla^2 \phi(r,t)$$

also called the heat equation. Making use of this equation, we can write the diffusion equation for the pheromone chemicals, with no wind effect considered, as:

$$\frac{\partial c }{\partial t} = D \nabla^2 c = D \Delta c$$

where $c$ is the pheromone concentration, $\Delta$ is the Laplacian operator, and $D$ is the pheromone diffusion constant in the air.

If we consider the wind, we face a diffusion system with drift, and an advection term is added to the equation above:

$$\frac{\partial c }{\partial t} = D \nabla^2 c - \nabla \cdot (\vec{v} c )$$

where $\vec{v}$ is the average velocity with which the quantity is moving; in our case, the velocity of the air flow.

For simplicity, we are not going to consider the third dimension. In 2D the equation becomes:

$$\frac{\partial c }{\partial t} = D \left(\frac{\partial^2 c }{\partial x^2} + \frac{\partial^2 c }{\partial y^2}\right) - \left(v_{x} \frac{\partial c }{\partial x} + v_{y} \frac{\partial c }{\partial y} \right) = D \left( c_{xx} + c_{yy}\right) - \left(v_{x} c_{x} + v_{y} c_{y}\right)$$

In order to determine a numeric solution for this partial differential equation, the so-called finite difference methods are used. The technique consists in approximating differential ratios as the step $h$ approaches zero, so they are useful to approximate differential equations. With finite difference methods, partial differential equations are replaced by their approximations as finite differences, resulting in a system of algebraic equations that is solved at each node $(x_i, y_j, t_k)$. These discrete values describe the temporal and spatial distribution of the diffusing particles.

Although implicit methods are unconditionally stable, so time steps could be larger and make the computation faster, the tool we have used to solve our heat equation is the Euler explicit method, since it is the simplest option to approximate spatial derivatives: all values are taken at the beginning of the time step.

The equation gives the new value of the pheromone level in a given node in terms of the initial values at that node and its immediate neighbors. Since all these values are known, the process is called explicit:
$$c(t_{k+1}) = c(t_k) + dt \cdot c'(t_k)$$

Now, applying this method to the first case (with no wind considered), we followed the next steps:

1. Split time $t$ into $n$ slices of equal length $dt$:

$$\left\{ \begin{array}{rcl} t_0 &=& 0 \\ t_k &=& k \cdot dt \\ t_n &=& t \end{array} \right.$$

2. Considering the backward difference for the Euler explicit method, the expression that gives the current pheromone level at each time step is:

$$c (x, y, t) \approx c (x, y, t - dt ) + dt \cdot c'(x, y, t)$$

3. Now considering the spatial dimension, central differences are applied to the Laplace operator $\Delta$, and backward differences are applied to the vector differential operator $\nabla$ (in 2D and assuming equal steps $h$ in the x and y directions):

$$c (x, y, t) \approx c (x, y, t - dt ) + dt \left( D \cdot \nabla^2 c (x, y, t) - \nabla \cdot (\vec{v} c (x, y, t)) \right)$$

with

$$D \cdot \nabla^2 c = D \left( c_{xx} + c_{yy}\right) = D \, \frac{c_{i,j-1} + c_{i,j+1} + c_{i-1,j} + c_{i+1,j} - 4 c_{i,j}}{h^2}$$

$$\nabla \cdot (\vec{v} c) = v_{x} c_{x} + v_{y} c_{y} = v_{x} \frac{c_{i,j} - c_{i-1,j}}{h} + v_{y} \frac{c_{i,j} - c_{i,j-1}}{h}$$

With respect to the boundary conditions, they are null, since we are considering an open space. Attending to the implementation and simulation of this method, $dt$ must be small enough to avoid instability; for the pure diffusion part, the usual stability bound of the explicit scheme is $dt \leq h^2 / (4D)$. A minimal sketch of this update scheme is shown below.
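As a concrete illustration (our own sketch, not the team's NetLogo implementation), the following NumPy code advances the pheromone field one explicit-Euler step at a time, with central differences for the Laplacian, backward differences for the drift term, and null boundaries. Grid size, source strength, and coefficients are arbitrary assumptions chosen to satisfy the stability bound:

```python
import numpy as np

def step_pheromone(c, D, vx, vy, h, dt):
    """One explicit-Euler update of the 2D advection-diffusion scheme:
    central differences for the Laplacian, backward differences for the
    wind (drift) term, null values on the boundary (open space)."""
    c_new = c.copy()
    lap = (c[1:-1, :-2] + c[1:-1, 2:] + c[:-2, 1:-1] + c[2:, 1:-1]
           - 4.0 * c[1:-1, 1:-1]) / h**2
    adv = (vx * (c[1:-1, 1:-1] - c[:-2, 1:-1]) / h      # v_x (c_ij - c_{i-1,j}) / h
           + vy * (c[1:-1, 1:-1] - c[1:-1, :-2]) / h)   # v_y (c_ij - c_{i,j-1}) / h
    c_new[1:-1, 1:-1] = c[1:-1, 1:-1] + dt * (D * lap - adv)
    c_new[0, :] = c_new[-1, :] = 0.0                    # null boundary conditions
    c_new[:, 0] = c_new[:, -1] = 0.0
    return c_new

# Toy run: a continuously emitting point source in the middle of a
# 50x50 grid with a light wind. dt satisfies dt <= h**2 / (4*D).
field = np.zeros((50, 50))
for _ in range(1000):
    field[25, 25] += 1.0
    field = step_pheromone(field, D=0.1, vx=0.05, vy=0.0, h=1.0, dt=1.0)
```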
References

1. J. Philibert. One and a half century of diffusion: Fick, Einstein, before and beyond. Diffusion Fundamentals, 2, 1.1-1.10, 2005.

Moth Response

The Idea

When one observes moths, they apparently move with erratic flight paths, possibly to avoid predators. This random flight is modified by the presence of sex pheromones. Since these pheromones are released by females in order to attract an individual of the opposite sex, it makes sense that males respond to gradients of sex pheromone concentration, being attracted towards the source. As soon as a flying male randomly enters the conical pheromone-effective sphere of sex pheromone released by a virgin female, the male begins to seek the female following a zigzag way. The male approaches the female, and finally copulates with her [1].

Approximation

In Sexy Plant we approximate the resulting moth movement as a vectorial combination of a gradient vector and a random vector. The magnitude of the gradient vector depends on the change in the pheromone concentration level between points separated by a differential stretch in space. More precisely, the gradient points in the direction of the greatest rate of increase of the function, and its magnitude is the slope of the graph in that direction. The random vector is constrained in this 'moth response' model by a fixed upper bound on the turning angle, assuming that the turning movement is relatively continuous; for example, one can assume that the moth cannot turn 180 degrees from one time instant to the next.

Since the objective of this project consists in avoiding pest damage by disrupting mating among moths, our synthetic plants are supposed to release enough sexual pheromone so as to saturate moth perception. In this sense, the resulting moth movement vector will ultimately depend on the pheromone concentration levels in the field and on the moth's ability to follow the gradient of sex pheromone concentration.

The three classes of male moth behavior we consider for this characterization are described in Table 1.
Table 1. Male moths behaviour characterization.

This ensemble of behaviors can be translated into a sum of vectors in which the random vector has a constant module and a changing direction within a range, whereas the module of the gradient vector is a function of the gradient in the field. The question now is how to include the saturation effect in the resulting moth shift vector. With this in mind, and focusing on the implementation process, our approach consists of the following:

- To model chemoattraction, the gradient vector always has a fixed unit magnitude, and its direction is that of the greatest rate of increase of the pheromone concentration.
- To model the random flight, instead of literally using a random direction vector with constant module, we consider a random turning angle starting from the gradient vector direction.

So, how do we include the saturation effect in the resulting moth shift vector? This is key to achieving sexual confusion. Our answer: the dependence of the behavior on the moth saturation level (in turn related to the pheromone concentration in the field) is included in the random turning angle.

This random turning angle will not follow a uniform distribution, but a Poisson distribution in which the mean is zero (no angle detour from the gradient vector direction) and the standard deviation is inversely proportional to the intensity of the gradient of sex pheromone concentration in the field. This approach leads to 'sexual confusion' of the insect as the field homogeneity increases, since the direction of displacement of the moth will match the gradient direction with a probability that depends on how saturated the moth is. A sketch of one such moth step is shown below.
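The following Python sketch (our illustration, not the team's NetLogo procedure) shows what one such moth step could look like. Below a detection threshold the moth picks a random heading; above it, the moth follows the unit gradient vector deviated by a zero-mean turning angle whose spread is inversely proportional to the local gradient intensity, so a homogeneous (saturating) field produces confusion. The zero-mean distribution described above is approximated here by a Gaussian for simplicity, and the constant `k` and the threshold are assumptions:

```python
import numpy as np

def moth_step(pos, field, detect_thr, k, rng, step=1.0):
    """One displacement of a male moth over a 2D pheromone field."""
    i, j = int(np.rint(pos[0])), int(np.rint(pos[1]))
    if field[i, j] < detect_thr:
        theta = rng.uniform(-np.pi, np.pi)            # pure random flight
        direction = np.array([np.cos(theta), np.sin(theta)])
    else:
        d_row, d_col = np.gradient(field)             # finite-difference gradient
        g = np.array([d_row[i, j], d_col[i, j]])
        gnorm = np.linalg.norm(g) + 1e-12
        u = g / gnorm                                 # unit gradient vector
        sigma = k / gnorm                             # spread ~ 1 / |grad c|
        theta = rng.normal(0.0, sigma)                # zero-mean detour angle
        cos_t, sin_t = np.cos(theta), np.sin(theta)   # rotate u by theta
        direction = np.array([cos_t * u[0] - sin_t * u[1],
                              sin_t * u[0] + cos_t * u[1]])
    new_pos = pos + step * direction
    return np.clip(new_pos, 0.0, np.array(field.shape, dtype=float) - 1.0)

# Toy example: a single static pheromone source at the center of the field.
xs = np.arange(50)
field = np.exp(-((xs[:, None] - 25.0) ** 2 + (xs[None, :] - 25.0) ** 2) / 200.0)
rng = np.random.default_rng(42)
pos = np.array([10.0, 10.0])
for _ in range(100):
    pos = moth_step(pos, field, detect_thr=0.01, k=0.02, rng=rng)
```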
References

1. Yoshitoshi Hirooka and Masana Suwanai. Role of Insect Sex Pheromone in Mating Behavior I. Theoretical Consideration on Release and Diffusion of Sex Pheromone in the Air. J. Ethol, 4, 1986.
2. W. L. Roelofs and R. T. Carde. Responses of Lepidoptera to Synthetic Sex Pheromone Chemicals and their Analogues.

Simulation

Using a modeling environment called Netlogo, we simulate the approximate moth population behavior when the pheromone diffusion process takes place.

Setup

The Netlogo simulator can be found on its website at Northwestern University (http://ccl.northwestern.edu/netlogo/). The source file of our Sexy Plant simulation in Netlogo can be downloaded here: sexyplants.nlogo (http://2014.igem.org/Team:Valencia_UPV/Modeling/sexyplants.nlogo).

- We consider three agents: male and female moths, and sexy plants.
- We have two kinds of sexual pheromone emission sources: female moths and sexy plants.
- Our scenario is an open crop field where sexy plants are intercropped and moths fly following different patterns depending on their sex.

Females, apart from emitting sexual pheromones, move following erratic random flight paths.
After mating, females do not emit pheromones for a period of 2 hours.

Males also move randomly while they are below their detection threshold. But when they detect a certain pheromone concentration, they start to follow the pheromone concentration gradients until their saturation threshold is reached.

Sexy plants act as continuously emitting sources, and their activity is regulated by a Switch.

The pheromone diffusion process is simulated in Netlogo by implementing the Euler explicit method.

Figure 1. NETLOGO simulation environment.

Runs

When sexy plants are switched off, males move randomly until they detect pheromone traces from females; in that case, they follow them.

When sexy plants are switched on, the pheromone starts to diffuse from them, raising the concentration levels in the field. At first, the sexy plants have the effect of acting as pheromone traps on the male moths.

Figure 2. On the left: sexy plants are switched off and a male moth follows the pheromone trace from a female. On the right: sexy plants are switched on and a male moth goes towards the static source, as happens with synthetic pheromone traps.

As the concentration rises in the field, it becomes more homogeneous. Remember that the random turning angle of the insect follows a Poisson distribution in which the standard deviation is inversely proportional to the intensity of the gradient. Thus, the probability that the insect takes a bigger detour from the faced gradient vector direction is higher. This means that it is less able to follow pheromone concentration gradients, so 'sexual confusion' is induced.

Figure 3. NETLOGO simulation of the field: sexy plants, female moths, pheromone diffusion and male moths (video: http://www.youtube.com/embed/URZgjbfEUwc). A condensed sketch of this simulation skeleton is shown below.
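To make the setup concrete, here is a condensed, self-contained Python sketch of the simulation skeleton (an illustration, not the actual sexyplants.nlogo code): females and, after the switch-on tick, sexy plants deposit pheromone on the grid, and the field is then diffused with the explicit Euler step described earlier. Emission rates, plant positions, and timings are arbitrary assumptions; females are kept static here for brevity, although in the real model they also fly:

```python
import numpy as np

rng = np.random.default_rng(7)
N, D, H, DT = 50, 0.1, 1.0, 1.0
field = np.zeros((N, N))
plants = [(12, 12), (12, 37), (37, 12), (37, 37)]   # intercropped sexy plants
females = rng.uniform(1, N - 2, size=(15, 2))       # female moth positions
switch_on = False

for tick in range(2000):
    if tick == 1000:
        switch_on = True                            # switch the sexy plants on
    for i, j in females.astype(int):
        field[i, j] += 0.05                         # females emit pheromone
    if switch_on:
        for i, j in plants:
            field[i, j] += 0.5                      # plants emit continuously
    # Explicit-Euler diffusion step (no wind), null boundary conditions.
    lap = np.zeros_like(field)
    lap[1:-1, 1:-1] = (field[1:-1, :-2] + field[1:-1, 2:] + field[:-2, 1:-1]
                       + field[2:, 1:-1] - 4.0 * field[1:-1, 1:-1]) / H**2
    field += DT * D * lap
    field[0, :] = field[-1, :] = 0.0
    field[:, 0] = field[:, -1] = 0.0
    # Male moths would be advanced here with a step like moth_step above,
    # and male-female encounters counted at each tick.
```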
Parameters

The parameters of this model are not as well characterized as we expected at first, and finding accurate values for them is not a trivial task: in the literature it is difficult to find numbers obtained experimentally. So we decided to take an inverse engineering approach: by sweeping the model parameters we simulate many possible scenarios, and then we come up with the parameter values corresponding to our desired one, in which insects get confused. This will be useful to know the limitations of our system and to help decide the final distribution of our plants in the crop field. The parameter ranges we found in the literature are:
Diffusion coefficient: range of physical search 0.01-0.2 cm^2/s. References: [1], [2], [3], [5].

Release rate (female): range of physical search 0.02-1 µg/h. References: [4], [5], [8].

Release rate (Sexy Plant): the range of search we considered is a little wider than the one for the release rate of females. Primary sex pheromone components are approximately defined as those emitted by the calling insect that are obligatory for trap catch in the field at component emission rates similar to those used by the insect [4]. The emission rate above which males start to get confused could be the release rate of females.

Detection threshold: range of physical search 1000 molecules/cm^3. References: [4], [5], [8].

Saturation threshold: range of physical search 1-5 [Mass]/[Distance]^2. It has generally been found that pheromone dispensers releasing the chemicals above a certain emission rate will catch fewer males, and the optimum release rate or dispenser load for trap catch varies greatly among species [4].

Moth sensitivity: this parameter refers to the capability of the insect to detect changes in pheromone concentration between the patch where it is located and the neighboring patch. When the field becomes more homogeneous, an insect with higher sensitivity is better able to follow the gradients. Range: 0-0.0009 (the maximum level of moth sensitivity has to be less than the minimum release rate of females, since this parameter is obtained from the difference).

Wind force: range 0-10 m/s. References: [7].

Population: the number of males and females can be selected by the observer.

Ticks (time step)

We consider the equivalence 20 ticks = 1 hour, that is, 1 tick = 3 minutes.

Patches

The approximate velocity of a male moth flying towards a female in a natural environment is 0.3 m/s [6]. Each moth moves 1 patch per tick, so if 1 tick equals 3 minutes (180 s), a patch must be 54 meters long to reproduce that velocity. In our case we used a field of 50x50 patches; one can modify the number of patches that make up the field so as to analyze one's own case.
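For reference, this space and time scaling can be written out explicitly. The dictionary below merely transcribes the literature ranges listed above (units as given there), and the patch-size computation reproduces the 0.3 m/s x 180 s = 54 m figure:

```python
# Literature-derived parameter ranges used for the sweep (transcribed
# from the list above; units as given there).
PARAM_RANGES = {
    "diffusion_coefficient_cm2_per_s": (0.01, 0.2),
    "female_release_rate_ug_per_h":    (0.02, 1.0),
    "moth_sensitivity":                (0.0, 0.0009),
    "wind_speed_m_per_s":              (0.0, 10.0),
}

# Space/time scaling of the NetLogo grid: 20 ticks = 1 hour, and a male
# moth flying at ~0.3 m/s covers one patch per tick.
TICK_SECONDS = 3600 / 20            # 180 s per tick
MOTH_SPEED = 0.3                    # m/s
PATCH_SIZE = MOTH_SPEED * TICK_SECONDS
print(TICK_SECONDS, PATCH_SIZE)     # 180.0 s, 54.0 m
```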
References

1. Wilson et al. 1969; Hirooka and Suwanai, 1976.
2. Monchich and Mauson, 1961; Lugs, 1968.
3. G. A. Lugg. Diffusion Coefficients of Some Organic and Other Vapors in Air.
4. R. W. Mankiny, K. W. Vick, M. S. Mayer, J. A. Coeffelt and P. S. Callahan (1980). Models for Dispersal of Vapors in Open and Confined Spaces: Applications to Sex Pheromone Trapping in a Warehouse, pp. 932, 940.
5. Tal Hadad, Ally Harari, Alex Liberzon, Roi Gurka (2013). On the correlation of moth flight to characteristics of a turbulent plume.
6. Average Weather For Valencia, Manises, Costa del Azahar, Spain.
7. Yoshitoshi Hirooka and Masana Suwanai. Role of Insect Sex Pheromone in Mating Behavior I. Theoretical Consideration on Release and Diffusion of Sex Pheromone in the Air. J. Ethol, 4, 1986.

Scenarios

The aim consists of reducing the possibility of meetings among moths of the opposite sex. Thus, we analyze the number of meetings in the three following cases:

1. When sexy plants are switched off and males only interact with females.
2. When sexy plants are switched on and have the effect of trapping males.
3. When sexy plants are switched on and males get confused as the level of pheromone concentration rises above their saturation threshold.

It is also interesting to analyze a fourth case: what happens if females do not emit pheromones and males just move randomly through the field? This gives an idea of the minimum number of male-female encounters that we should expect in a fully random scenario, with no pheromones at play.

4. Males and females move randomly. How much would our results differ from the rest of the cases?

If Sexy Plant works, the first scenario should give a higher number of encounters than the second and third ones. With all values fixed except the number of males and females, we started the simulations. Each test was simulated more than once, in order to take into account the stochastic nature of the process. Again, we considered different sub-scenarios for each one of the cases mentioned above; in particular, the cases of male and female subpopulations of equal size, or one larger than the other. A minimal sketch of this repeated-runs comparison is shown below.
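The following Python sketch illustrates only the comparison methodology (repeat stochastic runs, average the encounter counts, compare switch-off against switch-on). Here `run_simulation` is a hypothetical placeholder with a made-up effect size, standing in for a full NetLogo run; it is not our model and its numbers are not results:

```python
import random
import statistics

def run_simulation(n_males, n_females, switch_on, seed):
    """Hypothetical stand-in for one full simulation run, returning a
    male-female encounter count. The 0.6 effect size is made up so the
    harness below is runnable; it is NOT a measured result."""
    rng = random.Random(seed)
    mean = 0.1 * n_males * n_females * (0.6 if switch_on else 1.0)
    return max(0, round(rng.gauss(mean, mean ** 0.5 + 1.0)))

def compare_scenarios(n_males, n_females, repeats=10):
    """Average encounters with sexy plants OFF vs ON over repeated runs,
    since each individual simulation is stochastic."""
    off = [run_simulation(n_males, n_females, False, s) for s in range(repeats)]
    on = [run_simulation(n_males, n_females, True, 1000 + s) for s in range(repeats)]
    return statistics.mean(off), statistics.mean(on)

for n_males in (5, 10, 20):
    off, on = compare_scenarios(n_males, n_females=10)
    print(f"M={n_males:2d}, F=10 -> encounters OFF: {off:5.1f}, ON: {on:5.1f}")
```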
Experiment 1

What happens when the number of females is equal to the number of males (F = M)?

- T_0: start.
- T_1000: switch-on.
- T_2000: end.

The results show that the number of encounters during the time sexy plants are switched on is almost the same as, though in most cases lower than, when sexy plants are switched off.

- The time at which the insects start to get confused and move randomly is shorter as the population increases. Even for high numbers, males get confused before the sexy plants are switched on. That is because there is such an amount of females that they saturate the field. This rarely happens in nature, so when it occurs in our simulation we should consider that we are outside realistic scenarios and modify the rest of the parameter values. In these experiments we see that at a population equal to 12 we start to be on this limit (insects get confused just when the sexy plants are about to be switched on).
- Another aspect that should be considered is the time at which insects get confused across experiments (when the number of females is the same). One could think that this 'saturation' time would depend on the number of encounters before it happens: since females stop emitting pheromones after mating, males should get confused later if the previous number of meetings is larger. However, the results are not decisive in this matter.

Experiment 2

Based on the results of Experiment 1, we fixed 10 as the top number of females for the next tests; the number of females is kept constant in each test.

- The number of encounters is higher as the number of males increases, which makes sense.
- In all cases, as the number of males increases with respect to the number of females, the time required for them to get confused grows. This possibly has its origin in the number of encounters, which is higher according to the previous point: when males mate with females, the females stop emitting pheromones for a certain period of time, so the contribution to the field saturation decreases.
- In contrast with Experiment 1, it is observed that as the number of males increases, the number of encounters is considerably higher when the sexy plants are switched off than when they are switched on. This is seen more clearly when the number of males is larger. We believe that with more experiments this fact can be easily tested.

Comparing Experiments 1 and 2

Experiment 1 (F = 10, M = 10): in this experiment we did not see the result we are looking for. We are interested in obtaining a high proportion in the third column when the sexy plants are working, but the graphs counting the number of encounters (purple for switch-off, green for switch-on) are very similar, so the effect is not achieved satisfactorily.

Experiment 2: in this experiment we do see the result we are looking for; the graphs counting the number of encounters differ visibly, so the effect is achieved.

Finally: females do not emit pheromones, so males and females move randomly. How much would our results differ from the ones with females emitting?

We decided to set the end time according to the moment at which the pheromone level in the field is entirely over the male saturation threshold (in this case, 8). We take as reference the top female population number, 10; for the rest of the tests the pheromone concentration in the field will be lower.
In almost every case, the number of encounters is higher when females emit pheromones. It means that in our model males can follow females guided by pheromone concentration gradients; moreover, this is seen in the interface during the simulations. The results shown for pheromone emission are an average over a number of experiments. Note also the contribution of the pheromone supply to the environment, which is directly related to the number of females and inversely related to the number of meetings. For a 1-to-1 population and the given end time, no more than 2 encounters were observed, in contrast with the random movement case, in which no encounters were observed in the range of experiments we checked.

Conclusions

We have used a methodology for comparing results in which experiments are repeated several times, and the interpretation of the performances is based on the values obtained. Nevertheless, an exhaustive replay of the same realizations would give us more accurate values.

The experiments with the same number of males as females gave results we had not expected; perhaps changing the model parameter values would yield a different kind of performance. Another aspect we have taken into account is that some of the encounters during the time males are following pheromone traces from females may also be due to random coincidence.

We have used a procedure that is useful to discard scenarios and contrast different realizations, from which logical conclusions can be derived. It is thus a way of leading a potential user of this application to widen the search of parameters and improve our model, which could be useful to know the limitations of our system and helpful to decide the final distribution of our synthetic plants in the field.
22: 377-405, 1977. Since pheromones are chemicals released into the air, we have to consider both the motion of the fluid and that of the particles suspended in it. The motion of fluids can be described by the Navier-Stokes equations, but the typical nonlinearity of these equations, when there may exist turbulence in the air flow, makes most problems difficult or impossible to solve. Thus, attending to the particles suspended in the fluid, a simpler effective option for pheromone dispersion modeling consists in the assumption of diffusive-like behavior of the pheromones. That is, pheromones are molecules that can undergo a diffusion process in which the random movement of gas molecules transports the chemical away from its source [1]. There are two ways to introduce the notion of diffusion: either using a phenomenological approach starting with Fick's laws of diffusion and their mathematical consequences, or a physical and atomistic one, by considering the random walk of the diffusing particles [2]. In our case, we decided to model our diffusion process using Fick's laws. Thus, it is postulated that the flux goes from regions of high concentration to regions of low concentration, with a magnitude that is proportional to the concentration gradient. However, diffusion processes are complex, and modeling them analytically and with accuracy is difficult, even more so when the geometry is not simple (e.g., consider the potential final distribution of our plants in the crop field). For this reason, we decided to consider a simplified model in which pheromone chemicals obey the heat diffusion equation. The diffusion equation is a partial differential equation that describes density dynamics in a material undergoing diffusion. It is also used to describe processes exhibiting diffusive-like behavior, as in our case. The equation is usually written as: $$\frac{\partial \phi (r,t) }{\partial t} = \nabla \cdot \left[ D(\phi,r) \, \nabla \phi(r,t) \right],$$ where $\phi(r, t)$ is the density of the diffusing material at location $r$ and time $t$, $D(\phi, r)$ is the collective diffusion coefficient for density $\phi$ at location $r$, and $\nabla$ represents the vector differential operator. If the diffusion coefficient does not depend on the density, then the equation is linear and $D$ is constant. Thus, the equation reduces to the linear differential equation $$\frac{\partial \phi (r,t) }{\partial t} = D \nabla^2 \phi(r,t),$$ also called the heat equation. Making use of this equation, we can write the diffusion equation for the pheromone chemicals, with no wind effect considered, as $$\frac{\partial c }{\partial t} = D \nabla^2 c = D \Delta c,$$ where $c$ is the pheromone concentration, $\Delta$ is the Laplacian operator, and $D$ is the pheromone diffusion constant in air. If we consider the wind, we face a diffusion system with drift, and an advection term is added to the equation above: $$\frac{\partial c }{\partial t} = D \Delta c - \vec{v} \cdot \nabla c,$$ where $\vec{v}$ is the average velocity; thus $\vec{v}$ would be the velocity of the air flow in our case. For simplicity, we are not going to consider the third dimension. In 2D the equation would be $$\frac{\partial c }{\partial t} = D \left( \frac{\partial^2 c}{\partial x^2} + \frac{\partial^2 c}{\partial y^2} \right) - v_x \frac{\partial c}{\partial x} - v_y \frac{\partial c}{\partial y}.$$ In order to determine a numerical solution for this partial differential equation, the so-called finite difference methods are used. With finite difference methods, partial differential equations are replaced by their finite-difference approximations, resulting in a system of algebraic equations that is solved at each node $(x_i,y_j,t_k)$. These discrete values describe the temporal and spatial distribution of the diffusing particles.
Although implicit methods are unconditionally stable, so time steps could be larger and make the calculation faster, the tool we have used to solve our heat equation is the explicit Euler method, as it is the simplest option to approximate the spatial derivatives. The equation gives the new value of the pheromone level at a given node in terms of the initial values at that node and its immediate neighbors; since all these values are known, the process is called explicit. Applying this method to the first case (with no wind considered), we followed these steps: 1. Split time $t$ into $n$ slices of equal length $dt$: $$ \left\{ \begin{array}{rcl} t_0 &=& 0 \\ t_k &=& k \cdot dt \\ t_n &=& t \end{array} \right. $$ 2. Considering the backward difference for the explicit Euler method, the expression that gives the current pheromone level at each time step is: $$c (x, y, t) \approx c (x, y, t - dt ) + dt \left( D \cdot \nabla^2 c (x, y, t) - \vec{v} \cdot \nabla c (x, y, t) \right),$$ with $$ D \cdot \nabla^2 c (x, y, t) = D \left( c_{xx} + c_{yy}\right) = D \, \frac{c_{i,j-1} + c_{i,j+1} + c_{i-1,j} + c_{i+1,j} - 4 c_{i,j}}{h^2}, $$ $$ \vec{v} \cdot \nabla c (x, y, t) = v_{x} \cdot c_{x} + v_{y} \cdot c_{y} = v_{x} \, \frac{c_{i,j} - c_{i-1,j}}{h} + v_{y} \, \frac{c_{i,j} - c_{i,j-1}}{h}. $$ With respect to the boundary conditions, they are null since we are considering an open space. Regarding the implementation and simulation of this method, $dt$ must be small enough to avoid instability. J. Philibert. One and a half century of diffusion: Fick, Einstein, before and beyond. Diffusion Fundamentals, 2, 1.1-1.10, 2005.
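To make the update above concrete, here is a minimal sketch in R of one explicit Euler step on a concentration grid. It is our illustration (the actual model runs in Netlogo), and the function and argument names are ours:

euler_step <- function(C, D, vx, vy, h, dt) {
  # C: matrix of pheromone concentrations. Null (zero) boundary conditions
  # are imposed by zero-padding, matching the open-field assumption above.
  n <- nrow(C); m <- ncol(C)
  P <- matrix(0, n + 2, m + 2)
  P[2:(n + 1), 2:(m + 1)] <- C
  c_im <- P[1:n, 2:(m + 1)]        # c_{i-1,j}
  c_ip <- P[3:(n + 2), 2:(m + 1)]  # c_{i+1,j}
  c_jm <- P[2:(n + 1), 1:m]        # c_{i,j-1}
  c_jp <- P[2:(n + 1), 3:(m + 2)]  # c_{i,j+1}
  lap <- (c_im + c_ip + c_jm + c_jp - 4 * C) / h^2   # 5-point Laplacian
  adv <- vx * (C - c_im) / h + vy * (C - c_jm) / h   # backward differences
  C + dt * (D * lap - adv)                           # explicit Euler update
}

As noted above, dt must be small enough for stability; for pure diffusion, dt < h^2/(4D) is the usual rule of thumb.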
When one observes moth behavior, the moths apparently move with erratic flight paths, possibly to avoid predators. This random flight is modified by the presence of sex pheromones. Since these pheromones are released by females in order to attract an individual of the opposite sex, it makes sense that males respond to gradients of sex pheromone concentration, being attracted towards the source. As soon as a flying male randomly enters the conical pheromone-effective sphere of sex pheromone released by a virgin female, the male begins to seek the female following a zigzag path. The male approaches the female and finally copulates with her [1]. In Sexy Plant we approximate the resulting moth movement as a vectorial combination of a gradient vector and a random vector. The magnitude of the gradient vector depends on the change in the pheromone concentration level between points separated by a differential stretch in space. More precisely, the gradient points in the direction of the greatest rate of increase of the function, and its magnitude is the slope of the graph in that direction. The random vector is constrained in this 'moth response' model by a fixed upper bound on the turning angle, assuming that the turning movement is relatively continuous; for example, one can assume that the moth cannot turn 180 degrees from one time instant to the next. Our synthetic plants are supposed to release enough sexual pheromone to be able to saturate moth perception. In this sense, the resulting moth movement vector will ultimately depend on the pheromone concentration levels in the field and on the moth's ability to follow, better or worse, the gradient of sex pheromone concentration. The three classes of male moth behavior we consider for the characterization of male moth behavior are described in Table 1. Table 1. Male moth behavior characterization. This ensemble of behaviors can be translated into a sum of vectors in which the random vector has constant module and changing direction within a range, whereas the module of the gradient vector is a function of the gradient in the field. The question now is how we include the saturation effect in the resulting moth shift vector. With this in mind, and focusing on the implementation process, our approach consists of the following. To model chemoattraction, the gradient vector will always have fixed unit magnitude, and its direction is that of the greatest rate of increase of the pheromone concentration. To model the random flight, instead of using a random direction vector with constant module, we consider a random turning angle starting from the gradient vector direction. Thus, how do we include the saturation effect in the resulting moth shift vector? This is key to achieving sexual confusion. Our answer: the dependence of the behavior on the moth saturation level (in turn related to the pheromone concentration in the field) will be included in the random turning angle. This random turning angle will not follow a uniform distribution, but a Poisson distribution in which the mean is zero (no angle detour from the gradient vector direction) and the standard deviation is inversely proportional to the intensity of the gradient of the sex pheromone concentration in the field. This approach leads to 'sexual confusion' of the insect as the field homogeneity increases, because the direction of displacement of the moth will equal the gradient direction with a certain probability, which depends on how saturated the moth is. Yoshitoshi Hirooka and Masana Suwanai. Role of Insect Sex Pheromone in Mating Behavior I. Theoretical Consideration on Release and Diffusion of Sex Pheromone in the Air. J. Ethol., 4, 1986.
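The following R fragment sketches this heading update. It is our illustration; the names are ours, and we use a zero-mean normal distribution as a convenient stand-in for the zero-mean turning-angle law described above:

update_heading <- function(grad_x, grad_y, k = 1, max_turn = pi) {
  # Turning-angle noise has zero mean and a spread inversely proportional
  # to the gradient magnitude: a flat (saturated) field yields near-random
  # motion, i.e. sexual confusion.
  g <- sqrt(grad_x^2 + grad_y^2)
  if (g < .Machine$double.eps) return(runif(1, -pi, pi))  # no usable gradient
  base <- atan2(grad_y, grad_x)              # direction of steepest increase
  turn <- rnorm(1, mean = 0, sd = k / g)     # spread ~ 1 / |gradient|
  turn <- max(min(turn, max_turn), -max_turn)  # bounded turning angle
  base + turn
}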
Using a modeling environment called Netlogo, we simulate the approximate moth population behavior when the pheromone diffusion process takes place. The Netlogo simulator can be found on its website at Northwestern University, and the source file of our Sexy Plant simulation in Netlogo is sexyplants.nlogo. We consider three agents: male and female moths, and sexy plants. We have two kinds of sexual pheromone emission sources: female moths and sexy plants. Our scenario is an open crop field where sexy plants are intercropped, and moths fly following different patterns depending on their sex. Females, apart from emitting sexual pheromones, move following erratic random flight paths. After mating, females do not emit pheromones for a period of 2 hours. Males also move randomly while they are below their detection threshold, but when they detect a certain pheromone concentration, they start to follow the pheromone concentration gradients until their saturation threshold is reached. Sexy plants act as continuously-emitting sources, and their activity is regulated by a switch. The pheromone diffusion process is simulated in Netlogo by implementing the explicit Euler method. Figure 1. NETLOGO simulation environment. When sexy plants are switched off, males move randomly until they detect pheromone traces from females, in which case they follow them. When sexy plants are switched on, the pheromone starts to diffuse from them, raising the concentration levels in the field. At first, sexy plants have the effect of acting as pheromone traps on the male moths. Figure 2. On the left: sexy plants are switched off and a male moth follows the pheromone trace from a female. On the right: sexy plants are switched on and a male moth goes towards the static source, as happens with synthetic pheromone traps. As the concentration rises in the field, it becomes more homogeneous. Remember that the random turning angle of the insect follows a Poisson distribution in which the standard deviation is inversely proportional to the intensity of the gradient. Thus, the probability that the insect takes a bigger detour from the faced gradient vector direction is higher. This means that it is less able to follow pheromone concentration gradients, so sexual confusion is induced. Figure 3. NETLOGO simulation of the field: sexy plants, female moths, pheromone diffusion, and male moths. The parameters of this model are not as well characterized as we expected at first. Finding accurate values for these parameters is not a trivial task, and in the literature it is difficult to find experimentally obtained numbers, so we decided to take a reverse engineering approach. The parameter ranges we found in the literature are the following (they are also collected in the short sketch after the reference list below):
- Diffusion coefficient: range of physical search 0.01-0.2 cm^2/s (refs. [1], [2], [3], [5]).
- Release rate (female): range of physical search 0.02-1 µg/h (refs. [4], [5], [8]).
- Release rate (Sexy Plant): the search range we considered is a little wider than the one for the female release rate; primary sex pheromone components are approximately defined as those emitted by the calling insect that are obligatory for trap catch in the field at component emission rates similar to those used by the insect [4].
- Detection threshold: range of physical search 1000 molecules/cm^3.
- Saturation threshold: range of physical search 1-5 [mass]/[distance]^2; it has generally been found that pheromone dispensers releasing the chemicals above a certain emission rate will catch fewer males, and the optimum release rate or dispenser load for trap catch varies greatly among species [4].
- Moth sensitivity: range 0-10 m/s (ref. [7]); this parameter refers to the capability of the insect to detect changes in pheromone concentration between the patch where it is located and the neighboring patch. When the field becomes more homogeneous, an insect with higher sensitivity will be better able to follow the gradients.
The number of males and females can be selected by the observer. One can modify the number of patches that make up the field so as to analyze one's own case; in our case we used a field of 50x50 patches. Wilson et al. 1969; Hirooka and Suwanai, 1976. Monchick and Mason, 1961; Lugg, 1968. G. A. Lugg. Diffusion Coefficients of Some Organic and Other Vapors in Air. W. L. Roelofs and R. T. Carde. Responses of Lepidoptera to Synthetic Sex Pheromone Chemicals and their Analogues, page 386. R.W. Mankiny, K.W. Vick, M.S. Mayer, J.A. Coeffelt and P.S. Callahan (1980). Models For Dispersal Of Vapors in Open and Confined Spaces: Applications to Sex Pheromone Trapping in a Warehouse, pages 932, 940. Tal Hadad, Ally Harari, Alex Liberzon, Roi Gurka (2013). On the correlation of moth flight to characteristics of a turbulent plume. Average Weather For Valencia, Manises, Costa del Azahar, Spain.
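For convenience, the quoted ranges can be kept in a single configuration object; this is our illustrative sketch (variable names are ours; values are those cited above, and the Sexy Plant upper bound is our placeholder):

params <- list(
  diffusion_coef    = c(0.01, 0.2),  # cm^2/s, refs. [1], [2], [3], [5]
  release_female    = c(0.02, 1),    # ug/h, refs. [4], [5], [8]
  release_sexyplant = c(0.02, 2),    # ug/h; "a little wider" than females
                                     # (upper bound is our placeholder)
  detection_thresh  = 1000,          # molecules/cm^3
  saturation_thresh = c(1, 5),       # [mass]/[distance]^2
  moth_sensitivity  = c(0, 10),      # range quoted with ref. [7]
  field_patches     = c(50, 50)      # 50x50 patches, as in our runs
)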
Three scenarios are compared: (1) sexy plants are switched off and males only interact with females; (2) sexy plants are switched on and have the effect of trapping males; (3) sexy plants are switched on and males get confused as the level of pheromone concentration rises above their saturation threshold. It is also interesting to analyze a fourth case: what happens if females do not emit pheromones and males just move randomly through the field? This gives an idea of the minimum number of male-female encounters that we should expect in a fully random scenario, with no pheromones at play (males and females move randomly; how much would our results differ from the rest of the cases?). The timeline of each run is: T_0, start; T_1000, switch-on; T_2000, end. The results show that the number of encounters during the time the sexy plants are switched on is almost the same as, but in most cases lower than, when the sexy plants are switched off. The time at which the insects start to get confused and move randomly becomes shorter as the population increases; even for high numbers, males get confused before the sexy plants are switched on. These observations, together with the comparison of Experiments 1 and 2 and the pheromone-emission test, are discussed in detail above.
A comparative study of R functions for clustered data analysis Wei Wang ORCID: orcid.org/0000-0002-0084-08141 & Michael O. Harhay1,2 Trials volume 22, Article number: 959 (2021) Cite this article Clustered or correlated outcome data are common in medical research studies, such as the analysis of national or international disease registries, or cluster-randomized trials, where groups of trial participants, instead of each trial participant, are randomized to interventions. Within-group correlation in studies with clustered data requires the use of specific statistical methods, such as generalized estimating equations and mixed-effects models, to account for this correlation and support unbiased statistical inference. We compare different approaches to estimating generalized estimating equations and mixed-effects models for a continuous outcome in R through a simulation study and a data example. The methods are implemented through four popular functions of the statistical software R, "geese", "gls", "lme", and "lmer". In the simulation study, we compare the mean squared error of estimating all the model parameters and the coverage proportion of the 95% confidence intervals. In the data analysis, we compare estimation of the intervention effect and the intra-class correlation. In the simulation study, the function "lme" takes the least computation time. There is no difference in the mean squared error of the four functions. The "lmer" function provides better coverage of the fixed effects when the number of clusters is as small as 10. The function "gls" produces confidence intervals for the intra-class correlation with close to nominal coverage. In the data analysis, the "gls" function yields a positive estimate of the intra-class correlation while the "geese" function gives a negative estimate, and neither of the confidence intervals contains the value zero. The "gls" function efficiently produces an estimate of the intra-class correlation with a confidence interval. When the within-group correlation is as high as 0.5, the confidence interval is not always obtainable. Clustered data Clustered data arise when the study population can be classified into different groups (referred to as clusters), and the measurements of subjects, in particular the response, within the same cluster are more alike than those in other clusters. For instance, in cluster-randomized trials, entire groups of participants such as classrooms, clinics, communities, or hospitals, rather than individuals, are randomly assigned to intervention arms [1, 2]. The key feature of clustered data is that the similarity (or homogeneity) of measurements within the same cluster induces a correlation. That is, measurements within a cluster are likely to be correlated, whereas those from separate clusters are regarded as independent. The intra-class correlation coefficient (ICC) measures this similarity of the responses within a cluster and can be defined as a function of the variance components in the model: variation between clusters and within clusters [3, 4]. Since the responses within a cluster do not contribute completely independent information, the "effective" sample size is less than the total number of subjects from all clusters [5–7]. Classical statistical methods such as ordinary least squares regression assume that each individual's data are independent. Clustered data have a hierarchical structure in which individuals are not likely to be independent within the same cluster (i.e., ICC > 0).
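To get a feel for the size of this effect, a standard back-of-the-envelope approximation can be used (the Kish design effect; this sketch is our illustration and is not derived in this paper): with average cluster size m and ICC rho, variances are inflated by a factor of 1 + (m - 1) * rho.

# Kish design effect (standard approximation; not from this paper).
# n: total number of subjects, m: average cluster size, rho: ICC.
effective_n <- function(n, m, rho) n / (1 + (m - 1) * rho)
effective_n(n = 1000, m = 50, rho = 0.05)  # about 290 effectively independent subjects

Even a modest ICC therefore shrinks the effective sample size considerably.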
Thus, methods taking the correlation into account, such as generalized estimating equation (GEE) and mixed-effects models, are well suited for the analysis of clustered data [8–10]. GEE models can be viewed as an extension of generalized linear models for correlated data where a within-cluster correlation structure is specified [11]. Parameter estimates are then obtained as solutions of the estimating equations [12]. In mixed-effects models, the cluster effect is a random variable representing a random deviation for a given cluster from the overall fixed effects [13–15]. Maximum likelihood estimation is often used to obtain estimates of parameters via iterative algorithms such as the expectation-maximization (EM) algorithm and the Newton-Raphson algorithm [16–21]. As shown in [22], for normal outcomes, GEE reduces to the score equation of the maximum likelihood estimation only when there are no missing observations and the correlation is unstructured. A further comparison shows that GEE and mixed-effects models produce the same generalized least squares estimator of the fixed effects [23]. We will review the derivation of the fixed effects estimator in the next section. Simulation studies have been conducted to compare the two methods for analyzing continuous outcomes with an emphasis on the fixed effects components. In the comparison of the estimation and the coverage probability of the confidence intervals, Park [22] found that the GEE estimation was more sensitive to missing observations. In the study [24], the authors compared the estimation and the nominal level of hypothesis testing and made several recommendations. For instance, if knowledge is available to specify the covariance structure correctly, the maximum likelihood estimation is slightly more efficient for balanced or near balanced data. When there is concern about the misspecification of the covariance structure, GEE is preferred when the number of clusters is larger than 20. For hypothesis testing, Kahan et al. [25] and Leyrat et al. [26] found that without an appropriate correction, both methods can lead to inflated type I error rates (finding a statistically significant treatment effect when it does not exist) when the number of clusters is smaller than 40. R, SAS, and Stata commands to correct the type I error rate are provided in [26]. R functions In this study, we compared the performance of the GEE method and the linear mixed-effects model to analyze clustered data through the implementation of both popular and newer packages of the statistical software R [27]. Specifically, the "geese" function of the geepack package (1.3.2) fits a GEE model [28, 29]. The "gls" function of the nlme package (3.1.149) [30, 31] fits a linear model using generalized least squares where the errors are allowed to be correlated. Two frequently used functions for conducting linear mixed-effects model analysis are "lme" of the nlme package [30, 31] and "lmer" of the lme4 package [32] (1.1.25). Detailed implementation of these functions is provided in the "Implementation" subsection of the next section and also summarized in Table 1. Table 1 Summary of how to obtain the CI's of the fixed effects and the variance-covariance parameters We compared the performance of the four functions via a simulation study and through a real data example. In the simulation study, we compared the computation time; the mean squared error (MSE) of estimating all the model parameters, including the ICC; and the coverage proportion of the 95% confidence intervals. 
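As a concrete illustration of how the four functions are called on the same single-level clustered data, consider the following sketch. It is ours rather than the paper's code; it assumes a data frame dat with outcome y, covariates x1 and x2, and a cluster identifier cluster (with rows sorted by cluster, as geese expects).

library(geepack)  # geese
library(nlme)     # gls, lme
library(lme4)     # lmer

# GEE with an exchangeable working correlation
fit_geese <- geese(y ~ x1 + x2, id = cluster, data = dat,
                   corstr = "exchangeable")

# Generalized least squares with a compound symmetry correlation structure
fit_gls <- gls(y ~ x1 + x2, data = dat,
               correlation = corCompSymm(form = ~ 1 | cluster))

# Linear mixed-effects models with a random intercept for each cluster
fit_lme  <- lme(y ~ x1 + x2, random = ~ 1 | cluster, data = dat)
fit_lmer <- lmer(y ~ x1 + x2 + (1 | cluster), data = dat)

# Approximate confidence intervals (cf. Table 1)
intervals(fit_gls)                     # fixed effects, rho, and sigma
intervals(fit_lme)                     # fixed effects, sigma_u, and sigma_e
confint(fit_lmer, method = "profile")  # profile-likelihood CIs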
Parameter estimates in the linear mixed-effects models are found by maximizing the likelihood of the data. In the following, we review the model setup, followed by the simulation study and the example dataset. Model review Let \(n_i\) be the number of subjects in the i-th cluster and let \(\mathbf{1}_i\) be a \(n_i\times 1\) vector of one's. Let \(\mathbf{I}_i\) be the square identity matrix of dimension \(n_i\) and let \(\mathbf{J}_i\) be the square matrix of one's of dimension \(n_i\). Let \(u_i\) be the random effect associated with the i-th cluster. The linear mixed-effects modeling of the i-th cluster's response \(\mathbf{y}_i\) is $$\mathbf{y}_{i}=\mathbf{X}_{i}{\boldsymbol{\beta}}+\mathbf{1}_{i} u_{i}+{\boldsymbol{\epsilon}}_{i}, \ \ \ u_{i}\sim \mathrm{N}\left(0,\sigma^{2}_{u}\right), \ \ {\boldsymbol{\epsilon}}_{i}\sim \mathrm{N}\left(\mathbf{0},\sigma_{\epsilon}^{2}\mathbf{I}_{i}\right).$$ The matrix \(\mathbf{X}_i\) is the design matrix and \({\boldsymbol{\beta}}\) is the vector of unknown fixed effects. The random vector \({\boldsymbol{\epsilon}}_i\) represents the error and is independent of \(u_i\). Independence is also assumed between \(u_i\) and \(u_j\), and between \({\boldsymbol{\epsilon}}_i\) and \({\boldsymbol{\epsilon}}_j\), whenever \(i\neq j\). It follows that \(\mathbf{y}_i\sim \mathrm{N}(\mathbf{X}_i{\boldsymbol{\beta}},{\boldsymbol{\Sigma}}_i)\), where \({\boldsymbol {\Sigma }}_{i}=\sigma ^{2}_{u}\mathbf {J}_{i}+\sigma _{\epsilon }^{2}\mathbf {I}_{i}\). Let the elements of \(\mathbf{y}_i\) be \(y_{ij}\), \(j=1,\ldots,n_i\). We obtain \(\text {Var}(y_{ij})=\sigma ^{2}_{u}+\sigma _{\epsilon }^{2}\) and \(\text {Cov}(y_{ij},y_{ik})=\sigma ^{2}_{u}\) where \(j\neq k\). The intra-class correlation coefficient \(\rho\) is naturally defined by the variance components \(\sigma ^{2}_{u}\) and \(\sigma _{\epsilon }^{2}\) as $$ \rho\equiv \text{Corr}(y_{ij},y_{ik})= \frac{\sigma^{2}_u}{\sigma^{2}},\ \text{where}\ \sigma^{2}=\sigma^{2}_{u}+\sigma_{\epsilon}^{2}. $$ By definition, \(0<\rho<1\) since both \(\sigma ^{2}_{u}\) and \(\sigma _{\epsilon }^{2}\) are positive. The above modeling uses the random effect \(u_i\) explicitly to explain the within-cluster correlation. From a marginal model perspective, one can instead start with the model \(\mathbf{y}_i\sim \mathrm{N}(\mathbf{X}_i{\boldsymbol{\beta}},{\boldsymbol{\Sigma}}_i)\) directly, with a special structure for \({\boldsymbol{\Sigma}}_i\). Let \(\rho=\text{Corr}(y_{ij},y_{ik})\) and \(\sigma^2=\text{Var}(y_{ij})\); consequently we get \({\boldsymbol{\Sigma}}_i=\sigma^2[\mathbf{I}_i+\rho(\mathbf{J}_i-\mathbf{I}_i)]\). Using this marginal parametrization \(\{\sigma^2,\rho\}\), the matrix \({\boldsymbol{\Sigma}}_i\) is positive definite if \(-1/(n_i-1)<\rho<1\) [30, 33–35]. That is, \(\rho\) does not have to be positive, as it does when defined from a variance components perspective. At the same time, we also observe that when \(n_i\) is large, the boundary \(-1/(n_i-1)\) can be very close to 0. Starting with this parameterization, we derive the corresponding relationship of (1) as $$ \sigma^{2}_{u}=\sigma^{2}\rho,\ \ \sigma_{\epsilon}^{2}=\sigma^{2}(1-\rho). $$ It is clear that \({\boldsymbol {\Sigma }}_{i}^{-1}=\{\mathbf {I}_{i}-\rho \mathbf {J}_{i}/[1+(n_{i}-1)\rho ]\}/\sigma ^{2}(1-\rho)\). Let \(\mathbf {T}_{i}(n_{i},\rho) \equiv \mathbf {T}_{i}(\rho)=\mathbf {I}_{i}-\rho \mathbf {J}_{i}/[1+(n_{i}-1)\rho ]=\sigma ^{2}(1-\rho){\boldsymbol {\Sigma }}_{i}^{-1}\). The estimate of \({\boldsymbol{\beta}}\) follows as \(\hat {{\boldsymbol {\beta }}}=\left \{ \sum _{i=1}^{n_c} \mathbf {X}_{i}^{\prime } {\boldsymbol {\Sigma }}_{i}^{-1}\mathbf {X}_{i}\right \}^{-1}\left \{\sum _{i=1}^{n_{c}}\mathbf {X}_{i}^{\prime } {\boldsymbol {\Sigma }}_{i}^{-1}\mathbf {y}_{i}\right \}\), which reduces to $$\hat{{\boldsymbol{\beta}}}=\left\{ \sum\limits_{i=1}^{n_c} \mathbf{X}_{i}^{\prime}\mathbf{T}_{i}(\rho)\mathbf{X}_{i}\right\}^{-1}\left\{\sum\limits_{i=1}^{n_{c}}\mathbf{X}_{i}^{\prime} \mathbf{T}_{i}(\rho)\mathbf{y}_{i}\right\}. $$ The variance-covariance matrix of the estimate has the form $$\text{Cov}(\hat{{\boldsymbol{\beta}}})=\left\{ \sum\limits_{i=1}^{n_c} \mathbf{X}_{i}^{\prime} {\boldsymbol{\Sigma}}_{i}^{-1}\mathbf{X}_{i}\right\}^{-1}=\sigma^{2}(1-\rho)\left\{ \sum\limits_{i=1}^{n_{c}} \mathbf{X}_{i}^{\prime} \mathbf{T}_{i}(\rho)\mathbf{X}_{i}\right\}^{-1}. $$
We notice that if there is no within-cluster correlation, i.e., \(\rho=0\), then \(\mathbf{T}_i(0)=\mathbf{I}_i\) and \(\hat{{\boldsymbol{\beta}}}\) is simply the ordinary least squares estimator. In the extreme case of perfect correlation, i.e., \(\rho=1\), we have \(\mathbf{T}_i(1)=\mathbf{I}_i-\mathbf{J}_i/n_i\) and we get $$\begin{array}{@{}rcl@{}} \hat{{\boldsymbol{\beta}}}|_{\rho=1}&=&\left[ \sum\limits_{i=1}^{n_c} \mathbf{X}_{i}^{\prime}\mathbf{T}_{i}(1)\mathbf{X}_{i}\right]^{-1}\left[\sum\limits_{i=1}^{n_c}\mathbf{X}_{i}^{\prime} \left(\mathbf{y}_{i}-\mathbf{1}_{i}\bar{y}_{i}\right)\right]. \end{array} $$ In this scenario, \(y_{ij}=y_{ik}\) and \(\mathbf {y}_{i}=\mathbf{1}_i\bar {y}_{i}\), so \(\hat {{\boldsymbol {\beta }}}=\mathbf {0}\). Simulation Our simulation setups are similar to those in the literature. In their simulation, Feng et al. [24] tried the number of clusters \(n_c=10\), 20, and 50 with cluster sizes 10, 30, and 100, and \(\rho=0.1\), 0.5. The simulation study [25] considered two scenarios, of 5 patients per cluster with \(\rho=0.15\), and of 100 patients per cluster with \(\rho=0.01\); the number of clusters \(n_c\) varied over 6, 10, 20, …, 90, and 100. The simulation scenarios in [26] include \(\rho=0.001\), 0.01, and 0.05, and \(n_c=4\), 6, 8, 10, 20, 30, 40, and 200, with the average cluster size ranging from 7 to 300. A review of published cluster-randomized trials by Kahan et al. [25] shows that the median number of clusters was 25 with interquartile range 15 to 44, that 14% of the trials had fewer than 10 clusters, and that 9% of the trials had more than 100 clusters; the cluster size had a median of 31 and an interquartile range of 14 to 94. In our study, we tried \(n_c=10\), 30, 50, and 100. In each cluster, the number of subjects was obtained by rounding a draw from a normal distribution with mean \(m=50\) or 100 and standard deviation 5. The sample size calculation in [36] assumed an ICC of 0.05, but some studies have also found large ICC values, such as 0.47 with 95% confidence interval [0.29, 0.65] [6, Table I] and 0.60 [37, Table 4.4.2]. In our setup, we considered \(\rho=0.05\), 0.1, and 0.5. We simulated two covariates independently, one from a Bernoulli distribution with a probability of success of 0.5 and one from a standard normal distribution. The associated regression coefficients are, respectively, \(\beta_1=-2\) and \(\beta_2=1.5\), and the intercept in the regression model is \(\beta_0=1\). We simulated the outcome from a marginal model with the variance parameter \(\sigma^2=0.6\). In each of the settings, we examined 2000 simulations.
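The data-generating mechanism can be sketched in a few lines of R (our sketch, not the paper's actual script). The outcome is generated in the random-intercept form, with sigma_u^2 = sigma^2 * rho and sigma_epsilon^2 = sigma^2 * (1 - rho) as in Eq. (2); the result is a data frame of the shape assumed in the fitting sketch above.

simulate_clustered <- function(nc = 30, m = 50, rho = 0.1, sigma2 = 0.6,
                               beta = c(1, -2, 1.5)) {
  sizes <- pmax(1, round(rnorm(nc, mean = m, sd = 5)))  # cluster sizes
  cluster <- rep(seq_len(nc), sizes)
  n  <- length(cluster)
  x1 <- rbinom(n, 1, 0.5)   # Bernoulli(0.5) covariate
  x2 <- rnorm(n)            # standard normal covariate
  u  <- rnorm(nc, sd = sqrt(sigma2 * rho))        # random intercepts
  e  <- rnorm(n,  sd = sqrt(sigma2 * (1 - rho)))  # residual errors
  y  <- beta[1] + beta[2] * x1 + beta[3] * x2 + u[cluster] + e
  data.frame(y = y, x1 = x1, x2 = x2, cluster = factor(cluster))
}
dat <- simulate_clustered()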
Example dataset We reanalyzed data from the cluster-randomized controlled trial in [36]. In this study, participants with hypertension from 15 clusters in rural India were recruited and randomized to the intervention or usual care in a 1:2 ratio. The study hypothesis was that a CHW (community health worker)-led group-based education and monitoring intervention would result in improved blood pressure control. Outcomes were assessed approximately two months after completion of the intervention. One of the main outcomes in this trial was the change in diastolic blood pressure (DBP), defined as the DBP at baseline minus the DBP at follow-up. Fixed effects in the analysis include the following variables: age, sex, diastolic blood pressure at baseline (mm Hg), education, use of antihypertensive medications, change in BMI (body mass index), defined as the BMI at follow-up minus the BMI at baseline, number of serves of fruit per week at follow-up, and self-reported drinking of alcohol at least once in the 30 days prior to follow-up. The education variable has four categories: no formal schooling, class 1 to 6, class 7 to 11, and class 12 or more. We analyzed the data from the 1428 participants with no missing values. The histogram of the outcome shows a bell-shaped pattern (Fig. 1); the intervention group has a larger proportion of positive differences than the usual care group, suggesting more DBP decline at the follow-up. The normal quantile-quantile plot in Fig. 2 shows that the normal distribution assumption for the outcome is plausible, which justifies the application of linear mixed-effects models. Fig. 1 Histogram of DBP difference by groups. Fig. 2 Normal Q-Q plot of DBP difference by groups. Implementation All four R functions compute \(\hat{{\boldsymbol{\beta}}}\) and the corresponding confidence interval (CI), but they adopt different parameterizations for the variance-covariance matrix. The function "geese" uses \(\{\sigma^2,\rho\}\) through the specification of an "exchangeable" correlation structure. The function "gls" uses a compound symmetry structure with the parameters \(\{\sigma,\rho\}\). Both "lme" and "lmer" find estimates and CI's of \(\{\sigma_u,\sigma_\epsilon\}\). It is straightforward to obtain estimates of \(\sigma^2\), or of \(\sigma^2_u\) and \(\sigma^2_\epsilon\), from their corresponding square root estimates. We can then find estimates under another parameterization using Eqs. (1) or (2). As the "geese" method does not fit in the framework of hierarchical modeling with random effects, it is not appropriate to find its estimates of \(\{\sigma ^{2}_{u},\sigma _{\epsilon }^{2}\}\) using Eq. (2), due to a possibly negative \(\hat {\rho }\); thus, we do not include it in the comparison of estimating \(\{\sigma_u,\sigma_\epsilon\}\). Methods for obtaining the CI's of \(\rho\) have been discussed in [38, 39]. In the model-based setup, the CI of \(\rho\) is readily available from the output of the "geese" or "gls" fitted object. Below we explain how to get the CI's of the model parameters, with a summary presented in Table 1. From the "geese" output, we apply the estimate ± 1.96 standard deviation rule to obtain the CI's. We apply two generic functions, "confint" and "intervals", to "gls", "lme", or "lmer" fitted objects. The "confint" function assumes normality and has two options. The option method="Wald" returns approximate CI's of the fixed effects based on the estimated local curvature of the likelihood surface. The other option, method="profile", computes a likelihood profile and finds the appropriate cutoffs based on the likelihood ratio test [32]. The "intervals" function calculates approximate confidence intervals for the parameters in the linear model using a normal approximation to the distribution of the maximum likelihood estimators. The estimators are assumed to have a normal distribution centered at the true parameter values and with covariance matrix equal to the negative inverse Hessian matrix of the log-likelihood evaluated at the estimated parameters [30, 31].
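For instance (our sketch, reusing the hypothetical fit objects from the earlier snippet), the ICC and its approximate CI can be read off a "gls" fit, and the variance components of an "lmer" fit can be converted using Eqs. (1) and (2):

ci <- intervals(fit_gls)
ci$corStruct                         # estimate and approximate CI for rho
sigma2_hat <- as.numeric(ci$sigma["est."])^2
rho_hat    <- ci$corStruct[, "est."]
c(sigma_u2 = sigma2_hat * rho_hat,             # Eq. (2)
  sigma_e2 = sigma2_hat * (1 - rho_hat))

vc  <- as.data.frame(VarCorr(fit_lmer))        # sigma_u^2 and sigma_e^2
icc <- vc$vcov[1] / sum(vc$vcov)               # Eq. (1)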
With the detailed comparison results presented in the supplementary material, we summarize our findings in the following text. The "lme" approach took the least computation time, 1.4 h, followed by the "gls" approach at 2.41 h (Table S1). Sometimes the "gls" approach failed to construct the confidence intervals for the variance-covariance parameters when \(\rho=0.5\) (Table S2). The number of failures increases with the number of clusters \(n_c\) and also with the cluster size \(m\). When \(n_c=100\) and \(m=100\), there were 528 failures out of 2000 simulations. Occasionally, the "lme" and "lmer" approaches also failed to construct the confidence intervals. The performance of the four approaches in estimating the model parameters is very similar, with almost identical standard deviations and MSEs (Tables S3-S6). Next we summarize the performance of the different functions regarding the coverage proportion of the model parameters. First, in general, the coverage proportions of the fixed effects are very similar among the "gls", "lme", and "lmer" approaches (Tables S7-S9). They are very close to the nominal 95% level except for \(\beta_0\) when \(n_c=10\) (Table S7); in that case, confidence intervals obtained by specifying the "profile" option of the "confint" function for a "lmer" fitted object outperform the others. Their coverage proportions are generally closer to the nominal 95% level than those of the "geese" approach when \(n_c\) is less than 100. Second, though the coverage proportions of \(\{\sigma^2,\rho\}\) for "geese" or \(\{\sigma,\rho\}\) for "gls" are usually below the 95% nominal level, the "gls" method generally provided better coverage (Table S10). Third, both "lme" and "lmer" produced coverage proportions of about 95% for \(\sigma_\epsilon\), and for \(\sigma_u\) when \(n_c\neq 10\); when \(n_c=10\), we observed unstable performance with over- or under-coverage (Table S11). All four methods produced similar results that suggested more DBP reduction in the intervention group: 2.161 mm Hg by "geese" and 2.252 by the other three methods (Table 2). The 95% confidence interval bounds of the intervention effect are slightly different. However, our conclusion is consistent with the finding in [36], where the analysis was conducted in Stata (Stata IC/11.2, StataCorp, College Station, TX, USA): DBP declined 2.1 mm Hg more in the intervention group, with a 95% confidence interval of (0.6, 3.6), and the estimate of the ICC was 0.02. The "gls" method gives a positive \(\hat {\rho }\) with a confidence interval that does not contain 0. The "geese" method produces a negative \(\hat {\rho }\) with negative confidence interval bounds. Table 2 CI's of the fixed effects and the variance-covariance parameters Throughout our study, we compare the performance of the four R functions, "geese", "gls", "lme", and "lmer", for analyzing single-level clustered data. The "exchangeable" correlation structure of the "geese" function and the compound symmetry structure of the "gls" function both provide a single-level cluster model. We note that the "lme" and "lmer" functions can model multi-level data, and the "lmer" function is capable of modeling crossed random effects. The lme4 package also includes generalized linear mixed model capability via the "glmer" function. It does not currently implement nlme's features for modeling heteroscedasticity of residuals or offer the same flexibility for composing complex variance-covariance structures. Our simulation study found that all four methods perform equally well for model parameter estimation. This result is consistent with the study in [24], where it was found that the MSEs are very similar except when the number of clusters is 10, in which case the linear mixed-effects model method has slightly smaller MSE than the GEE method using SAS PROC MIXED.
We observe similar coverage proportions of the fixed effects among the "gls", "lme", and "lmer" approaches. They are generally closer to the nominal 95% level than the "geese" approach when the number of clusters is less than 100. The estimated ICC from the "geese" method can be negative, and the confidence interval of the ICC from the "gls" method provides better coverage. However, when the ICC is as large as \(\rho=0.5\), confidence intervals are not always obtainable from the "gls" method. In our comparison of the coverage of the variance-covariance parameters in the model, the "lme" and "lmer" methods have similar performance, while the former is considerably faster; the latter provides better coverage of the intercept in the model when the number of clusters is 10. In the simulation settings we examined, the "gls" function is preferable for analyzing single-level clustered data. The limitations of our simulation study include the lack of scenarios with a very large number of clusters (e.g., 200 as in [26]) or with small ICC values (e.g., 0.01 as in [25, 26]). It may also be of interest to further compare the performance of the four functions for complex trials such as stepped-wedge cluster-randomized trials. In a stepped-wedge cluster-randomized trial, all clusters begin in the control phase and then are randomized to interventions at different time points [40–43]. Simulations have been conducted to investigate the effect of varying degrees of imbalance in cluster size on the power [44]. The data analyzed in our example were originally studied in [36]. The data file and the data dictionary are publicly available from the figshare database (accession number https://doi.org/10.26180/5cc80de987113). The data were reanalyzed without collaboration with the original study authors. Abbreviations: CHW: Community health worker; DBP: Diastolic blood pressure; EM: Expectation-maximization; GEE: Generalized estimating equation; ICC: Intra-class correlation coefficient; MSE: Mean squared error. Murray DM, Varnell SP, Blitstein JL. Design and analysis of group-randomized trials: a review of recent methodological developments. Am J Public Health. 2004; 94(3):423–32. Campbell M, Donner A, Klar N. Developments in cluster randomized trials and statistics in medicine. Stat Med. 2007; 26(1):2–19. Fisher RA. Statistical Methods for Research Workers, 5th edn. Edinburgh: Oliver and Boyd Ltd.; 1934. Eldridge SM, Ukoumunne OC, Carlin JB. The intra-cluster correlation coefficient in cluster randomized trials: a review of definitions. Int Stat Rev. 2009; 77(3):378–94. Baio G, Copas A, Ambler G, Hargreaves J, Beard E, Omar RZ. Sample size calculation for a stepped wedge trial. Trials. 2015; 16(354). https://pubmed.ncbi.nlm.nih.gov/26282553/. Campbell MK, Mollison J, Grimshaw JM. Cluster trials in implementation research: estimation of intracluster correlation coefficients and sample size. Stat Med. 2001; 20(3):391–9. Rutterford C, Copas A, Eldridge S. Methods for sample size determination in cluster randomized trials. Int J Epidemiol. 2015; 44(3):1051–67. Lee KJ, Thompson SG. The use of random effects models to allow for clustering in individually randomized trials. Clin Trials. 2005; 2(2):163–73. Barker D, McElduff P, D'Este C, Campbell M. Stepped wedge cluster randomised trials: a review of the statistical methodology used and available. BMC Med Res Methodol. 2016; 16(69). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4895892/. Turner EL, Prague M, Gallis JA, Li F, Murray DM.
Review of recent methodological developments in group-randomized trials: part 2-analysis. Am J Public Health. 2017; 107(7):1078–86. Liang K-Y, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika. 1986; 73(1):13–22. Zeger SL, Liang K-Y. Longitudinal data analysis for discrete and continuous outcomes. Biometrics. 1986; 1:121–30. Laird NM, Ware JH. Random-effects models for longitudinal data. Biometrics. 1982; 38:963–74. Ware JH. Linear models for the analysis of longitudinal studies. Am Stat. 1985; 39(2):95–101. Laird N, Lange N, Stram D. Maximum likelihood computations with repeated measures: application of the em algorithm. J Am Stat Assoc. 1987; 82(397):97–105. Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Ser B. 1977; 39:1–38. Wu JCF. On the convergence properties of the EM algorithm. Ann Stat. 1983; 11(1):95–103. Jennrich RI, Schluchter MD. Unbalanced repeated-measures models with structured covariance matrices. Biometrics. 1986; 42:805–20. Lindstrom M, Bates D. Newton-Raphson and EM algorithms for linear mixed effects models for repeated measures data. J Am Stat Assoc. 1988; 83:1014–22. Meng XL, Rubin DB. Maximum likelihood estimation via the ECM algorithm: A general framework. Biometrika. 1993; 80:267–78. Liu CH, Rubin DB. The ECME algorithm - a simple extension of EM and ECM with faster monotone convergence. Biometrika. 1994; 81(4):633–48. Park T. A comparison of the generalized estimating equation approach with the maximum likelihood approach for repeated measurements. Stat Med. 1993; 12(18):1723–32. Wu C-T, Gumpertz ML, Boos DD. Comparison of GEE, MINQUE, ML, and REML estimating equations for normally distributed data. Am Stat. 2001; 55(2):125–30. Feng Z, McLerran D, Grizzle J. A comparison of statistical methods for clustered data analysis with gaussian error. Stat Med. 1996; 15(16):1793–806. Kahan BC, Forbes G, Ali Y, Jairath V, Bremner S, Harhay MO, Hooper R, Wright N, Eldridge SM, Leyrat C. Increased risk of type I errors in cluster randomised trials with small or medium numbers of clusters: a review, reanalysis, and simulation study. Trials. 2016; 17. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5013635/. Leyrat C, Morgan KE, Leurent B, Kahan BC. Cluster randomized trials with a small number of clusters: which analyses should be used?. Int J Epidemiol. 2018; 47(1):321–31. R Core Team. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2020. R Foundation for Statistical Computing. https://www.R-project.org/. Yan J, Fine JP. Estimating equations for association structures. Stat Med. 2004; 23:859–80. Halekoh U, Højsgaard S, Yan J. The R package geepack for generalized estimating equations. J Stat Softw. 2006; 15(2):1–11. Pinheiro J, Bates D. Mixed-Effects Models in S and S-PLUS. New York: Springer; 2009. Pinheiro J, Bates D, DebRoy S, Sarkar D, R Core Team. nlme: Linear and Nonlinear Mixed Effects Models. 2020. R package version 3.1-149. https://cran.r-project.org/web/packages/nlme/ChangeLog. Accessed Aug 2020. Bates D, Mächler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015; 67(1):1–48. doi:10.18637/jss.v067.i01. Wang W. Identifiability of linear mixed effects models. Electron J Stat. 2013; 7:244–63. Wang W. Identifiability of covariance parameters in linear mixed effects models. Linear Algebra Appl. 2016; 506:603–13. Wang W. 
Checking identifiability of covariance parameters in linear mixed effects models. J Appl Stat. 2017; 44(11):1938–46. Gamage DG, Riddell MA, Joshi R, Thankappan KR, Chow CK, Oldenburg B, Evans RG, Mahal AS, Kalyanram K, Kartik K, Suresh O, Thomas N, Mini GK, Maulik PK, Srikanth VK, Arabshahi S, Varma RP, Guggilla RK, D'Esposito F, Sathish T, Alim M, Thrift AG. Effectiveness of a scalable group-based education and monitoring program, delivered by health workers, to improve control of hypertension in rural India: A cluster randomised controlled trial. PLoS Med. 2020; 17:1–22. Pal N, Lim WK. On intra-class correlation coefficient estimation. Stat Pap. 2004; 45:369–92. Ukoumunne OC. A comparison of confidence interval methods for the intraclass correlation coefficient in cluster randomized trials. Stat Med. 2002; 21(24):3757–74. Demetrashvili N, Wit EC, van den Heuvel ER. Confidence intervals for intraclass correlation coefficients in variance components models. Stat Methods Med Res. 2016; 25(5):2359–76. Hussey MA, Hughes JP. Design and analysis of stepped wedge cluster randomized trials. Contemp Clin Trials. 2007; 28(2):182–91. Beard E, Lewis JJ, Copas A, Davey C, Osrin D, Baio G, Thompson JA, Fielding KL, Omar RZ, Ononge S, et al. Stepped wedge randomised controlled trials: systematic review of studies published between 2010 and 2014. Trials. 2015; 16(353). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4538902/. Kristunas C, Morris T, Gray L. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review. BMJ Open. 2017; 7(11). http://dx.doi.org/10.1136/bmjopen-2017-017151. https://bmjopen.bmj.com/content/7/11/e017151.full.pdf. Li F, Hughes JP, Hemming K, Taljaard M, Melnick ER, Heagerty PJ. Mixed-effects models for the design and analysis of stepped wedge cluster randomized trials: An overview. Stat Methods Med Res. 2020; 30(2):612–39. https://pubmed.ncbi.nlm.nih.gov/32631142/. Kristunas CA, Smith KL, Gray LJ. An imbalance in cluster sizes does not lead to notable loss of power in cross-sectional, stepped-wedge cluster randomised trials with a continuous outcome. Trials. 2017; 18(109). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5341460/. The authors would like to acknowledge the availability of the data in the study of [36], which were made publicly accessible by the PLOS Medicine journal. PLOS journals require authors to make all data necessary to replicate their study's findings publicly available without restriction at the time of publication (https://journals.plos.org/plosmedicine/s/data-availability). Michael O. Harhay was supported by grant R00 HL141678 from the National Heart, Lung, and Blood Institute (NHLBI) of the US National Institutes of Health (NIH). The funding body had no role in the study design, data collection and analysis, or decision to publish. Clinical Trials Methods and Outcomes Lab, Palliative and Advanced Illness Research (PAIR) Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA Wei Wang & Michael O. Harhay Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA Michael O. Harhay Wei Wang Both authors made substantial contributions to the manuscript. WW conducted the data analysis and simulation studies, and drafted the manuscript. MOH reviewed and revised the article. Both authors approved the manuscript to be submitted. Correspondence to Wei Wang.
The supplementary material in a pdf format contains the tables of the simulation study results. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Wang, W., Harhay, M.O. A comparative study of R functions for clustered data analysis. Trials 22, 959 (2021). https://doi.org/10.1186/s13063-021-05900-7 Clustered data analysis Cluster randomized trials Mixed effects models
Nemirovski Stefan Yurievich
1. V. Chernov, S. Nemirovski, "Interval topology in contact geometry", Commun. Contemp. Math., 2019, 1950042 (published online), arXiv: 1810.01642
2. V. Chernov, S. Nemirovski, "Redshift and contact forms", J. Geom. Phys., 123 (2018), 379–384, arXiv: 1709.01741
3. Stefan Nemirovski, Rasul Gazimovich Shafikov, "Uniformization and Steinness", Canad. Math. Bull., 61 (2018), 637–639, arXiv: 1705.05113
4. A. I. Aptekarev, V. M. Buchstaber, V. A. Vassiliev, M. L. Gromov, Yu. S. Ilyashenko, B. S. Kashin, V. M. Keselman, V. V. Kozlov, M. L. Kontsevich, I. M. Krichever, N. G. Kruzhilin, S. K. Lando, Yu. I. Manin, G. A. Margulis, S. Yu. Nemirovski, S. P. Novikov, Yu. G. Reshetnyak, Ya. G. Sinai, S. P. Suetin, D. V. Treschev, D. B. Fuchs, A. G. Khovanskii, E. M. Chirka, A. S. Schwarz, A. N. Shiryaev, "Vladimir Antonovich Zorich (on his 80th birthday)", Russian Math. Surveys, 73:5 (2018), 935–939
5. A. I. Aptekarev, V. K. Beloshapka, V. I. Buslaev, V. V. Goryainov, V. N. Dubinin, V. A. Zorich, N. G. Kruzhilin, S. Yu. Nemirovski, S. Yu. Orevkov, P. V. Paramonov, S. I. Pinchuk, A. S. Sadullaev, A. G. Sergeev, S. P. Suetin, A. B. Sukhov, K. Yu. Fedorovskiy, A. K. Tsikh, "Evgenii Mikhailovich Chirka (on his 75th birthday)", Russian Math. Surveys, 73:6 (2018), 1137–1144
6. Stefan Nemirovski, Kyler Siegel, "Rationally convex domains and singular Lagrangian surfaces in $\mathbb C^2$", Invent. Math., 203:1 (2016), 333–358, 26 pp., arXiv: 1410.4652
7. Vladimir Chernov, Stefan Nemirovski, "Universal orderability of Legendrian isotopy classes", J. Symplectic Geom., 14:1 (2016), 149–170, arXiv: 1307.5694
8. A. I. Bufetov, A. A. Glutsyuk, S. M. Gusein-Zade, A. S. Gorodetski, V. Yu. Kaloshin, A. G. Khovanskii, V. A. Kleptsyn, S. Yu. Nemirovskii, A. B. Sossinsky, M. A. Tsfasman, V. A. Vassiliev, S. Yu. Yakovenko, "Yulij Sergeevich Ilyashenko", Mosc. Math. J., 14:2 (2014), 173–179
9. F. A. Bogomolov, F. Kataneze, Yu. I. Manin, S. Yu. Nemirovskii, V. V. Nikulin, A. N. Parshin, V. V. Przhiyalkovskii, Yu. G. Prokhorov, M. Teikher, A. S. Tikhomirov, V. M. Kharlamov, I. A. Cheltsov, I. R. Shafarevich, V. V. Shokurov, "Viktor Stepanovich Kulikov (on his 60th birthday)", UMN, 68:2(410) (2013), 205–207
10. V. Chernov, S. Nemirovski, "Cosmic censorship of smooth structures", Comm. Math. Phys., 320:2 (2013), 469–473
11. S. Nemirovski, "Levi problem and semistable quotients", Complex Var. Elliptic Equ., 58:11 (2013), 1517–1525; Corrigendum, ibid., 1633
12. S. Nemirovski, "Corrigendum. Levi problem and semistable quotients", Complex Var. Elliptic Equ., 58:11 (2013), 1633
13. V. Chernov, S. Nemirovski, "Legendrian links, causality, and the Low conjecture", Geom. Funct. Anal., 19:5 (2010), 1320–1333
14. V. Chernov, S. Nemirovski, "Non-negative Legendrian isotopy in $ST^*M$", Geom. Topol., 14:1 (2010), 611–626
An explicit form of the polynomial part of a restricted partition function Karl Dilcher and Christophe Vignat Research in Number Theory 2017, 3:1. Published: 5 January 2017 We prove an explicit formula for the polynomial part of a restricted partition function, also known as the first Sylvester wave. This is achieved by way of some identities for higher-order Bernoulli polynomials, one of which is analogous to Raabe's well-known multiplication formula for the ordinary Bernoulli polynomials. As a consequence of our main result we obtain an asymptotic expression of the first Sylvester wave as the coefficients of the restricted partition grow arbitrarily large. Keywords: restricted partitions; Sylvester waves; Bernoulli polynomials; Raabe's identity. Mathematics Subject Classification: Primary 11P81; Secondary 11B68. An interesting topic in the theory of partitions is that of restricted partitions, where the following question has been studied quite extensively: given a vector \(\mathbf{d }:=(d_1,d_2,\ldots ,d_m)\) of positive integers, let \(W(s,\mathbf{d })\) be the number of partitions of the integer \(s\) with parts in \(\mathbf{d }\), i.e., \(W(s,\mathbf{d })\) is the number of solutions of $$\begin{aligned} d_1x_1+d_2x_2+\cdots +d_mx_m = s \end{aligned}$$ in nonnegative integers \(x_1,\ldots ,x_m\). For a history of this problem, see [5, p. 119ff.]. A standard method of dealing with questions of this type goes back to Euler and involves a generating function, which in our case is $$\begin{aligned} F(t,{\mathbf{d }}) :=\prod _{j=1}^m\frac{1}{1-t^{d_j}} = \sum _{s=0}^\infty W(s,{\mathbf{d }})t^s. \end{aligned}$$ A major advance was made by Sylvester [22, 23] who wrote the restricted partition function \(W(s,{\mathbf{d }})\) as a sum of "waves", $$\begin{aligned} W(s,{\mathbf{d }}) = \sum _{j\ge 1}W_j(s,{\mathbf{d }}), \end{aligned}$$ where the sum is taken over all distinct divisors \(j\) of the components of \({\mathbf{d }}\). Sylvester [23] showed that for each such \(j\), \(W_j(s,{\mathbf{d }})\) is the coefficient of \(t^{-1}\), i.e., the residue, of the function $$\begin{aligned} F_j(s,t) = \sum _{\begin{array}{c} 0\le \nu <j\\ \gcd (\nu ,j)=1 \end{array}} \frac{\rho _j^{-\nu s}e^{st}}{\left( 1-\rho _j^{\nu d_{1}}e^{-d_{1}t}\right) \cdots \left( 1-\rho _j^{\nu d_{m}}e^{-d_{m}t}\right) }, \end{aligned}$$ where \(\rho _j\) is a primitive \(j\)th root of unity, for instance \(\rho _j=e^{2\pi i/j}\), and where we set \(\gcd (0,0)=1\) by convention. In other words, the sum in (4) is taken over all primitive \(j\)th roots of unity \(\rho _j^{\nu }\). These Sylvester waves have been studied in great detail in recent years; see, e.g., [10, 19, 20]; see also [7, 8, 14] for a broader perspective, and [21] for computations related to restricted partitions. For \(j=1\), the right-hand side of (4) is recognizable as being very close to the generating function of a higher-order Bernoulli polynomial. This fact was used by Rubinstein and Fel [20] to write \(W_1(s,{\mathbf{d }})\) in a very compact form in terms of a single higher-order Bernoulli polynomial [see (33) below]. A version of this result, given in two different forms, was earlier obtained by Beck, Gessel and Komatsu [3], as mentioned in [20].
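As a quick numerical illustration of the definitions (1)-(3) (our code, not part of the paper), the coefficients of the generating function F(t, d) can be accumulated one factor at a time; for d = (1, 2) this reproduces the well-known count floor(s/2) + 1.

W <- function(smax, d) {
  # Coefficients of F(t, d) = prod_j 1/(1 - t^{d_j}) up to t^smax;
  # coef[s + 1] equals W(s, d), the number of nonnegative integer
  # solutions of d_1 x_1 + ... + d_m x_m = s.
  coef <- c(1, rep(0, smax))
  for (dj in d) {
    if (dj <= smax) {
      for (s in (dj + 1):(smax + 1)) coef[s] <- coef[s] + coef[s - dj]
    }
  }
  coef
}
W(10, c(1, 2))  # 1 1 2 2 3 3 4 4 5 5 6, i.e. floor(s/2) + 1 for s = 0..10

The theorem stated below isolates the polynomial (first-wave) part of such counts in closed form.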
Similarly, for \(j=2\) we have \(\rho _j=-1\), and the right-hand side of (4) will typically lead to a convolution sum of higher-order Bernoulli and higher-order Euler polynomials; this was also done in [20]. Furthermore, Rubinstein and Fel extended this approach and expressed \(W_j(s,{\mathbf{d }})\) for arbitrary \(j\) in terms of generalized Eulerian polynomials of higher order, in addition to the expected higher-order Bernoulli polynomials. In a subsequent paper, Rubinstein [19] showed that all the Sylvester waves \(W_j(s,{\mathbf{d }})\) can be written as linear combinations of the first wave (\(j=1\)) alone, with modified integers \(s\) and vectors \({\mathbf{d }}\) [see (47) below]. This makes it worthwhile to give further consideration to \(W_1(s,{\mathbf{d }})\), which is the purpose of the present paper. Our main result is the following explicit formula for \(W_1(s,{\mathbf{d }})\); its significance lies in the fact that it does not contain Bernoulli numbers or polynomials. Theorem 1.1 Let \({\mathbf{d }}:=(d_1,d_2,\ldots ,d_m)\) be given, and denote \(d:=d_1\ldots d_m\) and \(\widetilde{d}_i:=d/d_i\), \(i=1,\ldots ,m\). Then $$\begin{aligned} W_1(s,{\mathbf{d }}) = \frac{1}{(m-1)!d^m} \sum _{\begin{array}{c} 0\le \ell _1\le \widetilde{d}_1-1\\ \dots \\ 0\le \ell _m\le \widetilde{d}_m-1 \end{array}} \prod _{j=1}^{m-1}\left( s+jd-\ell _1d_1-\dots -\ell _md_m\right) . \end{aligned}$$ For a more compact form of this identity, see Sect. 5. Towards proving this theorem, we derive (or re-derive) some identities which are analogous to classical results in the theory of Bernoulli polynomials and their higher-order analogues. Our main tool is a symbolic notation which, in spite of some similarities, is different from the classical umbral calculus. This will be introduced in Sect. 2, and we apply it in Sect. 3 to prove the auxiliary results as well as Theorem 1.1. In Sect. 4 we present some examples and consequences of Theorem 1.1, including an asymptotic expression. We finish this paper with some additional remarks in Sect. 5. 2 Symbolic notation Although the results in Sect. 3 could also be proved (and in some cases have been proved) by other methods, especially using generating functions, the symbolic notation described below makes the discovery and proof of some identities considerably easier. While there are similarities to the classical umbral calculus (see, e.g., [11] or [18]), our notation is more specific to Bernoulli numbers and polynomials, and is related to probability theory. The following brief exposition is partly taken from [6]; we repeat it here for the sake of completeness. The basis for our notation is two symbols, \({\mathcal {B}}\) and \({\mathcal {U}}\), which annihilate each other, as we shall see. First, we define the Bernoulli symbol \({\mathcal {B}}\) by $$\begin{aligned} {\mathcal {B}}^{n}=B_{n}\qquad (n=0, 1,\ldots ), \end{aligned}$$ where \(B_n\) is the \(n\)th Bernoulli number. So, for instance, we can rewrite the usual definition for the Bernoulli polynomial \(B_n(x)\), $$\begin{aligned} B_n(x)=\sum _{j=0}^n\left( {\begin{array}{c}n\\ j\end{array}}\right) B_jx^{n-j}\quad \hbox {as}\;\; B_n(x) = (x+{\mathcal {B}})^n. \end{aligned}$$ Furthermore, with the usual (generating function) definition of the Bernoulli numbers we have $$\begin{aligned} \exp \left( {\mathcal {B}}z\right) = \sum _{n=0}^\infty {\mathcal {B}}^n\frac{z^n}{n!} = \sum _{n=0}^\infty B_n\frac{z^n}{n!} = \frac{z}{e^z-1}.
\end{aligned}$$
We obtain a useful identity from this if we note that
$$\begin{aligned} \exp (({\mathcal {B}}+1)z)=\frac{z}{e^z-1}\cdot e^z = \frac{-z}{e^{-z}-1} =\exp (-{\mathcal {B}}z), \end{aligned}$$
so that
$$\begin{aligned} {\mathcal {B}}+1 = -{\mathcal {B}}. \end{aligned}$$
We also require several independent Bernoulli symbols \({\mathcal {B}}_{1},\ldots ,{\mathcal {B}}_{k}\). Independence means that if we have any two Bernoulli symbols, say \({\mathcal {B}}_1\) and \({\mathcal {B}}_2\), then
$$\begin{aligned} {\mathcal {B}}_1^{k}{\mathcal {B}}_2^{\ell }=B_kB_\ell . \end{aligned}$$
Second, the uniform symbol \({\mathcal {U}}\) is defined by
$$\begin{aligned} f(x+{\mathcal {U}})=\int _0^1f(x+u)du. \end{aligned}$$
Here and elsewhere we assume that f is an arbitrary polynomial. From (11) we immediately obtain, in analogy to (6),
$$\begin{aligned} {\mathcal {U}}^{n}=\frac{1}{n+1}\qquad (n=0, 1,\ldots ), \end{aligned}$$
and using this, we get
$$\begin{aligned} \exp \left( {\mathcal {U}}z\right) =\sum _{n=0}^\infty {\mathcal {U}}^n\frac{z^n}{n!} =\frac{e^z-1}{z}. \end{aligned}$$
From (8) and (13) we now deduce
$$\begin{aligned} \exp \left( z\left( {\mathcal {B}}+{\mathcal {U}}\right) \right) =\sum _{n=0}^\infty \left( {\mathcal {B}}+{\mathcal {U}}\right) ^n\frac{z^n}{n!} = 1, \end{aligned}$$
which means that \({\mathcal {B}}\) and \({\mathcal {U}}\) annihilate each other, i.e., \(({\mathcal {B}}+{\mathcal {U}})^n=0\) for all \(n\ne 0\), in the sense that
$$\begin{aligned} f(x+{\mathcal {B}}+{\mathcal {U}})=f(x), \end{aligned}$$
for an arbitrary polynomial f. For an integer \(m\ge 1\) and a collection of not necessarily distinct real numbers \(\{a_1,\ldots ,a_m\}\) we now introduce the discrete uniform symbol \({\mathcal {U}}_{\{a_1,\ldots ,a_m\}}\) by way of the generating function
$$\begin{aligned} \exp \left( z{\mathcal {U}}_{\{a_1,\ldots ,a_m\}}\right) = \frac{e^{a_1z}+\dots +e^{a_mz}}{m}, \end{aligned}$$
or equivalently by
$$\begin{aligned} f\left( x+{\mathcal {U}}_{\{a_1,\ldots ,a_m\}}\right) = \frac{1}{m}\bigl (f(x+a_1)+\cdots +f(x+a_m)\bigr ), \end{aligned}$$
for an arbitrary polynomial f, which can be seen as a discrete analogue of (11). From the definition (15) we immediately obtain the identity
$$\begin{aligned} c\,{\mathcal {U}}_{\{a_1,\ldots ,a_m\}} ={\mathcal {U}}_{\{ca_1,\ldots ,ca_m\}} \qquad (c\in {\mathbb R}). \end{aligned}$$
Furthermore, given two sets \({\mathbf{a }}=\{a_1,\ldots ,a_m\}\) and \({\mathbf{b }}=\{b_1,\ldots ,b_n\}\), we have
$$\begin{aligned} \exp (z({\mathcal {U}}_{{\mathbf{a }}}+{\mathcal {U}}_{\mathbf{b }}))&= \frac{1}{m}\left( \sum _{i=1}^m e^{a_iz}\right) \frac{1}{n}\left( \sum _{j=1}^n e^{b_jz}\right) = \frac{1}{mn}\sum _{\begin{array}{c} 1\le i\le m\\ 1\le j\le n \end{array}}e^{(a_i+b_j)z} \\&= \exp (z{\mathcal {U}}_{\{a_1+b_1,\ldots ,a_m+b_n\}}), \end{aligned}$$
so that
$$\begin{aligned} {\mathcal {U}}_{\{a_1,\ldots ,a_m\}}+{\mathcal {U}}_{\{b_1,\ldots ,b_n\}} ={\mathcal {U}}_{\{a_1+b_1,\ldots ,a_m+b_n\}}, \end{aligned}$$
with an obvious extension (by induction) to an arbitrary number of summands. Considering the special case \(\{0,1,\ldots ,m-1\}\), we multiply (15) and (13) and get
$$\begin{aligned} \exp \left( z({\mathcal {U}}+{\mathcal {U}}_{\{0,1,\ldots ,m-1\}})\right) = \frac{e^z-1}{z}\cdot \frac{1+e^z+\dots +e^{(m-1)z}}{m} = \frac{e^{mz}-1}{mz}, \end{aligned}$$
and thus, using again (13),
$$\begin{aligned} m{{\mathcal {U}}} = {\mathcal {U}} + {\mathcal {U}}_{\{0,1,\ldots ,m-1\}}.
\end{aligned}$$
But by (14) we have \({\mathcal {B}}+{\mathcal {U}}=0\) and \(m({\mathcal {B}}+{\mathcal {U}})=0\), so that by (19) we have \({\mathcal {U}}+m{\mathcal {B}}+{\mathcal {U}}_{\{0,1,\ldots ,m-1\}}=0\). Since \({\mathcal {B}}+{\mathcal {U}}=0\), we deduce
$$\begin{aligned} {\mathcal {B}} = m{\mathcal {B}} + {\mathcal {U}}_{\{0,1,\ldots ,m-1\}}. \end{aligned}$$
Finally, for an integer \(k\ge 1\) we define the higher-order Bernoulli symbol \({\mathcal {B}}^{(k)}\) by
$$\begin{aligned} {\mathcal {B}}^{(k)} = {\mathcal {B}}_1+\cdots +{\mathcal {B}}_k, \end{aligned}$$
where \({\mathcal {B}}_1,\ldots ,{\mathcal {B}}_k\) are independent Bernoulli symbols; see (10).

3 Higher-order Bernoulli polynomials and proof of Theorem 1.1

One of the most remarkable and useful identities for the classical Bernoulli polynomials is Raabe's formula [16] of 1851,
$$\begin{aligned} B_n(mx) = m^{n-1}\sum _{j=0}^{m-1}B_n\left( x+\tfrac{j}{m}\right) , \end{aligned}$$
valid for all integers \(m\ge 1\) and \(n\ge 0\); see also [15, (24.4.18)]. For an integer \(k\ge 1\), the Bernoulli polynomial of order k is defined by the generating function
$$\begin{aligned} \left( \frac{z}{e^z-1}\right) ^ke^{xz} = \sum _{n=0}^{\infty }B_n^{(k)}(x)\frac{z^n}{n!}. \end{aligned}$$
The following identity can be seen as a higher-order analogue of Raabe's formula.

Theorem 3.1 Let n, m, and \(d_1,\ldots ,d_m\) be positive integers, and set \(d:=d_1\ldots d_m\). Then
$$\begin{aligned} \left( x+d_1{\mathcal {B}}_1+\ldots +d_m{\mathcal {B}}_m\right) ^n =d^{n-m+1}\sum _{\ell } B_n^{(m)}(\tfrac{x}{d}+\ell ), \end{aligned}$$
where the sum is taken over all values
$$\begin{aligned} \ell =\frac{1}{d}(\ell _1d_1+\cdots +\ell _md_m), \quad 0\le \ell _i\le \frac{d}{d_i}-1,\quad i=1,\ldots ,m. \end{aligned}$$

Proof Let \(\widetilde{d}_i:=d/d_i\) for \(1\le i\le m\). Then using (20) with \({\mathcal {B}}\) replaced by \({\mathcal {B}}_i\) and m by \(\widetilde{d}_i\), we get
$$\begin{aligned} \sum _{i=1}^m d_i{\mathcal {B}}_i = \sum _{i=1}^m d_i\widetilde{d}_i{\mathcal {B}}_i +\sum _{i=1}^m d_i{\mathcal {U}}_{\{0,1,\ldots ,\widetilde{d}_i-1\}} =d{\mathcal {B}}^{(m)}+\sum _{i=1}^m d_i{\mathcal {U}}_{\{0,1,\ldots ,\widetilde{d}_i-1\}}, \end{aligned}$$
where we have used (21) and the fact that, by definition, \(d_i\widetilde{d}_i=d\) for all \(i=1,\ldots ,m\). Thus,
$$\begin{aligned} \left( x+d_1{\mathcal {B}}_1+\cdots +d_m{\mathcal {B}}_m\right) ^n =d^n\left( \frac{x}{d}+{\mathcal {B}}^{(m)} +\frac{1}{d}\sum _{i=1}^m d_i{\mathcal {U}}_{\{0,1,\ldots ,\widetilde{d}_i-1\}}\right) ^n. \end{aligned}$$
Now, using (17), followed by an iterated version of (18), we get
$$\begin{aligned} \frac{1}{d}\sum _{i=1}^m d_i{\mathcal {U}}_{\{0,1,\ldots ,\widetilde{d}_i-1\}} =\sum _{i=1}^m\frac{1}{\widetilde{d}_i}{\mathcal {U}}_{\{0,1,\ldots ,\widetilde{d}_i-1\}} =\sum _{i=1}^m{\mathcal {U}}_{\{0,\tfrac{1}{\widetilde{d}_i},\ldots ,\tfrac{\widetilde{d}_i-1}{\widetilde{d}_i}\}} = {\mathcal {U}}_{\{\ell \}}, \end{aligned}$$
where \(\{\ell \}\) indicates the collection of all values
$$\begin{aligned} \ell = \frac{\ell _1}{\widetilde{d}_1}+\cdots +\frac{\ell _m}{\widetilde{d}_m},\quad 0\le \ell _i\le \widetilde{d}_i-1,\quad i=1,\ldots ,m. \end{aligned}$$
Thus we have with (25),
$$\begin{aligned} \left( x+d_1{\mathcal {B}}_1+\cdots +d_m{\mathcal {B}}_m\right) ^n =d^n\left( \frac{x}{d}+{\mathcal {B}}^{(m)}+{\mathcal {U}}_{\{\ell \}}\right) ^n.
\end{aligned}$$
Finally we note that the number of (not necessarily distinct) elements in \(\{\ell \}\) is
$$\begin{aligned} \widetilde{d}_1\ldots \widetilde{d}_m = \frac{d^m}{d_1\ldots d_m} = d^{m-1}. \end{aligned}$$
Therefore by (16), in this case with \(d^{m-1}\) in place of m, (26) leads to (24), and we are done. \(\square \)

We note that this result is not new. In fact, it is a special case of identity (60) in the classical book of Nörlund [13, p. 135]. However, the method of proof in [13] is very different from ours and relies on the theory of finite differences. On the other hand, it should also be mentioned that the symbolic notation involving the Bernoulli symbol, more or less as used on the left-hand side of (24), can also be found in [13], on p. 135 and elsewhere. Among the numerous known results about higher-order Bernoulli polynomials which can be found, for instance, in [13, Ch. 6], the identity
$$\begin{aligned} B_{m-1}^{(m)}(x) = (x-1)(x-2)\dots (x-m+1)\qquad (m\ge 2), \end{aligned}$$
with \(B_0^{(1)}(x)=B_0(x)=1\), is of particular importance here; for a proof see, e.g., [13, p. 147]. There is a more general concept of a higher-order Bernoulli polynomial, which allowed Rubinstein and Fel [20] to express the first Sylvester wave in a very compact form. It can be defined as follows (see, e.g., [9, p. 39] or [20, p. 333]): For a fixed \(m\ge 1\) and a vector of positive integers \({\mathbf{d }}=(d_1,\ldots ,d_m)\) we define the polynomials \(B_n^{(m)}(x|{\mathbf{d }})\), \(n=0,1,\ldots \), by the generating function
$$\begin{aligned} e^{xz}\prod _{i=1}^m\frac{d_iz}{e^{d_iz}-1} = \sum _{n=0}^\infty B_n^{(m)}(x|{\mathbf{d }})\frac{z^n}{n!}, \end{aligned}$$
or symbolically by
$$\begin{aligned} B_n^{(m)}(x|{\mathbf{d }}) =\left( x+d_1{\mathcal {B}}_1+\cdots +d_m{\mathcal {B}}_m\right) ^n. \end{aligned}$$
Comparing (29) with (23), we see that \(B_n^{(m)}(x|(1,\ldots ,1))=B_n^{(m)}(x)\). The polynomials \(B_n^{(m)}(x|{\mathbf{d }})\), with a different notation and different normalization, can also be found in [2] and [4, p. 151], where they are called Bernoulli–Barnes polynomials. From (24) and (28) we can now obtain the following analogue of (28).

Corollary 3.2 Let \(m\ge 1\) be an integer and \({\mathbf{d }}:=(d_1,\ldots ,d_m)\) a vector of positive integers, and denote \(d:=d_1\ldots d_m\) and \(\widetilde{d}_i:=d/d_i\) for \(1\le i\le m\). Then
$$\begin{aligned} B_{m-1}^{(m)}(x|{\mathbf{d }}) = \frac{1}{d^{m-1}} \sum _{\begin{array}{c} 0\le \ell _1\le \widetilde{d}_1-1\\ \cdots \\ 0\le \ell _m\le \widetilde{d}_m-1 \end{array}} \prod _{j=1}^{m-1}\left( x-jd+\ell _1d_1+\dots +\ell _md_m\right) . \end{aligned}$$
Before proving this, we note that in the case \(d_1=\cdots =d_m=1\), the multiple sum on the right of (31) collapses to the single term \(\ell _1=\dots =\ell _m=0\), and (31) reduces to (28).

Proof of Corollary 3.2 Using (30) and Theorem 3.1 with \(n=m-1\), followed by (28), we get
$$\begin{aligned} B_{m-1}^{(m)}(x|{\mathbf{d }})&= \sum _{\ell } B_{m-1}^{(m)}(\tfrac{x}{d}+\ell ) \\&=\sum _{\begin{array}{c} 0\le \ell _1\le \widetilde{d}_1-1\\ \dots \\ 0\le \ell _m\le \widetilde{d}_m-1 \end{array}} \prod _{j=1}^{m-1} \left( \tfrac{x}{d}-j+\tfrac{\ell _1}{\widetilde{d}_1}+\cdots +\tfrac{\ell _m}{\widetilde{d}_m}\right) . \end{aligned}$$
Multiplying each factor in the product on the right by d, we obtain (31). \(\square \)

For the proof of Theorem 1.1 we also need the following reflection formula, which can be found in [13, p. 134]. For the sake of completeness we will provide a proof.
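Before turning to the reflection formula, a small numerical sanity check of Theorem 1.1 may be useful. The sketch below is ours, not the authors'; the helper name is made up. It evaluates the m-fold sum in (5) with exact rational arithmetic and compares it against two cases whose polynomial part is known: d = (1,...,1), where W_1 = W is a binomial coefficient (recalled in Sect. 5), and d = (1, 2), where W_1(s) = s/2 + 3/4 as listed among the examples in Sect. 4.

```python
# A minimal sketch (ours, not from the paper): exact-arithmetic check of the
# explicit formula (5) of Theorem 1.1 against two known special cases.
from fractions import Fraction
from itertools import product
from math import comb, factorial, prod

def w1_explicit(s, d):
    """Evaluate the m-fold sum (5) for the first Sylvester wave W_1(s, d)."""
    m, D = len(d), prod(d)
    ranges = [range(D // di) for di in d]          # 0 <= l_i <= d/d_i - 1
    total = sum(
        prod(s + j * D - sum(l * di for l, di in zip(ls, d))
             for j in range(1, m))
        for ls in product(*ranges)
    )
    return Fraction(total, factorial(m - 1) * D**m)

# d = (1,1,1): here W_1 = W = C(s+2, 2)
assert all(w1_explicit(s, (1, 1, 1)) == comb(s + 2, 2) for s in range(25))
# d = (1,2): W_1(s) = s/2 + 3/4
assert all(w1_explicit(s, (1, 2)) == Fraction(s, 2) + Fraction(3, 4)
           for s in range(25))
```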
Lemma 3.3 Let m and \(d_1,\ldots ,d_m\) be positive integers, and let \({\mathbf{d }}:=(d_1,\ldots ,d_m)\) and \(\sigma :=d_1+\cdots +d_m\). Then for all \(n\ge 0\) we have
$$\begin{aligned} B_n^{(m)}(x+\sigma |{\mathbf{d }}) = (-1)^nB_n^{(m)}(-x|{\mathbf{d }}). \end{aligned}$$

Proof Using the definition of \(\sigma \) and then (8), we get
$$\begin{aligned} B_n^{(m)}(x+\sigma |{\mathbf{d }})&=\left( x+d_1({\mathcal {B}}_1+1)+\cdots +d_m({\mathcal {B}}_m+1)\right) ^n \\&=\left( x-d_1{\mathcal {B}}_1-\cdots -d_m{\mathcal {B}}_m\right) ^n \\&=(-1)^n\left( -x+d_1{\mathcal {B}}_1+\cdots +d_m{\mathcal {B}}_m\right) ^n; \end{aligned}$$
this last line is the right-hand side of (32). \(\square \)

We are now ready to prove Theorem 1.1. The previously mentioned identity of Rubinstein and Fel (Eq. (8) in [20]) for the first Sylvester wave is, in our notation,
$$\begin{aligned} W_1(s,{\mathbf{d }}) = \frac{1}{(m-1)!d}B_{m-1}^{(m)}(s+\sigma |{\mathbf{d }}), \end{aligned}$$
where, as before, \({\mathbf{d }}=(d_1,\ldots ,d_m)\), \(d=d_1\ldots d_m\), and \(\sigma =d_1+\dots +d_m\). (A version of (33) can also be found in [4, p. 151].) Now we use (32) with \(n=m-1\), followed by (31). This immediately gives (5), and the proof is complete.

4 Examples and consequences of Theorem 1.1

We begin this section by explicitly stating some small cases of Theorem 1.1 as examples. For \(m=2\) we obtain
$$\begin{aligned} W_1(s,(d_1,d_2)) = \frac{1}{d_1d_2}\,s+\frac{d_1+d_2}{2d_1d_2}; \end{aligned}$$
this is illustrated by Fig. 1, for \({\mathbf{d }}=(3,5)\).

Fig. 1 \(W_1(s,{\mathbf{d }})\) (solid line) and numbers of solutions of (1), i.e., \(W(s,{\mathbf{d }})\) (dots) for \({\mathbf{d }}=(3,5)\) and \(s\le 200\)

Next, for \(m=3\) we have
$$\begin{aligned} W_1(s,(d_1,d_2,d_3))&= \frac{1}{2d_1d_2d_3}\,s^2 +\frac{d_1+d_2+d_3}{2d_1d_2d_3}\,s \nonumber \\&\quad \,+\frac{1}{12}\left( \frac{(d_1+d_2+d_3)^2}{d_1d_2d_3}+\frac{1}{d_1}+\frac{1}{d_2}+\frac{1}{d_3}\right) , \end{aligned}$$
which is illustrated by Fig. 2, for \({\mathbf{d }}=(3,5,7)\).

Fig. 2 \(W_1(s,{\mathbf{d }})\) (solid line) and numbers of solutions of (1), i.e., \(W(s,{\mathbf{d }})\) (dots) for \({\mathbf{d }}=(3,5,7)\) and \(s\le 100\)

The evaluations (34) and (35) are not new; they can be found in [12, p. 275], where all cases up to \(m=7\) are given explicitly. The polynomials (34) and (35), along with the case \(m=4\), can also be found in [4, p. 152]. As specific examples, we state below the first few cases for \({\mathbf{d }}=(1,2,\ldots ,m)\), including the specializations of (34) and (35).
$$\begin{aligned} W_1(s,(1,2))&= \frac{1}{2}\,s+\frac{3}{4},\end{aligned}$$
$$\begin{aligned} W_1(s,(1,2,3))&= \frac{1}{12}\,s^2+\frac{1}{2}\,s+\frac{47}{72},\end{aligned}$$
$$\begin{aligned} W_1(s,(1,\ldots ,4))&= \frac{1}{144}\,s^3+\frac{5}{48}\,s^2+\frac{15}{32}\,s +\frac{175}{288},\end{aligned}$$
$$\begin{aligned} W_1(s,(1,\ldots ,5))&= \frac{1}{2880}\,s^4+\frac{1}{96}\,s^3+\frac{31}{288}\,s^2 +\frac{85}{192}\,s+\frac{50651}{86400}. \end{aligned}$$
These polynomials also appear in [21, p. 641] as polynomial parts of the identities (2)–(5). Some historical notes with further references can also be found in [21]. Theorem 1.1 can also be used to determine the two highest coefficients of \(W_1(s,{\mathbf{d }})\); we state this as a corollary.

Corollary 4.1 For positive integers \(m\ge 2\) and \(d_1,\ldots ,d_m\), we let \({\mathbf{d }}:=(d_1,\ldots ,d_m)\), \(d:=d_1\dots d_m\), and \(\sigma :=d_1+\cdots +d_m\), as before.
Then
$$\begin{aligned} W_1(s,{\mathbf{d }}) =\frac{1}{(m-1)!d}\,s^{m-1}+\frac{\sigma }{2(m-2)!d}\,s^{m-2}+\cdots \end{aligned}$$
The leading coefficient, which has long been known (see, e.g., [3] and references therein), follows immediately from (5) if we note that by (27) the number of summands in the m-fold sum is \(d^{m-1}\). The second coefficient follows from a simple expansion of the product on the right of (5). We skip the details since both coefficients follow immediately from the identity (3) in [3]. For the special case \({\mathbf{d }}=(1,2,\ldots ,m)\), the identity (40) can be found in [17, Satz 1] and [24, p. 311].

From the fact that the first Sylvester wave is a polynomial, it is clear that for fixed m and bounded components of \({\mathbf{d }}\), \(W_1(s,{\mathbf{d }})\) is asymptotically equal to the leading term in (40), as s gets arbitrarily large. However, it is not immediately clear what happens if the components of \({\mathbf{d }}\) also grow, along with s. This is addressed in the following consequence of Theorem 1.1.

Corollary 4.2 Let \(m\ge 2\) be fixed, and consider the vector \({\mathbf{d }}:=(d_1,\ldots ,d_m)\) of positive integers, with \(d:=d_1\ldots d_m\). Let \(\lambda >0\) and \(s\ge \lambda d\), and let d grow arbitrarily large in such a way that at least two of the components \(d_j\), \(1\le j\le m\), are unbounded. Then
$$\begin{aligned} W_1(s,{\mathbf{d }}) \sim \frac{1}{(m-1)!d}\,s^{m-1}, \end{aligned}$$
that is, \(W_1(s,{\mathbf{d }})\) has the same asymptotic behaviour as in the case of bounded d.

The proof of this result relies on interpreting the m-fold sum on the right of (5) as a Riemann sum of a certain multiple integral. We therefore begin by evaluating this integral.

Lemma 4.3 Let \(\lambda \in \mathbb R\) be a constant and \(m\ge 1\) an integer. Then
$$\begin{aligned} \int _{[0,1]^m}\prod _{j=1}^{m-1}(\lambda +j-x_1-\cdots -x_m)dx_1\ldots dx_m =\lambda ^{m-1}. \end{aligned}$$

Proof For \(m=1\) the product is empty and is therefore 1 by convention; the identity is then trivially true. For \(m\ge 2\) we use (28) to rewrite the integral as
$$\begin{aligned} (-1)^{m-1}\int _{[0,1]^m}B_{m-1}^{(m)}(x_1+\cdots +x_m-\lambda )dx_1\ldots dx_m. \end{aligned}$$
Now by (11) each integration over [0, 1] is equivalent to adding a uniform symbol \({\mathcal {U}}\), and since
$$\begin{aligned} B_{m-1}^{(m)}(x_1+\cdots +x_m-\lambda ) = \left( x_1+\cdots +x_m-\lambda +{\mathcal {B}}_1+\cdots +{\mathcal {B}}_m\right) ^{m-1} \end{aligned}$$
[see, e.g., (30) with \({\mathbf{d }}=(1,\ldots ,1)\)], the desired integral is
$$\begin{aligned} (-1)^{m-1}\left( -\lambda +{\mathcal {B}}_1+\cdots +{\mathcal {B}}_m +{\mathcal {U}}_1+\cdots +{\mathcal {U}}_m\right) ^{m-1} = \lambda ^{m-1}, \end{aligned}$$
where we have made repeated (m-fold) use of the cancellation property (14). \(\square \)

Proof of Corollary 4.2 Since \(s^{m-1}\) is the highest power in (40), we may as well take \(s=\lambda d\). We can then rewrite (5) as
$$\begin{aligned} W_1(\lambda d,{\mathbf{d }}) = \frac{1}{(m-1)!d} \sum _{\ell }\prod _{j=1}^{m-1} \left( \lambda +j-\frac{\ell _1}{\widetilde{d}_1}-\cdots -\frac{\ell _m}{\widetilde{d}_m}\right) , \end{aligned}$$
where \(\ell \) indicates the summation as detailed in (5). We now denote
$$\begin{aligned} x_j:=\frac{\ell _j}{\widetilde{d}_j}\quad \hbox {and}\quad \Delta x_j = \frac{1}{\widetilde{d}_j} = \frac{d_j}{d},\quad 1\le j\le m. \end{aligned}$$
If only one of the \(d_j\), say \(d_1\), were unbounded as their product d grows, then \(\Delta x_1=1/(d_2\ldots d_m)\) would not approach 0 as d grows.
However, this cannot happen if at least two of the \(d_j\) are unbounded as d grows. If we now multiply the sum on the right of (43) by
$$\begin{aligned} \Delta x_1\ldots \Delta x_m = \frac{d_1\ldots d_m}{d^m} = \frac{1}{d^{m-1}}, \end{aligned}$$
we can identify this m-fold sum as a Riemann sum that converges to the integral in (42). Hence we have, by Lemma 4.3,
$$\begin{aligned} W_1(\lambda d,{\mathbf{d }}) \sim \frac{d^{m-1}}{(m-1)!d}\,\lambda ^{m-1} \quad \hbox {as}\quad d\rightarrow \infty . \end{aligned}$$
Finally, we are done if we replace \(\lambda \) by s/d. \(\square \)

5 Additional Remarks

1. If we divide each factor in the product on the right of (5) by d, we see that the resulting product can be written as a Pochhammer symbol (rising factorial) or as a falling factorial. But we can also combine it with \((m-1)!\) in the denominator; using the (generalized) binomial coefficient \(\left( {\begin{array}{c}x\\ n\end{array}}\right) =x(x-1)\ldots (x-n+1)/n!\) we can then rewrite Theorem 1.1 as follows. Let \({\mathbf{d }}:=(d_1,d_2,\ldots ,d_m)\) and \(d:=d_1\ldots d_m\). Then
$$\begin{aligned} W_1(s,{\mathbf{d }}) = \frac{1}{d} \sum _{\ell }\left( {\begin{array}{c}m-1+\tfrac{s-\ell }{d}\\ m-1\end{array}}\right) , \end{aligned}$$
where the sum is taken over all \(\ell \) with
$$\begin{aligned} \ell = \ell _1d_1+\cdots +\ell _md_m,\quad 0\le \ell _i\le \tfrac{d}{d_i}-1,\quad i=1,\ldots ,m. \end{aligned}$$
The binomial coefficient on the right of (44) is reminiscent of some combinatorial objects related to partitions and compositions. (Note, however, that \((s-\ell )/d\) is generally not an integer.) If we set \(d_1=\dots =d_m=1\), then the sum in (44) collapses to a single term, as does the sum in (3), and we get
$$\begin{aligned} W(s,{\mathbf{d }}) = W_1(s,{\mathbf{d }}) = \left( {\begin{array}{c}m-1+s\\ m-1\end{array}}\right) . \end{aligned}$$
This is a well-known elementary expression for the number of solutions of (1) for \({\mathbf{d }}=(1,\ldots ,1)\); see, e.g., [2, p. 1328]. While this is not the same as the number of compositions of s into m parts, there is a connection: The number of compositions of n into exactly m parts, each at least k, is \(\left( {\begin{array}{c}m-1+n-km\\ m-1\end{array}}\right) \); see [1, p. 63].

2. For the sake of completeness we cite the main result of Rubinstein [19], which we referred to in the Introduction. While for \(W_1(s,{\mathbf{d }})\) the order of the components in the given vector \({\mathbf{d }}=(d_1,\ldots ,d_m)\) is irrelevant, this becomes an issue for the Sylvester waves \(W_j(s,{\mathbf{d }})\) when \(j\ge 2\). Given an integer \(j\ge 2\), we now assume that the components in \({\mathbf{d }}\) are sorted in such a way that j divides the first \(k_j\) components. We denote
$$\begin{aligned} {\mathbf{d }}_j:= (d_1,\ldots ,d_{k_j},jd_{k_j+1},\ldots ,jd_m), \end{aligned}$$
so that all the components in the vector \({\mathbf{d }}_j\) are divisible by j. We also need the prime radical circulator \(\psi _j(s)\), which for positive integers s is defined by
$$\begin{aligned} \psi _j(s) := \sum _{\begin{array}{c} 0\le \nu <j\\ \gcd (\nu ,j)=1 \end{array}}\rho _j^{\nu s}, \end{aligned}$$
where, as before, \(\rho _j\) is a primitive jth root of unity. For prime j we have
$$\begin{aligned} \psi _j(s)={\left\{ \begin{array}{ll} \varphi (j), &{} s\equiv 0\pmod {j},\\ \mu (j), &{} s\not \equiv 0\pmod {j} \end{array}\right.
} \qquad (j\;\hbox {prime}), \end{aligned}$$
where \(\varphi (j)\) and \(\mu (j)\) are Euler's totient function and the Möbius function, respectively. There is also an explicit formula for composite j; see [19] and the references therein for further details. With these notations, Rubinstein's identity (19) in [19] can be stated as follows, in a slightly different form:
$$\begin{aligned} W_j(s,{\mathbf{d }})&= \sum _{\begin{array}{c} 0\le r_{k_j+1}\le j-1\\ \dots \\ 0\le r_m\le j-1 \end{array}} W_1(s-r_{k_j+1}d_{k_j+1}-\cdots -r_md_m,{\mathbf{d }}_j) \nonumber \\&\quad \,\times \psi _j(s-r_{k_j+1}d_{k_j+1}-\cdots -r_md_m). \end{aligned}$$
We conclude this section with two examples, one of which is general, and the second one is more specific. If j does not divide any of the components of \({\mathbf{d }}\), then \(k_j=m\), and \(W_j(s,{\mathbf{d }}) = 0\) as the sum on the right of (47) is empty. This is consistent with (3) and the statement following it.

Let \({\mathbf{d }}=(2,4,5)\). Then \(k_2=2\), and \({\mathbf{d }}_2=(2,4,10)\). By (45) or (46) we have \(\psi _2(s)=(-1)^s\). The identity (47) then gives
$$\begin{aligned} W_2(s,{\mathbf{d }})&= \sum _{r_3=0}^1 W_1(s-r_3d_3,{\mathbf{d }}_2)\psi _2(s-r_3d_3) \\&= W_1(s,(2,4,10))\cdot (-1)^s+W_1(s-5,(2,4,10))\cdot (-1)^{s-5} \\&= (-1)^s\bigl (W_1(s,(2,4,10)) - W_1(s-5,(2,4,10))\bigr ). \end{aligned}$$
The two Sylvester waves \(W_1\) above can easily be given explicitly by way of (35). The last line above also illustrates the fact that \(W_2(s,{\mathbf{d }})\) is a quasipolynomial; see, e.g., [3] or [4, p. 47].

Authors' contributions Both authors read and approved the final manuscript.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Acknowledgements The first author was supported in part by the Natural Sciences and Engineering Research Council of Canada. Dedicated to the memory of our friend and colleague Jonathan M. Borwein, mathematician extraordinaire.

Author affiliations: Department of Mathematics and Statistics, Dalhousie University, Halifax, Nova Scotia, B3H 4R2, Canada; LSS-Supelec, Université Paris-Sud, Orsay, France; Department of Mathematics, Tulane University, New Orleans, LA 70118, USA

References

[1] Andrews, G.E.: The theory of partitions. Addison-Wesley, Reading (1976)
[2] Bayad, A., Beck, M.: Relations for Bernoulli–Barnes numbers and Barnes zeta functions. Int. J. Number Theory 10(5), 1321–1335 (2014)
[3] Beck, M., Gessel, I.M., Komatsu, T.: The polynomial part of a restricted partition function related to the Frobenius problem. Electron. J. Combin. 8(1), Note 7 (2001)
[4] Beck, M., Robins, S.: Computing the continuous discretely. Integer-point enumeration in polyhedra, 2nd edn. Springer, New York (2015)
[5] Dickson, L.E.: History of the theory of numbers. Volume II: Diophantine analysis. The Carnegie Institute, Washington, D.C. (1919)
[6] Dilcher, K., Vignat, C.: General convolution identities for Bernoulli and Euler polynomials. J. Math. Anal. Appl. 435(2), 1478–1498 (2016)
[7] Dowker, J.S.: Relations between the Ehrhart polynomial, the heat kernel and Sylvester waves. Preprint arXiv:1108.1760v1 [math.NT] (2011)
[8] Dowker, J.S.: On Sylvester waves and restricted partitions. Preprint arXiv:1302.6172v2 [math.NT] (2013)
[9] Erdélyi, A., Magnus, W., Oberhettinger, F., Tricomi, F.G.: Higher transcendental functions, Vol. I. Based, in part, on notes left by Harry Bateman. McGraw-Hill, New York (1953)
[10] Fel, L.G., Rubinstein, B.Y.: Sylvester waves in the Coxeter groups. Ramanujan J. 6(3), 307–329 (2002)
[11] Gessel, I.M.: Applications of the classical umbral calculus. Algebra Universalis 49, 397–434 (2003)
[12] Glaisher, J.W.L.: Formulae for partitions into given elements, derived from Sylvester's theorem. Quart. J. Pure Appl. Math. 40, 275–348 (1908)
[13] Nörlund, N.E.: Vorlesungen über Differenzenrechnung. Springer-Verlag, Berlin (1924)
[14] O'Sullivan, C.: On the partial fraction decomposition of the restricted partition generating function. Forum Math. 27(2), 735–766 (2015)
[15] Olver, F.W.J., et al. (eds.): NIST handbook of mathematical functions. Cambridge University Press, New York (2010). http://dlmf.nist.gov/
[16] Raabe, J.L.: Zurückführung einiger Summen und bestimmten Integrale auf die Jacob-Bernoullische Function. J. Reine Angew. Math. 42, 348–367 (1851)
[17] Rieger, G.J.: Über Partitionen. Math. Ann. 138, 356–362 (1959)
[18] Rota, G.-C., Taylor, B.D.: The classical umbral calculus. SIAM J. Math. Anal. 25(2), 694–711 (1994)
[19] Rubinstein, B.Y.: Expression for restricted partition function through Bernoulli polynomials. Ramanujan J. 15(2), 177–185 (2008)
[20] Rubinstein, B.Y., Fel, L.G.: Restricted partition functions as Bernoulli and Eulerian polynomials of higher order. Ramanujan J. 11(3), 331–347 (2006)
[21] Sills, A.V., Zeilberger, D.: Formulae for the number of partitions of \(n\) into at most \(m\) parts (using the quasi-polynomial ansatz). Adv. Appl. Math. 48(5), 640–645 (2012)
[22] Sylvester, J.J.: On the partition of numbers. Quart. J. Pure Appl. Math. 1, 141–152 (1857)
[23] Sylvester, J.J.: On subinvariants, i.e. semi-invariants to binary quantics of an unlimited order. Amer. J. Math. 5(1), 79–136 (1882)
[24] Wright, E.M.: Partitions into \(k\) parts. Math. Ann. 142, 311–316 (1961)
Viscoelastic Mechanical Model of Asphalt Concrete Considering the Influence of Characteristic Parameter of Fiber Content

Chunshui Huang* | Danying Gao | Peibo You

College of Civil Engineering, Xuchang University, Xuchang 461000, China
Zhengzhou University, Zhengzhou 450000, China
Henan University of Urban Construction, Pingdingshan 467000, China

Corresponding author email: [email protected]

Abstract: This paper carries out bending creep tests on polyester fiber-reinforced asphalt concrete beams, and investigates how the volume ratio and aspect ratio of the fibers influence the parameters of the viscoelastic mechanical model and the viscoelastic performance of the asphalt concrete. The results show that: with the growing volume ratio and aspect ratio of the fibers, the midspan deflection and bottom flexural-tensile strain of asphalt concrete beams first dropped and then rose; the characteristic parameter of fiber content (FCCP) could reflect the overall effects of the volume ratio and aspect ratio of the fibers. On this basis, a viscoelastic mechanical model was established for the asphalt concrete in light of the influence of the FCCP. The test and theoretical results show that, in our tests, the optimal volume ratio of fibers, optimal aspect ratio of fibers, and optimal FCCP are 0.348, 324, and 1.128 for polyester fiber-reinforced asphalt concrete.

Keywords: fiber-reinforced asphalt concrete, viscoelastic performance, bending creep test, characteristic parameter of fiber content (FCCP)

1. Introduction

Asphalt concrete is a typical viscoelastic composite. The influence of temperature and load stress on the viscoelastic properties of asphalt concrete is mainly investigated with different viscoelastic mechanical models composed of basic viscoelastic components (e.g., springs and dampers), based on uniaxial compressive creep, triaxial compressive creep, and beam bending creep tests [1-10]. Since the addition of fibers can significantly improve the road performance of asphalt concrete, more and more attention has been paid to the viscoelastic properties of fiber-reinforced asphalt concrete. Through dynamic and static triaxial tests, Benedito et al. [11] studied the influence of fiber dosage and length on cold-mix asphalt concrete. Guo et al. [12] and Guo et al. [13] conducted compressive creep tests on polyester fiber-reinforced asphalt concrete, and adopted the Burgers model and modified Burgers model to examine the influence of fibers of the same length over the viscoelastic performance of asphalt concrete at different dosages. On this basis, Guo established a viscoelastic mechanical model of asphalt concrete considering the influence of fiber dosage. Feng [14] analyzed the effects of fiber length on the performance of asphalt concrete through a pull-out test. However, many research problems are yet to be solved: How does the aspect ratio of fibers impact the parameters of the viscoelastic mechanical model and the viscoelastic performance of asphalt concrete? How do the volume ratio (or dosage) and aspect ratio of fibers affect the viscoelastic mechanical model and the viscoelastic performance of asphalt concrete after unloading? What is the composite influence of the volume ratio and aspect ratio of fibers over the viscoelastic performance of asphalt concrete?
To answer these questions, this paper takes the AC-13F asphalt mixture as the matrix, adopts the volume ratio Vf and aspect ratio Ra as parameters, and tests the bending creep of polyester fiber-reinforced asphalt concrete beams, based on the optimal asphalt dosage determined by the Marshall test. Next, the Burgers model and modified Burgers model were employed to systematically analyze the influence of Vf and Ra over the parameters of the viscoelastic mechanical model and the viscoelastic performance of the asphalt concrete during the loading and unloading phases. After that, the composite influence of the two parameters was characterized by the characteristic parameter of fiber content (FCCP) λf, and a viscoelastic mechanical model was established for the asphalt concrete considering the influence of λf.

2. Bending Creep Tests

Our tests use petroleum asphalt 70 and polyester fibers with a mean diameter of 18.5μm. To study the influence of the volume ratio Vf and aspect ratio Ra of the fibers, the fiber length was set to 3mm, 6mm, 9mm, and 12mm, corresponding to aspect ratios of 162, 324, 486, and 649, respectively. For the polyester fibers of the length 3mm, 9mm, and 12mm, the volume ratio was set to 0.351, 0.34, and 0.346 in turn. For those of the length 6mm, the volume ratio was set to 0.174, 0.348, 0.519, and 0.686 in turn. The aggregates were screened, cleaned, and dried, and then mixed with limestone powder to produce AC-13 asphalt concrete with medium grading. Through the standard Marshall test [15], the optimal asphalt content (OAC) was determined for the matrix of the asphalt mix, and for the asphalt mix with each volume ratio and aspect ratio of the fibers. The designed polyester fiber-reinforced asphalt mix was compacted into specimens of 300mm×300mm×50mm, and then cut into small beams of 250mm×30mm×35mm. Then, creep tests were carried out at 15℃ on a multi-function material testing machine. The specimens were loaded with weights. The test environment is illustrated in Figure 1. The creep load is 10% of the bending failure load of the beam under the same conditions.

Figure 1. Creep test environment

The creep tests were performed on eight groups of small beams. Each group contains three small beams. During the tests, the midspan deflection d(t) of the small beam being tested was collected by a dial gauge, which is connected to the data acquisition system, and the time-deflection curve was plotted. When the small beam entered the accelerated creep phase, the load was removed, and the midspan deflection was collected 30min after the unloading. Based on the midspan deflection, the bottom flexural-tensile strain can be calculated by:

$\varepsilon (t)=\frac{6hd(t)}{{{L}^{2}}}$ (1)

where, h is the height of the specimen (m); L is the span of the specimen (m).

3. Viscoelastic Deformation Features

Figures 2 and 3 show the time variation of midspan deflection and bottom flexural-tensile strain of the small asphalt concrete beams of different fiber volume ratios and fiber aspect ratios, respectively. It can be observed that, with the growing volume ratio and aspect ratio of the fibers, the midspan deflection and bottom flexural-tensile strain at any given time first dropped and then rose, exhibiting a nonlinear trend. The lowest time-creep deformation curve was obtained at the fiber volume ratio of 0.348 and the fiber aspect ratio of 324.

Figure 2. Relationship between creep deformation and fiber volume ratio: (a) mid-span deflection; (b) strain

Figure 3. Relationship between creep deformation and fiber aspect ratio

The reason is that, when the fiber aspect ratio is fixed and the volume ratio grows, the three-dimensional (3D) fiber mesh in the asphalt concrete becomes denser, and the fibers can better constrain the deformation of the asphalt concrete. When the fiber volume ratio surpasses 0.348, however, the fibers are poorly dispersed, because the fiber mesh is too dense. The undispersed fibers will cluster into bundles, forming defects in the asphalt concrete [16]. In this case, the fibers will reduce the viscoelastic performance of the asphalt concrete, rather than enhance the viscoelasticity. When the fiber volume ratio remains constant, a high fiber aspect ratio means the soft fibers are long, with poor directionality, and easily curled and caked; a small fiber aspect ratio means there are many fibers per unit volume of asphalt concrete, i.e., the fiber mesh is so dense that the fibers are prone to caking, which weakens the reinforcement effect of the fibers. The lower the time-creep deformation curve, the stronger the asphalt concrete against rutting deformation [13]. In our tests, the fiber-reinforced asphalt concrete can resist rutting deformation excellently at the fiber volume ratio of 0.348 and the fiber aspect ratio of 324.

It can also be seen from Figures 2 and 3 that, although the fiber-reinforced asphalt concretes with different fiber volume ratios and fiber aspect ratios differed in creep deformation, their time-creep deformation curves all consist of three segments: decelerated creep, constant speed creep, and accelerated creep. The addition of fibers does not change the creep features of the asphalt concrete. Therefore, it is possible to derive a viscoelastic mechanical model for fiber-reinforced asphalt concrete from the viscoelastic mechanical model for ordinary asphalt concrete, by incorporating the fiber influence in the model parameters.

4. Viscoelastic Mechanical Model

Referring to the Burgers model [7, 13] in Figure 4, the viscoelastic creep equation of the fiber-reinforced asphalt concrete can be expressed as:

Loading phase: $\varepsilon=\sigma_{0}\left(\frac{1}{\mathrm{E}_{1}}+\frac{t}{\eta_{1}}+\frac{1-e^{-t \tau}}{\mathrm{E}_{2}}\right)$ (2)

Unloading phase: $\varepsilon=\sigma_{0}\left[\frac{t_{0}}{\eta_{1}}+\frac{\left(1-e^{-t_{0} \tau}\right) e^{-\tau\left(t-t_{0}\right)}}{\mathrm{E}_{2}}\right]$ (3)

where, Ε1 and η1 are the elastic modulus of elastic element 1 and the viscosity of viscous element 4 considering fiber influence, respectively; Ε2 and η2 are the elastic modulus of elastic element 2 and the viscosity of viscous element 3 considering fiber influence, respectively; $\tau=\frac{E_{2}}{\eta_{2}}$.

Figure 4. Burgers model

Figure 5. Modified Burgers model

The greater the value of Ε1, the stronger the elastic deformation resistance of the asphalt concrete; the greater the value of η1, the smaller the permanent deformation induced by the same load in the same period; the greater the value of η1/Ε1, the slower the growth of viscous deformation over time, and the shallower the rutting depth on the asphalt concrete pavement; the greater the value of η2/Ε2, the slower the development of asphalt concrete deformation, and the stronger the resistance to viscoelastic deformation [1].
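To make the two phases of the Burgers description concrete, the following sketch (ours, not the paper's; the stress level and parameter values are placeholders, not the fitted values reported below) evaluates the loading equation (2) and the unloading equation (3):

```python
# A minimal sketch (ours): Burgers creep equations (2) and (3).
# All numeric values below are illustrative assumptions, not fitted data.
import numpy as np

def burgers_loading(t, sigma0, E1, eta1, E2, eta2):
    tau = E2 / eta2
    return sigma0 * (1.0 / E1 + t / eta1 + (1.0 - np.exp(-t * tau)) / E2)

def burgers_unloading(t, t0, sigma0, E1, eta1, E2, eta2):
    tau = E2 / eta2
    return sigma0 * (t0 / eta1
                     + (1.0 - np.exp(-t0 * tau)) * np.exp(-tau * (t - t0)) / E2)

sigma0 = 0.5                                   # MPa, assumed stress level
E1, eta1, E2, eta2 = 800.0, 6e5, 150.0, 4e4    # MPa and MPa*s, illustrative
t_load = np.linspace(0.0, 3600.0, 7)           # s
print(burgers_loading(t_load, sigma0, E1, eta1, E2, eta2))
```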
Drawing on the modified Burgers model (Figure 5) [8], the viscoelastic creep equation of fiber-reinforced asphalt concrete can be expressed as:

$\varepsilon(t)=\sigma_{0}\left(\frac{1}{\mathrm{E}_{1}}+\frac{1-e^{-B t}}{A B}+\frac{1-e^{-t \tau}}{\mathrm{E}_{2}}\right)$ (4)

$\varepsilon (t)={{\sigma }_{0}}\left[ \frac{1-{{e}^{-B{{t}_{0}}}}}{AB}+\frac{(1-{{e}^{-{{t}_{0}}\tau }}){{e}^{-(t-{{t}_{0}})\tau }}}{{{E}_{2}}} \right]$ (5)

where, σ0 is the test stress; t is the test time; t0 is the total loading time.

Based on the creep test results, the parameters of the viscoelastic mechanical models (2)-(5) were initialized through the nonlinear fitting tool of Origin 8.5; the model parameters were then adjusted until the theoretical curves best matched the test data. On this basis, the authors analyzed the influence of fiber volume ratio and fiber aspect ratio on the model parameters. Then, the FCCP was introduced to reflect the combined effect of fiber volume ratio and fiber aspect ratio, and used to build the constitutive equation of the fiber-reinforced asphalt concrete. Finally, the viscoelasticity of the fiber-reinforced asphalt concrete was examined by the creep equation and the constitutive equation.

4.1 Influence of fiber volume ratio and fiber aspect ratio on viscoelastic model parameters and viscoelastic performance

Table 1 shows the viscoelastic model parameters of the fiber-reinforced asphalt concrete with different fiber volume ratios and fiber aspect ratios, which are fitted by formula (2). It can be observed that Ε1 first increased and then decreased, with the growing fiber volume ratio. The peak of Ε1 appeared at the fiber volume ratio of 0.348. In this case, the fiber-reinforced asphalt concrete has a strong resistance to the elastic deformation under instantaneous load. When the fiber volume ratio reached 0.686, the Ε1 of the fiber-reinforced asphalt concrete was smaller than that of the asphalt concrete in the matrix, indicating that the fibers can enhance the elasticity of the fiber-reinforced asphalt concrete only under a proper volume ratio. With the increase of fiber volume ratio, the η1 first increased and then declined, suggesting the good thickening effect of a proper dosage of fibers. When the fiber volume ratio was 0.348, the η1 more than doubled from that of the asphalt concrete in the matrix. In this case, the fibers exhibit a good thickening effect, and the fiber-reinforced asphalt concrete is good at resisting permanent deformation. After the fiber volume ratio reached 0.686, the fiber-reinforced asphalt concrete had a smaller η1 than the asphalt concrete in the matrix, that is, an excessive number of fibers would weaken the ability of the asphalt concrete to withstand permanent deformation.

Table 1. Relationship between parameters of formula (2) and fiber volume ratio and fiber aspect ratio (column headings: relaxation time/s, delay time/s, Ε1/MPa, η1/MPa·s; tabulated values not reproduced here)

The relaxation time η1/Ε1 and delay time η2/Ε2 of the fiber-reinforced asphalt concrete both first increased and then decreased with the growth of fiber volume ratio. Therefore, a proper dosage of fibers could enhance the resistance of the asphalt concrete to rutting deformation. The relaxation time and delay time reached the peak at the fiber volume ratio of 0.348. In this case, the fibers significantly enhance the resistance of the asphalt concrete to rutting deformation [17].
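The paper performs this nonlinear fit in Origin 8.5; the same least-squares estimation can be sketched with SciPy as below. The measurement arrays here are synthetic stand-ins, not the authors' data, so the recovered values only illustrate the workflow:

```python
# A minimal sketch (ours): least-squares fit of the Burgers loading
# equation (2), analogous to the Origin 8.5 fit used in the paper.
# t_data/strain_data are hypothetical, synthetically generated measurements.
import numpy as np
from scipy.optimize import curve_fit

def burgers_loading(t, E1, eta1, E2, eta2, sigma0=0.5):
    tau = E2 / eta2
    return sigma0 * (1.0 / E1 + t / eta1 + (1.0 - np.exp(-t * tau)) / E2)

np.random.seed(0)
t_data = np.linspace(0.0, 3600.0, 50)                      # s (hypothetical)
strain_data = burgers_loading(t_data, 800.0, 6e5, 150.0, 4e4) \
              + np.random.normal(0.0, 1e-5, t_data.size)   # synthetic noise

p0 = [500.0, 1e5, 100.0, 1e4]                              # initial guesses
popt, pcov = curve_fit(burgers_loading, t_data, strain_data, p0=p0,
                       maxfev=10000)
print(dict(zip(["E1", "eta1", "E2", "eta2"], popt)))
```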
Ε1, η1, η1/Ε1 and η2/Ε2 all increased first and then dropped with the growth of fiber aspect ratio, and peaked at the fiber aspect ratio of 324. In this case, the fiber-reinforced asphalt concrete boasts a strong capability to resist elastic deformation, viscous flow deformation, and rutting deformation.

Table 2 shows the viscoelastic model parameters of the fiber-reinforced asphalt concrete with different fiber volume ratios and fiber aspect ratios, which are fitted by formula (3). It can be observed that Ε2 and η2 were much greater than those in the creep loading phase; the delay time η2/Ε2 was longer than that in the creep loading phase; η1 was much smaller than that in the loading phase. This is because the asphalt concrete suffers from creep failure in the accelerated creep phase. During the load-induced bending creep deformation, the internal aggregates of each small asphalt concrete beam slip, causing material damage and performance attenuation. The actual permanent deformation, which is characterized by the parameters of the loading phase model, is greater than the theoretical permanent deformation, which is characterized by the parameters of the unloading phase model. Besides, the actual deformation recovery rate, which is characterized by the parameters of the loading phase model, is smaller than the theoretical deformation recovery rate, which is characterized by the parameters of the unloading phase model. Therefore, the actual deformation features of the unloaded asphalt concrete cannot be characterized by the mechanical model parameters obtained in the creep loading phase.

Ε2, η1 and η2 all increased first and then decreased with the growth of fiber volume ratio and fiber aspect ratio. The peak values were observed at the fiber volume ratio of 0.348 and the fiber aspect ratio of 324. The variation law was the same as that in the creep loading phase. It can also be seen from Table 2 that the correlation between the test data and formula (3) in the creep unloading phase was weaker than that between the test data and formula (2) in the creep loading phase. Overall, formulas (2) and (3) can characterize the viscoelastic deformation features of the asphalt concrete well in the unloading phase.

Table 3 shows the viscoelastic model parameters of the fiber-reinforced asphalt concrete with different fiber volume ratios and fiber aspect ratios, which are fitted by formula (4). It can be observed that formula (4) is more closely correlated with the test data in the creep loading phase than formula (2). The parameters of formula (4), namely, Ε1, Ε2, η2, A and B all first increased and then decreased with the growth of fiber volume ratio and fiber aspect ratio. The peak values were observed at the fiber volume ratio of 0.348 and the fiber aspect ratio of 324. In this case, the fiber-reinforced asphalt concrete boasts a strong capability to resist elastic deformation, viscous flow deformation, and rutting deformation.

Table 4 shows the viscoelastic model parameters of the fiber-reinforced asphalt concrete with different fiber volume ratios and fiber aspect ratios, which are fitted by formula (5). It can be observed that the parameters of formula (5), namely, Ε2, η2, A and B all first increased and then decreased with the growth of fiber volume ratio and fiber aspect ratio. The peak values were observed at the fiber volume ratio of 0.348 and the fiber aspect ratio of 324. The change law was the same as that in formula (4). The only difference between formulas (5) and (4) lies in the magnitudes of the corresponding parameter values.
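For comparison with the plain Burgers form, the sketch below (ours; parameter values are illustrative only) evaluates the modified Burgers loading equation (4), whose term (1 - e^{-Bt})/(AB) replaces the unbounded viscous term t/η1 of formula (2):

```python
# A minimal sketch (ours): the modified Burgers loading equation (4).
# Parameter values are illustrative assumptions, not fitted results.
import numpy as np

def modified_burgers_loading(t, E1, A, B, E2, eta2, sigma0=0.5):
    tau = E2 / eta2
    return sigma0 * (1.0 / E1
                     + (1.0 - np.exp(-B * t)) / (A * B)
                     + (1.0 - np.exp(-t * tau)) / E2)

# For B*t << 1, (1 - exp(-B*t))/(A*B) ~ t/A, so A plays the role of eta1 at
# early times, while B bounds the viscous flow contribution at long times.
t = np.linspace(0.0, 3600.0, 5)
print(modified_burgers_loading(t, E1=800.0, A=6e4, B=2e-3, E2=150.0, eta2=4e4))
```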
Compared with those in the creep loading phase, the model parameters Ε2 and B were relatively large, and η2 and A were relatively small, in the creep unloading phase. This phenomenon indicates that the asphalt concrete recovers more slowly from viscoelastic deformation, and deforms more significantly than the theoretical permanent deformation, after being damaged in creep loading. Although formulas (4) and (5) describe the same creep phases as formulas (2) and (3) with a different viscous element, the changes of their parameters demonstrate the same influence law of creep loading damage on material performance. It can also be learned from Table 4 that formula (5) is much less correlated with the test data in the creep unloading phase than formula (4) is with the loading-phase data, and slightly less correlated with the test data in that phase than formula (3). Hence, formulas (4) and (5) are not as good as formulas (2) and (3) in characterizing the deformation features of unloaded asphalt concrete. To sum up, formulas (4) and (5) work well in characterizing the deformation features of the asphalt concrete and fiber-reinforced asphalt concrete in the loading phase, while formulas (2) and (3) work well in characterizing the deformation features of the asphalt concrete and fiber-reinforced asphalt concrete in the unloading phase.

4.2 Viscoelastic model and viscoelasticity analysis

The above analysis shows that the fiber volume ratio Vf and fiber aspect ratio Ra are important factors influencing the viscoelastic performance of the asphalt concrete. This paper chooses the FCCP $\lambda_{f}=V_{f} \times R_{a}$ to represent the overall impact of Vf and Ra [18]. The model parameters of formulas (2)-(5) all exhibit the same change law with the growth of the FCCP: increasing first before declining. By nonlinear fitting of our test data, the relationship between the model parameters of formulas (2) and (3) and the FCCP can be expressed as:

$\mathrm{E}_{1}\left(\lambda_{f}\right)=603.55+454.29 \lambda_{f}-201.49 \lambda_{f}^{2}$ (6)

$\mathrm{E}_{2}\left(\lambda_{f}\right)=109.81+82.27 \lambda_{f}-34.66 \lambda_{f}^{2}$ (7)

${{\eta }_{1}}({{\lambda }_{f}})=298232.9+627942.9{{\lambda }_{f}}-290013\lambda _{f}^{2}$ (8)

${{\eta }_{2}}({{\lambda }_{f}})=29864.95+33547.72{{\lambda }_{f}}-13724.8\lambda _{f}^{2}$ (9)

$\mathrm{E}_{2}\left(\lambda_{f}\right)=343.43+103.25 \lambda_{f}-46.58 \lambda_{f}^{2}$ (10)

${{\eta }_{1}}({{\lambda }_{f}})=247449.9+275225.8{{\lambda }_{f}}-136606\lambda _{f}^{2}$ (11)

${{\eta }_{2}}({{\lambda }_{f}})=109657.3+89963.02{{\lambda }_{f}}-39147.1\lambda _{f}^{2}$ (12)

where (6)-(9) apply to the loading phase and (10)-(12) to the unloading phase. Substituting formulas (6)-(9) and formulas (10)-(12) into formulas (2) and (3), respectively, the creep equation of the fiber-reinforced asphalt concrete can be established considering the influence of the FCCP:

$\varepsilon (t,{{\lambda }_{f}})={{\sigma }_{0}}\left[ \frac{1}{{{E}_{1}}({{\lambda }_{f}})}+\frac{t}{{{\eta }_{1}}({{\lambda }_{f}})}+\frac{1-{{e}^{-t\tau }}}{{{E}_{2}}({{\lambda }_{f}})} \right]$ (13)

$\varepsilon\left(t, \lambda_{f}\right)=\sigma_{0}\left[\frac{t_{0}}{\eta_{1}\left(\lambda_{f}\right)}+\frac{\left(1-e^{-t_{0} \tau}\right) e^{-\left(t-t_{0}\right) \tau}}{E_{2}\left(\lambda_{f}\right)}\right]$ (14)

where, $\tau=E_{2}\left(\lambda_{f}\right) / \eta_{2}\left(\lambda_{f}\right)$.
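A direct way to see where the optimal FCCP comes from is to plug the fitted quadratics (6)-(9) into the creep equation (13) and scan λf numerically. The sketch below is ours; the stress level and loading time are assumed values, so the located minimum only approximates the reported optimum of 1.128:

```python
# A minimal sketch (ours): scanning Eq. (13) over lambda_f using the fitted
# quadratics (6)-(9). sigma0 and t are assumed illustrative values.
import numpy as np

E1   = lambda lf: 603.55 + 454.29 * lf - 201.49 * lf**2           # (6), MPa
E2   = lambda lf: 109.81 + 82.27 * lf - 34.66 * lf**2             # (7), MPa
eta1 = lambda lf: 298232.9 + 627942.9 * lf - 290013.0 * lf**2     # (8), MPa*s
eta2 = lambda lf: 29864.95 + 33547.72 * lf - 13724.8 * lf**2      # (9), MPa*s

def creep_strain(t, lf, sigma0=0.5):                              # Eq. (13)
    tau = E2(lf) / eta2(lf)
    return sigma0 * (1.0 / E1(lf) + t / eta1(lf)
                     + (1.0 - np.exp(-t * tau)) / E2(lf))

lf_grid = np.linspace(0.5, 2.0, 1501)
strains = creep_strain(3600.0, lf_grid)
print(lf_grid[np.argmin(strains)])   # lands near the reported optimum ~1.13
```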
Finding the derivative of formulas (13) and (14) relative to time t, respectively, the differential constitutive equation of the fiber-reinforced asphalt concrete can be established considering the influence of the FCCP:

$d\varepsilon (t,{{\lambda }_{f}})/dt=\dot{\varepsilon }(t,{{\lambda }_{f}})={{\sigma }_{0}}\left[ \frac{1}{{{\eta }_{1}}({{\lambda }_{f}})}+\frac{{{e}^{-t\tau }}}{{{\eta }_{2}}({{\lambda }_{f}})} \right]$ (15)

$d \varepsilon\left(t, \lambda_{f}\right) / d t=\dot{\varepsilon}\left(t, \lambda_{f}\right)=\sigma_{0}\left[\frac{\left(e^{-t_{0} \tau}-1\right) e^{-\left(t-t_{0}\right) \tau}}{\eta_{2}\left(\lambda_{f}\right)}\right]$ (16)

Figure 6 shows the relationship between creep strain and the FCCP obtained by formula (13). Under the same loading time and different stress levels, the creep strain always decreased first and then increased with the growth of the FCCP, and reached the minimum at the FCCP of 1.128. This is very close to the test results. The greater the stress, and the longer the loading, the more significant the creep strain of the fiber-reinforced asphalt concrete.

Figure 6. Relationship between creep strain and FCCP

Figure 7. Relationship between creep rate and FCCP

Figure 7 shows the relationship between creep rate and the FCCP obtained by formula (15). It can be observed that the creep rate first decreased and then increased, with the growth of the FCCP, and minimized at the FCCP of 1.128. This is also very close to the test results. The greater the stress, the faster the creep. With the elapse of the loading time, the creep rate continued to slow down.

Figure 8. Relationship between creep strain and FCCP after unloading

Figure 8 shows the relationship between the residual creep strain of the fiber-reinforced asphalt concrete after unloading and the FCCP. It can be inferred that the residual creep deformation after unloading increased with the loading stress. The residual creep deformation first decreased and then increased, with the growth of the FCCP. For fixed loading and unloading times, the minimum residual creep deformation of the fiber-reinforced asphalt concrete under different loading stresses occurred at an FCCP of 1.128. In this case, the fiber-reinforced asphalt concrete has a strong ability to recover from creep deformation. Formula (14) also reveals that the residual creep deformation increases with the time of creep loading.

Figure 9 shows the relationship between the creep recovery rate of the fiber-reinforced asphalt concrete after unloading and the FCCP. It can be inferred that, due to the inertia of elastic deformation recovery after unloading, the greater the loading stress, the faster the recovery from creep deformation. With the growth of the FCCP, the creep recovery rate first increased and then decreased. For fixed loading and unloading times, the maximum creep recovery rate of the fiber-reinforced asphalt concrete under different loading stresses occurred at an FCCP of 1.128. In this case, the fiber-reinforced asphalt concrete has a strong ability to recover from creep deformation. Formula (16) also reveals that the longer the loading time, the greater the damage of the fiber-reinforced asphalt concrete induced by the creep load, and the smaller the recovery rate from the creep deformation.

Figure 9. Relationship between creep rate and FCCP after unloading

By nonlinear fitting of our test data, the relationship between the model parameters of formulas (4) and (5) and the FCCP can be expressed as:

$\mathrm{E}_{2}\left(\lambda_{f}\right)=110.91+59.49 \lambda_{f}-27.82 \lambda_{f}^{2}$ (18)

$\eta_{2}\left(\lambda_{f}\right)=327619+914046 \lambda_{f}-412653 \lambda_{f}^{2}$ (19)

$\mathrm{A}\left(\lambda_{f}\right)=36674.63+95928.89 \lambda_{f}-41400.4 \lambda_{f}^{2}$ (20)

$\mathrm{B}\left(\lambda_{f}\right)=1.76 \times 10^{-4}+1.975 \times 10^{-3} \lambda_{f}-8.8 \times 10^{-4} \lambda_{f}^{2}$ (21)

${{\eta }_{2}}({{\lambda }_{f}})=108072.8+96344.6{{\lambda }_{f}}-41409.7\lambda _{f}^{2}$ (23)

$\mathrm{A}\left(\lambda_{f}\right)=9729.09+18998.79 \lambda_{f}-7340.91 \lambda_{f}^{2}$ (24)

$\mathrm{B}\left(\lambda_{f}\right)=1.916 \times 10^{-3}+5.062 \times 10^{-3} \lambda_{f}-2.28 \times 10^{-3} \lambda_{f}^{2}$ (25)

Substituting formulas (17)-(21) and formulas (22)-(25) into formulas (4) and (5), respectively, the creep equation of the fiber-reinforced asphalt concrete can be established considering the influence of the FCCP:

$\varepsilon\left(t, \lambda_{f}\right)=\sigma_{0}\left[\frac{1}{\mathrm{E}_{1}\left(\lambda_{f}\right)}+\frac{1-e^{-B\left(\lambda_{f}\right) t}}{A\left(\lambda_{f}\right) B\left(\lambda_{f}\right)}+\frac{1-e^{-t \tau}}{\mathrm{E}_{2}\left(\lambda_{f}\right)}\right]$ (26)

$\varepsilon\left(t, \lambda_{f}\right)=\sigma_{0}\left[\frac{1-e^{-\mathrm{B}\left(\lambda_{f}\right) t_{0}}}{A\left(\lambda_{f}\right) B\left(\lambda_{f}\right)}+\frac{\left(1-e^{-t_{0} \tau}\right) e^{-\left(t-t_{0}\right) \tau}}{\mathrm{E}_{2}\left(\lambda_{f}\right)}\right]$ (27)

Finding the derivative of formulas (26) and (27) relative to time t, respectively, the differential constitutive equation of the fiber-reinforced asphalt concrete can be established based on formulas (4) and (5) considering the influence of the FCCP:

$d\varepsilon (t,{{\lambda }_{f}})/dt=\dot{\varepsilon }(t,{{\lambda }_{f}})={{\sigma }_{0}}\left[ \frac{{{e}^{-B({{\lambda }_{f}})t}}}{A({{\lambda }_{f}})}+\frac{{{e}^{-t\tau }}}{{{\eta }_{2}}({{\lambda }_{f}})} \right]$ (28)

$d\varepsilon (t,{{\lambda }_{f}})/dt=\dot{\varepsilon }(t,{{\lambda }_{f}})={{\sigma }_{0}}\left[ \frac{({{e}^{-{{t}_{0}}\tau }}-1){{e}^{-(t-{{t}_{0}})\tau }}}{{{\eta }_{2}}({{\lambda }_{f}})} \right]$ (29)

The creep equation and differential constitutive equation of the fiber-reinforced asphalt concrete derived from formula (4), and those derived from formula (2), yield consistent results on the correlations of creep strain and creep rate with the FCCP, stress, and loading time. Similarly, the creep equation and differential constitutive equation derived from formula (5), and those derived from formula (3), yield consistent results on the correlations of post-unloading creep strain and creep rate with the FCCP, stress, and loading time.
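Each fitted parameter-FCCP relation above is a downward-opening quadratic, so each parameter peaks at λf = c1/(2·c2). The short sketch below (ours) computes these vertices for (6)-(9); they cluster around the optimum λf = 1.128 identified from the creep curves:

```python
# A minimal sketch (ours): vertices of the fitted quadratics
# c0 + c1*lf - c2*lf^2 from Eqs. (6)-(9); each peaks at lf = c1/(2*c2).

quadratics = {                      # (c0, c1, c2) as printed in the text
    "E1, Eq. (6)":   (603.55, 454.29, 201.49),
    "E2, Eq. (7)":   (109.81, 82.27, 34.66),
    "eta1, Eq. (8)": (298232.9, 627942.9, 290013.0),
    "eta2, Eq. (9)": (29864.95, 33547.72, 13724.8),
}
for name, (c0, c1, c2) in quadratics.items():
    print(f"{name}: peak at lambda_f = {c1 / (2.0 * c2):.3f}")
# E1 peaks at 1.127, eta1 at 1.083, E2 at 1.187, eta2 at 1.222, all close
# to the optimum 1.128 identified from the creep curves.
```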
(2) The creep loading parameters of formulas (2) and (3) and the creep unloading parameters of formulas (4) and (5) all increased first and then decreased with the growth of fiber volume ratio and fiber aspect ratio. At the fiber volume ratio of 0.348 and the fiber aspect ratio of 324, the fiber-reinforced asphalt concrete is good at resisting elastic deformation, viscous flow deformation, and rutting deformation. (3) The FCCP can reflect the overall effect of fiber volume ratio and fiber aspect ratio. Formulas (15) and (16) or formulas (28) and (29) are the viscoelastic constitutive equations of the fiber-reinforced asphalt concrete considering the influence of the FCCP. The greater the loading stress and the longer the loading time, the higher the creep strain and creep rate, the faster the recovery from creep deformation after unloading, and the larger the residual creep deformation. Creep strain, creep rate, and residual creep deformation after unloading all increased first and then decreased, with the growth of the FCCP. The optimal FCCP was 1.128 for polyester fibers. The authors gratefully acknowledge the financial support of the project from the Colleges and Universities Key Scientific Research Projects of Henan Province (Grant No. 20A580006) and Horizontal Research Project of Xuchang University (Grant No. 2019HX005) and The Scientific Research Innovation Team of Xuchang University. [1] Min, Z.H., Wang, X., Huang, W. (2004). Study on performance of epoxy asphalt concrete. Journal of Highway and Transportation Research and Development, 1(1): 1-4. 10.3969/j.issn.1002-0268.2004.01.001 [2] Lundström, R., Isacsson, U. (2004). Linear viscoelastic and fatigue characteristics of styrene–butadiene–styrene modified asphalt mixtures. Journal of Materials in Civil Engineering, 16(6): 629-638. https://doi.org/10.1061/(ASCE)0899-1561(2004)16:6(629) [3] Polacco, G., Stastna, J., Biondi, D., Zanzotto, L. (2006). Relation between polymer architecture and nonlinear viscoelastic behavior of modified asphalts. Current Opinion in Colloid & Interface Science, 11(4): 230-245. https://doi.org/10.1016/j.cocis.2006.09.001 [4] Judycki, J. (1992). Non-linear viscoelastic behaviour of conventional and modified asphaltic concrete under creep. Materials and Structures, 25(2): 95-101. https://doi.org/10.1007/BF02472462 [5] Erkens, S.M.J.G., Liu, X., Scarpas, A. (2002). 3D finite element model for asphalt concrete response simulation. The International Journal Geomechanics, 2(3): 305-330. https://doi.org/10.1061/(ASCE)1532-3641(2002)2:3(305) [6] Blab, R., Harvey, J.T. (2002). Modeling measured 3D tire contact stresses in a viscoelastic FE pavement model. The International Journal Geomechanics, 2(3): 271-290. https://doi.org/10.1061/(ASCE)1532-3641(2002)2:3(271) [7] Yang, T.Q. (1990). Viscoelastic mechanics. Wuhan: Centra China University of Technology Press, pp. 8-28. [8] Xu, S.F. (1992). A rheological model representing the deformation behavior of asphalt mixtures. Mechanics in Engineering, 14(1): 37-40. [9] Ye, Y. (2009). The experience research of visco-elastoplastic constitute model of asphalt mixture. Wuhan: Solid Mechanics of Huazhong University of Science and Technology. [10] Zhang, J.P., Xu, L., Wang, B. (2010). Modification of creep model of asphalt mixture and parameters determination. Journal of Wuhan University of Technology(Transportation Science & Engineering), 34(4): 699-702. [11] Benedito, S.B., Wander, R.S., Dario, C.L. (2003). Engineering properties of fiber reinforced cold asphalt mixes. 
doi: 10.3934/dcdsb.2020266

Weak pullback mean random attractors for non-autonomous $ p $-Laplacian equations

Anhui Gu, School of Mathematics and Statistics, Southwest University, Chongqing 400715, China

Received April 2019. Revised May 2020. Published September 2020.

Fund Project: This work is partially supported by NSF of Chongqing Grant No. cstc2018jcyjA0897, the FRF for the Central Universities Grant No. XDJK2020B049, and the K.C. Wong Education Foundation and DAAD.

In this paper, we obtain the existence and uniqueness of weak pullback mean random attractors for non-autonomous deterministic $ p $-Laplacian equations with random initial data and for non-autonomous stochastic $ p $-Laplacian equations with general diffusion terms in Bochner spaces, respectively.

Keywords: Weak mean random attractor, general diffusion, random initial data, stochastic $ p $-Laplacian equations.

Mathematics Subject Classification: Primary: 35B40; Secondary: 35B41, 37L30.

Citation: Anhui Gu. Weak pullback mean random attractors for non-autonomous $ p $-Laplacian equations. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020266
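For readers from outside this area, the standard form of the p-Laplacian operator in the title is displayed below. The second line is only a generic template of the equation type the abstract describes; the paper's exact drift f and diffusion sigma are not reproduced in this excerpt, so both are placeholders, not the authors' precise equation.

\[ \Delta_p u = \operatorname{div}\!\left( |\nabla u|^{p-2} \nabla u \right), \qquad p > 1, \]

\[ \mathrm{d}u = \left( \Delta_p u + f(t, x, u) \right) \mathrm{d}t + \sigma(t, u)\, \mathrm{d}W(t). \]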
Invertible matrix

In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular or nondegenerate) if there exists an n-by-n square matrix B such that

$\mathbf{AB} = \mathbf{BA} = \mathbf{I}_n,$

where $\mathbf{I}_n$ denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A and is called the inverse of A, denoted by A−1. A square matrix that is not invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is 0. Singular matrices are rare in the sense that a square matrix randomly selected from a continuous uniform distribution on its entries will almost never be singular.

Non-square matrices (m-by-n matrices for which m ≠ n) do not have an inverse. However, in some cases such a matrix may have a left inverse or right inverse. If A is m-by-n and the rank of A is equal to n, then A has a left inverse: an n-by-m matrix B such that BA = I. If A has rank m, then it has a right inverse: an n-by-m matrix B such that AB = I.

Matrix inversion is the process of finding the matrix B that satisfies the prior equation for a given invertible matrix A.

While the most common case is that of matrices over the real or complex numbers, all these definitions can be given for matrices over any commutative ring. However, in this case the condition for a square matrix to be invertible is that its determinant is invertible in the ring, which in general is a much stricter requirement than being nonzero. The conditions for the existence of a left inverse or right inverse are more complicated, since a notion of rank does not exist over rings.

The invertible matrix theorem

Let A be a square n-by-n matrix over a field K (for example the field R of real numbers). The following statements are equivalent, i.e., for any given matrix they are either all true or all false:

A is invertible, i.e. A has an inverse, is nonsingular, or is nondegenerate.
A is row-equivalent to the n-by-n identity matrix In.
A is column-equivalent to the n-by-n identity matrix In.
A has n pivot positions.
det A ≠ 0. In general, a square matrix over a commutative ring is invertible if and only if its determinant is a unit in that ring.
A has full rank; that is, rank A = n.
The equation Ax = 0 has only the trivial solution x = 0.
Null A = {0}.
The equation Ax = b has exactly one solution for each b in Kn.
The columns of A are linearly independent.
The columns of A span Kn.
Col A = Kn.
The columns of A form a basis of Kn.
The linear transformation mapping x to Ax is a bijection from Kn to Kn.
There is an n-by-n matrix B such that AB = In = BA.
The transpose AT is an invertible matrix (hence rows of A are linearly independent, span Kn, and form a basis of Kn).
The number 0 is not an eigenvalue of A.
The matrix A can be expressed as a finite product of elementary matrices.
The matrix A has a left inverse (i.e. there exists a B such that BA = I) or a right inverse (i.e. there exists a C such that AC = I), in which case both left and right inverses exist and B = C = A−1.

Other properties

Furthermore, the following properties hold for an invertible matrix A:

(A−1)−1 = A;
(kA)−1 = k−1A−1 for nonzero scalar k;
(AT)−1 = (A−1)T;
For any invertible n-by-n matrices A and B, (AB)−1 = B−1A−1. More generally, if A1, ..., Ak are invertible n-by-n matrices, then $(\mathbf{A}_1 \mathbf{A}_2 \cdots \mathbf{A}_{k-1} \mathbf{A}_k)^{-1} = \mathbf{A}_k^{-1} \mathbf{A}_{k-1}^{-1} \cdots \mathbf{A}_2^{-1} \mathbf{A}_1^{-1}$;
det(A−1) = det(A)−1.

A matrix that is its own inverse, i.e. A = A−1 and A2 = I, is called an involution.

In relation to the identity matrix

It follows from the theory of matrices that if $\mathbf{AB} = \mathbf{I}$ for finite square matrices A and B, then also $\mathbf{BA} = \mathbf{I}$.[1]

Density

Over the field of real numbers, the set of singular n-by-n matrices, considered as a subset of $\mathbb{R}^{n \times n}$, is a null set, i.e., has Lebesgue measure zero. This is true because singular matrices are the roots of the polynomial function in the entries of the matrix given by the determinant. Thus in the language of measure theory, almost all n-by-n matrices are invertible. Furthermore, the n-by-n invertible matrices are a dense open set in the topological space of all n-by-n matrices. Equivalently, the set of singular matrices is closed and nowhere dense in the space of n-by-n matrices. In practice, however, one may encounter non-invertible matrices. And in numerical calculations, matrices which are invertible but close to a non-invertible matrix can still be problematic; such matrices are said to be ill-conditioned.

Methods of matrix inversion

Gaussian elimination

Gauss–Jordan elimination is an algorithm that can be used to determine whether a given matrix is invertible and to find the inverse. An alternative is the LU decomposition, which generates upper and lower triangular matrices, which are easier to invert.

Newton's method

A generalization of Newton's method as used for a multiplicative inverse algorithm may be convenient, if it is convenient to find a suitable starting seed:

$X_{k+1} = 2X_k - X_k A X_k.$

Victor Pan and John Reif have done work that includes ways of generating a starting seed.[2][3] Byte magazine summarised one of their approaches as follows (box with equations 8 and 9 not shown):[4]

The Pan-Reif breakthrough consists of the discovery of a simple and reliable way of evaluating B0, the initial approximation to A−1, which can safely be used for starting Newton's iteration or variants of it. Readers interested in a derivation can consult the references. I give here merely one example of the results. Let me denote the "Hermitian transpose" of A by AH. That is, if A(I,J) is the element in the Ith row and Jth column of the A matrix, then A*(J,I) is the element at the corresponding position in the matrix A*. Here the star denotes the complex conjugate (i.e., if an element is $x+iy$, where x and y are real numbers, then the complex conjugate of that element is $x-iy$). If, as is the case to which I have limited all my own calculations, the elements of A are all real, then AH is just the transposed matrix AT of A (wherein elements are interchanged or "reflected" with respect to the main diagonal). We now introduce a number t, defined by equation 8. In words: We consider the magnitudes of the various elements A(I,J) of the given A matrix that is to be inverted.
(In the case of a complex element $x+iy$, its magnitude is $\sqrt{x^{2}+y^{2}}$. In the case of a real element, it is just its unsigned or absolute value.) We add up the magnitudes of the elements in a given row and record the sum. We do the same for the remaining rows and then compare the sums thus obtained. The largest of these row sums - just a number - is designated $\max_{I}\sum_{J}\left\vert A(I,J)\right\vert$. We do the same for the column sums and take the product of these two maxima, designating its reciprocal as the real number t. Finally, we define our initial approximate inverse matrix B0 as shown in equation 9. That is, the number t multiplies every element of the Hermitian transpose of the A matrix. Pan and Reif give alternative forms, but this will do. And that's all there is to it.

Otherwise, the method may be adapted to use the starting seed from a trivial starting case by using a homotopy to "walk" in small steps from that to the matrix needed, "dragging" the inverses with them:

$X_{k+1} = 2X_k - X_k A_{k+1} X_k,$

where $A_0 = S$, $X_0 = S^{-1}$, and $A_N = A$ for some terminating N, perhaps followed by another few iterations at A to settle the inverse. Using this simplistically on real-valued matrices would lead the homotopy through a degenerate matrix about half the time, so complex-valued matrices should be used to bypass that, e.g. by using a starting seed S that has i in the first entry, 1 on the rest of the leading diagonal, and 0 elsewhere. If complex arithmetic is not directly available, it may be emulated at a small cost in computer memory by replacing each complex matrix element a+bi with a 2×2 real-valued submatrix of the form $\begin{pmatrix} a & b \\ -b & a \end{pmatrix}$ (see square root of a matrix).

Newton's method is particularly useful when dealing with families of related matrices that behave enough like the sequence manufactured for the homotopy above: sometimes a good starting point for refining an approximation for the new inverse can be the already obtained inverse of a previous matrix that nearly matches the current matrix, e.g. the pair of sequences of inverse matrices used in obtaining matrix square roots by Denman–Beavers iteration; this may need more than one pass of the iteration at each new matrix if they are not close enough together for just one to be enough. Newton's method is also useful for "touch up" corrections to the Gauss–Jordan algorithm which has been contaminated by small errors due to imperfect computer arithmetic.
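The following is a minimal sketch of the Newton (Newton-Schulz) iteration above with a Pan-Reif style seed B0 = t * AH, where t is the reciprocal of the product of the largest row sum and largest column sum of absolute values, as the Byte excerpt describes. The function name and tolerances are mine; the exact Pan-Reif equations 8 and 9 are not shown in the source, so treat the seed as the described approximation, not their precise formulation.

import numpy as np

def newton_schulz_inverse(A, tol=1e-12, max_iter=100):
    """Approximate A^{-1} via X_{k+1} = 2 X_k - X_k A X_k."""
    # Pan-Reif style seed: t = 1 / (max row sum * max column sum) of |A|.
    t = 1.0 / (np.abs(A).sum(axis=1).max() * np.abs(A).sum(axis=0).max())
    X = t * A.conj().T                 # initial approximation B0 = t * A^H
    I = np.eye(A.shape[0], dtype=A.dtype)
    for _ in range(max_iter):
        X = 2 * X - X @ A @ X          # one Newton-Schulz step
        if np.linalg.norm(I - A @ X, ord='fro') < tol:
            break
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.allclose(newton_schulz_inverse(A), np.linalg.inv(A)))  # True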
Cayley–Hamilton method

The Cayley–Hamilton theorem allows one to represent the inverse of A in terms of det(A), traces, and powers of A:

$\mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})} \sum_{s=0}^{n-1} \mathbf{A}^{s} \sum_{k_1, k_2, \ldots, k_{n-1}} \prod_{l=1}^{n-1} \frac{(-1)^{k_l+1}}{l^{k_l}\, k_l!} \operatorname{tr}(\mathbf{A}^{l})^{k_l},$

where n is the dimension of A, and the sum is taken over s and the sets of all $k_l \geq 0$ satisfying the linear Diophantine equation

$s + \sum_{l=1}^{n-1} l\, k_l = n-1.$

Eigendecomposition

If matrix A can be eigendecomposed and if none of its eigenvalues are zero, then A is invertible and its inverse is given by

$\mathbf{A}^{-1} = \mathbf{Q} \mathbf{\Lambda}^{-1} \mathbf{Q}^{-1},$

where Q is the square (N×N) matrix whose ith column is the eigenvector $q_i$ of A, and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e., $\Lambda_{ii} = \lambda_i$. Furthermore, because Λ is a diagonal matrix, its inverse is easy to calculate:

$\left[\Lambda^{-1}\right]_{ii} = \frac{1}{\lambda_i}.$

Cholesky decomposition

If matrix A is positive definite, then its inverse can be obtained as

$\mathbf{A}^{-1} = (\mathbf{L}^{*})^{-1} \mathbf{L}^{-1},$

where L is the lower triangular Cholesky decomposition of A, and L* denotes the conjugate transpose of L.

Analytic solution

Writing the transpose of the matrix of cofactors, known as an adjugate matrix, can also be an efficient way to calculate the inverse of small matrices, but this recursive method is inefficient for large matrices. To determine the inverse, we calculate a matrix of cofactors:

$\mathbf{A}^{-1} = \frac{1}{|\mathbf{A}|} \mathbf{C}^{\mathrm{T}} = \frac{1}{|\mathbf{A}|} \begin{pmatrix} \mathbf{C}_{11} & \mathbf{C}_{21} & \cdots & \mathbf{C}_{n1} \\ \mathbf{C}_{12} & \mathbf{C}_{22} & \cdots & \mathbf{C}_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{C}_{1n} & \mathbf{C}_{2n} & \cdots & \mathbf{C}_{nn} \end{pmatrix},$

$\left(\mathbf{A}^{-1}\right)_{ij} = \frac{1}{|\mathbf{A}|} \left(\mathbf{C}^{\mathrm{T}}\right)_{ij} = \frac{1}{|\mathbf{A}|} \mathbf{C}_{ji},$

where |A| is the determinant of A, C is the matrix of cofactors, and CT represents the matrix transpose.

Inversion of 2×2 matrices

The cofactor equation listed above yields the following result for 2×2 matrices. Inversion of these matrices can be done easily as follows:[5]

$\mathbf{A}^{-1} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{\det(\mathbf{A})} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \frac{1}{ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$

This is possible because 1/(ad−bc) is the reciprocal of the determinant of the matrix in question, and the same strategy could be used for other matrix sizes.
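A minimal sketch of the closed-form 2×2 inversion above, checked against numpy; the function name is mine and the singularity check mirrors the det ≠ 0 condition of the invertible matrix theorem.

import numpy as np

def inv_2x2(M):
    """Closed-form inverse of a 2x2 matrix via the cofactor formula above."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return np.array([[d, -b], [-c, a]]) / det

M = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.allclose(inv_2x2(M), np.linalg.inv(M)))  # True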
For the 2×2 case, the Cayley–Hamilton method gives

$\mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})} \left[ (\operatorname{tr}\mathbf{A})\, \mathbf{I} - \mathbf{A} \right].$

A computationally efficient 3×3 matrix inversion is given by

$\mathbf{A}^{-1} = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}^{-1} = \frac{1}{\det(\mathbf{A})} \begin{bmatrix} A & B & C \\ D & E & F \\ G & H & I \end{bmatrix}^{\mathrm{T}} = \frac{1}{\det(\mathbf{A})} \begin{bmatrix} A & D & G \\ B & E & H \\ C & F & I \end{bmatrix}$

(where the scalar A is not to be confused with the matrix A). If the determinant is non-zero, the matrix is invertible, with the elements of the intermediary matrix on the right side above given by

$\begin{matrix} A = (ei - fh) & D = -(bi - ch) & G = (bf - ce) \\ B = -(di - fg) & E = (ai - cg) & H = -(af - cd) \\ C = (dh - eg) & F = -(ah - bg) & I = (ae - bd) \end{matrix}$

The determinant of A can be computed by applying the rule of Sarrus as follows:

$\det(\mathbf{A}) = aA + bB + cC.$

The Cayley–Hamilton decomposition gives

$\mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})} \left[ \frac{1}{2} \left( (\operatorname{tr}\mathbf{A})^{2} - \operatorname{tr}\mathbf{A}^{2} \right) \mathbf{I} - \mathbf{A}\operatorname{tr}\mathbf{A} + \mathbf{A}^{2} \right].$

The general 3×3 inverse can be expressed concisely in terms of the cross product and triple product. If a matrix $\mathbf{A} = \left[\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2\right]$ (consisting of three column vectors $\mathbf{x}_0$, $\mathbf{x}_1$, and $\mathbf{x}_2$) is invertible, its inverse is given by

$\mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})} \begin{bmatrix} (\mathbf{x}_1 \times \mathbf{x}_2)^{\mathrm{T}} \\ (\mathbf{x}_2 \times \mathbf{x}_0)^{\mathrm{T}} \\ (\mathbf{x}_0 \times \mathbf{x}_1)^{\mathrm{T}} \end{bmatrix}.$

Note that $\det(\mathbf{A})$ is equal to the triple product of $\mathbf{x}_0$, $\mathbf{x}_1$, and $\mathbf{x}_2$ - the volume of the parallelepiped formed by the rows or columns:

$\det(\mathbf{A}) = \mathbf{x}_0 \cdot (\mathbf{x}_1 \times \mathbf{x}_2).$

The correctness of the formula can be checked by using cross- and triple-product properties and by noting that for groups, left and right inverses always coincide. Intuitively, because of the cross products, each row of $\mathbf{A}^{-1}$ is orthogonal to the non-corresponding two columns of $\mathbf{A}$ (causing the off-diagonal terms of $\mathbf{I} = \mathbf{A}^{-1}\mathbf{A}$ to be zero). Dividing by $\det(\mathbf{A}) = \mathbf{x}_0 \cdot (\mathbf{x}_1 \times \mathbf{x}_2)$ causes the diagonal elements of $\mathbf{I} = \mathbf{A}^{-1}\mathbf{A}$ to be unity.
For example, the first diagonal is:

$1 = \frac{1}{\mathbf{x}_0 \cdot (\mathbf{x}_1 \times \mathbf{x}_2)}\, \mathbf{x}_0 \cdot (\mathbf{x}_1 \times \mathbf{x}_2).$

With increasing dimension, expressions for the inverse of A get complicated. For n = 4 the Cayley–Hamilton method leads to an expression that is still tractable:

$\mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})} \left[ \frac{1}{6} \left( (\operatorname{tr}\mathbf{A})^{3} - 3\operatorname{tr}\mathbf{A}\operatorname{tr}\mathbf{A}^{2} + 2\operatorname{tr}\mathbf{A}^{3} \right) \mathbf{I} - \frac{1}{2} \mathbf{A} \left( (\operatorname{tr}\mathbf{A})^{2} - \operatorname{tr}\mathbf{A}^{2} \right) + \mathbf{A}^{2}\operatorname{tr}\mathbf{A} - \mathbf{A}^{3} \right].$

Blockwise inversion

Matrices can also be inverted blockwise by using the following analytic inversion formula:

$\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix}^{-1} = \begin{bmatrix} \mathbf{A}^{-1} + \mathbf{A}^{-1}\mathbf{B}(\mathbf{D} - \mathbf{C}\mathbf{A}^{-1}\mathbf{B})^{-1}\mathbf{C}\mathbf{A}^{-1} & -\mathbf{A}^{-1}\mathbf{B}(\mathbf{D} - \mathbf{C}\mathbf{A}^{-1}\mathbf{B})^{-1} \\ -(\mathbf{D} - \mathbf{C}\mathbf{A}^{-1}\mathbf{B})^{-1}\mathbf{C}\mathbf{A}^{-1} & (\mathbf{D} - \mathbf{C}\mathbf{A}^{-1}\mathbf{B})^{-1} \end{bmatrix} \quad (1)$

where A, B, C and D are matrix sub-blocks of arbitrary size. (A and D must be square, so that they can be inverted. Furthermore, A and D−CA−1B must be nonsingular.[6]) This strategy is particularly advantageous if A is diagonal and D−CA−1B (the Schur complement of A) is a small matrix, since they are the only matrices requiring inversion. This technique was reinvented several times and is due to Hans Boltz (1923),[citation needed] who used it for the inversion of geodetic matrices, and Tadeusz Banachiewicz (1937), who generalized it and proved its correctness. The nullity theorem says that the nullity of A equals the nullity of the sub-block in the lower right of the inverse matrix, and that the nullity of B equals the nullity of the sub-block in the upper right of the inverse matrix. The inversion procedure that led to Equation (1) performed matrix block operations that operated on C and D first.
Instead, if A and B are operated on first, and provided D and A−BD−1C are nonsingular,[7] the result is

$\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix}^{-1} = \begin{bmatrix} (\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1} & -(\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1}\mathbf{B}\mathbf{D}^{-1} \\ -\mathbf{D}^{-1}\mathbf{C}(\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1} & \mathbf{D}^{-1} + \mathbf{D}^{-1}\mathbf{C}(\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1}\mathbf{B}\mathbf{D}^{-1} \end{bmatrix}. \quad (2)$

Equating Equations (1) and (2) leads to

$(\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1} = \mathbf{A}^{-1} + \mathbf{A}^{-1}\mathbf{B}(\mathbf{D} - \mathbf{C}\mathbf{A}^{-1}\mathbf{B})^{-1}\mathbf{C}\mathbf{A}^{-1} \quad (3)$

$(\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1}\mathbf{B}\mathbf{D}^{-1} = \mathbf{A}^{-1}\mathbf{B}(\mathbf{D} - \mathbf{C}\mathbf{A}^{-1}\mathbf{B})^{-1}$

$\mathbf{D}^{-1}\mathbf{C}(\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1} = (\mathbf{D} - \mathbf{C}\mathbf{A}^{-1}\mathbf{B})^{-1}\mathbf{C}\mathbf{A}^{-1}$

$\mathbf{D}^{-1} + \mathbf{D}^{-1}\mathbf{C}(\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1}\mathbf{B}\mathbf{D}^{-1} = (\mathbf{D} - \mathbf{C}\mathbf{A}^{-1}\mathbf{B})^{-1}$

where Equation (3) is the matrix inversion lemma, which is equivalent to the binomial inverse theorem.

Since a blockwise inversion of an n×n matrix requires inversion of two half-sized matrices and 6 multiplications between two half-sized matrices, it can be shown that a divide-and-conquer algorithm that uses blockwise inversion to invert a matrix runs with the same time complexity as the matrix multiplication algorithm that is used internally.[8] There exist matrix multiplication algorithms with a complexity of O(n^2.3727) operations, while the best proven lower bound is Ω(n^2 log n).[9]

By Neumann series

If a matrix A has the property that

$\lim_{n \to \infty} (\mathbf{I} - \mathbf{A})^{n} = 0,$

then A is nonsingular and its inverse may be expressed by a Neumann series:[10]

$\mathbf{A}^{-1} = \sum_{n=0}^{\infty} (\mathbf{I} - \mathbf{A})^{n}.$

Truncating the sum results in an "approximate" inverse which may be useful as a preconditioner. Note that a truncated series can be accelerated exponentially by noting that the Neumann series is a geometric sum. Therefore, if one wishes to compute $2^L$ terms, one merely needs the moments $\mathbf{A}, \mathbf{A}^{2}, \mathbf{A}^{4}, \ldots, \mathbf{A}^{2^{L}}$, which can be found through L matrix multiplications. Then another L matrix multiplications are needed to obtain the final result by multiplying all the moments together. Therefore, 2L matrix multiplications are needed to compute $2^L$ terms of the sum.
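Here is a minimal sketch of the accelerated truncated Neumann series just described; the squared "moments" are taken of M = I − A, and the doubling identity sum_{n<2^{l+1}} M^n = (I + M^{2^l}) sum_{n<2^l} M^n gives 2^L terms in about 2L matrix multiplications. The function name and test matrix are illustrative; convergence requires (I − A)^n → 0.

import numpy as np

def neumann_inverse(A, L=6):
    """Approximate A^{-1} by summing 2^L terms of sum_n (I - A)^n."""
    I = np.eye(A.shape[0])
    M = I - A                  # current moment, M^(2^l)
    S = I + M                  # partial sum of the first 2 terms
    for _ in range(L - 1):
        M = M @ M              # square the moment: M^(2^l) -> M^(2^(l+1))
        S = S + M @ S          # double the number of summed terms
    return S

A = np.array([[1.0, 0.3], [0.2, 0.9]])   # spectral radius of I - A is 0.3 < 1
print(np.allclose(neumann_inverse(A, L=8), np.linalg.inv(A)))  # True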
More generally, if A is "near" the invertible matrix X in the sense that

$\lim_{n \to \infty} (\mathbf{I} - \mathbf{X}^{-1}\mathbf{A})^{n} = 0 \quad \text{or} \quad \lim_{n \to \infty} (\mathbf{I} - \mathbf{A}\mathbf{X}^{-1})^{n} = 0,$

then A is nonsingular and its inverse is

$\mathbf{A}^{-1} = \sum_{n=0}^{\infty} \left( \mathbf{X}^{-1}(\mathbf{X} - \mathbf{A}) \right)^{n} \mathbf{X}^{-1}.$

If it is also the case that A−X has rank 1, then this simplifies to

$\mathbf{A}^{-1} = \mathbf{X}^{-1} - \frac{\mathbf{X}^{-1}(\mathbf{A} - \mathbf{X})\mathbf{X}^{-1}}{1 + \operatorname{tr}(\mathbf{X}^{-1}(\mathbf{A} - \mathbf{X}))}.$

Derivative of the matrix inverse

Suppose that the invertible matrix A depends on a parameter t. Then the derivative of the inverse of A with respect to t is given by

$\frac{\mathrm{d}\mathbf{A}^{-1}}{\mathrm{d}t} = -\mathbf{A}^{-1} \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} \mathbf{A}^{-1}.$

To derive the above expression for the derivative of the inverse of A, one can differentiate the definition of the matrix inverse $\mathbf{A}^{-1}\mathbf{A} = \mathbf{I}$ and then solve for the derivative of the inverse of A:

$\frac{\mathrm{d}(\mathbf{A}^{-1}\mathbf{A})}{\mathrm{d}t} = \frac{\mathrm{d}\mathbf{A}^{-1}}{\mathrm{d}t} \mathbf{A} + \mathbf{A}^{-1} \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} = \frac{\mathrm{d}\mathbf{I}}{\mathrm{d}t} = \mathbf{0}.$

Subtracting $\mathbf{A}^{-1} \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t}$ from both sides of the above and multiplying on the right by $\mathbf{A}^{-1}$ gives the correct expression for the derivative of the inverse stated above. Similarly, if $\epsilon$ is a small number, then

$\left( \mathbf{A} + \epsilon\mathbf{X} \right)^{-1} = \mathbf{A}^{-1} - \epsilon \mathbf{A}^{-1}\mathbf{X}\mathbf{A}^{-1} + \mathcal{O}(\epsilon^{2}).$

Moore–Penrose pseudoinverse

Some of the properties of inverse matrices are shared by Moore–Penrose pseudoinverses, which can be defined for any m-by-n matrix.

Applications

For most practical applications, it is not necessary to invert a matrix to solve a system of linear equations; however, for a unique solution, it is necessary that the matrix involved be invertible. Decomposition techniques like LU decomposition are much faster than inversion, and various fast algorithms for special classes of linear systems have also been developed.

Regression/least squares

Although an explicit inverse is not necessary to estimate the vector of unknowns, it is unavoidable for estimating the unknowns' precision, which is found in the diagonal of the posterior covariance matrix of the vector of unknowns.

Matrix inverses in real-time simulations

Matrix inversion plays a significant role in computer graphics, particularly in 3D graphics rendering and 3D simulations. Examples include screen-to-world ray casting, world-to-subspace-to-world object transformations, and physical simulations.

Matrix inverses in MIMO wireless communication

Matrix inversion also plays a significant role in MIMO (Multiple-Input, Multiple-Output) technology in wireless communications.
The MIMO system consists of N transmit and M receive antennas. Unique signals, occupying the same frequency band, are sent via N transmit antennas and are received via M receive antennas. The signal arriving at each receive antenna will be a linear combination of the N transmitted signals, forming an N×M transmission matrix H. It is crucial for the matrix H to be invertible for the receiver to be able to figure out the transmitted information.

See also

Binomial inverse theorem
Matrix decomposition
Matrix square root
Moore–Penrose pseudoinverse
Pseudoinverse
Singular value decomposition
Woodbury matrix identity

Notes

↑ T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein, Introduction to Algorithms, 3rd ed., MIT Press, Cambridge, MA, 2009, §28.2.
↑ Ran Raz. On the complexity of matrix product. In Proceedings of the thirty-fourth annual ACM symposium on Theory of Computing. ACM Press, 2002.

External links

Matrix Mathematics: Theory, Facts, and Formulas at Google Books
Equations Solver Online
Lecture on Inverse Matrices by Khan Academy
Linear Algebra Lecture on Inverse Matrices by MIT
LAPACK is a collection of FORTRAN subroutines for solving dense linear algebra problems
ALGLIB includes a partial port of the LAPACK to C++, C#, Delphi, etc.
Online Inverse Matrix Calculator using AJAX
Symbolic Inverse of Matrix Calculator with steps shown
Moore Penrose Pseudoinverse
Inverse of a Matrix Notes
Module for the Matrix Inverse
Calculator for Singular or Non-Square Matrix Inverse
The Matrix Cookbook
Journal of The Korean Astronomical Society (천문학회지)
The Korean Astronomical Society (한국천문학회)
2288-890X (eISSN)

The Journal of the Korean Astronomical Society, JKAS, is an international scientific journal publishing papers in all the fields of astronomy and astrophysics (theoretical, observational, and instrumental) with a view to the advancement of the frontier knowledge. Manuscripts are classified into original contributions, proceedings, and reviews. http://www.kas.org/view/submitpaper.jsp?lang=kor

Volume 36 Issue spc1

OPTICAL PROPERTIES AND TOPOLOGICAL CONFIGURATION OF CAUSTICS OF MICRO GRAVITATIONAL LENSES
Chang, Kyong-Ae, p. 1
The mathematical properties of gravitational lens equations are examined in the framework of gravitational micro-lensing effects. The caustics of the gravitational lens may be defined in terms of "cusp" and "folding" in general. In some cases of overfocussing, however, the critical curves (caustics) have no cusp and no folding. If the observer is in the overfocussed region, he may not see any lensed image.

PHOTOMETRIC EVOLUTION OF GALAXIES: STAR FORMATION RATE AND HUBBLE SEQUENCE
Ann, Hong-Bae; Lee, Chang-Won; Lee, See-Woo, p. 13
We construct a simple photometric evolution model of galaxies based on evolutionary population synthesis. In our models an exponentially decreasing SFR with a power-law IMF is used to compute the UBV colors of galaxies from ellipticals to late-type spirals. It is shown that the integrated colors of galaxies of different Hubble types can be explained by one parameter, the SFR.

THE EVOLUTION OF A SPIRAL GALAXY: THE GALAXY
Lee, See-Woo; Park, Byeong-Gon; Kang, Yong-Hee; Ann, Hong-Bae, p. 25
The evolution of the Galaxy is examined by the halo-disk model, using the time-dependent bimodal IMF and constraints such as the cumulative metallicity distribution, the differential metallicity distribution, and the PDMF of main sequence stars. The time scale of the Galactic halo formation is about 3 Gyr, during which most of the halo stars and metal abundance are formed and ${\sim}95%$ of the initial halo mass falls to the disk. The G-dwarf problem could be explained by the time-dependent bimodal IMF, which is suppressed for low mass stars at the early phase (t < 1 Gyr) of the disk evolution. However, the importance of this problem is much weakened by Pagel's differential metallicity distribution, which leads to less initial metal enrichment and many long-lived metal-poor stars with Z < $1/3Z_{\odot}$. The observational distribution of abundance ratios of C, N, O elements with respect to [Fe/H] could be reproduced by the halo-disk model, including the contribution of iron production by SNIs of intermediate mass stars. The initial enrichment of elements in the disk could be explained by the halo-disk model, resulting in the slight decrease and then the increase in the slopes of the [N/Fe]- and [C/Fe]-distributions with increasing [Fe/H] in the range of [Fe/H] < -1.

IRAS OBSERVATIONS OF DARK GLOBULES
Lee, H.M.; Hong, S.S.; Kwon, S.M., p. 55
Infrared emission maps are constructed at 12.5, 25, 60, and $100{\mu}m$ for the dark globules B5, B34, B133, B134, B361, L134 and L1523 by using the Infrared Astronomical Satellite database. These clouds are selected on the basis of their appearance in Palomar prints as dark obscuring objects with angular sizes in the range of 3 to 30 arcminutes. The short-wavelength (12.5 and $25{\mu}m$) maps show the embedded infrared sources.
We found many such sources only in the B5, B361 and B34 regions. A diffuse component at 12.5 and $25{\mu}m$, possibly arising from very small dust grains (a < $0.01{\mu}m$) stochastically heated by the interstellar radiation field, is found in the B361 and L1523 regions. Such emission is characterized by limb brightening, and it is confirmed in L1523 and in B361. Infrared emission at the long wavelengths (60 and $100{\mu}m$) is due to colder dust with temperature less than 20 K. The distribution of the color index determined by the ratio of 60 to $100{\mu}m$ intensities shows a monotonic decrease of dust temperature toward the center. The black body temperature determined from these ratios is found to lie between 16 and 23 K. Such a temperature is possible for small (i.e., $a\;{\lesssim}\;0.01{\mu}m$) graphite grains if the grains are mainly heated by the interstellar radiation field. Thus the IRAS 100 and $60{\mu}m$ emissions arise mainly from small grains in the cloud. The distribution of such dust grains implied by the emissivity distributions at 100 and $60{\mu}m$ resembles that of an isothermal sphere. This contrasts with earlier findings of a much steeper distribution of the dust contributing visible extinction. Those dust grains are mainly larger ones (i.e., $a{\simeq}0.1{\mu}m$). Therefore we conclude that the average grain size increases toward the cloud center.

CO OBSERVATIONS AND STABILITY ANALYSIS OF B133 AND B134
Hong, S.S.; Kim, H.G.; Park, S.H.; Park, Y.S.; Imaoka, K., p. 71
With the 14 m radio telescope at DRAO and the 4 m at Nagoya University, we have made detailed maps of $^{12}CO$ and $^{13}CO$ emissions from two Barnard objects, B133 and B134, in the $J=1{\rightarrow}0$ rotational transition lines. Usual LTE analyses of the CO observations led us to determine the distribution of column densities over an entire area encompassing both globules. Total gas masses estimated from the column density map are $90\;M_{\odot}$ and $20\;M_{\odot}$ for B133 and B134, respectively. The radial velocity of B133 is red-shifted with respect to B134 by $0.8\;km\;s^{-1}$, which is too large to bind the two clouds as a binary system. We have shown that the usual stability analysis based on the simplified version of the virial theorem, with the second time-derivative of the moment of inertia term $\ddot{I}$ being ignored, could mislead us in determining whether a given cloud eventually collapses or not. The full version of the scalar virial theorem with the $\ddot{I}$ term is shown to be useful in following up the time-dependent variations of the cloud size R and its streaming velocity $\dot{R}$ as functions of time. Results of our stability analysis suggest that B133 will eventually collapse in $(2{\sim}4){\times}10^6$ years.

THE DIFFUSION COEFFICIENT OF RELATIVISTIC PARTICLES IN AN INTRACLUSTER MEDIUM OF THE COMA CLUSTER OF GALAXIES
Kim, K.T., p. 95
In the presence of synchrotron losses, diffusion of an ensemble of relativistic particles in an intracluster medium is investigated. The diffusion coefficient in the medium is found to be constrained by $28.8\;{\pm}\;0.4\;{\leq}\;Log\;D\;{\leq}\;30.5\;{\pm}\;0.4\;cm^2s^{-1}$, with an energy dependence $D_0{\varepsilon}^{\mu}$ of ${\mu}=0.4{\pm}0.2$, as previous observations suggested. As an important implication of the result, the brightest head-tail radio source NGC 4869, whose radio tail structure is indicative of its orbit within the cluster core, is considered to be the major contributor of particles for the formation of the Coma radio halo.
SYSTEMATIC INCREASE OF DUST SIZE TOWARD THE CENTER OF B361
Hong, S.S.; Koo, B.C.; Yun, H.S., p. 107
With a modern microdensitometer and POSS glass copies, we have performed automated star counting in two colors, blue and red, over the region containing the Bok globule B361. The distribution of the measured extinction values over the projected angular distance from the cloud center was approximated by a power law, and the resulting power-law indices for the blue and red are shown to be distinctly different from each other. The difference in the power-law index indicates that the mean dust size increases toward the cloud center. Possible physical causes for such size variation are briefly discussed.

REVISIT TO THE SUNSPOT CYCLES
Kim, K.T., p. 117
Here I report the confirmation of a long-term modulation with a period of $92^{+21}_{-13}$ years, obtained with the "time-delay correlation" method on the sunspot data compiled over a total of 289 years. This periodicity better specifies the cycle, which falls well within the Gleissberg cycle, and clearly contrasts with the 55-year grand cycle which Yoshimura (1979) claimed. It is argued that the period-amplitude diagram method, which Yoshimura used, analysed peak amplitudes only, so that a large number of data were disregarded, and was thus more susceptible to a bias. The planetary tidal force on the Sun was investigated as a possible cause of the solar activity, and this possibility was ruled out in view of the absence of any period correlation between the two.
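The details of the time-delay correlation method are not given in the abstract above; the following is only a rough illustration of the general idea, an autocorrelation scan over lags, run on synthetic data, since the original 289-year compilation is not reproduced here. The signal model and all numbers are assumptions for demonstration.

import numpy as np

# Synthetic stand-in for a monthly sunspot series: an ~11-yr activity cycle
# whose amplitude is modulated on a ~92-yr Gleissberg-like period.
t = np.arange(0, 289, 1 / 12)                      # time in years
series = (1 + 0.5 * np.sin(2 * np.pi * t / 92)) * np.sin(2 * np.pi * t / 22) ** 2
series += 0.05 * np.random.default_rng(0).standard_normal(t.size)

# Time-delay (auto)correlation: correlate the series with itself at each lag.
x = series - series.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]
lags = np.arange(acf.size) / 12                    # lag in years

# The basic cycle shows up as peaks near multiples of ~11 yr; the long-term
# modulation as a broad secondary maximum in the neighbourhood of ~92 yr.
for target in (11, 92):
    window = (lags > 0.7 * target) & (lags < 1.3 * target)
    best = lags[window][np.argmax(acf[window])]
    print(f"strongest correlation near {target} yr: lag ~ {best:.1f} yr")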
Concept for Donor Coordination

Published: 2016-01-23
Tags: Coordination, Computer science, Master's thesis, Effective altruism

This is a proposal for a donor coordination system that aims to empower donors to harness the risk neutrality that stems from their combined work toward agent-neutral goals.

Dated Content

I tend to update articles only when I remember their content and realize that I want to change something about it. But I rarely remember it well enough once about two years have passed. Such articles are therefore likely to contain some statements that I no longer espouse or would today frame differently.

Donor Coordination
Irrational Risk-Aversion
Competition for Exposure
High-Level Goals
Functional Requirements
Challenges and Proposed Solutions
Funding Gaps
Donation Swaps
Moral Lies

Donor Coordination

This proposal is meant to encourage comments on its content as well as comments along the lines of "I would use this," because without many of those it will not seem like a worthwhile undertaking to implement it.

One problem that GiveWell has struggled with emerges when two donors are not fully value-aligned but can agree on wanting to fund one of GiveWell's top charities. The result is that they wait each other out, a deadlock, both wanting the other to fund the GiveWell charity, because each values the other's counterfactual use of the donation lower than their own. GiveWell is regularly refining its response to this problem. For this problem to become relevant, there need to be at least two large donors or monolithic groups of donors, where large means that their planned donations are close to – for example, within the same order of magnitude of – the funding gap of the charities in question. This is a good problem to have.

More commonly, however, the funding gap is large compared to the potential individual donations (where individual is meant to exclude the aforementioned monolithic groups of donors), so that the above problem becomes an edge case while centrally we face a different problem. Donors that focus their contributions on charities that have a significant evidence base and track record for impact – a large part of the "GiveWell market" – are often accused of being too focused on just these established charities, thereby missing small high-impact opportunities from nonprofit startups or projects that will stay small or short-lived by design.

The distinction is similar to that between, on the one hand, passive investors that buy exchange-traded funds (ETFs) of, for example, the top 30 (DAX) or 500+ (S&P 500) companies in order to hold them, and, on the other hand, business angels or venture capitalists that invest in startups. The first group has excellent information to make relatively low risk–low return investments; the second group has to rely on rough heuristics, such as their faith in the founders, to make high risk–high return investments – of which they need to be able to make many in order to profit at least fairly reliably. But a profit motive is an agent-relative goal. Investors (such as donors) with agent-neutral goals that are shared by at least a few others have much better opportunities for cooperation. These have largely not been tapped into. While Net Analytics is clearly focused on the low risk–low return market, this high risk–high return market also calls for a software solution to its coordination problem.

The central motivating problems are the following:

Drops in marginal utility of a resource suggest risk aversion.
In that context it is rational to prefer a low return with high probability to a high return with low probability at the same expected value. In the context of altruistic interventions,1 the utility of marginal donations only noticeably decreases when it reaches the area of millions of dollars, as with some of the GiveWell top charities' funding gaps. Since few donors have funds of that magnitude at their disposal, most risk aversion of average donors is disproportionate.

At the same time there are many donors who see a high likelihood that effective interventions are possible in a certain cause area. Unfortunately, these interventions are, by necessity, more speculative than, for example, the interventions GiveWell prioritizes. Yet there are charity startups implementing them. The funding gaps of these charities tend to be too small for any respectable prioritization organization, like GiveWell, to warrant investing staff time into evaluating them, so donors are left to their own devices.2 When donors consider these charities, they are usually still optimistic that donating to them does yield superior impact, but they have a much harder time prioritizing between them, because their central metric just remains how well they implement very similar interventions. It is well possible that the differences between these charities – charities that many impact-oriented donors are actively considering – are small enough that the value of the information would not warrant its cost. Unfortunately, some of these donors fall into a form of analysis paralysis at this point and rather donate to the charities whose lower impact is well proven. Other donors react more rationally and donate rather arbitrarily within the group of the most highly effective charities. Still others use questionable heuristics, often aware that they are likely to be unreliable but also aware of the presumably low value of information of more thorough investigations. I aver that none of these strategies is optimal.

The other side of the coin is that charities are aware of these dynamics. While their values may be aligned, each of them still depends on its own pool of donors for funding, and any cross-promotion of another charity among the first charity's own donor base may lead to donors shifting their support to the endorsed organization. This behavior stifles cooperation. The solution presented here will instead allow all charities in a program area to fill their funding gaps to similar degrees. If a sufficient number of donors come to accept this solution, any incentive for charities to engage in uncooperative behavior will be diminished.

Donor Coordination (working title) is a software system and strategy that fosters cooperation between value-aligned donors by allowing them to make large contributions in teams and donate to whole program areas rather than individual nonprofits. It can improve upon the current state if it is accepted and trusted by a sufficient number of donors.

The donor coordination solution can be considered successful when it achieves the following goals:

Team-level atomicity: Donors can choose portfolios whose other donors they are value-aligned with, to the point that they perceive their donations as coming from the team of donors that invests into that portfolio rather than from them personally.

Program-level atomicity: Donors can choose charity portfolios that, as a whole, represent their moral preferences well enough that they perceive their team as donating to a program area rather than an individual charity.
Charity-level honesty: Charities are value-aligned with the organizations their donations are fungible with, to the point that they make fully altruistic statements about their funding gaps.

The donor coordination solution should likely take the shape of a web application, so that users of any platform can use it. The idea is roughly inspired by Wikifolio. The main concepts, inferred from the requirements below, are these: a visitor is an unauthenticated person viewing the website; a user is a donor, a charity, or an administrator; a portfolio is an allocation rule that partitions funds among a set of charities (every user can create portfolios, favorite or watch portfolios, and donate to portfolios); a donor is a user other than a charity; a charity is a user that only has the ability to enter some metadata about itself and its funding gap, and to participate in discussions.

The interests of beneficiaries are:
- Beneficiaries want to maximize the available funding toward their preferences.
- Beneficiaries want the interventions that are most effective at the margin to receive maximal funding.
- Beneficiaries want the funding gaps of the most effective interventions to be greater than or equal to the available funding.

In some cases the beneficiaries can give direct input, but in many cases their interests need to be represented by donors and charities, because the beneficiaries have insufficient levels of intelligence to express them efficiently or are not yet born. Hence the interests of donors are:
- Donors want to maximize the available funding toward their moral goals.
- Donors want the interventions that most effectively realize these moral goals at the margin to receive maximal funding.
- Donors want the funding gaps of the most effective interventions realizing these moral goals to be greater than or equal to the available funding.

Hence the interests of the charities are:
- Charities want to maximize the available funding toward the charity's moral goals.
- Charities want the interventions that most effectively realize these moral goals at the margin to receive maximal funding.
- Charities want the funding gaps of the most effective interventions realizing these moral goals to be greater than or equal to the available funding.

The main difference between donors and charities as two groups is the direction of the money flow. The main difference among the donors and among the charities internally is their different makeup of moral goals. From these primary interests follow proximate interests for value-aligned teams of donors (all donors to a program area as defined by a public portfolio):
- Being value-aligned, the members of a team are happy to make their donations fungible with the donations of all other members of the team.
- Since their value alignment with other teams varies, there may be teams with partially opposing moral goals. Teams will want to minimize fungibility with such teams.
- Since the funding gaps of charities are limited, teams also want to increase the funding gap of their program area by broadening its scope.

Analogously for charities:
- The charities of popular portfolios are likely to be highly value-aligned and thus happy to calculate their funding gaps cooperatively.
- Since their value alignment with charities of other program areas varies, there may be portfolios of charities with partially opposing moral goals.
- Charities will want to increase their scale in order to be able to enter greater funding gaps, so that portfolio authors can minimize fungibility with such opposing program areas.

Clearly, the last two interests of the donor teams are in conflict.
Small donation flows will favor portfolios of small, pure clusters of charities, while greater donation flows will necessitate compromise in order to form greater, less pure clusters with larger funding gaps. For simplicity I assume that all donors are perfectly informed and that their only differences are differences of value alignment. This is unlikely to be the case in practice, but the only difference between a donor who is not value-aligned and a donor who acts as if they were not value-aligned because of lacking information is that the latter can be educated. This educational mission is outside the purview of Donor Coordination, but the software should provide the platform that donors will need to educate each other, because this may be important for fostering user activity.

The functional requirements are:
- Visitors can create donor accounts.
- Administrators can create administrator accounts.
- Administrators can create charity accounts.
- Visitors can view public portfolios, including their descriptive statistics.
- Donors can add public portfolios to their watch list.
- Donors can donate to public portfolios.
- Donors can author public portfolios.
- Donors can draft and test portfolios in a private or draft state.
- Donors can comment on portfolios.
- Charities can enter new funding gaps for themselves.
- Charities can enter new system-external donation flows.
- Administrators have all privileges.

At some point moderator accounts will become necessary, so that moderators do not need to enjoy the same level of trust as administrators to contribute to community maintenance.

The functional requirements mention descriptive statistics. These are important for portfolio authors and other donors when deciding how to structure a portfolio so as not to duplicate very similar ones, or which portfolio to donate to. At least two metrics are required (a small code sketch of these metrics follows at the end of the Donation Swaps section below):

1. The sum of the funding gaps of the charities in a portfolio \(P\), \(\operatorname{gap}(P) = \sum_{c \in P} \operatorname{gap}(c)\).
2. A ranked list of the portfolios with the highest fungibility but lowest similarity. One idea may be the quotient, \(\operatorname{compromise}(P, P') = \frac{\operatorname{fungibility}(P, P')}{\operatorname{similarity}(P, P')}\), of the following metrics: \(\operatorname{fungibility}(P, P') = \sum_{c \in P \cap P'} \operatorname{gap}(c)\) and \(\operatorname{similarity}(P, P') = \left|\bigcup_{c \in P} \operatorname{donors}(c) \cap \bigcup_{c \in P'} \operatorname{donors}(c)\right|\).

The fungibility and similarity metrics should also be displayed in isolation, particularly as a guide for authors of portfolios of new charities, for whom the combined compromise metric is undefined. It may also become necessary to take weights into account, and the formulas will surely need to be tweaked further once real data become available.

There needs to be a common definition of a funding gap, so that charities have hard, unyielding guidelines as to what figure to enter for a given year. Prioritization organizations already face a similar problem. Imagine two charities: charity A with the ability to invest $100 million with some baseline effectiveness \(e\) on average, and charity B with the ability to invest $10 million with an average effectiveness of \(10e\) within a given year. Further assume that the charities are value-aligned, to simplify the problem to one dimension of impact. A commonly used uncertainty discount is 3% p.a., and for simplicity we assume that suffering in the world, absent the interventions, remains constant, so that aggregate suffering increases linearly over time.
A donor who wants to invest $100 million now has the choice to donate it to charity A, knowing that it will be invested in the same year, or to charity B, knowing that $10 million of it will be invested in the same year, $90 million will wait in the charity's bank account at an interest rate of maybe 1% for another year, $80 million plus interest will wait for two years, and so on. Clearly, a definition of funding gaps that only takes into account a charity's ability to invest some amount per year would set very different bars for the marginal impact of the last dollar of that funding gap. Since the donor coordination solution addresses coordination problems that arise when the funding gaps of the individual charities do not warrant the attention of a prioritization organization, we assume that there are no meaningful differences in their relative effectiveness, so we face a simpler version of this problem. One solution may be to adopt GiveWell's excess assets policy: "We seek to be in a financial position such that our cash flow projections show us having 12 months' worth of unrestricted assets in each of the next 12 months."

Another open question is the allocation of donations within the portfolio. Conceptually, donors donate to program areas, but in practice they will have to transfer their donation to a specific organization. Splitting it up across several organizations would be an unnecessary hassle for the donor, so the algorithm that suggests the specific organization should know some ideal allocation and then recommend a recipient organization such that the actual allocation comes closest to the ideal allocation. It could also take tax deductibility into account as a tie-breaker. The simplest option might be an equitable allocation, where the algorithm aims to assign the same level of funding to each charity after taking donations external to the system into account (see the sketch at the end of this section). Another option may be to prioritize small funding gaps as an additional incentive for charities not to exaggerate their funding gaps in the moral lies scenario. However, that would have little effect, since the charities in a given program area are value-aligned and can thus easily conspire with each other, and it may have the detrimental effect that charities would be incentivized to be tardy in entering new funding gaps.

Donors often agree on donation swaps, where each partner donates to the charity of choice of the other partner in order to harness the tax deduction of the charity in the respective country. In order to help portfolio authors trade off fungibility against funding gaps, there would need to be a ranking of the other portfolios that a given portfolio is most fungible with. However, portfolios whose audiences are very similar are least interesting to portfolio authors, so the ranking should be sorted by something like the fungibility per cardinality of the cut set of donors, and here donation swaps would add noise to the calculation. Either it needs to be clear to donors that they should enter the donation of their swap partner as their own donation, or the software should allow them to mark donations as swaps and to enter their partner. The first is probably the better solution for an MVP, but the second may be more foolproof.
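Before turning to the moral-lies scenario: to make the descriptive statistics and the equitable allocation idea above concrete, here is a minimal Python sketch. Everything in it – the data structures, the toy numbers and the least-funded-first reading of "equitable allocation" – is my own illustrative assumption, not part of any existing implementation.

    # Toy data: stated funding gaps, funding received so far (including
    # system-external donations), and the donors backing each charity.
    gap = {"A": 100000, "B": 40000, "C": 250000}
    funded = {"A": 20000, "B": 15000, "C": 5000}
    donors = {"A": {"d1", "d2"}, "B": {"d2"}, "C": {"d3"}}

    def portfolio_gap(p):
        """gap(P) = sum of gap(c) over the charities c in P."""
        return sum(gap[c] for c in p)

    def fungibility(p, q):
        """fungibility(P, P') = sum of gap(c) over the shared charities."""
        return sum(gap[c] for c in p & q)

    def similarity(p, q):
        """similarity(P, P') = number of donors the two portfolios share."""
        dp = set().union(*(donors[c] for c in p))
        dq = set().union(*(donors[c] for c in q))
        return len(dp & dq)

    def compromise(p, q):
        """fungibility / similarity; undefined if no donors are shared."""
        return fungibility(p, q) / similarity(p, q)

    def recommend_recipient(p):
        """Equitable allocation: route the next donation to the charity
        currently furthest below equal funding."""
        return min(p, key=lambda c: funded[c])

    P, Q = {"A", "B"}, {"B", "C"}
    print(portfolio_gap(P))        # 140000
    print(fungibility(P, Q))       # 40000 (only "B" is shared)
    print(similarity(P, Q))        # 1 (only donor "d2" is shared)
    print(compromise(P, Q))        # 40000.0
    print(recommend_recipient(P))  # "B", the least-funded charity in P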
When there is a pair of program areas such that the teams of each see the team of the other as an opposing team, but there is some set of charities that they can agree on, and the available funding is close to or greater than the available funding gaps of their program areas without the consensus charities, the intended result is that donors compromise and add charities to their portfolios that increase the funding gap at the cost of greater fungibility. But the charities are of course value-aligned with these teams. Hence it will seem ethical to them to lie about their funding gaps, inflating them, in order to drive the opposing donors to fund the more fungible funding gaps. Analogously, the opposing team's charities can also inflate their funding gaps; they even have to, lest their cause suffer. When one group defects in such a fashion, the cooperation breaks down – a classic example of the prisoner's dilemma. In practice, the donor coordination solution will be used mostly, or at least at first only, by donors who are all fairly value-aligned, at least to the extent that they value the type of moral plurality that exists among them. Hence this problem may not manifest any time soon.

My sense that such software would be helpful is based on reports from friends, some of whom are donors and some employees of affected charities. Unless, however, there is a sizable number of prospective users who are interested in the project, charities will not have sufficient faith in the growth of the user base to warrant their time investments. Apart from surveys among likely prospective users, one central market research tool needs to be a minimal viable product (MVP). Other donors and nonprofit staff have considered opening a group on a social network such as Facebook to bring together all participants whose actions need coordination. The group would provide a means for communication but would leave any functions beyond that to the participants, to be implemented in a manual, ad-hoc fashion. This way it will become clear which processes are in most urgent need of automation. It will also become clear whether the community is large enough to sustain a more comprehensive solution like the one proposed here.

An important strategic and marketing problem is the following: entering funding gaps will only be worth the effort for charities if they can expect significant donation flows from the donor coordination solution, while for donors the solution is only interesting when the program areas they want to donate to are well represented by charities working on them. One solution may be for administrators to regularly poll information on funding gaps from charities and to invite them to claim their accounts themselves. That way, the administrators have added effort during the startup phase, which will increasingly be outsourced to the charities as donors come to accept the system. To achieve said donor acceptance, it would be helpful if the project were run by a reputable organization with considerable reach, and if the project collected early signups prior to its launch, both in order for it to launch with momentum. Until such an organization has been found, I cannot consider this challenge solved.

1. Please note that in the preceding I have used "intervention" and "program" interchangeably, depending on which term seemed more idiomatic to me in the collocational context.
2. "Respectable," here, is not meant to denigrate any other hypothetical prioritization organizations but is rather meant as a handicap: an organization that is highly respected has to go to great lengths to stress the low quality of its research when it invests only limited staff time – proportionate to the small funding gaps – in evaluating such interventions, lest donors assume that the results are as reliable as the other results the organization puts out. Taking such a risk is rarely warranted for such an organization.
NET: a new framework for the vectorization and examination of network data

Jana Lasser1 & Eleni Katifori2

Source Code for Biology and Medicine volume 12, Article number: 4 (2017)

The analysis of complex networks, both in general and in particular as pertaining to real biological systems, has been the focus of intense scientific attention in the past and present. In this paper we introduce two tools that provide fast and efficient means for the processing and quantification of biological networks like Drosophila tracheoles or leaf venation patterns: the Network Extraction Tool (NET) to extract data and the Graph-edit-GUI (GeGUI) to visualize and modify networks. NET is especially designed for high-throughput semi-automated analysis of biological datasets containing digital images of networks. The framework starts with the segmentation of the image and then proceeds to vectorization using methodologies from optical character recognition. After a series of steps to clean and improve the quality of the extracted data, the framework produces a graph in which the network is represented only by its nodes and neighborhood relations. The final output contains information about the adjacency matrix of the graph, the width of the edges and the positions of the nodes in space. NET also provides tools for statistical analysis of the network properties, such as the number of nodes or the total network length. Other, more complex metrics can be calculated by importing the vectorized network into specialized network analysis packages. GeGUI is designed to facilitate manual correction of non-planar networks, as these may contain artifacts or spurious junctions due to branches crossing each other. It is tailored for, but not limited to, the processing of networks from microscopy images of Drosophila tracheoles. The networks extracted by NET closely approximate the networks depicted in the original images. NET is fast, yields reproducible results and is able to capture the full geometry of the network, including curved branches. Additionally, GeGUI allows easy handling and visualization of the networks.

The analysis of complex networks both in general and in particular as pertaining to real biological systems has been the focus of intense scientific attention in the past and present [1–3]. However, before a network can be analyzed it has to be imaged and its structure has to be distilled in such a way that it is readable by a computer. This is especially important at a time when datasets get increasingly large and manual processing and measurement of quantities like network length or number of branches is not feasible anymore. In the past, several protocols were published which include software and instructions for the segmentation of images and the extraction of network data, with special focus on Physarum [1, 4], leaves [2, 5] or the animal vasculature [6, 7]. Some of these approaches have been criticized for methodological errors [5, 8] or work as toolboxes of proprietary software [7], therefore limiting their applicability. NET and GeGUI provide an alternative software solution with an approach that focuses on speed, open access, degree of automation and versatility. One tool that warrants special mention is NEFI - Network Extraction From Images [9]. It has been developed recently and has strong similarities to NET with regard to its open-source character and focus on high throughput.
The main difference between the two is NEFI's inability to capture the detailed geometry of the network: edges with curves or kinks will be contracted to straight lines. Moreover, NEFI depends on thinning for its network extraction, whereas NET follows a vectorization approach which, in our opinion, increases stability and reduces the occurrence of artifacts. On the other hand, NEFI offers strong and versatile tools for image preprocessing and segmentation which are definitely worth considering when extracting networks from noisy images. Like NEFI, NET is freely available from GitHub [10] and we encourage the reader to download the software and follow the examples shown in this publication.

In general, the problem of extracting a network from an image can be divided into two non-trivial steps:

1. Segmentation into foreground and background.
2. Vectorization and extraction of network data.

The output of the protocol that generates a well-segmented binary image is highly dependent on the quality and characteristics of the original raw image. Therefore we only briefly touch upon this subject here and only explain the technique we used to segment the images that demonstrate the functionality of NET in this publication. We have to emphasize that NET's main purpose and also its main strengths lie after the image segmentation. The script for performing the segmentation in the repository involves standard image processing methods like adaptive thresholding; it is not very sophisticated, nor does it fit every dataset. For datasets with heavy noise, intensity gradients or incomplete networks, this script is going to fail. If the images cannot successfully be processed with the script we provide, we recommend looking into more sophisticated tools like ilastik [11] or Fiji [12], or implementing and tweaking one of the more recent image segmentation methods like GrabCut [13] or CoopCut [14].

The extraction of network data involves the creation of a skeleton of the shapes present in the binary image and the extraction of a graph from that skeleton. This step can easily be generalized, as the starting points - the binary images - all have the same basic characteristics. Previous approaches predominantly used a pixel-based technique called thinning [15, 16] to create a skeleton. In this work we choose a different path and create the skeleton following a vectorization approach. The methodology NET implements is robust to noisy features, fast, completely automated and open source, and it creates graphs that can be easily handled and analyzed. It has already been used for the extraction of network data from Drosophila tracheoles, as shown in Fig. 1 a, b [17, 18], and from leaf venation patterns, as shown in Fig. 1 c, d [19]. It can also be used to track droplets in microfluidics experiments, as in Fig. 1 e, and to identify and quantify crack patterns, Fig. 1 f [20].

Fig. 1: Unprocessed images and vectorized networks. The figure names used here refer to the file names of the images in the online repository. a and b Networks extracted from images tracheole3 and tracheole4. Unprocessed images courtesy of Sara Sigurbjörnsdottir, Leptin Lab, EMBL Heidelberg. c and d Leaf venation patterns extracted from images leaf2 and leaf3. Unprocessed image courtesy of Douglas Daly, New York Botanical Garden. e Droplets on a fiber extracted from image bubbles1. Unprocessed image courtesy of Marcin Makowski, MPIDS, Göttingen. f Crack pattern in clay extracted from image cracks2. Unprocessed image courtesy of Pawan Nandakishore, MPIDS, Göttingen.
For leaf3 only a detail is shown because the leaf as a whole is too large for plotting. The images show how NET is able to extract the network's geometry, including vertex coordinates and edge trajectories, in great detail. Vertices and endpoints of the networks, as indicated by yellow dots, as well as edge radii are extracted by the tool.

As we want to present a method potentially valuable for other research, the main part of this publication is a description of the computational framework. We describe the workflow, enriched with usage examples, and give some technical details where they are important for the output of the software. The overview of the framework is followed by a validation of the results generated by NET.

The following provides an overview of the functionality of NET and GeGUI, demonstrated on the examples of a Drosophila tracheole, a cracked clay surface and a cleared leaf, shown in Fig. 2 a to c. A more thorough description of the technical details is given in the Additional file 1.

Fig. 2: Raw images of natural networks. The corresponding files can be found in the data/originals folder of the repository [10]. a Grayscale microscopy image of the branching structure of a Drosophila tracheole. Image courtesy of Sara Sigurbjörnsdottir. b Digital photograph of cracks in dried clay. Image courtesy of Pawan Nandakishore. c High resolution scan (6400 dpi) and zoomed-in detail of the vascular network of a Protium grandifolium leaf. Cleared leaf sample courtesy of Douglas Daly.

The processing steps are carried out by four specialized scripts written in Python. Both the scripts and the images used in this publication can be found in the Git repository [10]. We strongly encourage the reader to browse NET's online repository, as it is meant to be an integral part of this publication. The publication is written in such a way that all results are reproducible by the reader, and we give usage examples for every processing step. Before running NET, the user will have to install Python 2.7 as well as a number of third-party libraries on their system. Compatibility with previous versions of Python, such as 2.5 and 2.6, is likely but not guaranteed. Some of the code needs to be compiled for the specific platform. NET has been designed to work on Linux, Windows and Mac operating systems. Detailed instructions on how to set up the framework on each platform can be found in the readme file of the repository.

The workflow can be broken down into four main parts:

1. Creating a binary image using either the segmentation script binarize_adaptive.py or a custom tool or method.
2. Extracting the graph from the network using net.py.
3. Optional: manipulating and manually correcting the graph using gegui.py.
4. Extracting network statistics from the graph using analyze.py.

All scripts are run via the command line. The user needs to provide the path to the file to be processed as a required argument; parameters that modify a script's behavior are optional (an illustrative invocation pattern is shown below). The repository is organized into one folder for each processing step, containing the necessary scripts. Additionally there is a folder called data/originals containing all the example images used in this publication. All processing steps described in the following paragraphs can easily be reproduced by applying the aforementioned scripts to the images uploaded in the repository at the data/originals folder of [10].
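Because the command boxes of the original article did not survive text extraction, the general invocation pattern below is a reconstruction from the surrounding description; treat the exact option syntax as an assumption and consult the repository readme for the authoritative form.

    # general pattern: one required input path plus optional parameters
    python <script>.py <path/to/input> [options]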
Step 1: image segmentation

In this step we create a binary representation from the original digital grayscale or color image that contains the network to be analyzed. The goal is to be left with the network as the largest connected structure in the image. Artifacts, stains or noise do not matter as long as they are not connected to the network. How the image has to be processed before a suitable binary image can be created is largely dependent on the characteristics (contrast, definition, resolution) of the image. For some high quality images it might be sufficient to just use thresholding [21] with a constant threshold to separate the network from the background. For most images, though, it is necessary to employ more sophisticated image processing methods before the thresholding can yield acceptable results. We will not cover these methods here, as they are described in depth elsewhere [22]. We provide the basic script we used to create binary images of the examples shown in this publication. These examples all originate from datasets that have been used in real-world scientific projects, the results of which are published, for example, in [19] and [20]. The script used to segment the images is the same for all the images - only the parameters have been tweaked. The only exception to this is the bubbles image, as it involves edge detection rather than segmentation, which is not shown here. The parameters used to create each binary image can be found in a text file at the location of the segmentation script. A detailed description of the processing parameters and their impact on the outcome of the binarization process can be found in the Additional file 1. For example, to binarize the images of the leaves, the user can run the segmentation script on the respective image (an illustrative invocation is given at the end of this step). The resulting binary images are shown in Fig. 3 a to c.

For our example images some manual removal of artifacts or cropping needed to be done: in the binary of the tracheole we cropped away the small part of another cell, visible on the left of the image, that does not belong to the focal tracheole. For the leaf we filled in small holes in the main vein. The binary of the crack pattern was not altered. If manual processing steps are involved, the dataset is not suited for automated high-throughput processing. Nevertheless, if the number of artifacts is negligible, the manual processing steps are not necessary to obtain sufficiently accurate networks.

Fig. 3: Segmented images of natural networks. Figure names refer to file names in the repository. a Binary version of the tracheole1 image. The left subtree has been digitally cropped. b Binary version of the cracks1 image. No manual editing has been performed. c Binary version of the leaf vascular network from image leaf1. Small spurious holes in the main vein have been filled.

Within our segmentation script we provide basic image processing methods to improve the quality of the segmentation. Concerning images that are known to contain long and thin shapes like networks, methods exist to artificially fill in gaps in the network [6, 7]. NET does not implement these methods because they might create spurious links, and the fidelity of the vectorized network might be hard to validate after the application of such a method: a heuristic method such as gap-filling might yield drastically different results on different types of networks or even in different parts of the same network. Such gap-filling algorithms can be incorporated in future versions of NET.
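The exact segmentation command was lost in extraction; the following is a plausible invocation under the assumption that the per-image parameters from the shipped text file are passed on the command line (file extension and parameter placeholders are mine):

    # illustrative only - substitute the parameters from the provided text file
    python binarize_adaptive.py data/originals/leaf1.<ext> <per-image parameters>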
Step 2: extracting the network

In this step we extract the network information from a binary image containing the network structure and distill this information into a weighted planar graph. For this purpose we use the main engine of NET, a fast vectorization-based script which creates a very accurate and easy-to-handle representation of the network. The resulting graph contains information about the spatial positions of the nodes and the lengths and radii of the edges. NET provides a set of options to modify its behavior depending on the type of graph the user is processing. Most of the time, images from the same dataset do not require individual tuning of the script's options: after the options have been adjusted to fit one image of the dataset, the script is able to process all the other images with the same options. The most basic use case of the script requires only the path to the binary image as input. The most important options are described below, while a complete list of options is given in the Additional file 1. Figure 4 a to c show the graphs extracted from the binary images in Fig. 3.

Fig. 4: Graphs extracted by NET from segmented images of natural networks. The original datasets and resulting graphs are not only different with respect to their geometric layout but also have vastly different sizes. The yellow points indicate nodes of the graph, whereas gray lines represent edges. a Graph of the tracheole network. b Graph of the crack pattern. c Graph of the leaf vascular network.

To process, for example, the leaf vascular network leaf1 from image to graph, the user can run net.py on the binary image (an illustrative invocation follows below). For the tracheole network and the crack network we used the parameters p=5, r=1, plt=True and p=5, r=1, plt=True, fformat=png, dpi=2000, respectively.
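An illustrative invocation for step 2, assuming command-line flags named after the options quoted in the text (p, r, plt); the input paths are placeholders and the authoritative flag syntax is in the repository readme:

    # leaf1 with default options; tracheole example with p=5, r=1, plt=True
    python net.py <path/to/leaf1 binary>
    python net.py <path/to/tracheole binary> -p 5 -r 1 -plt True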
In the following we describe the most important parameters that modify the behavior of the script and illustrate their impact on the resulting graph.

Pruning (-p). Even with a relatively smooth binary image, small kinks in the contour might lead to the emergence of surplus branches in the extracted network. These surplus branches tend to be very short and can therefore be dealt with by pruning away branches shorter than a certain length. Enabling the pruning option removes dangling branches that are shorter than the pruning threshold p. Here p is not a fixed length in pixels but rather a number of triangles in the triangulation representing the shape: branches that are shorter than p triangles will be removed, whereas all other branches remain untouched. Thinning-based algorithms either also prune away smaller branches [23] or use feature characteristics to detect and remove them [24]. The pruning option has to be handled with great care. Depending on the type of network, small branches could be a major source of information or they could be completely irrelevant. We advise pruning away only branches that are significantly shorter than the average branch length, thus ensuring that most of the removed branches are artifacts and not genuine features of the network. Figure 5 b to d show the effect of no pruning, too much pruning, and a correct choice of p (in this case p=3) on the network extracted from the original image (detail of tracheole2) shown in Fig. 5 a.

Fig. 5: Effects of the pruning and redundancy parameters on the vectorized graph. The colors yellow, red and purple indicate junction, tip and redundant nodes respectively. In the plots, edge widths have been downscaled by a factor of 0.5 to improve plot clarity. a Detail of the image tracheole2 from which the networks were extracted. b No pruning (p=0), surplus branches are visible. c Too much pruning (p=50), information is lost. d Adequate choice of p=3, surplus branches are gone, genuine information is preserved. e Only junctions and tips are preserved - poor approximation of the geometry (r=0). f Half of the redundant nodes are preserved - fair approximation of the geometry (r=1). g All redundant nodes are preserved - good approximation of the geometry (r=2).

Redundancy (-r). Setting the redundancy parameter r∈{0,1,2} does not change the behavior of the network extraction mechanism but determines in how much detail the final extracted network is saved to the hard drive. Initially, the extracted network contains many points that only support the geometry of the network and have no significance for the topology. These points do not represent any junctions or endpoints, and we therefore call them redundant points.

r=0: Removes all redundant points in the final network representation. This reduces the size and complexity of the resulting data structure significantly and is handy if the geometry of the network is not important or the network is very large.
r=1: Removes half of the redundant points and is therefore a good way to reduce the size of the data structure while still keeping an acceptable approximation of the network's geometry.
r=2: All redundant points will be saved. This is the best option if parameters like the angles at junctions or the curvature of edges need to be measured as accurately as possible.

Keeping more redundant points is always a tradeoff between size and speed on the one hand and an accurate description of the network's geometry on the other. Figure 5 e to g show the final network with no, half and all redundant points, respectively. By default the redundancy is set to zero and only a graph with no redundant nodes is saved.

Step 3: network manipulation

So far NET only works for two-dimensional images. In theory our approach is not constrained to two dimensions; however, the segmentation of three-dimensional images poses several challenges which we have not been able to overcome so far. NET was designed for planar networks - graphs that can be drawn without any edges crossing. The examples used in this publication include projections of Drosophila tracheoles onto a plane, whereas in reality these structures grow in three dimensions. However, the tracheoles follow internal surfaces of the insect [25, 26]; therefore the two-dimensional projection is very close to the structure itself and the extracted graph can be assumed to be a faithful representation of the depicted network. Nevertheless, projection introduces systematic errors such as a distortion of edge lengths as well as potential edge crossings, which will result in spurious nodes. In order to remove the spurious nodes, we have to give up full automation of the process, as it is very challenging to create an algorithm that reliably distinguishes between real and false junctions in the projection. However, the number of spurious junctions, although typically non-zero, is limited. Manual correction of artifacts in such cases is possible and warranted. To make the process of spurious junction elimination and correction of artifacts in the graph fast and easy, we have created a graphical user interface for graph manipulation - the GeGUI. After the network has been successfully extracted and saved, it can be displayed and manipulated using the GUI. It will load the extracted network and superimpose it on the original image.
This is done to facilitate the work of the human operator: with the original image directly beneath the extracted network, it is easier to recognize where mistakes were made and what needs to be done so that the extracted network more closely resembles the real structure. To facilitate usage, when a new node is created, the radius at the position of the new node is automatically measured. An illustration of the rewiring of spurious junctions using GeGUI is shown in Fig. 6 a to e.

Fig. 6: Illustration of spurious node elimination with the GeGUI. a Detail of the extracted graph superimposed onto a detail of the original microscopy image tracheole1 (image in false colors to improve contrast). b Highlighting of all loops still present in the graph to facilitate elimination of spurious junctions. c The nodes that form spurious junctions can be selected individually by clicking. d Deletion of selected nodes. e The final version of the graph with all spurious junctions corrected and no cycles left.

The GeGUI needs three files to operate correctly: the extracted network, the original image and the distance map of the image created during the extraction process (the user can simply enable -dm while running NET to save the distance map). To run GeGUI for a given network, the user provides the script with the path to a folder where all three files for that network are located. In the data/results folder of the repository the user can find the three folders tracheole1, tracheole2 and tracheole3 containing the necessary files resulting from the processing of the example images. To load the graph extracted from tracheole1, the user can run GeGUI on the corresponding folder (an illustrative invocation is given at the end of this step). Using the GUI involves point-and-click commands to mark and create nodes as well as key-press commands to delete nodes, create edges and switch between options. After work on the graph is completed, the new graph will be saved as a .gpickle file and can then be either reloaded and further edited or used for measurements.

Step 4: network statistics

We provide a script, analyze.py, to quantify some select basic properties of any network created either directly by NET or manipulated with GeGUI. The script measures quantities like the number of nodes or the total length of the graph and saves them to a text file. To analyze the properties of the network from tracheole1, the user can run analyze.py on the corresponding graph file (an illustrative invocation is given below). If more complex or combined measurements are needed, the script can be expanded or adapted quite easily, as it is a modular collection of measurement functions.
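Illustrative invocations for steps 3 and 4, reconstructed from the description above; the .gpickle file name is a placeholder, since the exact naming scheme is defined by NET's output:

    # step 3: GeGUI takes the folder holding graph, image and distance map
    python gegui.py data/results/tracheole1
    # step 4: analyze.py takes the saved graph file
    python analyze.py data/results/tracheole1/<graph name>.gpickle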
To assess the quality of networks extracted with NET, we re-extracted known networks from 389 images of Drosophila tracheoles and then compared them to the original graphs. We created an artificially noisy background, using spatially correlated noise, and plotted the known networks on this background with varying intensities for the edges to make them more similar to real-world images. Then we segmented these artificially created images again and extracted networks to compare them with the original networks. The statistics we use for the comparison aim to capture all important aspects of a network's topology and geometry: the total number of nodes N in the graph hints at whether the topology has been correctly resolved. To validate the network's geometry, we compare the total length L, i.e. the sum of the individual edge lengths of the networks. Moreover, we compare the average edge weight \(\bar{R}\) and the ratio of the smallest to the largest edge weight r=min(R)/max(R).

For each observable, the error σ made during extraction is quantified as

$$ \sigma_{S} = \frac{\left|S_{o} - S_{v}\right|}{S_{o}} $$

where \(S_{o}\) is the respective observable measured in the original network and \(S_{v}\) is the observable measured in the artificially created validation network. All values for σ shown in Table 1 are means over all 389 extracted and re-extracted networks. The above-mentioned statistics leave the comparison invariant under node translations and under squeezing or stretching transformations. To rule out that such a transformation has occurred, we need to assess whether the two networks look the same in real space. Therefore we plotted both networks without any translation or fitting to increase overlap and calculated the sum of the pixel-wise differences between both images, normalized by the total number of pixels in the original network. We did this for several dpi values of the plots to exclude influences of the plot resolution on the results. The values given in the table have been calculated for a resolution of 80 dpi.

The comparison of known to extracted networks quantifies the error that is introduced during one pass through NET. \(\sigma_{N}\) and \(\sigma_{L}\) lie in acceptable ranges of not more than 10% deviation from the original network, whereas the edge weights in the re-extracted networks deviate by about 25%. This can be explained as a systematic error stemming from image preprocessing techniques like blurring and binary opening and closing, which affect edge weights far more strongly than edge lengths. The average pixel-wise difference between the original and the re-extracted network is very small, demonstrating that the network's geometry is resolved very well and that NET does not introduce any artificial transformation of node positions.

Table 1: Comparison of node number N, network length L, mean edge weight \(\bar{R}\), ratio of smallest to largest weight r=min(R)/max(R) and pixel-wise difference D of the plotted networks.

We uploaded all images used for the validation as well as all the statistics calculated for the networks to the validation folder in the repository. The full validation process is described in more detail in the Additional file 1 and can be reproduced by the reader by running the validation.sh script. Users who rely heavily on the edge weights extracted by NET need to be wary of the error in \(\bar{R}\) and r, as it is substantial. If edge weights are of major importance, carefully adapting the parameters used for segmenting the images should be considered, to potentially reduce the errors. Furthermore, running the validation.sh script plots histograms of all error distributions. This can be used to make an informed decision about the errors and their impact and meaning for the specific dataset.

Processing speed

Most of the algorithms of NET have been ported to C using Cython [27]; therefore the script is able to handle extremely large networks with millions of nodes. This has already been taken advantage of in the extraction and analysis of leaf venation patterns [19]. To test how long different kinds of networks take to be processed with our framework, in Table 2 we list processing times for networks with a number of nodes (including redundant nodes) in the range of \(10^{2}\) to \(10^{6}\). The network extraction time scales linearly with the number of nodes, and loading and writing times for images can vary depending on the file format. The processing times were measured on an Intel Core i7-3770 CPU @ 3.40 GHz x 8 with 31.4 GB of memory, using one of the eight cores of the CPU.
Visualization of the graphs was disabled for these measurements and all graphs were saved with all redundant nodes. Even for graphs with \(10^{6}\) nodes, the processing time was under 5 min.

Table 2: Processing speed for different types of networks with node numbers in the range \(10^{2}\) to \(10^{6}\).

Although NET offers some functionality with regard to image preprocessing and segmentation, its main strength lies in the extraction of a graph from an already segmented image. Its ability to extract nodes with a degree of two - nodes lying on a line and not at an intersection - enables it to capture the geometry of a network with curved edges. With regard to speed and accuracy, NET performs on a comparable level to other network extraction tools and yields graphs that closely approximate the networks depicted in the images. NET also extracts edge weights based on the diameter of the edges in the image. Although the edge weights extracted by NET depend on the preprocessing and segmentation of the input image, the ability to extract edge weights at all is rare among comparable network extraction tools.

A possibility to further improve NET's speed is the parallelization of the graph creation from the triangulation. For large networks this is the most time-consuming step. We did not explicitly incorporate multi-threading into NET, as the framework is designed with batch processing in mind and can therefore simply be run multiple times on different portions of the dataset to use all available CPU resources and increase efficiency.

In this publication we mostly show example applications from biology, but the usage of the framework need not be limited to this kind of network. It can be used on a broad array of datasets to detect and measure elongated shapes. Moreover, every boundary of a structure can be represented by a network-like structure by applying edge detection, like the Sobel filter, to the image (a minimal sketch is given at the end of this section). The image of the bubbles we use as an example can be created this way by using the script binarize_bubbles.py. This has a wide range of applications, such as the extraction of crack patterns in dried clay or the tracking of bubbles in microfluidics. Several examples of the networks extracted by NET - biological or not - alongside the original images are shown in Fig. 1 a to f.

Nevertheless, we created NET mostly with applications in the life sciences in mind. In our experience, manual processing is prevalent in these laboratories. Our close collaboration with the Leptin Lab at EMBL Heidelberg and Douglas Daly at the New York Botanical Garden has enabled us to simplify workflows for the scientists working with NET and to switch to automated processing. Apart from saving time, automated processing also has the advantage of yielding reproducible results and avoiding human error: given a binary image and a set of processing options, NET will always yield the same results, whereas manual measurement largely depends on the perception of the individual performing the measurement. Automated processing also enables us to measure more complex quantities like the area of the convex hull, curvature and angles, or other topological metrics of the network. More importantly, it also works for extremely large networks where manually measuring metrics across the whole network is simply not feasible. Last but not least, we are actively developing and maintaining NET. In the future one can expect to see the incorporation of more sophisticated segmentation algorithms into the framework as well as a port to Python 3.
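As an aside, here is a minimal sketch of the boundary-to-network idea mentioned above. It is my own illustration using SciPy's Sobel filter, not code from the NET repository, and the relative threshold is an arbitrary assumption:

    import numpy as np
    from scipy import ndimage

    def boundary_binary(image, rel_threshold=0.1):
        """Binary image of object boundaries via the Sobel gradient
        magnitude; the result can be fed to NET like any other
        segmented network image."""
        img = np.asarray(image, dtype=float)
        sx = ndimage.sobel(img, axis=0)
        sy = ndimage.sobel(img, axis=1)
        magnitude = np.hypot(sx, sy)
        return magnitude > rel_threshold * magnitude.max()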
Nevertheless, we are aware that we cannot compete with the most advanced segmentation toolkits available, as our focus is more on the network part of the software. We plan to expand GeGUI, as it has proven to be a very useful tool in our daily work with graphs. In the future we want to include more functionality, like improved drawing of graphs from scratch as well as the ability to deal with disconnected components and multiple graphs at the same time. Furthermore, we want to improve NET's accuracy with regard to the extracted edge weights, as we feel this is the only aspect of NET's performance that is still lacking. We want to emphasize that both the source code for the network extraction and that for the manual graph handling can be found in the Git repository [10]. We expect that these tools will prove especially useful in facilitating the quantitative analysis of large datasets. We are very happy if our software is used and we are open to suggestions regarding improvements of existing functionality, additional features or the fixing of bugs. To communicate with us, we encourage the reader to use the issue tracker provided by GitHub.

Name: Network_extraction. Home page: https://github.com/JanaLasser/network_extraction. Operating systems: Linux, Mac, Windows. Programming language: Python. Licence: Copyright (C) 2015 Jana Lasser, Max Planck Institute for Dynamics and Self-Organization, Göttingen. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.

References:
1. Baumgarten W, Hauser MJB. Detection, extraction, and analysis of the vein network of the slime mould Physarum polycephalum. J Comput Interdiscip Sci. 2010; 1(3):241–9.
2. Bohn S, Andreotti B, Douady S, Munzinger J, Couder Y. Constitutive property of the local organization of leaf venation networks. Phys Rev E. 2002; 65:061914.
3. Lipowsky H, Zweifach B. Network analysis of microcirculation of cat mesentery. Microvasc Res. 1974; 7:73–83.
4. Baumgarten W, Hauser MJB. Computational algorithms for extraction and analysis of two-dimensional transportation networks. J Comput Interdiscip Sci. 2012; 3:107–16.
5. Price CA, Symonova O, Mileyko Y, Hilley T, Weitz JS. Leaf extraction and analysis framework graphical user interface: Segmenting and analyzing the structure of leaf veins and areoles. Plant Physiol. 2011; 155:236–45.
6. Tsai PS, Kaufhold JP, Blinder P, Friedman B, Drew PJ, Karten HJ, Lyden PD, Kleinfeld D. Correlations of neuronal and microvascular densities in murine cortex revealed by direct counting and colocalization of nuclei and vessels. J Neurosci. 2009; 29:14553–70.
7. Kaufhold JP, Tsai PS, Blinder P, Kleinfeld D. Vectorization of optically sectioned brain microvasculature: learning aids completion of vascular graphs by connecting gaps and deleting open-ended segments. Med Image Anal. 2012; 16:1241–58.
8. Sack L, Caringella M, Scoffoni C, Mason C, Rawls M, Markesteijn L, Poorter L. Leaf vein length per unit area is not intrinsically dependent on image magnification: avoiding measurement artifacts for accuracy and precision.
Plant Physiol. 2014; 166:829–38.
9. Dirnberger M, Kehl T, Neumann A. NEFI: Network Extraction From Images. Sci Rep. 2015; 5.
10. Lasser J. GitHub repository for NET. 2016. https://github.com/JanaLasser/network_extraction. Accessed 14 Dec 2016.
11. Sommer C, Straehle C, Köthe U. Ilastik: Interactive learning and segmentation toolkit. In: Proceedings of the eighth IEEE International Symposium on Biomedical Imaging (ISBI): 2011. p. 230–3. doi:10.1109/ISBI.2011.5872394.
12. Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, Pietzsch T, Preibisch S, Rueden C, Saalfeld S, Schmid B, Tinevez JY, White DJ, Hartenstein V, Eliceiri K, Tomancak P, Cardona A. Fiji: an open-source platform for biological-image analysis. Nat Methods. 2012; 9(7):676–82.
13. Rother C, Kolmogorov V, Blake A. GrabCut: Interactive foreground extraction using iterated graph cuts. In: Proceedings of the 31st international conference on computer graphics and interactive techniques (SIGGRAPH): 2004. p. 309–14. doi:10.1145/1015706.1015720.
14. Jegelka S, Bilmes J. Submodularity beyond submodular energies: coupling edges in graph cuts. In: Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition (CVPR): 2011. doi:10.1109/CVPR.2011.5995589.
15. Lam L, Lee SW, Suen CY. Thinning methodologies - a comprehensive survey. IEEE Trans Pattern Anal Mach Intell. 1992; 14:869–85.
16. Zhang TY, Suen CY. A fast parallel algorithm for thinning digital patterns. Commun ACM. 1984; 27:236–9.
17. Lasser J. Network analysis and hidden phenotypes in large biological datasets. Master's thesis, Georg-August University Göttingen, Physics Department. 2015. http://pubman.mpdl.mpg.de/pubman/faces/viewItemOverviewPage.jsp?itemId=escidoc:2300866. Accessed 14 Dec 2016.
18. Sigurbjörnsdóttir S. Complex cell shape: Molecular mechanisms of tracheal terminal cell development in Drosophila melanogaster. PhD thesis, EMBL Heidelberg. 2014. http://hdl.handle.net/1946/20263.
19. Ronellenfitsch H, Lasser J, Daly DC, Katifori E. Topological phenotypes constitute a new dimension in the phenotypic space of leaf venation networks. PLOS Comput Biol. 2015.
20. Nandakishore P, Goehring L. Crack patterns over uneven substrates. Soft Matter. 2016; 12:2253–63.
21. Sahoo PK, Soltani S, Wong AKC. A survey of thresholding techniques. Comput Vis Graphics Image Process. 1988; 41:233–60.
22. Russ JC. The Image Processing Handbook, Sixth Edition. Boca Raton: CRC Press; 2011.
23. Mekada Y, Toriwaki J. Anchor point thinning using a skeleton based on the Euclidean distance transformation. In: Proceedings of the 16th International Conference on Pattern Recognition: 2002. p. 923–6. doi:10.1109/ICPR.2002.1048186.
24. Bag S, Harit G. A medial axis based thinning strategy and structural feature extraction of character images. In: Proceedings of the 17th IEEE Conference on Image Processing (ICIP): 2010. p. 2173–6. doi:10.1109/ICIP.2010.5654311.
25. Wigglesworth VB. The Principles of Insect Physiology. New York: Chapman and Hall; 1972.
26. Ghabrial A, Luschnig S, Metzstein MM, Krasnow MA. Branching morphogenesis of the Drosophila tracheal system. Annu Rev Cell Dev Biol. 2003; 19:623–47.
27. Behnel S, Bradshaw R, Citro C, Dalcin L, Seljebotn DS, Smith K. Cython: The best of both worlds. Comput Sci Eng. 2011; 13:31–9.

EK acknowledges support from the Burroughs Wellcome Fund.
JL and EK acknowledge the contribution of example images from Sarah Sigurbjörnsdóttir (EMBL, Heidelberg), Pawan Nandakishore and Marcin Makowski (both MPIDS, Göttingen). JL acknowledges helpful discussions about the manuscript with Henrik Ronellenfitsch and Lucas Goehring (both MPIDS, Göttingen). Work on this study was part of JL's work on her master's thesis and has therefore been privately funded. The datasets generated and analysed during the current study as well as the source code used to analyze and generate the data are available in the network_extraction repository, https://github.com/JanaLasser/network_extraction.

JL wrote the source code of NET and GeGUI, carried out the validation and benchmarking, and drafted the manuscript. EK contributed the idea for the software, helped in the organization of the project and helped draft the manuscript. Both authors have read and approved the final manuscript.

Author affiliations: Jana Lasser - Max Planck Institute for Dynamics and Self-Organization, Am Fassberg 17, 37077 Göttingen, Germany. Eleni Katifori - Department of Physics & Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104-6396, USA.

Correspondence to Jana Lasser.

Additional file 1: Supplement. (PDF 178 kb)

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Lasser, J., Katifori, E. NET: a new framework for the vectorization and examination of network data. Source Code Biol Med 12, 4 (2017). doi:10.1186/s13029-017-0064-3. Received: 09 March 2016.

Keywords: Network extraction, Leaf venation
Instrumental variable analysis in the context of dichotomous outcome and exposure with a numerical experiment in pharmacoepidemiology

Babagnidé François Koladjo1, Sylvie Escolano1 & Pascale Tubert-Bitter1

BMC Medical Research Methodology volume 18, Article number: 61 (2018)

In pharmacoepidemiology, prescription preference-based instrumental variables (IV) are often used with linear models to address the endogeneity due to unobserved confounders, even when the outcome and the endogenous treatment are dichotomous variables. Using this instrumental variable, we proceed by Monte Carlo simulations to compare the IV-based generalized method of moments (IV-GMM) and the two-stage residual inclusion (2SRI) method in this context. We established the formulas allowing us to compute the instrument's strength and the confounding level in the context of logistic regression models. We then varied the instrument's strength and the confounding level to cover a large range of scenarios in the simulation study. We also explored two prescription preference-based instruments. We found that the 2SRI is less biased than the other methods and yields satisfactory confidence intervals. The proportion of a physician's previous patients who were prescribed the treatment of interest displayed good performance as a proxy for the physician's-preference instrument. This work shows that when analysing real data with dichotomous outcome and exposure, appropriate 2SRI estimation can be used in the presence of unmeasured confounding.

In observational studies, unobserved confounding may bias the estimation of the target effect. Over the last decade, this issue has received growing attention in the field of epidemiological studies attempting to assess adverse effects of drugs, with a few works focusing on instrumental variable (IV) approaches. Instrumental variable estimation is a well-known approach to addressing endogeneity in statistical modelling [1, 2]. Endogeneity often arises when a causal model is poorly specified, thereby introducing a structural bias in the estimation of its parameters. This may result from a measurement error in variables [3], an unobserved variable [4] or an inverse causality between the outcome and some regressors. The general IV method of estimation attempts to remove this bias by using structural equations which incorporate instrumental variables in the model. Several theoretical approaches have been developed to build estimators of the parameters and to study the properties of these estimators in causal models with endogeneity (see [5]). A well-known example is the case of linear models, in which IV estimation leads to estimators with good convergence properties, such as the consistency discussed in [6]. The structural bias can be completely removed in this case.

In pharmacoepidemiology, we most often deal with binary covariables (drug exposure), binary responses (adverse event indicator) and confounding variables, that is, variables that are correlated with both exposure and response. A nonlinear model should be the first choice in this context to match the specific nature of the variables. However, to quantify the risk of an adverse effect of a treatment in the presence of unobserved confounding, researchers investigating IV-based estimation often model the probability of dichotomous events as a linear function of covariables, thus ignoring the basic features of a probability.
Terza and colleagues [7] investigated the influence of misspecification on the estimation when a linear IV model is used in an inherently nonlinear regression setting with endogeneity. A substantial bias was demonstrated in their results. In the context of pharmacoepidemiological modelling, endogeneity is often due to unobserved confounding, and various nonlinear IV methods such as the generalized method of moments (GMM; [8, 9]) or the two-stage residual inclusion (2SRI; [10]) can be used to solve this issue. However, in a review of IV methods, Klungel and colleagues [11] claimed that the GMM estimator with the logistic regression model is not consistent for causal odds ratio (OR) estimation owing to the non-collapsibility of the OR. Consistency is also not guaranteed for the 2SRI when the regression models are nonlinear in both stages. For these nonlinear IV methods, theoretical results exist under very restricted assumptions which do not cover the possible frameworks of real data. Overall, in the context of a binary outcome, several simulation studies mainly investigate two-stage IV methods with a linear first step and a logistic second step, as in [12]. A few articles concern double probit models (Chapman et al. [13]). Very few address the comparison of GMM and two-stage approaches, and none studies the GMM and the two-stage double-logistic approach using prescription preference-based instrumental variables. In a simulation study, we compare the IV-based GMM and the 2SRI methods to the conventional method, which does not account for endogeneity. These comparisons are based on the estimation of the regression coefficient of the exposure variable in a nonlinear logistic model. Our numerical comparison of the methods involves several scenarios with different confounding levels and different instrument strengths, for which computation formulas are established in the context of dichotomous outcome and exposure. We recall the general formula of the covariance matrix for two-step estimation methods and give the corresponding expression for the two-stage nonlinear least squares method in the context of logistic regressions. The paper is organized as follows: we specify the model and describe the methods of estimation that will be analysed. Then we describe the simulation design, the criteria for evaluating the performances of the methods and the results of our simulations. The final sections discuss the results and make some concluding remarks. Details on the computation of the covariance matrix of the 2SRI method, the instrument's strength and the confounding level are to be found in the appendices, as well as a detailed description of the simulation model and supplementary results. We consider a general model with dichotomous outcome and exposure that can be written as
$$ Y = F(\beta_{0} + T\beta_{t}+ X_{1}\beta_{1}+ X_{2}\beta_{2}+ X_{u}\beta_{u}) + e \qquad (1) $$
$$ \mathbb{E}(T|Z,X_{1},X_{2},X_{u}) = r(\alpha_{0} + Z\alpha_{z}+ X_{1}\alpha_{1}+ X_{2}\alpha_{2}+ X_{u}) \qquad (2) $$
with $Y$ and $T$ the binary outcome (event or not) and treatment (T1 or T2) respectively, $X_{1}, X_{2}$ some covariables and $X_{u}$ an unobserved confounder of the outcome and the treatment. The function $F(\cdot)=r(\cdot)$ denotes the logistic distribution function, also known as $\mathrm{expit}(\cdot)$: $\mathrm{expit}(u)=\exp(u)/(1+\exp(u))$, and $e$ denotes the error term. The parameter $\beta=(\beta_{0},\beta_{t},\beta_{1},\beta_{2})$ denotes the vector of unknown parameters to be estimated. Without a confounding variable, all observed regressors are exogenous.
In this case, the true model is written
$$ Y = F(\beta_{0} + T\beta_{t}+ X_{1}\beta_{1}+ X_{2}\beta_{2}) + e \qquad (3) $$
and conventional regression methods are suitable for estimating the parameters $\beta$. We will refer to (3) as the conventional model. If this model is fitted to data in the presence of an unobserved confounder, the estimated coefficients will be biased, with a level of bias depending on the confounding level. As the confounder $X_{u}$ is not independent of the treatment, the residuals of the conventional model are associated with the treatment. This causes endogeneity, so a single regression of the outcome on the observed covariables will fail to estimate $\beta_{t}$ efficiently. A common strategy is to consider another regression model that links the endogenous variable with the others. Equation (2) defines the auxiliary model that predicts treatment $T$ as a function of the covariables $X_{1}$, $X_{2}$, the confounder $X_{u}$ and another variable $Z$. Variable $Z$ denotes the instrumental variable (or instrument) related to the treatment, i.e. a variable correlated with the treatment and which has no direct association with the outcome. The bias due to the unobserved confounder can be significantly reduced by means of the two-stage regression model using a valid instrument. As defined by Johnston and colleagues [14] and Greenland [15], a valid instrument must not be correlated with an unobserved confounder or with the error term in the true model (1). Formally, we assume that the instrument $Z$ meets the following assumptions: $\mathbb{C}ov(Z,Y|T,X_{u},X_{1},X_{2})=0$, $\mathbb{C}ov(Z,T) \neq 0$, $\mathbb{C}ov(Z,X_{u}) = 0$, $\mathbb{C}ov(Z,X_{1}) = 0$ and $\mathbb{C}ov(Z,X_{2}) = 0$. We also assume that the confounder $X_{u}$ is not associated with the covariables $X_{1}$ and $X_{2}$, that is $\mathbb{C}ov(X_{u},X_{1}) = 0$ and $\mathbb{C}ov(X_{u},X_{2}) = 0$. The main goal is to estimate $\beta_{t}$, the treatment coefficient, which is the basis of the risk evaluation; however, an estimation of $\beta_{t}$ is obtained in general by estimating the vector $\beta$, which is discussed below. As already proved in the simple case of a linear model (for which the functions $F$ and $r$ are equal to the identity in Eqs. (1) and (2)), a high association between the treatment and an instrument should improve the IV estimation of $\beta$. Finding a strong instrument is thus a crucial step in all procedures of instrumental variable estimation. In what follows, we first present some specific instrumental variables often used in pharmacoepidemiology, then discuss some IV estimators of $\beta$ used to obtain an estimate of $\beta_{t}$, before addressing the properties of these estimators.

Instrument in pharmacoepidemiology

An instrumental variable can be determined in many ways, provided that it meets the assumptions listed above. One of the problems is to find a valid instrument with a reasonable strength. The strength of an instrumental variable can be defined as resulting from the level of its association with the endogenous treatment. As such, it can be quantified by the correlation coefficient between the treatment and the related instrument. In the wide range of pharmacoepidemiologic applications, we can summarize the various instrumental variables in three categories. Geographical variation. Proximity to the care provider can positively influence access to treatment for a patient compared to others who live far away from health services.
To account for this difference between patients, some researchers (see [16]) consider the distance between a patient and a care provider as an instrumental variable. Although this seems realistic, as there is no direct association between this distance and the occurrence of disease, the presence or absence of health services can be associated with some socioeconomic characteristics. The latter are often considered as unmeasured confounders that call into question the suitability of this instrument. Calendar time. The use of calendar time as an instrument in pharmacoepidemiology often relies on the occurrence of an event that could change the attitude of the physician or patient regarding a treatment. This could be, for example, a change in guidelines or a change due to the arrival of a new drug on the market. The time from that event to the date of treatment defines the calendar time, which clearly affects the outcome of the treatment, since the change in physician or patient attitude will be more pronounced immediately after the event has occurred than later. An example of the use of calendar time as an instrument can be found in [17]. Physician's prescription preference. The most often used instrumental variables in pharmacoepidemiology are preference-based [18–20]. The issue is to compare the effectiveness of two treatments T1 and T2 when the assignment of treatment to the patient is not randomized. This is the case in observational studies, where the prescriber of the treatment (the physician) introduces an effect that influences the outcome via the prescribed treatment. This effect results in an instrumental variable that reflects the influence of care providers on the patient-treatment relationship. Brookhart and colleagues [21] define this instrumental variable as the "Physician's Preference" (PP) and propose to use the treatment prescribed by a physician to his/her previous patient as a proxy of this IV for his/her new patient. The instrumental variable $Z_{i}^{*}$ of the $i$th patient will then be the treatment prescribed to the previous patient of the same physician. As a physician's preference could change over time, Abrahamowicz and colleagues [22] introduced a procedure to detect the change point and build a new proxy of PP that includes not only the treatment prescribed to the previous patient but all the previous prescriptions since the change point. These instrumental variables and some others are presented in more detail in [23], [24, 25] or in [26], with enlightening discussions of their validity. In this work, we carry out a simulation study to examine how the proxy $Z^{*}$ of the physician's preference performs in the context of logistic regression. We also examine the physician's preference-based IV in its continuous form (see [22]), i.e. the proportion ($pr$) of all previous patients of the same physician who were prescribed the treatment of interest. This corresponds to the empirical estimator of the probability for a physician to prefer the treatment of interest.

IV estimation of $\beta_{t}$

Estimating $\beta_{t}$ in model (1) with an unobserved confounder $X_{u}$ amounts to estimating the vector $\beta$ and taking the component corresponding to the treatment $T$. Below, we present two methods that can provide consistent estimation of $\beta$ and hence of $\beta_{t}$: the two-stage residual inclusion (2SRI) method and the generalized method of moments (GMM).
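Before turning to the estimators, the two preference proxies just described can be made concrete with a minimal R sketch. The data layout and column names (physician, date, treat) are assumptions for illustration, not from the paper; treat equals 1 when the drug of interest was prescribed.

```r
library(dplyr)

# Build the two preference-based proxies within each physician's ordered
# prescription history: z_star is the treatment given to the previous patient,
# pr is the proportion of the drug of interest among all previous patients.
make_pp_proxies <- function(df) {
  df %>%
    group_by(physician) %>%
    arrange(date, .by_group = TRUE) %>%
    mutate(
      z_star = lag(treat),           # previous patient's treatment (NA for the first patient)
      pr     = lag(cummean(treat))   # running proportion over all previous patients
    ) %>%
    ungroup()
}
```

The first patient of each physician has no history, so both proxies are undefined (NA) there; in practice such patients are either dropped or handled by an initialization rule.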
The two-stage residual inclusion method is a modified version of the two-stage least squares (2SLS) method used to estimate the parameters in linear models with instrumental variables. As mentioned by Greene [5], the first stage of the 2SLS method predicts the endogenous variable (here, the treatment) using the instruments and the other covariables (Eq. (2)). In the second stage, the endogenous variable is simply replaced by its prediction from the first stage. This method is called two-stage predictor substitution (2SPS) when the first stage is nonlinear. Unlike in the 2SLS, the residuals of the first regression serve as a regressor in the second stage of the 2SRI method. This method also generalizes to nonlinear models, i.e. when the first and second stages are nonlinear. The rationale of this approach is intrinsically related to the form of the true model: sometimes, the prediction equation of the outcome includes the error term of the auxiliary regression. An example is the case where the confounder is the only source of error in the auxiliary regression, as considered in [27]. In a linear model, the 2SPS and 2SRI approaches are equivalent. The GMM is an alternative method for obtaining a reliable estimator of the parameter $\beta$ in a model with an endogenous variable. It is based on the classical assumption
$$ \mathbb{E}(e|T,X_{1},X_{2},X_{u}) = 0 $$
on the error term in Eq. (1). This assumption does not hold in general because the confounder $X_{u}$ is unobserved (i.e. $\mathbb{E}(e|T,X_{1},X_{2}) \neq 0$). In this case, the treatment is endogenous and its coefficient $\beta_{t}$ cannot be consistently estimated. One generally supposes that there exist some observable instruments $w_{1}$ such that $\mathbb{E}(e|w) = 0$ with $w=(w_{1},X_{1},X_{2})$. Typically, $w$ corresponds to the vector of exogenous and endogenous variables with the endogenous regressors replaced by their corresponding instruments. Using the law of iterated expectations, the last condition implies
$$ \mathbb{E}(e\,w) = 0. \qquad (5) $$
The method of moments solves the empirical version of (5) in $\beta$, i.e. $\frac{1}{n}\sum_{i=1}^{n} e_{i}w_{i} = 0$, where $n$ is the sample size, $e_{i}$ the $i$th component of $e$ and $w_{i}$ the $i$th row of $w$. More generally, the GMM minimizes the quadratic form
$$ q(\beta) = \left(\frac{1}{n}\sum_{i=1}^{n} e_{i}w_{i}\right)^{\prime}\Omega\left(\frac{1}{n}\sum_{i=1}^{n} e_{i}w_{i}\right) $$
where $\Omega$ denotes a weighting matrix. As discussed in [28], there are several choices for the matrix $\Omega$, leading to different estimators of $\beta$. The optimal approach is to define $\Omega$ as the inverse of the asymptotic covariance matrix (depending on $\beta$) of the estimator. Some alternative procedures are also suggested by Hansen and colleagues [29]. The properties of the 2SRI are addressed by Terza and colleagues in [27] for the case where the residual, which also acts as the unobserved confounder in the first-stage regression (Eq. (2)), is additive. Under this assumption, and using nonlinear least squares regression in each stage, they show the consistency of the estimator $\hat{\beta}$ from the second stage. Since the confounder is not additive in model (2), the first-stage estimate of a 2SRI will not be consistent, nor will $\hat{\beta}$. The residual from the first stage is indeed an unknown function of an unobserved confounder. There is then a bias that depends on the form of this unknown function when one applies the 2SRI method to the structural model of Eqs. (1) and (2).
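To make the two estimators concrete, here is a minimal R sketch of the double-logistic 2SRI and of the GMM moment conditions. It assumes a data frame dat with columns y, t, x1, x2 and an instrument z (for example the proxy pr); all names are illustrative, and the naive second-stage standard errors of the 2SRI are not valid (see Appendix A).

```r
library(gmm)

# --- 2SRI, both stages logistic ---
first <- glm(t ~ z + x1 + x2, family = binomial, data = dat)
dat$res <- residuals(first, type = "response")   # raw residuals: t - r(w'alpha_hat)
second <- glm(y ~ t + x1 + x2 + res, family = binomial, data = dat)
beta_t_2sri <- coef(second)["t"]
# Note: glm's covariance for 'second' ignores the first-stage estimation error;
# the corrected two-step covariance is the one derived in Appendix A.

# --- GMM with moment conditions E[e * w] = 0, w = (1, z, x1, x2) ---
expit <- function(u) 1 / (1 + exp(-u))
g_mom <- function(theta, data) {
  e <- data$y - expit(theta[1] + theta[2] * data$t +
                      theta[3] * data$x1 + theta[4] * data$x2)
  w <- cbind(1, data$z, data$x1, data$x2)  # endogenous t replaced by the instrument
  e * w                                    # n x 4 matrix of empirical moments
}
fit_gmm <- gmm(g_mom, x = dat, t0 = rep(0, 4))
beta_t_gmm <- coef(fit_gmm)[2]
```

By default the gmm function uses a two-step weighting scheme corresponding to the optimal choice of $\Omega$ discussed above.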
The derivation of the covariance matrix of the 2SRI estimator follows a two-step regression covariance matrix of the form
$$ \mathbb{V}ar(\hat{\beta}) = \frac{1}{n}\left(A_{22}^{-1} S_{2} A_{22}^{-1\prime} + A_{22}^{-1}A_{21} A_{11}^{-1} S_{1}A_{11}^{-1\prime}A_{21}^{\prime} A_{22}^{-1\prime} - A_{22}^{-1} S_{21} A_{11}^{-1\prime}A_{21}^{\prime} A_{22}^{-1\prime}\right), $$
where the computation of the matrices $A$ and $S$ is given in Appendix A. The GMM is a well-documented estimation procedure. Both in linear and nonlinear models, several results on the estimator have already been established. In the econometric literature, the nonlinear GMM with an instrumental variable has received particular attention. In the pioneering work by Amemiya [30], the author demonstrated the consistency and derived the asymptotic distribution of the nonlinear two-stage least squares estimator (NL2SLS). This result provided an important insight into how to handle nonlinear models with endogeneity. Later, Hansen [31] established the asymptotic properties (consistency and asymptotic distribution) of the GMM, a kind of generalization of the NL2SLS. More recently, Cameron and Trivedi [28] reviewed the method and gave details on the computation of the estimator's covariance matrix in some specific cases. Despite these various results on the GMM with an instrumental variable, its performance in terms of bias and variance depends on the validity and strength of the instrument and on the nature of the variables in the model. The computation of the covariance matrix of the GMM is given in [28] and implemented in dedicated software, of which [32] is a good example. Below, we investigate the performances of these methods with numerical experiments.

Simulation design and data generation

A numerical experiment was conducted to investigate and compare several methods of IV estimation in the context of a dichotomous outcome and exposure with endogeneity in pharmacoepidemiology. In this experiment, we cover a wide range of possible scenarios. We choose values of the parameter $\alpha=(\alpha_{0},\alpha_{z},\alpha_{1},\alpha_{2})$ corresponding to given values of the correlation between the variable $T^{*}=\alpha_{0} + PP\,\alpha_{z}+ X_{1}\alpha_{1}+ X_{2}\alpha_{2}+ X_{u}$ and the physician's prescribing preference instrument $PP$. In fact, we keep $\alpha_{0}$, $\alpha_{1}$ and $\alpha_{2}$ fixed and only $\alpha_{z}$ varies from one scenario to the other. The computation of this correlation is given in Appendix B. It somewhat reflects the strength of the instrument when the confounder and the other covariables are kept fixed. For each value of the instrument's strength, there are three levels of confounding, measured by the standard deviation $\sigma_{u}$ of the confounding variable $X_{u}$, $\sigma_{u} \in \{0.5, 1, 1.5\}$, which leads to a set of correlations between $T^{*}$ and $X_{u}$. We then have nine scenarios of instrument strength and confounding level. For each scenario, we generate $ns=1000$ Monte Carlo samples of size $n$, with $n=10000$, $20000$ and $30000$. The number of patients per physician is kept fixed and equals 100; the confounder $X_{u}$ and the covariates $X_{1}$ and $X_{2}$ are assumed to have the normal distributions $N(0,\sigma_{u})$, $N(-2,1)$ and $N(-3,1)$ respectively, and the physician's prescribing preference has the Bernoulli distribution $B(0.7)$. We first simulate the covariables $X_{1}$ and $X_{2}$, the confounder $X_{u}$ and the physician's prescribing preference, which is the same for all the patients of the same physician.
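A minimal R sketch of this first simulation step, for the smallest sample size (100 physicians with 100 patients each gives n = 10000); the object names are illustrative:

```r
n_phys <- 100; n_per <- 100; n <- n_phys * n_per
sigma_u <- 1                                     # medium confounding level
physician <- rep(seq_len(n_phys), each = n_per)
pp <- rep(rbinom(n_phys, 1, 0.7), each = n_per)  # one preference per physician, shared by all patients
x1 <- rnorm(n, mean = -2, sd = 1)
x2 <- rnorm(n, mean = -3, sd = 1)
xu <- rnorm(n, mean = 0,  sd = sigma_u)          # unobserved confounder
# The treatment and outcome are then drawn from the logistic models described next.
```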
Using the already fixed values of the parameters $\alpha$, the probability $p_{i}$ that patient $i$ will be prescribed the drug of interest is calculated by inverting the logit function, i.e. $p_{i} = F(\alpha_{0} + PP_{i}\alpha_{z} + X_{1i}\alpha_{1} + X_{2i}\alpha_{2} + X_{ui})$. The treatment $T_{i}$ of patient $i$ is then generated as a Bernoulli realisation with parameter $p_{i}$. The same procedure as for the treatment is used to simulate, for each patient $i$, the corresponding outcome $y_{i}$. We fixed the parameter $\alpha$ such that the proportion of exposed patients ranges between 2 and 6%, and the prevalence of the event of interest is chosen to be smaller than 5%, to reflect a real-life situation of a new treatment (not frequently prescribed) and a rare adverse event. With the fixed value of $\beta$, we compute the probability $F(\beta_{0} + T_{i}\beta_{t} + X_{1i}\beta_{1} + X_{2i}\beta_{2} + X_{ui}\beta_{u})$ of the event for patient $i$ and then simulate the outcome $y_{i}$. We also explored a more balanced situation in terms of exposure frequencies (between 26 and 45%). Finally, the proxy $Z^{*}$ of $PP$ is the treatment given to the previous patient, and the continuous instrument $pr$ is the proportion of patients of the same physician who were previously prescribed the treatment of interest. More details on the simulation model and the data-generating R code are given in Appendix D.

Estimation methods

For the true and conventional models, which do not assume endogeneity, the classical one-step regression method without an instrumental variable is used. The estimations are performed with the existing regression functions (glm and nls) implemented in the R package stats of the R statistical software (R Development Core Team 2008). For the 2SRI method, a two-step regression is used following the procedure outlined in the section dedicated to the IV estimation procedure. Recall that the covariance matrices of $\hat{\beta}$ from the second-step regression for both methods, as retrieved from the software output, are not valid, since their calculation ignores the fact that some estimated parameters from the first-stage regression are included in the second stage. We therefore re-evaluated these covariance matrices using the sequential two-step estimation procedure, taking into account the fact that an estimated variable is used in the second step. The computation of these covariance matrices is given in Appendix A. For the GMM, the R package gmm proposed by Chaussé [32] is a very helpful tool for computing the parameters and estimating their covariance. The user needs to implement the sample version of the moment condition function of Eq. (5) and, if possible, its gradient; if not, a numerical approximation of the gradient function will be used by the gmm function to perform the estimation. From these estimations, we calculate the asymptotic covariance matrices and the corresponding confidence intervals, whose levels are evaluated below. To evaluate the performances of the various methods, we consider several criteria, including the percentage of relative bias (rB, in %) defined as
$$ \text{rB} = 100 \times \frac{1}{ns}\sum_{j=1}^{ns}\left(\frac{\hat{\beta}_{t}^{(j)}}{\beta_{t}}-1\right); $$
the asymptotic standard deviation, estimated by the square root of the Monte-Carlo mean of the variances $\bar{\hat{\sigma}}^{2} = \frac{1}{ns}\sum_{j=1}^{ns} \hat{\sigma}_{j}^{2}$, with $\hat{\sigma}_{j}^{2}$ the asymptotic variance of $\hat{\beta}_{t}^{(j)}$; and the Monte-Carlo estimator of the true variance $\mathbb{V}ar(\hat{\beta}_{t}) = \frac{1}{ns-1}\sum_{j=1}^{ns}\left(\hat{\beta}_{t}^{(j)}- \bar{\hat{\beta}}_{t}\right)^{2}$.
We also consider the square root of the mean squared error, rMSE, given by
$$ \text{rMSE} = \sqrt{\frac{1}{ns}\sum_{j=1}^{ns}\left(\hat{\beta}_{t}^{(j)}-\beta_{t}\right)^{2}}, $$
and the lower and upper non-coverage probabilities (in %) defined as
$$ Er_{inf} = 100 \times \frac{1}{ns}\sum_{j=1}^{ns} 1_{\left[\beta_{t} < IC_{inf}^{(j)}\right]} $$
$$ Er_{sup} = 100 \times \frac{1}{ns}\sum_{j=1}^{ns} 1_{\left[\beta_{t} > IC_{sup}^{(j)}\right]} $$
where $IC^{(j)} = \left[IC_{inf}^{(j)}; IC_{sup}^{(j)}\right]$ denotes the confidence interval of $\beta_{t}$ from the $j$th Monte-Carlo sample, obtained using the asymptotic distribution of $\hat{\beta}_{t}^{(j)}$. The nonparametric-bootstrap estimates of the variance and non-coverage probabilities are also investigated in these simulations, and the results are analysed below. We complement these criteria with the equivalent of the first-stage F-statistics in linear regression (see [33]) testing instrument exclusion in the treatment choice model. Table 1 summarizes the performances of each method in terms of relative bias (rB), standard deviation (sd), square root of the mean squared error (rMSE) and non-coverage probability ($pval = Er_{inf} + Er_{sup}$). It presents the results related to the instrument $pr$. There were only slight differences between the results with instrument $pr$ and those with $Z^{*}$, so we omit the results related to $Z^{*}$. For some samples, the GMM fails to converge owing to singularity problems in the covariance matrix. The estimations from these samples are simply removed (cases marked '−' in Table 1) for all methods. For the other scenarios, infinite variance estimates or outlier coefficient estimates may be observed; the corresponding samples were dropped for the calculation of the criteria. Table 2 shows the number of samples leading to an outlier estimation (rB > 100% or infinite variance) among the 1000 simulated samples. Figure 1 complements the results in Table 1 by displaying the boxplot distributions of rB. Each series of letters a, b, c and d corresponds to the results for one instrument strength, with each letter corresponding to a method as detailed in the legend of Fig. 1. In Table 1 the sd values refer to the Monte-Carlo-based standard deviation. Except for the GMM estimator, for which outlier values of the asymptotic variance were observed, the Monte-Carlo-based standard deviation, the bootstrap-based estimate (not shown) and the asymptotic estimate were very close.

Figure 1. Relative bias (rB) of the methods. a: true model; b: conventional model; c: 2SRI with instrument pr; d: GMM with instrument pr. Low, Medium and High indicate the corresponding level of confounding, and the instrument strength grows from one a, b, c, d sequence to the next (from left to right)

Table 1 Performances of methods using instrument pr

Table 2 Number of samples among 1000 leading to outliers in GMM estimation

As expected, the relative bias shows that the estimations from the true and conventional models are insensitive to the instrument strength, but the confounding level affects the estimation in the conventional model: the relative bias increases with the level of confounding. In the presence of a strong instrument, the 2SRI tends to improve the estimate when the level of confounding increases. This trend is reversed when the instrument is weak, i.e. the relative bias and the confounding level vary in the same direction.
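For reference, the criteria defined above can be computed from the Monte-Carlo replicates with a small helper of this kind (a hypothetical sketch: bhat and v hold the ns coefficient estimates and their asymptotic variances):

```r
mc_criteria <- function(bhat, v, beta_t, level = 0.95) {
  z  <- qnorm(1 - (1 - level) / 2)
  lo <- bhat - z * sqrt(v)
  hi <- bhat + z * sqrt(v)
  c(rB   = 100 * mean(bhat / beta_t - 1),          # relative bias in %
    sd   = sd(bhat),                               # Monte-Carlo standard deviation
    rMSE = sqrt(mean((bhat - beta_t)^2)),
    pval = 100 * mean(beta_t < lo | beta_t > hi))  # Er_inf + Er_sup
}
```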
The percentage of relative bias of the GMM does not seem very sensitive to the instrument strength: it changes only slightly when the strength of the instrument grows. However, this bias increases with the magnitude of confounding, which shows the impact of endogeneity on this method (see Fig. 1). For the standard deviation (sd) and the square root of the mean squared error (rMSE), the asymptotic results ($n=30000$) show that both criteria decrease when the level of confounding or the instrument strength grows, this being the case for all methods except the GMM, for which the rMSE decreases very slowly or remains almost constant in some cases. This trend confirms the already observed low sensitivity of the GMM to the strength of the instruments used in this simulation. Even though the 2SRI has a larger sd than the other methods in all scenarios, with an impact on the rMSE in several cases, the rMSE of the 2SRI method seems improved with high confounding and a strong instrument. Concerning the non-coverage probabilities (pval), the true-model estimation and the 2SRI displayed an estimated non-coverage probability around the nominal level of 5% in almost all scenarios. Their values ranged from 4 to 6% and reached 7% in rare cases. The non-coverage probability was very large for the other methods, even with large samples: the results showed an overestimation of the coefficient of treatment for the GMM and conventional approaches. Even at low confounding levels, the conventional method yields very poor coverage probabilities, which is coherent with what was observed for the relative bias. We observe that the instrument-strength metric we use varies in the same direction as the F-statistic equivalents (Tables 3 and 4 of Appendix C). Finally, Table 5 in Appendix D summarizes the performances of each method for more balanced exposure frequencies and a large sample size ($n=30000$). In general, the performances are poorer than in the small-exposure-frequency situation. One can also note that numerical problems arise more often (see Table 6 of Appendix D). Nevertheless, the 2SRI is better in terms of relative bias and non-coverage probability than the conventional method and the GMM. Overall, the results are satisfactory for the 2SRI approach, which achieves a level of performance similar to the true model regarding the estimation of the confidence interval in the imbalanced situation. We close this section by pointing out the strong numerical instability observed when computing the GMM estimator during these simulations. This could explain the modest performance displayed by the GMM in the simulation results. In this paper, we focus on the effectiveness of regression coefficient estimation in a context of endogeneity, particularly endogeneity due to unobserved confounding. We are interested in the coefficient of an endogenous treatment, which is the basis of risk assessment in pharmacoepidemiology. Linear models are often used in this context to model the probability of dichotomous events (see [7]). Through a simulation study, we investigate the behavior of parameter estimation in nonlinear models, specifically logistic regression, using some IV-based methods that could potentially be used to overcome the endogeneity issue. The simulation study also made it possible to assess two preference-based instrumental variables in pharmacoepidemiology.
The results reported from the simulation study show that the 2SRI using nonlinear regression at each stage is an interesting alternative for estimating the coefficient of the endogenous treatment in a logistic regression model. It is very simple to implement and yields satisfactory results regarding the bias and the confidence-interval estimate. Among the IV-based methods compared, it yielded the most accurate estimate of the non-coverage probabilities and thus of the confidence intervals. However, the conventional approach behaved better than the 2SRI in some cases, especially when the confounding level was weak. We believe that in these cases the level of confounding is not sufficiently high to require the use of an instrument in the estimation. However, to our knowledge there is still no way to assess the level of unmeasured confounding. For the GMM, the estimation procedure was remarkably unstable. That instability may be attributable to the dichotomous nature of the variables (outcome and exposure) in the context of pharmacoepidemiology with the preference-based instrument. Under these conditions, the GMM approach is not to be recommended in this context unless another instrument has proved to behave satisfactorily with this method. Concerning the instruments under investigation in this study, the proportion of all previous patients of the same physician who were prescribed the treatment of interest proved to be a good proxy of the physician's preference instrument. This instrument was previously considered by Abrahamowicz and colleagues [22] for detecting a possible change point in the physician's preference, and their results also seemed satisfactory. This proxy of the physician's preference instrument is thus a credible alternative to the well-known proxy based only on the treatment prescribed to the single previous patient of the same physician. Even though this work throws light on the performances of IV estimators in the context of a nonlinear model with endogeneity, more work is needed to explore the behavior of these estimators in other contexts where the prevalence of exposure and/or outcome varies. In observational studies, when assessing the effect of drug exposure on a dichotomous outcome, investigators could use appropriate 2SRI estimation to account for unmeasured confounding. This work showed that two logistic regressions as well as a physician's preference proxy for the IV yield satisfactory results.

Appendix A: Asymptotic variance of the nonlinear 2SRI

Using the sequential two-step estimation procedure, the nonlinear 2SRI estimator minimizes the least squares criterion
$$ Q(\beta) = \frac{1}{2n}\sum_{i=1}^{n} \left(y_{i}-F_{\hat{\alpha}}\left(X_{i\hat{\alpha}}^{\prime}\beta\right)\right)^{2} $$
where $\hat{\alpha}$ denotes the nonlinear least squares estimator of $\alpha$ from the first stage. If we set $\eta_{i\beta} = X_{i\hat{\alpha}}^{\prime}\beta$, the first-order condition in $\beta$ is given by
$$ - \frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\hat{\alpha}}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \beta}\left(y_{i}-F_{\hat{\alpha}}(\eta_{i\beta})\right) = 0. \qquad (9) $$
Given that $\hat{\alpha}$ is a consistent estimator of $\alpha_{0}$, the Taylor–Lagrange expansion of (9) around the true value $(\alpha_{0},\beta_{0})$ gives
$$\begin{aligned}
0 ={}& \frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \beta}\left(y_{i}-F_{\alpha}(\eta_{i\beta})\right)_{(\alpha_{c};\beta_{c})} \\
&+ \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial^{2} F_{\alpha}}{\partial \eta_{i\beta}\,\partial \eta_{i\beta}^{\prime}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \alpha}\frac{\partial \eta_{i\beta}}{\partial \beta^{\prime}}\left(y_{i}-F_{\alpha}(\eta_{i\beta})\right)\right)_{(\alpha_{c};\beta_{c})}(\hat{\alpha}-\alpha_{0}) \\
&+ \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial^{2} \eta_{i\beta}}{\partial \alpha\,\partial \beta^{\prime}}\left(y_{i}-F_{\alpha}(\eta_{i\beta})\right) - \frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \alpha}\frac{\partial \eta_{i\beta}}{\partial \beta^{\prime}}\frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\right)_{(\alpha_{c};\beta_{c})}(\hat{\alpha}-\alpha_{0}) \\
&+ \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial^{2} F_{\alpha}}{\partial \eta_{i\beta}\,\partial \eta_{i\beta}^{\prime}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \beta}\frac{\partial \eta_{i\beta}}{\partial \beta^{\prime}}\left(y_{i}-F_{\alpha}(\eta_{i\beta})\right) - \frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \beta}\frac{\partial \eta_{i\beta}}{\partial \beta^{\prime}}\frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\right)_{(\alpha_{c};\beta_{c})}(\hat{\beta}-\beta_{0})
\end{aligned}$$
for some $(\alpha_{c};\beta_{c})$ between $(\hat{\alpha};\hat{\beta})$ and $(\alpha_{0};\beta_{0})$. Under the assumption $\mathbb{E}\left(\frac{\partial Q}{\partial \beta}\right)_{\beta_{0}} = 0$, the terms involving the residuals $\left(y_{i}-F_{\alpha}(\eta_{i\beta})\right)$ in the above expansion, except the first, all tend in probability to zero and we obtain
$$\begin{aligned}
\sqrt{n}(\hat{\beta}-\beta_{0}) \approx{}& \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \beta}\frac{\partial \eta_{i\beta}}{\partial \beta^{\prime}}\frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\right)_{(\alpha_{0};\beta_{0})}^{-1}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \beta}\left(y_{i}-F_{\alpha}(\eta_{i\beta})\right)\right)_{(\alpha_{0};\beta_{0})} \\
&- \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \beta}\frac{\partial \eta_{i\beta}}{\partial \beta^{\prime}}\frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\right)_{(\alpha_{0};\beta_{0})}^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \alpha}\frac{\partial \eta_{i\beta}}{\partial \beta^{\prime}}\frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\right)_{(\alpha_{0};\beta_{0})}(\hat{\alpha}-\alpha_{0}).
\end{aligned}$$
The quantity $\sqrt{n}(\hat{\beta}-\beta_{0})$ may then be written
$$\begin{aligned}
\sqrt{n}(\hat{\beta}-\beta_{0}) \approx{}& \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \beta}\frac{\partial \eta_{i\beta}}{\partial \beta^{\prime}}\frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\right)_{(\alpha_{0};\beta_{0})}^{-1}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \beta}\left(y_{i}-F_{\alpha}(\eta_{i\beta})\right)\right)_{(\alpha_{0};\beta_{0})} \\
&- \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \beta}\frac{\partial \eta_{i\beta}}{\partial \beta^{\prime}}\frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\right)_{(\alpha_{0};\beta_{0})}^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \alpha}\frac{\partial \eta_{i\beta}}{\partial \beta^{\prime}}\frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\right)_{(\alpha_{0};\beta_{0})} \\
&\quad\times \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial r}{\partial \eta_{i\alpha}}(\eta_{i\alpha})\frac{\partial \eta_{i\alpha}}{\partial \alpha}\frac{\partial \eta_{i\alpha}}{\partial \alpha^{\prime}}\frac{\partial r}{\partial \eta_{i\alpha}}(\eta_{i\alpha})\right)_{\alpha_{0}}^{-1}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \frac{\partial r}{\partial \eta_{i\alpha}}(\eta_{i\alpha})\frac{\partial \eta_{i\alpha}}{\partial \alpha}\left(T_{i}-r(\eta_{i\alpha})\right)\right)_{\alpha_{0}}
\end{aligned}$$
with $\eta_{i\alpha} = w_{i}^{\prime}\alpha$. The latter is derived from the asymptotic approximation of $\sqrt{n}(\hat{\alpha}-\alpha_{0})$ using a similar Taylor–Lagrange expansion for the first-stage nonlinear least squares regression. The previous expansion leads to the covariance matrix of $\hat{\beta}$ of the form
$$ \mathbb{V}ar(\hat{\beta}) = \frac{1}{n}\left(A_{22}^{-1} S_{2} A_{22}^{-1\prime} + A_{22}^{-1}A_{21} A_{11}^{-1} S_{1}A_{11}^{-1\prime}A_{21}^{\prime} A_{22}^{-1\prime} - A_{22}^{-1} S_{21} A_{11}^{-1\prime}A_{21}^{\prime} A_{22}^{-1\prime}\right). $$
Under the assumption of independence between observations, the matrices involved in this covariance matrix are given by
$$\begin{aligned}
A_{11} &= p\lim \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial r}{\partial \eta_{i\alpha}}(\eta_{i\alpha})\frac{\partial \eta_{i\alpha}}{\partial \alpha}\frac{\partial \eta_{i\alpha}}{\partial \alpha^{\prime}}\frac{\partial r}{\partial \eta_{i\alpha}}(\eta_{i\alpha})\right)_{\alpha_{0}} \\
A_{21} &= p\lim \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \alpha}\frac{\partial \eta_{i\beta}}{\partial \beta^{\prime}}\frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\right)_{(\alpha_{0};\beta_{0})} \\
A_{22} &= p\lim \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \beta}\frac{\partial \eta_{i\beta}}{\partial \beta^{\prime}}\frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\right)_{(\alpha_{0};\beta_{0})} \\
S_{1} &= p\lim \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial r}{\partial \eta_{i\alpha}}(\eta_{i\alpha})\frac{\partial \eta_{i\alpha}}{\partial \alpha}\, U_{i\alpha}^{2}\,\frac{\partial \eta_{i\alpha}}{\partial \alpha^{\prime}}\frac{\partial r}{\partial \eta_{i\alpha}}(\eta_{i\alpha})\right)_{\alpha_{0}} \\
S_{2} &= p\lim \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \beta}\, U_{i\beta}^{2}\,\frac{\partial \eta_{i\beta}}{\partial \beta^{\prime}}\frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\right)_{(\alpha_{0};\beta_{0})} \\
S_{21} &= p\lim \left(\frac{1}{n}\sum_{i=1}^{n} \frac{\partial F_{\alpha}}{\partial \eta_{i\beta}}(\eta_{i\beta})\frac{\partial \eta_{i\beta}}{\partial \beta}\, U_{i\beta} U_{i\alpha}\,\frac{\partial \eta_{i\alpha}}{\partial \alpha^{\prime}}\frac{\partial r}{\partial \eta_{i\alpha}}(\eta_{i\alpha})\right)_{(\alpha_{0};\beta_{0})}
\end{aligned} \qquad (10)$$
where $p\lim$ denotes the limit in probability, $U_{\alpha} = T-P_{\alpha}$, $U_{\beta} = y-P_{\beta}$, with $P_{\alpha} = r(w\alpha)$ and $P_{\beta} = F_{\alpha}(X\beta)$. An estimation of this covariance matrix can be obtained using the plug-in estimator of each matrix involved in its expression. Let $P_{\hat{\alpha}} = r(w\hat{\alpha})$, $P_{\hat{\beta}} = F_{\hat{\alpha}}(X\hat{\beta})$, $U_{\hat{\alpha}} = T-P_{\hat{\alpha}}$ and $U_{\hat{\beta}} = y-P_{\hat{\beta}}$; then the corresponding estimators of the matrices in Eq. (10) are such that
$$\begin{aligned}
\hat{A}_{11} &= \frac{1}{n}\left[P_{\hat{\alpha}}(1-P_{\hat{\alpha}})w\right]^{\prime}\left[P_{\hat{\alpha}}(1-P_{\hat{\alpha}})w\right], \\
\hat{A}_{21} &= \frac{\hat{\beta}_{u}}{n}\left[P_{\hat{\beta}}^{2}\left(1-P_{\hat{\beta}}\right)^{2}X\right]^{\prime}\left[P_{\hat{\alpha}}(1-P_{\hat{\alpha}})w\right], \\
\hat{A}_{22} &= \frac{1}{n}\left[P_{\hat{\beta}}\left(1-P_{\hat{\beta}}\right)X\right]^{\prime}\left[P_{\hat{\beta}}\left(1-P_{\hat{\beta}}\right)X\right], \\
\hat{S}_{1} &= \frac{1}{n}\left[P_{\hat{\alpha}}(1-P_{\hat{\alpha}})U_{\hat{\alpha}}w\right]^{\prime}\left[P_{\hat{\alpha}}(1-P_{\hat{\alpha}})U_{\hat{\alpha}}w\right], \\
\hat{S}_{2} &= \frac{1}{n}\left[P_{\hat{\beta}}\left(1-P_{\hat{\beta}}\right)U_{\hat{\beta}}X\right]^{\prime}\left[P_{\hat{\beta}}\left(1-P_{\hat{\beta}}\right)U_{\hat{\beta}}X\right], \\
\hat{S}_{21} &= \frac{1}{n}\left[P_{\hat{\beta}}\left(1-P_{\hat{\beta}}\right)U_{\hat{\beta}}X\right]^{\prime}\left[P_{\hat{\alpha}}(1-P_{\hat{\alpha}})U_{\hat{\alpha}}w\right].
\end{aligned}$$

Appendix B: Instrument strength and confounding level

We give below the computation of the instrument strength and the confounding level.

Instrument strength

The strength of an instrument results from the correlation between it and the corresponding endogenous variable. In the model considered here, the strength of an instrument $Z$ is given by $\mathbb{C}orr(T,Z) = \frac{\mathbb{C}ov(T,Z)}{\sqrt{\mathbb{V}ar(T)}\sqrt{\mathbb{V}ar(Z)}}$. As the treatment has a causal link with the variables $Z$, $X_{1}$, $X_{2}$ and $X_{u}$, we have
$$ \mathbb{V}ar(T) = \mathbb{V}ar_{Z}\left[\mathbb{E}(T|Z,X_{1},X_{2},X_{u})\right] + \mathbb{E}_{Z}\left[\mathbb{V}ar(T|Z,X_{1},X_{2},X_{u})\right]. $$
If we consider only the explanatory effect of the instrument in treatment $T$ and replace the other covariables and the confounder by their average effect, we have $\mathbb{V}ar(T) = \mathbb{V}ar_{Z}\left(\frac{1}{1+A_{Z}}\right) + \mathbb{E}_{Z}\left(\frac{A_{Z}}{(1+A_{Z})^{2}}\right)$ with $A_{Z} = \exp(-(\alpha_{0} + Z\alpha_{z} + \mu_{1}\alpha_{1} + \mu_{2}\alpha_{2} + \mu_{u}))$, $\mu_{1} = \mathbb{E}(X_{1})$, $\mu_{2} = \mathbb{E}(X_{2})$ and $\mu_{u} = \mathbb{E}(X_{u})$. This variance may then be written
$$\begin{aligned}
\mathbb{V}ar(T) &= \mathbb{E}_{Z}\left[\left(\frac{1}{1+A_{Z}}\right)^{2}\right] - \left(\mathbb{E}_{Z}\left(\frac{1}{1+A_{Z}}\right)\right)^{2} + \mathbb{E}_{Z}\left(\frac{A_{Z}}{(1+A_{Z})^{2}}\right) \\
&= \mathbb{E}_{Z}\left(\frac{1+A_{Z}}{(1+A_{Z})^{2}}\right) - \left(\mathbb{E}_{Z}\left(\frac{1}{1+A_{Z}}\right)\right)^{2} \\
&= \mathbb{E}_{Z}\left(\frac{1}{1+A_{Z}}\right) - \left(\mathbb{E}_{Z}\left(\frac{1}{1+A_{Z}}\right)\right)^{2}.
\end{aligned}$$
Considering a dichotomous instrument $Z$ having the Bernoulli distribution $B(p)$, we have $\mathbb{E}_{Z}\left(\frac{1}{1+A_{Z}}\right) = \frac{p}{1+A_{1}} + \frac{1-p}{1+A_{0}}$, where $A_{j}$, $j=0,1$, is $A_{Z}$ with $Z$ replaced by $j$. We finally obtain
$$ \mathbb{V}ar(T) = \left(\frac{p}{1+A_{1}} + \frac{1-p}{1+A_{0}}\right)\left(1-\left(\frac{p}{1+A_{1}} + \frac{1-p}{1+A_{0}}\right)\right). $$
Moreover,
$$ \mathbb{E}(T) = \mathbb{E}_{Z}\left(\mathbb{E}(T|Z,X_{1},X_{2},X_{u})\right) = \mathbb{E}_{Z}\left(\frac{1}{1+A_{Z}}\right) = \frac{p}{1+A_{1}} + \frac{1-p}{1+A_{0}} $$
and then $\mathbb{E}(Z)\mathbb{E}(T) = \frac{p^{2}}{1+A_{1}} + \frac{p(1-p)}{1+A_{0}}$. Furthermore, $\mathbb{E}(ZT) = \mathbb{E}_{Z}(\mathbb{E}(ZT|Z,X_{1},X_{2},X_{u})) = \mathbb{E}_{Z}(Z\,\mathbb{E}(T|Z,X_{1},X_{2},X_{u}))$, which leads to $\mathbb{E}(ZT) = \mathbb{E}_{Z}\left(\frac{Z}{1+A_{Z}}\right) = \frac{p}{1+A_{1}}$. The covariance between $Z$ and $T$ is then given by
$$ \mathbb{C}ov(Z,T) = \mathbb{E}(ZT) - \mathbb{E}(Z)\mathbb{E}(T) = \frac{p}{1+A_{1}} - \frac{p^{2}}{1+A_{1}} - \frac{p(1-p)}{1+A_{0}} = p(1-p)\left(\frac{1}{1+A_{1}} - \frac{1}{1+A_{0}}\right). $$
The correlation between $Z$ and $T$ is
$$ \mathbb{C}orr(T,Z) = \frac{\left(\frac{1}{1+A_{1}}-\frac{1}{1+A_{0}}\right)\sqrt{p(1-p)}}{\sqrt{\left(\frac{p}{1+A_{1}}+\frac{1-p}{1+A_{0}}\right)\left(1-\left(\frac{p}{1+A_{1}}+\frac{1-p}{1+A_{0}}\right)\right)}}. \qquad (20) $$
Then, for a given instrument $Z$ with $p$ fixed in (20), $\alpha_{0}$, $\alpha_{z}$, $\alpha_{1}$ and $\alpha_{2}$ can be chosen to reach a value of $A_{Z}$ that leads to a desired value of $\mathbb{C}orr(T,Z)$. Another criterion that could be used to quantify an instrument's strength is the correlation between $T^{*}=\alpha_{0} + Z\alpha_{z}+ X_{1}\alpha_{1}+ X_{2}\alpha_{2}+ X_{u}$ and the instrument $Z$.
Since $\mathbb{C}ov(Z,T^{*}) = \mathbb{C}ov(Z,\alpha_{0} + Z\alpha_{z}+ X_{1}\alpha_{1}+ X_{2}\alpha_{2}+ X_{u}) = \alpha_{z}\mathbb{V}ar(Z)$ and $\mathbb{V}ar(T^{*}) = \alpha_{z}^{2}\mathbb{V}ar(Z) + \alpha_{1}^{2}\mathbb{V}ar(X_{1}) + \alpha_{2}^{2}\mathbb{V}ar(X_{2}) + \mathbb{V}ar(X_{u})$, under the assumptions on these variables we have
$$ \mathbb{C}orr(Z,T^{*}) = \frac{\alpha_{z}\sqrt{p(1-p)}}{\left(\alpha_{z}^{2}p(1-p) + \alpha_{1}^{2}\sigma_{1}^{2} + \alpha_{2}^{2}\sigma_{2}^{2} + \sigma_{u}^{2}\right)^{1/2}} $$
where $\sigma_{i}^{2} = \mathbb{V}ar(X_{i})$ and $\sigma_{u}^{2} = \mathbb{V}ar(X_{u})$.

Confounding level

A straightforward calculation as above leads to the following correlation between $T^{*}$ and $X_{u}$, which expresses the level of confounding:
$$ \mathbb{C}orr(X_{u},T^{*}) = \frac{\sigma_{u}}{\left(\alpha_{z}^{2}p(1-p) + \alpha_{1}^{2}\sigma_{1}^{2} + \alpha_{2}^{2}\sigma_{2}^{2} + \sigma_{u}^{2}\right)^{1/2}}. $$

Appendix C: F-statistics for each scenario

The following tables display the equivalent of the first-stage F-statistics in linear regression (see [33]) testing instrument exclusion in the treatment choice model.

Table 3 Monte Carlo mean of F-statistics in each scenario using the proportion of patients who received the same treatment as proxy of the instrument

Table 4 Monte Carlo mean of F-statistics in each scenario using the treatment prescribed to the last patient as proxy of the instrument

Appendix D: Description of the simulation model and parameter values, and results of the second scenario

For all scenarios, the model generating the binary outcome is the index function
$$ Y_{i} = 1(Y_{i}^{*}-\varepsilon_{i}>0), $$
with $Y_{i}^{*} = \beta_{0} + T_{i}\beta_{t}+ X_{1i}\beta_{1}+ X_{2i}\beta_{2}+ X_{ui}\beta_{u}$ and $\varepsilon_{i}$ following the standard logistic distribution. $T_{i}$ is the observed binary treatment of individual $i$, $X_{1i}$ and $X_{2i}$ some characteristics of patient $i$, and $X_{ui}$ the unmeasured confounding factor. Besides the intercept $\beta_{0}$, the parameters $\beta_{t}$, $\beta_{1}$, $\beta_{2}$ and $\beta_{u}$ are related to $T$, $X_{1}$, $X_{2}$ and $X_{u}$ respectively. We fixed these parameters to $\beta_{0}=-0.6$, $\beta_{t}=3$, $\beta_{1}=1$, $\beta_{2}=1$ and $\beta_{u}=1$ to keep the prevalence of the event below 5%. The treatment choice for the $i$th patient was generated from a Bernoulli model with success probability $p_{i}$, which depends on the patient's characteristics $X_{1}\sim N(-2,1)$ and $X_{2}\sim N(-3,1)$, on a binary instrument $Z\sim B(0.7)$ and on the confounding factor $X_{u}\sim N(0,\sigma_{u})$. The probability $p_{i}$ is given by $p_{i} = F(\alpha_{0} + Z_{i}\alpha_{z} + X_{1i}\alpha_{1} + X_{2i}\alpha_{2} + X_{ui})$, where $F$ denotes the logistic distribution function. The standard deviation $\sigma_{u}$ of the confounding factor takes the values 0.5, 1 and 1.5, corresponding to low, medium and high levels of confounding respectively. For a fixed level of confounding, we varied only the value of $\alpha_{z}$ over $\{1,2,3\}$, each element corresponding to an instrument strength: 1 for a "Low", 2 for a "Moderate" and 3 for a "High" instrument. All other parameters in the treatment choice model remain constant ($\alpha_{0}=0.2$, $\alpha_{1}=2$ and $\alpha_{2}=1.2$). The data are generated using the R function sim2Logit2(), which simulates this compound logistic model with covariates. We investigated the performances of the methods in the context of rare exposures (2 to 6%) and rare events (less than 5%). To check whether the results obtained remain valid in the context of higher exposure (near 50%), we designed new simulations in which only the intercepts $\alpha_{0}$ and $\beta_{0}$ are modified in the previous design.
We held all other parameters constant and fixed $\alpha_{0}=5$ and $\beta_{0}=-2.3$. The prevalence of exposure then ranged between 26% and 45%, whereas that of the event remained lower than 6%. We present in Table 5 the results from this second design over 500 Monte Carlo samples of size 30000. Table 6 displays the number of Monte Carlo samples with outliers.

Table 6 Number of samples among 500 leading to outliers in GMM estimation

Abbreviations: 2SLS: Two-Stage Least Squares; 2SPS: Two-Stage Predictor Substitution; 2SRI: Two-Stage Residual Inclusion; GMM: Generalized Method of Moments; Inserm: Institut national de la santé et de la recherche médicale; IV: Instrumental Variable; OR: Odds Ratio; PP: Physician Preference; pval: non-coverage probability; rB: Relative Bias; rMSE: root Mean Square Error; UVSQ: Université de Versailles Saint-Quentin-en-Yvelines.

Rassen JA, Schneeweiss S, Glynn RJ, Mittleman MA, Brookhart MA. Instrumental variable analysis for estimation of treatment effects with dichotomous outcomes. Am J Epidemiol. 2009; 169(3):273–84. https://doi.org/10.1093/aje/kwn299.
Gowrisankaran G, Town RJ. Estimating the quality of care in hospitals using instrumental variables. J Health Econ. 1999; 18(6):747–67. https://doi.org/10.1016/S0167-6296(99)00022-3.
Carroll RJ, Ruppert D, Stefanski LA. Measurement Error in Nonlinear Models. Monographs on Statistics and Applied Probability. London: Chapman & Hall; 1995.
Nestler S. Using instrumental variables to estimate the parameters in unconditional and conditional second-order latent growth models. Struct Equ Model A Multidiscip J. 2015; 22(3):461–73.
Greene WH. Econometric Analysis, 7th edition, international edition. Boston: Pearson; 2012. pp. 259–89.
Bound J, Jaeger DA, Baker RM. Problems with instrumental variables estimation when the correlation between the instruments and the endogenous explanatory variable is weak. J Am Stat Assoc. 1995; 90(430):443–50.
Terza JV, Bradford WD, Dismuke CE. The use of linear instrumental variables methods in health services research and health economics: a cautionary note. Health Serv Res. 2008; 43(3):1102–20.
Foster EM. Instrumental variables for logistic regression: an illustration. Soc Sci Res. 1997; 26(4):487–504. https://doi.org/10.1006/ssre.1997.0606.
Terza JV. Estimation of policy effects using parametric nonlinear models: a contextual critique of the generalized method of moments. Health Serv Outcome Res Methodol. 2006; 6(3):177–98. https://doi.org/10.1007/s10742-006-0013-0.
Cai B, Small DS, Have TRT. Two-stage instrumental variable methods for estimating the causal odds ratio: analysis of bias. Stat Med. 2011; 30(15):1809–24. https://doi.org/10.1002/sim.4241.
Klungel OH, Martens EP, Psaty BM, Grobbee DE, Sullivan SD, Stricker BHC, Leufkens HGM, de Boer A. Methods to assess intended effects of drug treatment in observational studies are reviewed. J Clin Epidemiol. 2004; 57:1223–31. https://doi.org/10.1016/j.jclinepi.2004.03.011.
Palmer TM, Sterne JAC, Harbord RM, Lawlor DA, Sheehan NA, Meng S, Granell R, Smith GD, Didelez V. Instrumental variable estimation of causal risk ratios and causal odds ratios in Mendelian randomization analyses. Am J Epidemiol. 2011; 173(12):1392.
Chapman CG, Brooks JM. Treatment effect estimation using nonlinear two-stage instrumental variable estimators: another cautionary note. Health Serv Res. 2016; 51(6):2375–94. https://doi.org/10.1111/1475-6773.12463.
Johnston KM, Gustafson P, Levy AR, Grootendorst P. Use of instrumental variables in the analysis of generalized linear models in the presence of unmeasured confounding with applications to epidemiological research. Stat Med. 2008; 27(9):1539–56. https://doi.org/10.1002/sim.3036.
Greenland S. An introduction to instrumental variables for epidemiologists. Int J Epidemiol. 2000; 29(4):722–9. https://doi.org/10.1093/ije/29.4.722.
McClellan M, McNeil BJ, Newhouse JP. Does more intensive treatment of acute myocardial infarction in the elderly reduce mortality? Analysis using instrumental variables. JAMA. 1994; 272(11):859–66. https://doi.org/10.1001/jama.1994.03520110039026.
Cain LE, Cole SR, Greenland S, Brown TT, Chmiel JS, Kingsley L, Detels R. Effect of highly active antiretroviral therapy on incident AIDS using calendar period as an instrumental variable. Am J Epidemiol. 2009; 169(9):1124–32. https://doi.org/10.1093/aje/kwp002.
Brookhart MA, Schneeweiss S. Preference-based instrumental variable methods for the estimation of treatment effects: assessing validity and interpreting results. Int J Biostat. 2007; 3(1):14. https://doi.org/10.2202/1557-4679.1072.
Brooks JM, Chrischilles EA, Scott SD, Chen-Hardee SS. Was breast conserving surgery underutilized for early stage breast cancer? Instrumental variables evidence for stage II patients from Iowa. Health Serv Res. 2003; 38(6p1):1385–402. https://doi.org/10.1111/j.1475-6773.2003.00184.x.
Johnston SC. Combining ecological and individual variables to reduce confounding by indication: case study, subarachnoid hemorrhage treatment. J Clin Epidemiol. 2000; 53(12):1236–41. https://doi.org/10.1016/S0895-4356(00)00251-1.
Brookhart MA, Wang PS, Solomon DH, Schneeweiss S. Evaluating short-term drug effects using a physician-specific prescribing preference as an instrumental variable. Epidemiology. 2006; 17(3):268–75.
Abrahamowicz M, Beauchamp ME, Ionescu-Ittu R, Delaney JAC, Pilote L. Reducing the variance of the prescribing preference-based instrumental variable estimates of the treatment effect. Am J Epidemiol. 2011; 174(4):494–502. https://doi.org/10.1093/aje/kwr057.
Baiocchi M, Cheng J, Small DS. Instrumental variable methods for causal inference. Stat Med. 2014; 33(13):2297–340. https://doi.org/10.1002/sim.6128.
Uddin MJ, Groenwold RHH, de Boer A, Gardarsdottir H, Martin E, Candore G, Belitser SV, Hoes AW, Roes KCB, Klungel OH. Instrumental variables analysis using multiple databases: an example of antidepressant use and risk of hip fracture. Pharmacoepidemiol Drug Saf. 2016; 25:122–31. https://doi.org/10.1002/pds.3863.
Uddin MJ, Groenwold RHH, de Boer A, Afonso ASM, Primatesta P, Becker C, Belitser SV, Hoes AW, Roes KCB, Klungel OH. Evaluating different physician's prescribing preference based instrumental variables in two primary care databases: a study of inhaled long-acting beta2-agonist use and the risk of myocardial infarction. Pharmacoepidemiol Drug Saf. 2016; 25:132–41. https://doi.org/10.1002/pds.3860.
Brookhart MA, Rassen JA, Schneeweiss S. Instrumental variable methods in comparative safety and effectiveness research. Pharmacoepidemiol Drug Saf. 2010; 19(6):537–54. https://doi.org/10.1002/pds.1908.
Terza JV, Basu A, Rathouz PJ. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling. J Health Econ. 2008; 27(3):531–43. https://doi.org/10.1016/j.jhealeco.2007.09.009.
Cameron AC, Trivedi PK. Microeconometrics: Methods and Applications. Cambridge: Cambridge University Press; 2005. pp. 166–220. https://doi.org/10.1017/CBO9780511811241.
Hansen LP, Heaton J, Yaron A. Finite-sample properties of some alternative GMM estimators. J Bus Econ Stat. 1996; 14(3):262–80.
Amemiya T. The nonlinear two-stage least-squares estimator. J Econ. 1974; 2(2):105–10. https://doi.org/10.1016/0304-4076(74)90033-5.
Hansen LP. Large sample properties of generalized method of moments estimators. Econometrica. 1982; 50(4):1029–54.
Chaussé P. Computing generalized method of moments and generalized empirical likelihood with R. J Stat Softw. 2010; 34(1):1–35.
Stock JH, Wright JH, Yogo M. A survey of weak instruments and weak identification in generalized method of moments. J Bus Econ Stat. 2002; 20(4):518–29.

This work was supported by grants from the Fondation pour la Recherche Médicale ("Iatrogénie des médicaments 2013") and the Agence Nationale pour la Recherche ("Appel à projet générique 2015"). The funding bodies had no role in the contents and the writing of the manuscript. The R programs for the simulations are available from the corresponding author.

Biostatistics, Biomathematics, Pharmacoepidemiology and Infectious Diseases (B2PHI), Inserm, UVSQ, Institut Pasteur, Université Paris-Saclay, 16 Avenue Paul Vaillant-Couturier, Villejuif, 94807, France: Babagnidé François Koladjo, Sylvie Escolano & Pascale Tubert-Bitter.

BFK and PTB conceived the study. BFK performed the simulation study and drafted the manuscript. BFK, PTB and SE interpreted the results, read and approved the final manuscript. Correspondence to Babagnidé François Koladjo.

Koladjo, B.F., Escolano, S. & Tubert-Bitter, P. Instrumental variable analysis in the context of dichotomous outcome and exposure with a numerical experiment in pharmacoepidemiology. BMC Med Res Methodol 18, 61 (2018). https://doi.org/10.1186/s12874-018-0513-y

Keywords: Physician's prescription preference; Observational studies; Simulation study
Narrowband Spikes Observed During the 13 June 2012 Flare in the 800 – 2000 MHz Range
by Marian Karlický et al.

Solar radio spikes can provide detailed information about plasma processes in solar flares on kinetic scales. Among them, the decimetric spikes belong to the most interesting ones, because in some cases they are recorded close to the starting frequency of Type III bursts and in relation to hard X-ray emissions. Stepanov et al. (1999) and Bárta and Karlický (2001) presented models where spike frequencies correspond to those of the upper-hybrid waves, and Willes and Robinson (1996) presented a model with spike frequencies corresponding to the Bernstein modes. Luo et al. (2021) proposed that spikes are generated at the termination shock formed above the flare arcade, where a diffuse supra-arcade fan and multitudes of plasma downflows are present. This interpretation is close to the idea that the narrowband dm spikes are generated by superthermal electrons in the magnetohydrodynamic turbulence of magnetic reconnection outflows (Karlický, Sobotka, and Jiřička, 1996).

There is an important additional aspect of the narrowband dm spikes that is not frequently considered in theoretical models: spikes appear in frequency bands, and the ratio of band frequencies is noninteger (1.06 – 1.54) (Krucker and Benz, 1994). In the paper by Karlický, Benáček, and Rybák (2021), this result was not only confirmed, but the very narrow bands of spikes in the 7 November 2013 event enabled a successful fit of the band frequencies by the Bernstein modes.

We investigated spike events observed during the 13 June 2012 flare with the 800 – 2000 MHz Ondřejov radiospectrograph, with a time resolution of 0.01 s and a frequency resolution of 4.7 MHz. Moreover, we analyzed the relation between the radio emission and AIA/SDO UV, HMI/SDO, and RHESSI X-ray observations. The radiospectrogram is shown in Figure 1, and the numbered spike events are classified in Table 1 into three categories according to their appearance in the radio spectrum: spikes distributed in a broad band or bands (SB), spikes distributed in zebra-like bands (SZ), and spikes distributed in broad and narrow bands (SBN). Characteristic ratios of neighboring spike bands are determined if more than one band appears. Examples of the spike types are shown in Figure 2.

Figure 1. The radio spectrum observed at 13:10:00 – 13:30:00 UT during the 13 June 2012 flare by the Ondřejov radiospectrograph.

Table 1. Basic parameters of groups of spikes during the flare in Figure 1: times, frequency range, types, maximal number of frequency bands (MNFB), and characteristic ratios of neighboring bands of spikes (BR).

Figure 2. Spikes observed during the 13 June 2012 flare. The radio spectrum shows spikes distributed in a) a broad band or bands (SB), b) zebra-like bands (SZ), and c) broad and narrow bands (SBN).

We confirm that the dm spikes are observed mostly during the impulsive flare phase. We searched for relations between spike groups and intensity variations in selected flare locations using AIA/SDO observations, and found an interesting relation for AIA intensities taken from one end of the sigmoidal flare structure, where many magnetic-field lines of flare loops were concentrated. We found that the similar autocorrelations of SZ and SBN favor the same generation mechanism for these spikes. Because of its similarity to Karlický et al. (2021), we interpret SZ and SBN as generated in Bernstein modes. We supported this interpretation by simulations of Bernstein modes.
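As a back-of-the-envelope check on this interpretation, the short sketch below applies the standard gyrofrequency and plasma-frequency relations to the values reported for this event in the comparison that follows (band separation of about 220 MHz, band ratios 4.4 – 7.0, density of about 8.4 × 10^9 cm^-3). It is an illustrative consistency check, not part of the authors' analysis.

```python
import numpy as np

# Standard relations (convenient mixed units):
#   electron gyrofrequency: f_ce [MHz] ~ 2.8 * B [G]
#   plasma frequency:       f_pe [Hz]  ~ 8980 * sqrt(n_e [cm^-3])
# If the SZ/SBN bands are Bernstein modes near harmonics s * f_ce,
# the band separation is f_ce itself and directly estimates B.

band_sep_MHz = 220.0                    # SZ band separation reported below
B_G = band_sep_MHz / 2.8                # ~79 G, matching the quoted estimate

n_e = 8.4e9                             # quoted plasma density, cm^-3
f_pe_MHz = 8980.0 * np.sqrt(n_e) / 1e6  # ~820 MHz, inside the 800-2000 MHz band

# Reading the quoted band ratios as harmonic numbers s places the bands in-band:
for s in (4.4, 5.1, 6.0, 7.0):
    print(f"s = {s}: band near {s * band_sep_MHz:.0f} MHz")
print(f"B ~ {B_G:.0f} G, f_pe ~ {f_pe_MHz:.0f} MHz")
```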
We compared the SZ type and a zebra observed on 1 August 2010 in the same frequency range, and found a few differences:
- The separation frequency is about 220 MHz in the SZ case, but only around 24 MHz in zebras.
- The autocorrelation variability in time is significantly higher for zebras than for the SZ type.
- The ratios of the SZ spike bands are (4.4, 5.1, 6.0, 7.0) and (4.0, 4.9, 6.0), while for zebras the ratio is ~52.
- The separation frequency of neighboring zebra stripes changes in different ways for different pairs.

This behavior excludes the possibility that zebras are generated in one emission source, as the SZ-type spikes are by Bernstein modes. In accordance with our previous ideas (Bárta and Karlický, 2001), we conclude that the SZ and SBN are formed in a region of magnetic reconnection outflow, where the plasma is in a turbulent state. Using the Bernstein-mode model, we estimated the mean magnetic-field strength and plasma density in the SZ source as about 79 G and $8.4\times 10^{9}$ cm$^{-3}$, respectively.

Based on the recent paper by Karlický et al., 'Narrowband spikes observed during the 13 June 2012 flare in the 800-2000 MHz range', Solar Physics 297, 54 (2022), DOI: https://doi.org/10.1007/s11207-022-01989-4

References
Bárta, M. and Karlický, M. 2001, A&A, 379, 1045–1051.
Karlický, M., Sobotka, M., and Jiřička, K. 1996, Sol. Phys., 168, 2.
Karlický, M., Benáček, J., and Rybák, J. 2021, ApJ, 910, 2.
Krucker, S. and Benz, A. O. 1994, A&A, 285, 1038–1046.
Luo, Y., Chen, B., Yu, S., Bastian, et al. 2021, ApJ, 911, 1.
Stepanov, A. V., Kliem, B., Krüger, A., et al. 1999, ApJ, 524, 2.
Willes, A. J. and Robinson, P. A. 1996, ApJ, 467, 465.
Opposite polarities of ENSO drive distinct patterns of coral bleaching potentials in the southeast Indian Ocean

Ningning Zhang1,2, Ming Feng1,3, Harry H. Hendon4, Alistair J. Hobday5 & Jens Zinke6,7,8,9

Scientific Reports volume 7, Article number: 2443 (2017)

Episodic anomalously warm sea surface temperature (SST) extremes, or marine heatwaves (MHWs), amplify ocean warming effects and may lead to severe impacts on marine ecosystems. MHW-induced coral bleaching events have been observed frequently in recent decades in the southeast Indian Ocean (SEIO), a region traditionally regarded as resilient to global warming. In this study, we assess the contribution of the El Niño-Southern Oscillation (ENSO) to MHWs across the mostly understudied reefs of the SEIO. We find that in extended summer months, the MHWs at tropical and subtropical reefs (divided at ~20°S) are driven by opposite ENSO polarities: MHWs are more likely to occur at the tropical reefs during eastern Pacific El Niño, driven by enhanced solar radiation and a weaker Australian Monsoon, and in some cases alleviated by positive Indian Ocean Dipole events, and at the subtropical reefs during central Pacific La Niña, mainly caused by increased horizontal heat transport and in some cases reinforced by local air-sea interactions. Madden-Julian Oscillations (MJO) also modulate MHW occurrences. Projected future increases in ENSO and MJO intensity with greenhouse warming will enhance thermal stress across the SEIO. Implementing forecasting systems for MHWs can help anticipate future coral bleaching patterns and prepare management responses.

Acute disturbances to coral reefs and adjacent marine ecosystems are increasing in frequency and severity, altering coral cover and the composition of marine benthic communities1,2,3,4. Episodic warm sea surface temperature (SST) extremes, termed marine heatwaves (MHWs), which are often associated with the most powerful climate phenomenon, the El Niño-Southern Oscillation (ENSO)5, are among the most important environmental pressures that threaten the sustainability of coral reefs6,7,8. A large number of the world's coral reefs were recently influenced by one of the most severe El Niño events on record, which lasted from early 2014 to mid-2016 and was dubbed the "Godzilla" El Niño9, causing severe coral bleaching worldwide, including in regions previously considered resilient to these effects, such as the southeast Indian Ocean (SEIO)10.

The SEIO hosts a number of biodiverse coastal fringing and offshore oceanic atoll coral reefs4, 11 (Fig. 1a), and was considered to be under relatively low levels of duress from local human impacts and climate variability12, 13. In recent decades, however, the region has suffered from major coral bleaching events. During the 1997–98 El Niño event, the tropical Scott Reef was affected by extremely high thermal stresses of more than 13 degree heating weeks (DHW), causing 70–90% coral mortality6, 14. In contrast, the nearshore and most oceanic atoll reefs in the subtropical SEIO (south of 20°S) escaped bleaching during the 1997–98 event. During the 2010–11 La Niña, widespread coral bleaching was recorded across more than 12° of latitude south of the Montebello and Barrow Islands along the Western Australian (WA) coastline15,16,17. During the recent 2015–16 El Niño event, extremely high summer SSTs in the tropical SEIO (Fig.
1a) were reported to have caused 60–90% of the shallow coral community to bleach through April 201610. These tropical-subtropical spatial contrasts suggest that bleaching response patterns in the SEIO are associated with opposite polarities of ENSO. In addition, SST anomalies associated with other climate modes, such as the Indian Ocean Dipole (IOD), Madden-Julian Oscillation (MJO) and Interdecadal Pacific Oscillation (IPO), can also dampen or exacerbate the ENSO thermal stress levels in the SEIO18,19,20,21,22,23. Some reefs in the SEIO, such as the Cocos Keeling Island24, 25, experienced bleaching during both ENSO polarities.

Figure 1. (a) Averaged SST anomalies in the SEIO during December 2015–April 2016. Stars denote locations of representative SEIO reefs. (b–e) Composited SST anomalies for the El Niño events during January 1982–April 2015 and (f–i) for the La Niña events in the same time period. The white contours and dots indicate anomalies exceeding the 95% significance level based on a two-tailed Student's t test. JJA = June to August, SON = September to November, DJF = December to February of year 1, MAM = March to May of year 1. Figures are plotted using MATLAB R2015b (http://www.mathworks.com/). The maps in this figure are generated by MATLAB R2015b with M_Map (a mapping package, http://www.eos.ubc.ca/~rich/map.html).

There have been comprehensive studies of the subtropical SEIO MHWs, in terms of the roles of remote tropical Pacific drivers and local air-sea interaction26,27,28; however, research on the MHWs in the tropical SEIO is less comprehensive, apart from some case studies10, 29. There is still a lack of knowledge on how the SST at different coral reefs responds to the opposite polarities and different flavours of ENSO, such as eastern and central Pacific ENSO (EP-type and CP-type ENSO). Impacts from the interplay of the IOD and the MJO also need to be clarified. In this study, we investigate the SST response patterns to different polarities and flavours of ENSO in the extended austral summer season (December–April), when SSTs are seasonally warmest and thus anomalies associated with ENSO can cause SST extremes. In particular, this study determines during which flavour and polarity of ENSO MHWs preferentially occur, and reveals the relative contributions of the atmospheric and oceanic processes responsible for these SST extremes. This insight can be used to predict future coral bleaching patterns and prepare management responses30.

Partial correlation patterns show that SST variations in the tropical region of the SEIO are better correlated with ENSO variations that have maximum amplitude in the eastern Pacific, commonly referred to as EP-type ENSO or canonical ENSO31, 32. On the other hand, SST variations in the subtropical region of the SEIO are better correlated with ENSO variations that have maximum amplitude in the central Pacific, commonly referred to as CP-type ENSO, or ENSO Modoki31, 32 (Supplementary Fig. S1). Hence, we define El Niño and La Niña years based on EP-type El Niño and CP-type La Niña, respectively. We note, however, that these two types of ENSO are not completely independent of each other.

In the developing year of an El Niño, SST anomalies in the Indian Ocean are indicative of positive IOD events, which often develop during austral winter/spring in conjunction with El Niño33 (Supplementary Table S1; 2015 is defined as a neutral IOD year because the IOD eastern pole is used to define the IOD events), and cold SST anomalies appear off northern Australia (Fig.
1b,c). As El Niño matures in austral summer, SST anomalies off northern Australia switch to positive, and then strengthen and expand over a large tropical region during autumn (Fig. 1d,e). Cold SST anomalies persist southward of 20°S along the WA coast. After removing the El Niño effects, a positive IOD event causes significant cold SST anomalies of more than 0.4 °C/°C in the tropical SEIO in the preceding austral winter, which at some tropical locations persist into the preceding spring (Fig. S2). Although an IOD event may not have direct effects on the summer SST anomalies in the tropical SEIO (Fig. S2), it may precondition the coupled air-sea processes that drive the SST anomalies and thus may alleviate the extent of warming in summer when it co-occurs with an El Niño event (Supplementary Fig. S2).

During La Niña, positive SST anomalies first establish off the northwest coast of Australia during winter and spring, and then shift southward down the coast during austral summer, with notably high SST anomalies southward of 20°S at the mature phase of La Niña (Fig. 1f–i). Lower than normal SSTs occupy the northern coast in summer.

Among the 10 representative tropical/subtropical reef sites in the SEIO (Fig. 1a), those located in the tropics (CKI, CI, AR, SR, KIM and RS) tend to have positive SST anomalies in the extended austral summer months during El Niño, whereas those in the subtropics (MBI, NIN, SB and HAI) tend to have positive SST anomalies in austral summer months during La Niña (Fig. 2a–j). The magnitude of anomalous warming varies with the intensity of each event and is influenced by intraseasonal variations (Supplementary Fig. S3). Positive SST anomalies also occur at the CKI in early summer during La Niña, which makes it susceptible to coral bleaching during both polarities of ENSO. An asymmetry exists between the magnitudes of positive SST anomalies caused by the opposite polarities of ENSO: positive SST anomalies at the tropical reefs during El Niño are usually weaker than those at the subtropical reefs during La Niña.

Figure 2. (a–j) Composite SST anomalies during El Niño and La Niña events at ten selected reefs. The error bars denote 95% confidence intervals. (k) Time series of the standardised SST anomaly averaged over extended summer months (December–April) at the Scott Reef and the standardised November–January averaged Niño3 index; (l) time series of the standardised SST anomaly averaged over extended summer months (December–April) at the Ningaloo Reef, the standardised Niño4 index and the IPO index. The red (blue) stars in (k,l) denote the developing years of El Niño (La Niña). Figures are plotted using MATLAB R2015b (http://www.mathworks.com/).

Summer SST anomalies at the Scott Reef are positively correlated with the November–January averaged Niño3 index (correlation coefficient r = 0.5), and the SST anomalies show a warming tendency, with a maximum reaching more than 1 °C in the summer of 2015/16 (Figs 2 and S4a). Conversely, the summer SST anomalies at the Ningaloo Reef are negatively correlated with the Niño4 index (correlation coefficient r = −0.68), and all the La Niña years are associated with positive SST anomalies, with a maximum reaching 2 °C in 2010–11 (Figs 2l and S4b). The 2010–11 Ningaloo Niño MHW was associated with an extraordinarily strong La Niña in the Pacific Ocean and amplified by local air-sea coupling16, 26, and was followed by another two warmer than normal summers (Fig. 2), causing severe coral bleaching events in the region15,16,17.
The SST anomaly series at the Ningaloo Reef also shows significant decadal variation, with more warming events after the 1997–98 El Niño (Fig. 2), in conjunction with a phase shift of the IPO towards a cold phase that acts to promote warm conditions off the WA coast21, 22, 27, 34.

Composite patterns may smooth out intense short-term events, such that the MHW event at the Scott Reef in 1997–98 was not well captured by the seasonally averaged SST anomalies (Figs 2 and S4a). To characterise sub-seasonal features, we quantify episodic MHW events using daily data, defined as periods when surface temperatures are warmer than the 90th percentile and last for five or more days, based on a 30-year historical baseline period35. At the Scott Reef, the longest MHW event (28 days) occurred during March–April 1998, with a mean intensity of 1.63 °C (Fig. 3a) and a cumulative intensity of 46 °C·days (equal to 6.6 DHWs; Supplementary Table S1). The cumulative intensity is similar to the DHW metric, which is frequently used to monitor coral bleaching; a value of 4 DHWs can cause bleaching to occur36. In the recent 2015–16 El Niño, frequent MHW events with both long durations and strong intensities occurred across all the tropical sites, in sharp contrast to almost none at the subtropical sites (Supplementary Table S1; Figs 3 and S5). On the other hand, at the Ningaloo Reef, the MHW events were mostly associated with La Niña events (Fig. 3b), such as during the 1998–99, 2007–08 and 2010–12 La Niña events. The longest MHW at the Ningaloo Reef developed from September 2010 through January 2011 and was then followed by another MHW event lasting almost a month in February 2011, amounting to a total cumulative intensity of 361 °C·days in the extended summer months (Supplementary Table S1). Overall, there were also increasing frequencies of short-term MHWs during 1982–2016 (Fig. 3a,b).

Figure 3. (a,b) Duration (bars) and mean intensity (dots) of MHW events peaking in extended summer months (December–April) at the Scott and Ningaloo Reefs, respectively. The time axis corresponds to the peak month of each MHW event. The red (blue) stars denote the developing years of El Niño (La Niña). (c–g) Composited characteristics of MHW events at the 10 reefs in the extended summer months (December–April) of El Niño and La Niña events. The error bars denote 95% confidence intervals. Figures are plotted using MATLAB R2015b (http://www.mathworks.com/).

The MHWs at opposite ENSO polarities show distinct contrasts between the tropical and subtropical reefs (Fig. 3c–e). For the tropical reefs, the frequency, MHW days, and cumulative intensity are all higher during El Niño, while for the subtropical reef sites these properties are significantly higher during La Niña. However, the duration and mean intensity of individual events do not show such contrasts between opposite polarities of ENSO (Supplementary Fig. S6). Anomalously warm conditions at the tropical reefs are mainly caused by local atmospheric forcing in response to the EP-type El Niño teleconnection, which increases the surface heat flux into the ocean33 (Fig. 4a1–f1), whereas warm conditions at the subtropical reefs are mainly attributed to oceanic processes such as increased horizontal heat advection26, 27, 37 (Fig. 4a2–f2).

Figure 4. (a1–f1) Mixed layer heat budget terms during spring (SON) and summer (DJF) for El Niño events, and (a2–f2) for La Niña events. SON = September to November, DJF = December to February. Figures are plotted using MATLAB R2015b (http://www.mathworks.com/).
The maps in this figure are generated by MATLAB R2015b with M_Map (a mapping package, http://www.eos.ubc.ca/~rich/map.html).

The increase of surface heat flux in the tropical SEIO during El Niño is mainly due to shortwave radiation and latent heat anomalies (Supplementary Fig. S7), stemming from the variation of the Walker circulation33, 38. During El Niño, the eastward displacement of the ascending branch of the Walker circulation in the Pacific results in less convective cloud coverage over the Maritime Continent and eastern tropical Indian Ocean, as indicated by positive outgoing longwave radiation (OLR) anomalies (Fig. 5b,e), thereby increasing the incident shortwave radiation by more than 15 W m−2 in the tropical SEIO (Supplementary Fig. S7)39. This reduced convection acts to enhance the trade winds across the tropical SEIO, and the onset of the Australian summer monsoon is delayed and weakened40 (Fig. 5g,h). In the tropical region off the south coast of the Java–Lesser Sunda Islands, the anomalous easterlies during El Niño years thus act to increase latent heat loss and promote cooling in the pre-monsoon season (spring), but act to promote warming during summer, after monsoon onset38,39,40 (Supplementary Fig. S7). Modification of the Walker circulation caused by EP-type ENSO can extend into the tropical region of the SEIO, while the variation centres of convection for CP-type ENSO are mainly confined within the tropical Pacific; hence SST variability in the tropical SEIO is more closely correlated with the EP-type El Niño31, 32.

Figure 5. (a–f) Climatological mean, anomalous 925 hPa wind (arrows) and OLR anomalies (shaded) during spring and summer of opposite ENSO polarities. Anomalies exceeding the 90% significance level based on a two-tailed Student's t test are displayed (black arrows for significant wind anomalies and grey for insignificant ones). The cyan box in (d) indicates the region where the Australian monsoon index is defined. (g) The composites of the Australian monsoon wind index. The red and blue circles are onset dates of the Australian summer monsoon during El Niño and La Niña, respectively. The shading denotes the corresponding 95% confidence intervals. (h) Lagged regression of the 850 hPa zonal component of wind and wind speed averaged in the box indicated in (g) onto the Niño3 index in January, and the seasonal cycle of the 850 hPa zonal component of wind averaged in the box. Figures are plotted using MATLAB R2015b (http://www.mathworks.com/). The maps in this figure are generated by MATLAB R2015b with M_Map (a mapping package, http://www.eos.ubc.ca/~rich/map.html).

The subtropical MHWs, the "Ningaloo Niño", are closely tied to the variability of the Leeuwin Current (LC), which flows against the prevailing southerly wind along the west coast of Australia26,27,28, 41. The LC is affected by ENSO from the western Pacific via coastal Kelvin waves: its transport decreases (increases) in El Niño (La Niña) years26, 27, 41, bringing less (more) heat southward and thus decreasing (increasing) the local SST. The LC variations appear to be more sensitive to the SST gradient between the Niño4 region and the western tropical Pacific7, 42; thus SST variations in the subtropical region of the SEIO are more closely related to the CP-type ENSO. In addition to the response to remote forcing from the Pacific, amplification of the LC transport and coastal downwelling due to the collapse of the southerly winds also enhances the warming off WA in some years26,27,28, 43.
During the locally-amplified Ningaloo Niño events, such as in 2010–11 and 2011–12, cyclonic wind anomalies developed to the northwest of Australia in September (Supplementary Fig. S8) and extended south-eastward in summer, reducing the wind speed and evaporation-related heat loss (Supplementary Fig. S8) and inducing coastal downwelling anomalies, thus reinforcing the positive SST anomalies27, 28, 37, 43. The MHWs at the subtropical reefs in these two years were unusually strong and long-lasting (Fig. 3, Supplementary Table S1, Supplementary Fig. S5).

MJOs also affect SST variations in the SEIO by modulating the local atmospheric conditions20, 44. A clear MJO footprint can be seen in the distributions of MHW peaks across the different MJO phases (Fig. 6). At the tropical reefs, such as the Ashmore and Scott Reefs, there tend to be more MHWs peaking in MJO phases 2–5 and fewer in the other phases. Phases 2–5 coincide with suppressed convection, anomalous heat flux into the ocean and Ekman-induced downwelling off the northwest Australian shelf, thus increasing the local SST20. The other phases of the MJO (active convection) are associated with surface heat flux cooling and Ekman-induced upwelling20, and are thus not favourable for MHWs. For the subtropical reefs, such as the Ningaloo Reef and Shark Bay, there tend to be more MHWs peaking in MJO phases 3–6, although the modulation is weaker than for the northern reefs because the MJO has its largest impacts near the equator. The increase of MHWs in phases 3–6 stems from the accumulation of warm water off the northwest Australian shelf, which triggers southward propagating Kelvin waves that increase the LC heat transport20.

Figure 6. Distribution of the MHW peaks in extended summer months as a function of MJO phase at (a) the tropical Ashmore (circles) and Scott Reefs (stars) and (b) the subtropical Ningaloo Reef (circles) and Shark Bay (stars). The percentages stand for the proportions of MHW peaks in that phase of strong MJO and are boxed in red (blue) where they are significantly greater (less) than the average occurrence (8.3%). Only MHWs with durations of less than 20 days are shown. Figures are plotted using MATLAB R2015b (http://www.mathworks.com/).

In the tropical region of the SEIO, strong MHWs tend to occur in extended summer months during EP-type El Niño, modulated by preceding cooling in winter and spring when positive IOD events co-occur; in the subtropical region, long-lasting and strong MHWs tend to occur in extended summer months during CP-type La Niña. In both regions, there tend to be more MHWs in the suppressed-convection phases of the MJO. In response to increasing greenhouse gases, extreme El Niño and La Niña events are projected to increase45,46,47. MJOs are also projected to be more active under global warming48, 49. Thus, there will be an increased likelihood of MHW disturbances in the SEIO, posing a greater threat to the ecosystem. IOD events are normally phase-locked with austral winter and spring, but there are also intrinsic IOD events that mature earlier, in austral winter50, 51. Further research is needed to distinguish the effects of different types of IOD events on MHWs in the SEIO. Our work shows that under the influence of different flavours of ENSO, modified by other climate modes such as the IOD, MJO and IPO and by local air-sea interactions, the coral bleaching potentials vary across the SEIO, and bleaching may occur more frequently in a warming world52, 53.
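The phase bookkeeping behind a figure like Fig. 6 can be sketched as follows. This is a minimal illustration that assumes daily RMM phase/amplitude series and MHW peak dates as inputs, with "strong MJO" taken as RMM amplitude > 1 (a common convention; the exact criterion used in the paper is not restated here), and it omits the significance test applied in the figure.

```python
import numpy as np

def mhw_peak_phase_distribution(peak_idx, rmm_phase, rmm_amp, amp_min=1.0):
    """Proportion of MHW peaks falling in each of the 8 RMM phases.

    peak_idx  : integer indices of MHW peak dates into the daily RMM series
    rmm_phase : daily RMM phase (integers 1..8)
    rmm_amp   : daily RMM amplitude; amplitude > amp_min marks 'strong MJO'
    """
    phases = rmm_phase[peak_idx]
    strong = rmm_amp[peak_idx] > amp_min
    counts = np.array([np.sum(strong & (phases == p)) for p in range(1, 9)])
    props = counts / counts.sum()
    # With no MJO modulation each phase would hold ~1/8 (8.3%) of the peaks;
    # the departures returned below are what gets tested for significance.
    return props, props - 1.0 / 8.0
```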
The performance of both dynamical and statistical prediction models suggests that ENSO predictive skill has been improving, implying predictability of ENSO-induced variability such as MHWs37. For instance, the Predictive Ocean Atmosphere Model for Australia (POAMA) predicted well the patterns of SST and sea surface height (SSH) anomalies in March 201654, providing the possibility of predicting an increased risk of MHWs at least one season ahead. Such a lead time allows time to plan the deployment of monitoring activities30 and can serve as an early warning system for potential bleaching55. Forecasting MHWs can also reduce coral recovery times by extending the time available to implement ways of mitigating local-scale stress from human-related activities at severely impacted sites, such as restrictions on fishing or tourism activities. In the future, seasonal forecasts can also be used to support the preparation of active interventions that require long lead times, such as predator control programs (e.g. removal of crown of thorns) or out-planting of new coral propagules, which may enhance reef resilience in the face of a changing climate.

The NOAA 1/4° daily optimally interpolated SST (OISST) data for 1 January 1982 – 31 May 2016 used in this analysis are acquired from NOAA's National Centers for Environmental Information (NCEI) at https://www.ncdc.noaa.gov/oisst/data-access. The Niño3 and Niño4 indexes are obtained from the Ocean Observations Panel for Climate (OOPC). The IPO index is obtained from NOAA's Earth System Research Laboratory (http://www.esrl.noaa.gov/psd/data/timeseries/IPOTPI/)56. The Real-time Multivariate MJO series (RMM) are obtained from the International Research Institute (IRI) for Climate and Society Data Library (http://iridl.ldeo.columbia.edu/SOURCES/.BoM/.MJO/.RMM/); the same 8 phases defined by Wheeler and Hendon (2004) are used57. The 925 hPa and 850 hPa winds for the period January 1982 – April 2016 are obtained from the NCEP Reanalysis 2 data provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their web site at http://www.esrl.noaa.gov/psd/58. The NOAA interpolated outgoing longwave radiation (OLR) data are used as a proxy for cloud cover. Heat fluxes from the ERA-Interim reanalysis during January 1982 – March 2016 are used to conduct temperature budget analyses in the upper 50 m59.

Ten representative tropical/subtropical reef sites in the SEIO are selected for this study: the Cocos Keeling Island (CKI), Christmas Island (CI), Ashmore Reef (AR), Scott Reef (SR), Kimberley (KIM), Rowley Shoals (RS), Montebello and Barrow Islands (MBI), Ningaloo Reef (NIN), Shark Bay (SB) and Houtman Abrolhos Island (HAI) (Fig. 1a).

Definition of El Niño and La Niña years
The Niño3 and Niño4 indexes are used to identify the EP-type El Niño and CP-type La Niña events, respectively (Supplementary Table S1), since the summer SST variations in the tropical and subtropical SEIO are sensitive to the SST anomalies in the eastern and central equatorial Pacific, respectively7, 27, 53 (Supplementary Fig. S1). The El Niño years are defined using the Niño3 index, following the NOAA convention, as years when the running 3-month mean SST anomaly in the Niño3 region is above 0.5 °C for at least 5 consecutive months; the La Niña years are identified when the running 3-month mean SST anomaly in the Niño4 region is below −0.5 °C for at least 5 consecutive months. A minimal sketch of this bookkeeping is given below.
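This sketch assumes a monthly Niño-region SST anomaly array as input; the same function serves for the Niño3/El Niño and Niño4/La Niña cases. The helper name and the edge handling of the running mean are illustrative choices, not part of the paper.

```python
import numpy as np

def enso_event_months(nino_anom, threshold=0.5, min_run=5):
    """Flag months inside El Nino / La Nina events.

    nino_anom : monthly SST anomaly in the relevant Nino region
                (Nino3 for EP-type El Nino, Nino4 for CP-type La Nina).
    An event requires the running 3-month mean to stay beyond
    +/- threshold (deg C) for at least min_run consecutive months.
    """
    smoothed = np.convolve(nino_anom, np.ones(3) / 3.0, mode="same")

    def long_runs(mask):
        flagged = np.zeros(mask.size, dtype=bool)
        start = None
        for i, m in enumerate(mask):
            if m and start is None:
                start = i
            elif not m and start is not None:
                if i - start >= min_run:
                    flagged[start:i] = True
                start = None
        if start is not None and mask.size - start >= min_run:
            flagged[start:] = True
        return flagged

    return long_runs(smoothed > threshold), long_runs(smoothed < -threshold)
```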
Definition of a marine heatwave event
A marine heatwave event is defined as a discrete, prolonged, anomalously warm water event in a particular location35. The baseline climatology is defined using all data within an 11-day window centred on each time of year over a 30-year period, and a high percentile threshold (the 90th percentile in our paper) is then used to obtain the threshold value. A percentile threshold, rather than an absolute value above the climatology, is used because the magnitude of SST variability varies between regions. "Prolonged" means a MHW must last for five or more days, and "discrete" means that two events with a gap of 2 days or less are considered as one continuous event. The MHW days in this paper are the number of days on which anomalous warming occurs in the austral summer period (December–April), and the cumulative intensity is the integral of intensity over the duration of the event. These detection rules are sketched in code at the end of this section.

Mixed-layer temperature budget
The temperature budget in the upper layer can be estimated from the following equation60:

$$\frac{\partial T}{\partial t}=\frac{Q_{0}-q_{d}}{\rho C_{p}h_{u}}+\mathrm{Residual}$$

where T is the SST, ρ is the reference density (1026 kg m−3), C_p is the specific heat capacity of seawater (4000 J kg−1 K−1), and h_u is the upper layer depth (set to 50 m). The terms in equation (1) are referred to as the SST tendency, surface heat flux, and residual, respectively. The residual term includes the effects of horizontal advection and entrainment, and is mostly dominated by advection in our study area. Q_0 is the net surface heat flux, including the shortwave radiation, longwave radiation, and latent and sensible heat fluxes:

$$Q_{0}=SW+LW+LH+SH,$$

and q_d is the downward radiative flux across the base of the upper layer, estimated as

$$q_{d}=SW\left[R\,e^{-h_{u}/r_{1}}+(1-R)\,e^{-h_{u}/r_{2}}\right],$$

where SW is the shortwave radiative heat flux, and R, r_1 and r_2 are coefficients that depend on water turbidity, selected here to be 0.58, 0.35 and 23, respectively60.

Australian monsoon index and coastal wind index
The Australian monsoon is described by the Australian monsoon index (AUSMI), defined as the 850 hPa zonal wind averaged over the area (5°S–15°S, 110°E–130°E)28. The onset date of the Australian summer monsoon is defined following the criteria of Kajikawa et al.38. The coastal wind index (CWI) is defined as the 925 hPa meridional wind anomalies averaged over the region from 108°E to the coast, 28°S to 22°S, in austral summer (December–February)28. It indicates whether or not there are significant northerly wind anomalies off the WA coast28. According to the CWI, the ten La Niña events since 1982 can be classified into two types: locally amplified and non-locally amplified modes28, 37, 43. For the non-locally amplified mode, there were no significant alongshore wind anomalies and thus no contribution to the warming off WA (Supplementary Fig. S8). The local alongshore wind anomalies are largely affected by the Australian summer monsoon and the Southern Annular Mode (SAM); anomalous northerly alongshore winds in the locally amplified mode were often accompanied by a weaker Australian summer monsoon, compared to the non-locally amplified mode, and a positive SAM28, 43.
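Pulling the detection rules together, the following is a minimal sketch of the MHW definition described above (after the hierarchical definition of Hobday et al.35). It assumes a daily SST series as a pandas Series with a DatetimeIndex, and the climatology handling is deliberately simplified (a plain day-of-year wrap for the 11-day window, no smoothing of the threshold).

```python
import numpy as np
import pandas as pd

def detect_mhws(sst, baseline, pct=90, min_len=5, max_gap=2):
    """Minimal MHW detector following the definition above.

    sst      : daily SST as a pandas Series with a DatetimeIndex
    baseline : the 30-year climatology slice of the same series
    """
    # Day-of-year threshold and climatology: percentile / mean of baseline
    # values inside an 11-day window centred on each calendar day.
    bvals = baseline.to_numpy()
    bdoy = baseline.index.dayofyear.to_numpy()
    thr, clim = np.empty(367), np.empty(367)
    for d in range(1, 367):
        win = np.abs((bdoy - d + 183) % 366 - 183) <= 5   # wraps at year end
        thr[d] = np.percentile(bvals[win], pct)
        clim[d] = bvals[win].mean()

    doy = sst.index.dayofyear.to_numpy()
    above = sst.to_numpy() > thr[doy]

    # Exceedance runs as half-open [start, end) index pairs.
    edges = np.flatnonzero(np.diff(np.r_[0, above.astype(np.int8), 0]))
    runs = [[s, e] for s, e in zip(edges[::2], edges[1::2])]

    merged = []                        # gaps of <= max_gap days join events
    for s, e in runs:
        if merged and s - merged[-1][1] <= max_gap:
            merged[-1][1] = e
        else:
            merged.append([s, e])

    rows = []
    for s, e in merged:
        if e - s >= min_len:           # "five or more days"
            anom = sst.to_numpy()[s:e] - clim[doy[s:e]]
            rows.append({"start": sst.index[s],
                         "duration_days": int(e - s),
                         "mean_intensity_degC": anom.mean(),
                         "cumulative_intensity_degC_days": anom.sum()})
    return pd.DataFrame(rows)
```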
Hoegh-Guldberg, O. et al. Coral reefs under rapid climate change and ocean acidification. Science. 318, 1737–1742, doi:10.1126/science.1152509 (2007). Hughes, T. P., Graham, N. A., Jackson, J. B., Mumby, P. J. & Steneck, R. S. Rising to the challenge of sustaining coral reef resilience. Trends Ecol. Evol. 25, 633–642, doi:10.1016/j.tree.2010.07.011 (2010). Kerswell, A. P. Biodiversity patterns of benthic marine algae. Ecology. 87, 2479–2488, doi:10.1890/0012-9658 (2006). Wernberg, T. et al. An extreme climatic event alters marine ecosystem structure in a global biodiversity hotspot. Nat. Clim. Change. 3, 78–82, doi:10.1038/nclimate1627 (2013). McPhaden, M. J., Zebiak, S. E. & Glantz, M. H. ENSO as an integrating concept in earth science. Science. 314, 1739–1745, doi:10.1126/science.1132588 (2006). Gilmour, J. P., Smith, L. D., Heyward, A. J., Baird, A. H. & Pratchett, M. S. Recovery of an isolated coral reef system following severe disturbance. Science. 340, 69–71, doi:10.1126/science.1232310 (2013). Zinke, J. et al. Coral record of southeast Indian Ocean marine heatwaves with intensified Western Pacific temperature gradient. Nat. Commun. 6, 8562, doi:10.1038/ncomms9562 (2015). Lough, J. M., Cantin, N., Benthuysen, J. & Cooper, T. Environmental drivers of growth in massive Porites corals over 16 degrees of latitude along Australia's northwest shelf. Limnol. Oceanogr. 61, 684–700, doi:10.1002/lno.10244 (2016). Kintisch, E. How a "Godzilla" El Niño shook up weather forecasts. Science. 352, 1501–1502, doi:10.1126/science.352.6293.1501 (2016). Australian Institute of Marine Sciences, AIMS northwest Australian coral bleaching update http://www.aims.gov.au/docs/media/latest-news/-/asset_publisher/EnA5gMcJvXjd/content/aims-northwest-australia-coral-bleaching-update (May 2016). Veron, J. E. N. & Marsh, L. M. Hermatypic corals of Western Australia: records and annotated species list. Records of the Western Australian Museum. Supplement. 29, 1–136 (1988). Halford, A. R. & Caley, M. J. Towards an understanding of resilience in isolated coral reefs. Global Change Biol. 15, 3031–3045, doi:10.1111/j.1365-2486.2009.01972.x (2009). Speed, C. W. et al. Dynamic stability of coral reefs on the west Australian coast. PLoS ONE. 8, e69863, doi:10.1371/journal.pone.0069863 (2013). Wilkinson, C. R. Status of Coral Reefs of the World (ed. Wilkinson, C. R.) 314–315; Australian Institute of Marine Science (2004). Moore, J. A. et al. Unprecedented mass bleaching and loss of coral across 12° of latitude in Western Australia in 2010–11. PLoS ONE 7, e51807, doi:10.1371/journal.pone.0051807 (2012). Pearce, A. F. & Feng, M. The rise and fall of the "marine heat wave" off Western Australia during the summer of 2010/11. J. Mar. Syst. 111–112, 139–156, doi:10.1016/j.jmarsys.2012.10.009 (2013). Depczynski, M. et al. Bleaching, coral mortality and subsequent survivorship on a West Australian fringing reef. Coral Reefs. 32, 233–238, doi:10.1007/s00338-012-0974-0 (2013). Kug, J. S., An, S. I., Jin, F. F. & Kang, I. S. Preconditions for El Niño and La Niña onsets and their relation to the Indian Ocean. Geophys. Res. Lett. 32, L05706, doi:10.1029/2004GL021674 (2005). Luo, J. J. et al. Interaction between El Niño and extreme Indian Ocean Dipole. J. Clim. 23, 726–742, doi:10.1175/2009JCLI3104.1 (2010). Marshall, A. G. & Hendon, H. H. Impacts of the MJO in the Indian Ocean and on the Western Australian coast. Clim. Dyn. 42, 579–595, doi:10.1007/s00382-012-1643-2 (2014). Feng, M., McPhaden, M. J. & Lee, T. Decadal variability of the Pacific subtropical cells and their influence on the southeast Indian Ocean. Geophys. Res. Lett. 37, L09606, doi:10.1029/2010GL042796 (2010). Feng, M. et al. Decadal increase in Ningaloo Niño since the late 1990s. Geophys. Res. Lett. 42, 104–112, doi:10.1002/2014GL062509 (2015). Han, W. et al. Indian Ocean decadal variability: A review. B. Am. Meteorol. Soc. 95, 1679–1703, doi:10.1175/BAMS-D-13-00028.1 (2014). Hobbs, J. P. A.
& McDonald, C. A. Increased seawater temperature and decreased dissolved oxygen triggers fish kill at the Cocos (Keeling) Islands, Indian Ocean. J. Fish. Biol. 77, 1219–1229, doi:10.1111/j.1095-8649.2010.02726.x (2010). Richards, Z. T. & Hobbs, J. P. A. The status of hard coral diversity at Christmas Island and Cocos (Keeling) Islands. Raff. Bull. Zool. 30, 376–398 (2014). Feng, M., McPhaden, M. J., Xie, S. & Hafner, J. La Niña forces unprecedented Leeuwin Current warming in 2011. Sci. Rep. 3, 1277, doi:10.1038/srep01277 (2013). Marshall, A. G., Hendon, H. H., Feng, M. & Schiller, A. Initiation and amplification of the Ningaloo Niño. Clim. Dyn. 45, 2367–2385, doi:10.1007/s00382-015-2477-5 (2015). Kataoka, T., Tozuka, T., Behera, S. & Yamagata, T. On the Ningaloo Niño/Niña. Clim. Dyn 43, 1463–1482, doi:10.1007/s00382-013-1961-z (2014). Du, Y. & Qu, T. D. Interannual variability of sea surface temperature off Java and Sumatra in a global GCM. J. Clim. 21, 2451–2465, doi:10.1175/2007JCLI1753.1 (2008). Salinger, J. et al. Decadal-scale forecasting of climate drivers for marine applications. Adv. Mar. Biol. 74, 1–68, doi:10.1016/bs.amb.2016.04.002 (2016). Ashok, K., Behera, S. K., Rao, S. A., Weng, H. Y. & Yamagata, T. El Niño Modoki and its possible teleconnection. J. Geophys. Res. 112, C11007, doi:10.1029/2006JC003798 (2007). Kao, H. Y. & Yu, J. Y. Contrasting eastern-Pacific and central-Pacific types of ENSO. J. Clim. 22, 615–632, doi:10.1175/2008JCLI2309.1 (2009). Klein, S. A., Soden, B. J. & Lau, N. C. Remote sea surface temperature variations during ENSO: Evidence for a tropical atmospheric bridge. J. Clim. 12, 917–932, doi:10.1175/1520-0442 (1999). Han, W. Q. et al. Intensification of decadal and multi-decadal sea level variability in the western tropical Pacific during recent decades. Clim. Dyn. 43, 1357–1379, doi:10.1007/s00382-013-1951-1 (2013). Hobday, A. J. et al. A hierarchical approach to defining marine heatwaves. Prog. Oceanogr. 141, 227–238, doi:10.1016/j.pocean.2015.12.014 (2016). Liu, G., Strong, A. E. & Skirving, W. Remote sensing of sea surface temperature during 2002 Barrier Reef coral bleaching. Eos 84, 137–144, doi:10.1029/2003EO150001 (2003). Doi, T., Behera, S. K. & Yamagata, T. Predictability of the Ningaloo Niño/Niña. Sci. Rep 3, 2892, doi:10.1038/srep02892 (2013). Kajikawa, Y., Wang, B. & Yang, J. A multi-time scale Australian monsoon index. Int. J. Climatol. 30, 1114–1120, doi:10.1002/joc.1955 (2010). Wang, B., Wu, R. & Li, T. Atmosphere-warm ocean interaction and its impacts on Asian-Australian monsoon variation. J. Clim. 16, 1195–1211, doi:10.1175/1520-0442 (2003). Hendon, H. H. & Liebmann, B. A composite study of onset of the Australian summer monsoon. J. Atmos. Sci. 47, 2227–2240, doi:10.1175/1520-0469 (1990). Pearce, A. F. & Phillips, B. F. ENSO events, the Leeuwin Current and larval recruitment of the western rock lobster. J. Cons. Int. Explor. Mer. 45, 13–21, doi:10.1093/icesjms/45.1.13 (1988). Hoell, A. & Funk, C. The ENSO-related West Pacific sea surface temperature gradient. J. Clim. 26, 9545–9562, doi:10.1175/JCLI-D-12-00344.1 (2013). Tozuka, T., Kataoka, T. & Yamagata, T. Locally and remotely forced atmospheric circulation anomalies of Ningaloo Niño/Niña. Clim. Dyn. 43, 2197–2205, doi:10.1007/s00382-013-2044-x (2014). Wheeler, M. C., Hendon, H. H., Cleland, S., Meinke, H. & Donald, A. Impacts of the Madden-Julian Oscillation on Australian rainfall and circulation. J. Clim. 22, 1482–1498, doi:10.1175/2008JCLI2595.1 (2009). Cai, W. J. et al. 
Increasing frequency of extreme El Niño events due to greenhouse warming. Nat. Clim. Change. 4, 111–116, doi:10.1038/nclimate2100 (2014). Cai, W. J. et al. Increased frequency of extreme La Niña events under greenhouse warming. Nat. Clim. Change. 5, 132–137, doi:10.1038/nclimate2492 (2015). Cai, W. J. et al. ENSO and greenhouse warming. Nat. Clim. Change. 5, 849–859, doi:10.1038/nclimate2743 (2015). Slingo, J. M., Rowell, D. P., Sperber, K. R. & Nortley, F. On the predictability of the interannual behaviour of the Madden-Julian Oscillation and its relationship with El Niño. Q. J. R. Meteorol. Soc. 125, 583–609, doi:10.1002/qj.49712555411 (1999). Jones, C. & Carvalho, L. M. V. Will global warming modify the activity of the Madden-Julian Oscillation? Q. J. R. Meteorol. Soc. 137, 544–552, doi:10.1002/qj.765 (2011). Behera, S. K., Luo, J. J., Masson, S., Rao, S. A. & Sakuma, H. A CGCM study on the interaction between IOD and ENSO. J. Clim. 19, 1688–1705, doi:10.1175/JCLI3797.1 (2006). Du, Y., Cai, W. J. & Wu, Y. L. A new type of the Indian Ocean Dipole since the mid-1970s. J. Clim. 26, 959–972, doi:10.1175/JCLI-D-12-00047.1 (2013). Lima, F. P. & Wethey, D. S. Three decades of high-resolution coastal sea surface temperatures reveal more than warming. Nat. Commun. 3, 704, doi:10.1038/ncomms1713 (2012). Zinke, J. et al. Corals record long-term Leeuwin Current variability during Ningaloo Niño/Niña since 1795. Nat. Commun. 5, 3607, doi:10.1038/ncomms4607 (2014). Predictive ocean atmosphere model for Australia http://poama.bom.gov.au/ocean_monitoring.shtml (Date of access: 21 December 2016). Spillman, C. M. Operational real-time seasonal forecasts for coral reef management. J. Oper. Oceanogr. 4, 13–22, doi:10.1080/1755876X.2011.11020119 (2011). Henley, B. J. et al. A tripole index for the interdecadal Pacific oscillation. Clim. Dyn. 45, 3077–3090, doi:10.1007/s00382-015-2525-1 (2015). Wheeler, M. C. & Hendon, H. H. An all-season real-time multivariate MJO index: development of an index for monitoring and prediction. Mon. Wea. Rev. 132, 1917–1932, doi:10.1175/1520-0493 (2004). Kanamitsu, M. et al. NCEP-DOE AMIP-II Reanalysis (R-2). Bull. Amer. Meteor. Soc. 83, 1631–1643, doi:10.1175/BAMS-83-11-1631 (2002). Dee, D. P. et al. The ERA-Interim reanalysis: Configuration and performance of the data assimilation system. Q. J. R. Meteorol. Soc. 137, 553–597, doi:10.1002/qj.828 (2011). Du, Y., Qu, T. D., Meyers, G., Masumoto, Y. & Sasaki, H. Seasonal heat budget in the mixed layer of the southeastern tropical Indian Ocean in a high-resolution ocean general circulation model. J. Geophys. Res. 110, C04012, doi:10.1029/2004JC002845 (2005).

This research is partly supported by the Western Australian Marine Science Institution and the Gorgon Barrow Island Net Conservation Benefits Fund, which is administered by the WA Department of Parks and Wildlife. Ningning Zhang is supported by the China Scholarship Council for her visit to CSIRO. J.Z. was supported by a Senior Research Fellowship at Curtin University and an Honorary Fellowship with Wits University. We thank two anonymous reviewers for constructive comments.

Author affiliations:
1 CSIRO Oceans and Atmosphere, IOMRC, Crawley, Western Australia, Australia
2 School of Marine Sciences, Nanjing University of Information Science and Technology, Nanjing, China
3 Western Australia Marine Science Institution, Perth, Western Australia, Australia
4 Bureau of Meteorology, Melbourne, Australia
5 CSIRO Oceans and Atmosphere, Hobart, Tasmania, Australia
6 Institut für Geologische Wissenschaften, Freie Universität Berlin, Berlin, Germany
7 Department of Environment and Agriculture, Curtin University, Bentley, Australia
8 Australian Institute of Marine Science, Crawley, Australia
9 School of Geography, Archaeology and Environmental Studies, University of Witwatersrand, Johannesburg, South Africa
M.F. and J.Z. conceived the experiment(s), H.H. advised on the Australian monsoon analysis, and N.Z. conducted the analyses. N.Z., M.F., H.H., A.H., and J.Z. all contributed to the writing and reviewed the manuscript. Correspondence to Ming Feng.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Zhang, N., Feng, M., Hendon, H.H. et al. Opposite polarities of ENSO drive distinct patterns of coral bleaching potentials in the southeast Indian Ocean. Sci Rep 7, 2443 (2017). https://doi.org/10.1038/s41598-017-02688-y
A height-weight formula to measure body fat in childhood obesity

Maria Rosaria Licenziati1, Giada Ballarin2, Gabriella Iannuzzo3, Maria Serena Lonardo1, Olivia Di Vincenzo3,4, Arcangelo Iannuzzi5 & Giuliana Valerio2

Italian Journal of Pediatrics volume 48, Article number: 106 (2022)

The assessment of body composition is central to the diagnosis and treatment of paediatric obesity, but a criterion method is not feasible in clinical practice. Even the use of bioelectrical impedance analysis (BIA) is limited in children. The body mass index (BMI) Z-score is frequently used as a proxy index of body composition, but it does not discriminate between fat mass and fat-free mass. We aimed to assess the extent to which fat mass and percentage of body fat estimated by a height-weight equation agreed with a BIA equation in youths with obesity from South Italy. Furthermore, we investigated the correlation between BMI Z-score and fat mass or percentage of body fat estimated by these two models.

One hundred seventy-four youths with obesity (52.3% males, mean age 10.8 ± 1.9 years) were enrolled in this cross-sectional study. Fat mass and percentage of body fat were calculated according to a height-weight based prediction model and to a BIA prediction model. According to Bland–Altman statistics, mean differences were relatively small for both fat mass (+ 0.65 kg) and percentage of body fat (+ 1.27%), with an overestimation at lower mean values; the majority of values fell within the limits of agreement. BMI Z-score was significantly associated with both fat mass and percentage of body fat, regardless of the method, but the strength of the correlation was higher when the height-weight equation was considered (r = 0.82; p < 0.001). This formula may serve as a surrogate for body fat estimation when instrumental tools are not available. Dealing with changes in body fat instead of BMI Z-score may help children and parents to focus on diet for health.

Obesity is defined as excessive fat accumulation that presents a risk to health. Epidemiological studies, using either direct or indirect body composition methods, have suggested a positive association between fat mass and mortality in the general population [1]. Therefore, the assessment of body composition is central to the diagnosis and treatment of obesity, yet there are several limitations to estimating body fat (BF), especially in children and adolescents. Dual-energy X-ray absorptiometry (DXA) and air displacement plethysmography are considered criterion methods for body composition assessment in adults, but not in children. Another approach is the use of bioelectrical impedance analysis (BIA), which estimates total body water, fat-free mass (FFM) and fat mass (FM) through predictive equations based on different variables, such as age, stature and eventually weight [2, 3]. However, the use of BIA in paediatric clinical practice is limited because it requires specialised staff, standard conditions and the collaboration of the patients [4]. Therefore, several methods have been developed to assess adiposity from equations based on anthropometric measurements, such as weight and height, which are practical, repeatable and less influenced by the operators [5]. In this direction, the body mass index (BMI) has been recommended to evaluate overweight and obesity in healthcare or clinical settings. Despite many strengths, BMI has some limitations because FM and FFM can differ among subjects with the same BMI [3].
Moreover, BMI is a sex- and age-related measurement; therefore, the BMI-for-age percentile or its equivalent (the BMI Z-score) is the main tool used for the diagnosis of overweight and obesity [2]. The BMI Z-score is frequently used as a proxy index of body composition also in follow-up studies [4]. However, it is difficult to interpret for non-professionals, such as parents, and its meaning is not directly related to BF. In addition, the BMI Z-score is flattened at very high BMI values, leading to erroneous conclusions when considering variations over time [5,6,7]. BMI and BMI Z-score may not reflect changes in body composition in children and adolescents because body proportions and BF levels change during growth in a non-linear manner. For this reason, other anthropometric equations have been proposed to estimate BF%. For instance, some authors tested different exponents in the denominator of the weight/height formula [8]. In particular, the triponderal mass index (weight/height3) was more accurate than BMI for estimating BF levels in different paediatric populations [9, 10]. Recently, Hudda et al. developed a 'height–weight equation' to calculate FM in children and adolescents, which was validated against deuterium dilution in a large multi-ethnic dataset of UK children of different ethnic groups [11]. This equation had a very high predictive ability, yielding a mean difference between observed and predicted fat mass of − 1.29 kg (95% confidence interval − 1.62 to − 0.96 kg). Furthermore, using the deuterium dilution method as a reference standard, this model provided estimates of FM as accurate as those yielded by DXA and BIA [12]. The equation also considers ethnicity as a predictor variable, distinguishing children of different ethnic origins. In a previous study, we found that FM estimated by Hudda's equation was independently associated with subclinical atherosclerosis in an Italian sample of children and adolescents with obesity [13].

Considering the interest in a simple and feasible method to estimate BF, we aimed to assess the extent to which FM and the percentage of BF (BF%) agreed between two feasible estimation methods, namely the height-weight equation by Hudda et al. and a BIA model, in a sample of children and adolescents with obesity living in South Italy. As a secondary outcome, we compared the correlation between BMI Z-score and FM or BF% estimated by these two models.

This retrospective study involved 174 Italian children and adolescents with obesity (52.3% males, mean age 10.8 years, age range 7–15 years) consecutively admitted for a first visit to the Obesity Unit, Santobono Pausilipon Children's Hospital in Naples, Italy, from April 2017 to December 2017. The following inclusion criteria were considered: age 4–15 years, diagnosis of primary obesity, and no previous weight loss treatment. Overweight, participation in competitive sports, use of medications that may affect body weight, or genetic, endocrine and iatrogenic causes of obesity were exclusion criteria. The institutional review board approved the clinical protocol, and written informed consent for all procedures was obtained from all the children and/or their parents or legal guardians before enrolment.

Anthropometry, pubertal and lifestyle assessment
Anthropometric measurements were assessed by the same specifically trained investigator. Standing height was determined by a Harpenden stadiometer (Holtain Limited, Crymych, Dyfed, UK).
Body weight was measured to the nearest 0.1 kg using standard equipment. BMI was calculated as weight divided by the square of height (kg/m2). The WHO standards for age- and sex-specific BMI percentiles were used to calculate the BMI standard deviation score (SDS). Obesity was defined by a BMI Z-score > 2 [14]. Pubertal stage was assessed by a paediatrician. The prepubertal stage was defined by Tanner stage I of breast development in girls and of testicular volume in boys [15]. Lifestyle was assessed by questionnaire and included information about participation in competitive or recreational sports in the previous 6 months (yes/no) and the weekly hours of training.

Bioelectrical impedance analysis
Whole-body BIA (BIA 101 RJL, Akern, Florence, Italy) was performed by the same investigator under standardized conditions: ambient temperature between 23 and 25 °C, fasting > 3 h, empty bladder, and supine position for 10 min. Participants were asked to lie down with their legs and arms slightly abducted to ensure no contact between body segments. The measuring electrodes were placed on the anterior surface of the wrist and ankle, and the injecting electrodes on the dorsal surface of the hand and the foot, respectively [16].

Body composition calculations
FM was calculated according to the height-and-weight-based prediction model suggested by Hudda et al. [11] and to the BIA prediction model suggested by Horlick et al. [17]. According to Hudda et al. [11], the following equation was used:

$$\mathrm{FM}\;(\mathrm{kg})=\mathrm{weight}-\exp\lbrack0.3073\times\mathrm{height}^2-10.0155\times\mathrm{weight}^{-1}+0.004571\times\mathrm{weight}+0.01408\times\mathrm{BA}-0.06509\times\mathrm{SA}-0.02624\times\mathrm{AO}-0.01745\times\mathrm{other}-0.9180\times\ln(\mathrm{age})+0.6488\times\mathrm{age}^{0.5}+0.04723\times\mathrm{sex}+2.8055\rbrack$$

where exp = exponential function; ln = natural logarithm; BA, SA, AO and other score 1 if the child is of black, south Asian, other Asian, or other ethnic origin, respectively, and 0 if not; sex = 1 for males and sex = 0 for females.

For the BIA estimation model, the following equation, previously validated in children aged 4–18 years by Horlick et al. [17], was used:

$$\mathrm{FM\mbox{-}BIA}\;(\mathrm{kg})=\mathrm{weight}-\lbrack3.474+0.459\times\mathrm{height}^2/R+0.064\times\mathrm{weight}\rbrack/\lbrack0.769-0.009\times\mathrm{age}-0.016\times\mathrm{sex}\rbrack$$

where R = electrical resistance; sex = 1 for males and sex = 0 for females.

BF% was calculated for both equations as FM/weight × 100. A sketch of both calculations in code is given below.
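For concreteness, both calculations can be sketched as follows. The unit conventions (height in metres for the Hudda model, in centimetres for the impedance index height²/R in the Horlick model) follow our reading of the original publications and should be checked against them before reuse; the example values at the end are hypothetical.

```python
import math

def fat_mass_height_weight(weight_kg, height_m, age_yr, male,
                           black=0, south_asian=0, other_asian=0, other=0):
    """FM (kg) from the height-weight model of Hudda et al. [11];
    sex coded male=1, female=0, ethnicity flags 0/1 as defined above."""
    ffm = math.exp(0.3073 * height_m ** 2 - 10.0155 / weight_kg
                   + 0.004571 * weight_kg + 0.01408 * black
                   - 0.06509 * south_asian - 0.02624 * other_asian
                   - 0.01745 * other - 0.9180 * math.log(age_yr)
                   + 0.6488 * math.sqrt(age_yr) + 0.04723 * male + 2.8055)
    return weight_kg - ffm

def fat_mass_bia(weight_kg, height_cm, age_yr, male, resistance_ohm):
    """FM (kg) from the BIA model of Horlick et al. [17]; height in cm,
    as in the conventional impedance index height^2 / R."""
    ffm = ((3.474 + 0.459 * height_cm ** 2 / resistance_ohm + 0.064 * weight_kg)
           / (0.769 - 0.009 * age_yr - 0.016 * male))
    return weight_kg - ffm

# Hypothetical example: a boy with obesity, 10.8 y, 1.50 m, 70 kg, R = 500 ohm
w = 70.0
fm_hw = fat_mass_height_weight(w, 1.50, 10.8, male=1)
fm_bia = fat_mass_bia(w, 150.0, 10.8, male=1, resistance_ohm=500.0)
print(f"BF% (height-weight): {100 * fm_hw / w:.1f}  BF% (BIA): {100 * fm_bia / w:.1f}")
```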
Unadjusted and adjusted linear regression models were fitted between the differences of the two methods and their corresponding averages. Partial correlation adjusted for gender, age, and prepubertal stage was performed to compare the association between weekly hours of sports participation or BMI Z-score and FM or BF% estimated by the two different equations.

General characteristics of the study group as a whole or stratified by gender are shown in Table 1. No statistical differences were found for age, weight, and stature, while BMI Z-score was significantly higher in boys than girls. BF% was significantly higher in girls than boys regardless of the method used to assess body composition, while there was no significant difference in FM. Only 37% of girls and 49% of boys participated in recreational sports. FM and BF% differed significantly between the two methods in girls, while no statistical differences were found in boys.

Table 1 Demographic, anthropometric and body composition data in the study sample considered as a whole or stratified by gender

Partial correlation analysis did not show any significant relationship between weekly hours of sports and FM or BF%, adjusting for sex, age, and prepubertal stage (FM height-weight equation r = − 0.076, BF% height-weight equation r = − 0.156; FM-BIA r = − 0.042; BF%-BIA r = − 0.048). Bland–Altman analysis was performed in the whole study group. The agreement between FM or BF% estimated by the two methods is shown as plots (Figs. 1 and 2), while the statistics are summarized in Table 2. The mean differences were relatively small for both FM and BF%, and the majority of values fell within the upper and lower limits of agreement. Specifically, the averages of the differences in FM or BF% were positive and significantly different from 0 in the comparison of both equations (Table 2). Of note, a systematic error was more evident in the BF% differences, with an overestimation at lower mean values.

Fig. 1 Comparison of height-weight- or bioimpedance-based equations for determining fat mass in the whole sample of youths with obesity

Fig. 2 Comparison of height-weight- or bioimpedance-based equations for determining percentage of body fat in the whole sample of youths with obesity

Table 2 Bland–Altman statistics, correlation and partial correlation in the study sample

Compared to the unadjusted correlation coefficients, partial correlation analysis adjusted for gender, age, and prepubertal stage showed a stronger negative correlation between differences and averages for both FM and BF% (Table 2). Partial correlation adjusted for gender, age, and prepubertal stage between BMI Z-score and FM or BF% estimated by the different equations is shown in Table 3. BMI Z-score was significantly associated with both absolute and relative FM estimated with either method. However, the relationship was markedly stronger when the height-weight equation was used.

Table 3 Partial correlation adjusted by gender, age and prepubertal stage in the study sample

The present study found a fair agreement in the estimation of BF between a height-weight equation [11] and a validated model of body composition assessment based on BIA [17] in a sample of children and adolescents with obesity living in South Italy. As far as we know, few studies have compared anthropometric estimation of body composition to a BIA equation [12], while a robust body of literature exists on the comparison of anthropometric or BIA equations with DXA or air displacement plethysmography [19]. Recently, Hudda et al.
proposed a 'height–weight equation' to estimate FM in children and adolescents, which was validated against deuterium dilution [11]; this equation provided FM estimates at least as accurate as DXA and BIA [12]. In a retrospective study performed in a Danish cohort, FM estimated by this height-weight equation at 10 or 13 years of age predicted the adult risk of type 2 diabetes between 30 and 70 years better than childhood weight did [20]. Furthermore, FM estimated by this equation was independently associated with subclinical atherosclerosis [13] and showed a good ability to detect metabolic syndrome in Italian children and adolescents [21]. The present study provides further insight by assessing the extent to which FM and BF% estimated by Hudda's prediction agreed with another feasible model based on BIA in a clinical sample of Italian youths with obesity. Both equations revealed the expected gender differences in body composition. However, the height-weight equation provided slightly higher values of FM and BF% than the BIA method in girls, while no significant differences were found in boys. Using the Bland–Altman analysis, we found a difference between the two methods of only 0.65 kg in the estimation of FM and 1.27% in the estimation of BF%. Considering an agreement of zero difference between measurements as ideal, the comparison between the two methods is acceptable. Yet, the height-weight equation overestimated FM or BF% at lower mean values compared to the BIA equation, even after controlling for gender, age, and prepubertal stage. Since measures of BF in children depend on individual characteristics (such as age, height, gender, and pubertal status), differences are expected when body composition is estimated with different methods. Indeed, errors in the estimation of FM using BIA and DXA were demonstrated by Reilly et al. [22] in a sample of normal-weight children, and by Lazzer et al. [23] in a sample of youths with severe obesity. Furthermore, Hudda et al. [12] demonstrated that the accuracy of DXA and BIA assessments in paediatric age varied considerably across the range of FM values, with DXA overestimating FM at higher levels and underestimating it at lower levels. As a secondary outcome, we found a stronger relationship of BMI Z-score with FM or BF% estimated by the height–weight equation than with the BIA prediction model, after adjusting for gender, age, and prepubertal stage. As far as we know, only Calcaterra et al. have assessed the correlation of the height-weight equation with other indices of obesity, such as BMI (r = 0.69, p < 0.001) and waist circumference (r = 0.32, p < 0.01) [21]. A recent systematic review reported that BMI Z-score was used more frequently than BMI or BMI percentage (24%, 9% and 13%, respectively) to assess weight loss in paediatric obesity; DXA and BIA were by far under-reported (5% and 1%, respectively) [4]. The BMI Z-score is a measure of relative weight adjusted for child age and sex, used to classify overweight and obesity and to monitor weight loss, but the information associated with this index is difficult to explain to parents and children. A practical application of the height-weight equation compared to BMI Z-score is shown in the Supplementary table. Two examples are presented, showing either a 3.0 kg weight increase or a 1.0 kg weight loss after 6 months of treatment in a 10.5-year-old boy with a linear growth of + 3 cm.
In both cases, a reduction of BMI Z-score above 0.25 was hypothesized, an amount deemed effective in reducing cardiovascular risk factors in obese children [24]. A reduction of 0.7 or 2.4 BF% was estimated according to the height-weight formula, against a reduction of BMI Z-score of 0.26 or 0.54, respectively. Our study has several limitations. Firstly, the study group consisted of a small sample of children and adolescents with obesity; hence, it is not representative of the general pediatric population. In addition, our results are applicable only to a Caucasian population. Secondly, we could not validate the formula against a reference method, such as DXA or deuterium dilution. In contrast, one strength of our study is the use of two equations that were previously validated against reference methods in youths within the same age range as our population. Lastly, both the anthropometric variables and the BIA data were measured by a single operator, reducing observer bias.

We found a fair agreement between two simple-to-use tools to assess body composition in a sample of youths with obesity living in South Italy. Therefore, the height-weight formula may serve as a surrogate for FM estimation when instrumental tools, such as BIA, are not available. Calculating FM from the relevant predictor variables is simple, using the Excel calculator provided by the authors [11]. The strong correlation between BMI Z-score and BF% estimated by the height-weight equation suggests that the two formulas may be interchangeable. The usefulness of the height-weight formula may lie in the opportunity to deal with changes in BF instead of weight or BMI Z-score, helping children and parents to focus on diet and exercise for health. Further studies are needed to compare the height-weight equation and BMI Z-score against DXA in the estimation of BF in children. If the height-weight formula performs better than BMI, the development of gender- and age-specific adiposity-related cut-offs might help to better define pediatric obesity.

BIA: Bioelectrical impedance analysis; BF: Body fat; DXA: Dual-energy X-ray absorptiometry; FFM: Fat-free mass; FM: Fat mass

Lee DH, Giovannucci EL. Body composition and mortality in the general population: a review of epidemiologic studies. Exp Biol Med (Maywood). 2018;243:1275–85. Must A, Anderson SE. Body mass index in children and adolescents: considerations for population-based applications. Int J Obes. 2006;30:590–4. https://doi.org/10.1038/sj.ijo.0803300. Daniels SR. The use of BMI in the clinical setting. Pediatrics. 2009;124:S35-41. https://doi.org/10.1542/peds.2008-3586F. Bryant M, Ashton L, Brown J, Jebb S, Wright J, Roberts K, et al. Systematic review to identify and appraise outcome measures used to evaluate childhood obesity treatment interventions (CoOR): evidence of purpose, application, validity, reliability and sensitivity. Health Technol Assess. 2014;18:1–380. https://doi.org/10.3310/hta18510. Jensen NSO, Camargo TFB, Bergamaschi DP. Comparison of methods to measure body fat in 7-to-10-year-old children: a systematic review. Public Health. 2016;133:3–13. https://doi.org/10.1016/j.puhe.2015.11.025. Freedman DS, Butte NF, Taveras EM, Goodman AB, Ogden CL, Blanck HM. The limitations of transforming very high body mass indexes into z-scores among 8.7 million 2- to 4-year-old children. J Pediatr. 2017;188:50-56.e1. Kelly AS, Daniels SR.
Rethinking the use of body mass index z-score in children and adolescents with severe obesity: time to kick it to the curb? J Pediatr. 2017;188:7–8. https://doi.org/10.1016/j.jpeds.2017.05.003. Cole TJ. Weight/heightᵖ compared to weight/height² for assessing adiposity in childhood: influence of age and bone age on p during puberty. Ann Hum Biol. 1986;13:433–51. https://doi.org/10.1080/03014468600008621. Peterson CM, Su H, Thomas DM, Heo M, Golnabi AH, Pietrobelli A, et al. Tri-ponderal mass index vs body mass index in estimating body fat during adolescence. JAMA Pediatr. 2017;171:629. https://doi.org/10.1001/jamapediatrics.2017.0460. De Lorenzo A, Romano L, Di Renzo L, Gualtieri P, Salimei C, Carrano E, et al. Triponderal mass index rather than body mass index: an indicator of high adiposity in Italian children and adolescents. Nutrition. 2019;60:41–7. https://doi.org/10.1016/j.nut.2018.09.007. Hudda MT, Fewtrell MS, Haroun D, Lum S, Williams JE, Wells JCK, et al. Development and validation of a prediction model for fat mass in children and adolescents: meta-analysis using individual participant data. BMJ. 2019;366:l4293. https://doi.org/10.1136/bmj.l4293. Hudda MT, Owen CG, Rudnicka AR, Cook DG, Whincup PH, Nightingale CM. Quantifying childhood fat mass: comparison of a novel height-and-weight-based prediction approach with DXA and bioelectrical impedance. Int J Obes. 2021;45:99–103. https://doi.org/10.1038/s41366-020-00661-w. Licenziati MR, Iannuzzo G, Morlino D, Campana G, Renis M, Iannuzzi A, et al. Fat mass and vascular health in overweight/obese children. Nutr Metab Cardiovasc Dis. 2021;31:1317–23. https://doi.org/10.1016/j.numecd.2020.12.017. de Onis M, Onyango AW, Borghi E, Siyam A, Nishida C, Siekmann J. Development of a WHO growth reference for school-aged children and adolescents. Bull World Health Organ. 2007;85:660–7. https://doi.org/10.1590/S0042-96862007000900010. Tanner JM, Whitehouse RH. Clinical longitudinal standards for height, weight, height velocity, weight velocity, and stages of puberty. Arch Dis Child. 1976;51:170–9. https://doi.org/10.1136/adc.51.3.170. Bioelectrical impedance analysis in body composition measurement: National Institutes of Health Technology Assessment Conference statement. Am J Clin Nutr. 1996;64:524S-532S. https://doi.org/10.1093/ajcn/64.3.524S. Horlick M, Arpadi SM, Bethel J, Wang J, Moye J, Cuff P, et al. Bioelectrical impedance analysis models for prediction of total body water and fat-free mass in healthy and HIV-infected children and adolescents. Am J Clin Nutr. 2002;76:991–9. https://doi.org/10.1093/ajcn/76.5.991. Bland JM, Altman DG. Measuring agreement in method comparison studies. Stat Methods Med Res. 1999;8:135–60. https://doi.org/10.1177/096228029900800204. Sergi G, De Rui M, Stubbs B, Veronese N, Manzato E. Measurement of lean body mass using bioelectrical impedance analysis: a consideration of the pros and cons. Aging Clin Exp Res. 2017;29:591–7. https://doi.org/10.1007/s40520-016-0622-6. Hudda MT, Aarestrup J, Owen CG, Cook DG, Sørensen TIA, Rudnicka AR, et al. Association of childhood fat mass and weight with adult-onset type 2 diabetes in Denmark. JAMA Netw Open. 2021;4:e218524. https://doi.org/10.1001/jamanetworkopen.2021.8524. Calcaterra V, Verduci E, De Silvestri A, Magenes VC, Siccardo F, Schneider L, et al. Predictive ability of the estimate of fat mass to detect early-onset metabolic syndrome in prepubertal children with obesity. Children (Basel). 2021;8:966. https://doi.org/10.3390/children8110966.
Reilly JJ, Gerasimidis K, Paparacleous N, Sherriff A, Carmichael A, Ness AR, et al. Validation of dual-energy x-ray absorptiometry and foot-foot impedance against deuterium dilution measures of fatness in children. Int J Pediatr Obes. 2010;5:111–5. https://doi.org/10.3109/17477160903060010. Lazzer S, Bedogni G, Agosti F, De Col A, Mornati D, Sartorio A. Comparison of dual-energy X-ray absorptiometry, air displacement plethysmography and bioelectrical impedance analysis for the assessment of body composition in severely obese Caucasian children and adolescents. Br J Nutr. 2008;100:918–24. https://doi.org/10.1017/S0007114508922558. Reinehr T, Lass N, Toschke C, Rothermel J, Lanzinger S, Holl RW. Which amount of BMI-SDS reduction is necessary to improve cardiovascular risk factors in overweight children? J Clin Endocrinol Metab. 2016;101:3171–9. https://doi.org/10.1210/jc.2016-1885.

This research received no specific grant from any funding agency. Maria Rosaria Licenziati and Giada Ballarin contributed equally to this work.

Department of Neurosciences, Obesity and Endocrine Disease Unit, Santobono-Pausilipon Children's Hospital, Naples, Italy: Maria Rosaria Licenziati & Maria Serena Lonardo. Department of Movement Sciences and Wellbeing, University of Naples "Parthenope", Naples, Italy: Giada Ballarin & Giuliana Valerio. Department of Clinical Medicine and Surgery, Federico II University of Naples, Naples, Italy: Gabriella Iannuzzo & Olivia Di Vincenzo. Department of Public Health, Federico II University of Naples, Naples, Italy: Olivia Di Vincenzo. Department of Medicine and Medical Specialties, A. Cardarelli Hospital, Naples, Italy: Arcangelo Iannuzzi.

MRL contributed to conceptualization, design of the study, and acquisition of the data, and critically revised the manuscript; GB contributed to design of the study, analysis and interpretation of the data, and drafted the manuscript; GI contributed to interpretation of the data, and critically revised the manuscript; MSL contributed to acquisition of the data, and critically revised the manuscript; ODV contributed to acquisition of the data, and critically revised the manuscript; AI contributed to conceptualization, design of the study, interpretation of the data, and critically revised the manuscript; GV contributed to conceptualization, design of the study, analysis and interpretation of the data, and drafted the manuscript. All authors approved the submitted version and agreed both to be personally accountable for their own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even parts in which they were not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature. Correspondence to Giuliana Valerio.

All methods were carried out in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. The institutional review board of the Cardarelli Hospital, Naples, approved the clinical protocol, and written informed consent for all procedures was obtained from all the children and/or their parents or legal guardians before enrolment.

Supplementary table 1. Changes of BMI Z-score or body fat percentage estimated by the height-weight equation in an adolescent boy after 6 months of treatment.
Example: comparison between changes in BMI Z-score and body fat percentage in an adolescent boy after 6 months of treatment.

Licenziati, M.R., Ballarin, G., Iannuzzo, G. et al. A height-weight formula to measure body fat in childhood obesity. Ital J Pediatr 48, 106 (2022). https://doi.org/10.1186/s13052-022-01285-8

Keywords: Bioimpedance analysis; Height-weight equation
Chapter 6 - Legal and Ethical Behavior

Horizontal Price Fixing: Occurs when a group of competing retailers (or other channel members operating at a given level of distribution) establishes a fixed price at which to sell certain brands of products.

Vertical Price Fixing: Occurs when a retailer collaborates with the manufacturer or wholesaler to resell an item at an agreed-upon price.

Price Discrimination: Occurs when two retailers buy an identical amount of "like grade and quality" merchandise from the same supplier but pay different prices.

Deceptive Pricing: Occurs when a misleading price is used to lure customers into the store and then hidden charges are added, or the item advertised may be unavailable.

Predatory Pricing: Exists when a retail chain charges different prices in different geographic areas to eliminate competition in selected geographic areas.

Palming Off: Occurs when a retailer represents that merchandise is made by a firm other than the true manufacturer.

Deceptive Advertising: Occurs when a retailer makes false or misleading advertising claims about the physical makeup of a product, the benefits to be gained by its use, or the appropriate uses for the product.

Bait-and-Switch Advertising: Advertising or promoting a product at an unrealistically low price to serve as "bait" and then trying to "switch" the consumer to a higher-priced product.

Product Liability Laws: Deal with the seller's responsibility to market safe products. These laws invoke the foreseeability doctrine, which states that a seller of a product must attempt to foresee how a product may be misused and warn the consumer against the hazards of misuse.

Expressed Warranties: Are either written or verbalized agreements about the performance of a product and can cover all attributes of the merchandise or only one attribute.

Implied Warranty of Merchantability: Is made by every retailer when the retailer sells goods and implies that the merchandise sold is fit for the ordinary purpose for which such goods are typically used.

Implied Warranty of Fitness: Is a warranty that implies that the merchandise is fit for a particular purpose and arises when the customer relies on the retailer to assist in or make the selection of goods to serve a particular purpose.

Territorial Restrictions: Are attempts by the supplier, usually a manufacturer, to limit the geographic area in which a retailer may resell its merchandise.

Dual Distribution: Occurs when a manufacturer sells to independent retailers and also through its own retail outlets.

One-Way Exclusive-Dealing Arrangement: Occurs when the supplier agrees to give the retailer the exclusive right to sell the supplier's product in a particular trade area.

Two-Way Exclusive-Dealing Agreement: Occurs when the supplier offers the retailer the exclusive distribution of a merchandise line or product in a particular trade area if, in return, the retailer agrees to do something for the manufacturer, such as heavily promote the supplier's products or not handle competing brands.

Tying Agreement: Exists when a seller with a strong product or service requires a buyer (the retailer) to purchase a weak product or service as a condition for buying the strong product or service.

Ethics: Is a set of rules for human moral behavior.

Explicit Code of Ethics: Consists of a written policy that states what is ethical and unethical behavior.

Implicit Code of Ethics: Is an unwritten but well understood set of rules or standards of moral responsibility.
Slotting Fees (Slotting Allowances): Are fees paid by a vendor for space or a slot on a retailer's shelves, as well as for having its UPC number given a slot in the retailer's computer system.

Markdown Money: What retailers charge suppliers when merchandise does not sell at the price the vendor intended.

"Marketing research" can be a broader term than "market research", covering research into the whole of the marketing process. True or false?

At what stage of the product life cycle are marketing objectives likely to include both customer acquisition and customer retention?

When a company such as Chevrolet looks at social media and reads online reviews from customers to identify the features that customers want, it is engaging in which phase of the new-product development process?

Secondary research: This research can include data from a company, such as sales reports and reports on merchandise returns and complaints, or from external sources, such as books, magazine articles, the Internet, or companies that specialize in market research.

You are the vice president of Worldwide InfoXchange, headquartered in Minneapolis, Minnesota. All shareholders of the firm live in the United States. Earlier this month you obtained a loan of 10 million Canadian dollars from a bank in Toronto to finance the construction of a new plant in Montreal. At the time the loan was received, the exchange rate was $0.81 to the Canadian dollar. By the end of the month, it has unexpectedly dropped to $0.75. Has your company made a gain or a loss as a result, and by how much?

Here is the condensed 2016 balance sheet for Skye Computer Company (in thousands of dollars):

Current assets: $2,000
Net fixed assets: $3,000
Total assets: $5,000
Accounts payable and accruals: $900
Short-term debt: $100
Long-term debt: $1,100
Preferred stock (10,000 shares): $250
Common stock (50,000 shares): $1,300
Retained earnings: $1,350
Total common equity: $2,650
Total liabilities and equity: $5,000

Skye's earnings per share last year were $3.20. The common stock sells for $55.00, last year's dividend (D₀) was $2.10, and a flotation cost of 10% would be required to sell new common stock. Security analysts are projecting that the common dividend will grow at an annual rate of 9%. Skye's preferred stock pays a dividend of $3.30 per share, and its preferred stock sells for $30.00 per share. The firm's before-tax cost of debt is 10%, and its marginal tax rate is 35%. The firm's currently outstanding 10% annual coupon rate, long-term debt sells at par value. The market risk premium is 5%, the risk-free rate is 6%, and Skye's beta is 1.516. The firm's total debt, which is the sum of the company's short-term debt and long-term debt, equals $1.2 million. a.
Calculate the cost of each capital component, that is, the after-tax cost of debt, the cost of preferred stock, the cost of equity from retained earnings, and the cost of newly issued common stock. Use the DCF method to find the cost of common equity. b. Now calculate the cost of common equity from retained earnings, using the CAPM method. c. What is the cost of new common stock based on the CAPM? d. If Skye continues to use the same market-value capital structure, what is the firm's WACC, assuming (1) that it uses only retained earnings for equity and (2) that it expands so rapidly that it must issue new common stock?

Most firms generate cash inflows every day, not just once at the end of the year. In capital budgeting, should we recognize this fact by estimating daily project cash flows and then using them in the analysis? If we do not, are our results biased? If so, would the NPV be biased up or down? Explain.

Does interest rate parity imply that interest rates are the same in all countries?
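For the first of the problems above (the Worldwide InfoXchange loan), a short worked computation may help; it assumes the CAD loan is the firm's only foreign-currency exposure and ignores interest for the month:

$$\begin{aligned}
\text{USD value of the CAD liability at drawdown} &= 10{,}000{,}000 \times 0.81 = \$8{,}100{,}000\\
\text{USD value of the CAD liability at month end} &= 10{,}000{,}000 \times 0.75 = \$7{,}500{,}000\\
\text{Translation gain} &= \$8{,}100{,}000 - \$7{,}500{,}000 = \$600{,}000
\end{aligned}$$

Because the Canadian dollar depreciated, fewer US dollars are needed to repay the CAD-denominated loan, so the company records a gain of $600,000.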
Stochastic Models for the Inference of Life Evolution

SMILE is an interdisciplinary research group gathering probabilists, statisticians, bio-informaticians and biologists. SMILE is affiliated with the Stochastics and Biology group of LPSM (Lab of Probability, Statistics and Modeling) at Sorbonne Université (formerly Université Pierre et Marie Curie Paris 06). SMILE is hosted within the CIRB (Center for Interdisciplinary Research in Biology) at Collège de France. SMILE is supported by Collège de France and CNRS. Visit also our homepage at CIRB. Recent contributions of the SMILE group relate to SARS-CoV-2 and COVID-19. SMILE is hosted at Collège de France in the Latin Quarter of Paris. To reach us, go to 11 place Marcelin Berthelot (stations Luxembourg or Saint-Michel on RER B). Our working spaces are rooms 107, 121 and 122 on the first floor of building B1 (ask us for the code). Building B1 faces you upon exiting the traversing hall behind Champollion's statue. You can reach us by email (amaury.lambert - at - upmc.fr) or (smile - at - listes.upmc.fr).

Mass extinction in poorly known taxa

Since the 1980s, many have suggested we are in the midst of a massive extinction crisis, yet only 799 (0.04%) of the 1.9 million known recent species are recorded as extinct, questioning the reality of the crisis. This low figure is due to the fact that the status of very few invertebrates, which represent the bulk of biodiversity, has been evaluated. Here we show, based on extrapolation from a random sample of land snail species via two independent approaches, that we may already have lost 7% (130,000 extinctions) of the species on Earth. However, this loss is masked by the emphasis on terrestrial vertebrates, the target of most conservation actions. Projections of species extinction rates are controversial because invertebrates are essentially excluded from these scenarios. Invertebrates can and must be assessed if we are to obtain a more realistic picture of the sixth extinction crisis.

Testing for Independence between Evolutionary Processes

Evolutionary events co-occurring along phylogenetic trees usually point to complex adaptive phenomena, sometimes implicating epistasis. While a number of methods have been developed to account for co-occurrence of events on the same internal or external branch of an evolutionary tree, there is a need to account for the larger diversity of possible relative positions of events in a tree. Here we propose a method to quantify to what extent two or more evolutionary events are associated on a phylogenetic tree. The method is applicable to any discrete character, like substitutions within a coding sequence or gains/losses of a biological function. Our method uses a general approach to statistically test for significant associations between events along the tree, which encompasses both events inseparable on the same branch and events genealogically ordered on different branches. It assumes that the phylogeny and the mapping of branches are known without error. We address this problem from the statistical viewpoint by a linear algebra representation of the localization of the evolutionary events on the tree. We compute the full probability distribution of the number of paired events occurring in the same branch or in different branches of the tree, under a null model of independence where each type of event occurs at a constant rate uniformly in the phylogenetic tree.
The strengths and weaknesses of the method are assessed via simulations; we then apply the method to explore the loss of cell motility in intracellular pathogens.

How Ecology and Landscape Dynamics Shape Phylogenetic Trees

Whether biotic or abiotic factors are the dominant drivers of clade diversification is a long-standing question in evolutionary biology. The ubiquitous patterns of phylogenetic imbalance and branching slowdown have been taken as supporting the role of ecological niche filling and spatial heterogeneity in ecological features, and thus of biotic processes, in diversification. However, a proper theoretical assessment of the relative roles of biotic and abiotic factors in macroevolution requires models that integrate both types of factors, and such models have been lacking. In this study, we use an individual-based model to investigate the temporal patterns of diversification driven by ecological speciation in a stochastically fluctuating geographic landscape. The model generates phylogenies whose shape evolves as the clade ages. Stabilization of tree shape often occurs after ecological saturation, revealing species turnover caused by competition and demographic stochasticity. In the initial phase of diversification (allopatric radiation into an empty landscape), trees tend to be unbalanced and branching slows down. As diversification proceeds further due to landscape dynamics, balance and branching tempo may increase and become positive. Three main conclusions follow. First, the phylogenies of ecologically saturated clades do not always exhibit branching slowdown. Branching slowdown requires that competition be wide or heterogeneous across the landscape, or that the characteristics of landscape dynamics vary geographically. Conversely, branching acceleration is predicted under narrow competition or frequent local catastrophes. Second, ecological heterogeneity does not necessarily cause phylogenies to be unbalanced: short time in geographical isolation or frequent local catastrophes may lead to balanced trees despite spatial heterogeneity. Conversely, unbalanced trees can emerge without spatial heterogeneity, notably if competition is wide. Third, short isolation time causes a radically different and quite robust pattern of phylogenies that are balanced and yet exhibit branching slowdown. In conclusion, biotic factors have a strong and diverse influence on the shape of phylogenies of ecologically saturating clades and create the evolutionary template in which branching slowdown and tree imbalance may occur. However, the contingency of landscape dynamics and resource distribution can cause wide variation in branching tempo and tree balance. Finally, considerable variation in tree shape among simulation replicates calls for caution when interpreting variation in the shape of real phylogenies.

Ranked Tree Shapes, Nonrandom Extinctions, and the Loss of Phylogenetic Diversity

Phylogenetic diversity (PD) is a measure of the evolutionary legacy of a group of species, which can be used to define conservation priorities. It has been shown that a substantial loss of species diversity can sometimes lead to a much smaller loss of PD, depending on the topology of the species tree and on the distribution of its branch lengths. However, the rate of decrease of PD strongly depends on the relative depths of the nodes in the tree and on the order in which species become extinct.
We introduce a new, sampling-consistent, three-parameter model generating random trees with covarying topology, clade relative depths, and clade relative extinction risks. This model can be seen as an extension of Aldous' one-parameter splitting model (β, which controls tree balance) with two additional parameters: a new parameter α quantifying the correlation between the richness of a clade and its relative depth, and a parameter η quantifying the correlation between the richness of a clade and its frequency (relative abundance or range), taken herein as a proxy for its overall extinction risk. We show on simulated phylogenies that loss of PD depends on the combined effect of all three parameters, β, α and η. In particular, PD may decrease as fast as species diversity when high extinction risks are clustered within small, old clades, corresponding to a parameter range that we term the 'thin ice zone' (β < −1 or α < 0; η > 1). Besides, when high extinction risks are clustered within large clades, the loss of PD can be higher in trees that are more balanced (β > 0), in contrast to the predictions of earlier studies based on simpler models. We propose a Monte-Carlo algorithm, tested on simulated data, to infer all three parameters. Applying it to a real dataset comprising 120 bird clades (class Aves) with known range sizes, we show that parameter estimates fall close to a 'thin ice zone': the combination of their ranked tree shape and non-random extinction risks makes them prone to a sudden collapse of PD.

Pathogen evolution in finite populations: slow and steady spreads the best

The theory of life history evolution provides a powerful framework to understand the evolutionary dynamics of pathogens in both epidemic and endemic situations. This framework, however, relies on the assumption that pathogen populations are very large and that one can neglect the effects of demographic stochasticity. Here we expand the theory of life history evolution to account for the effects of finite population size on the evolution of pathogen virulence. We show that demographic stochasticity introduces additional evolutionary forces that can qualitatively affect the dynamics and the evolutionary outcome. We discuss the importance of the shape of the pathogen fitness landscape and of host heterogeneity on the balance between mutation, selection, and genetic drift. In particular, we discuss scenarios where finite population size can dramatically affect classical predictions of deterministic models. This analysis reconciles Adaptive Dynamics with population genetics in finite populations and thus provides a new theoretical toolbox to study life-history evolution in realistic ecological scenarios.
Determining rhythmicity and determinism of temperature curves in septic and non-septic critically ill patients through chronobiological and recurrence quantification analysis: a pilot study

Vasilios E. Papaioannou, Eleni N. Sertaridou, Ioanna G. Chouvarda, George C. Kolios & Ioannis N. Pneumatikos

A few studies have demonstrated that critically ill patients exhibit circadian deregulation and reduced complexity of different time series, such as temperature. In this prospective study, we enrolled 21 patients divided into three groups: group A (N = 10) included subjects who had septic shock at the time of ICU entry, group B (N = 6) included patients who developed septic shock during ICU stay, and group C consisted of 5 non-septic critically ill patients. Core body temperature (CBT) was recorded for 24 h at a rate of one sample per hour (the average of CBT for that hour) on different occasions: upon ICU entry and exit in groups A and C, and upon entry, septic shock development, and exit in group B. Markers of circadian rhythmicity included mean values, amplitude (the difference between peak and mean values), and peak time. Furthermore, recurrence quantification analysis (RQA) was employed to assess different markers of complexity of the temperature signals. Patients from group C exhibited higher temperature amplitude upon entry (0.45 ± 0.19) relative to both group A (0.28 ± 0.18, p < 0.05) and group B (0.32 ± 0.13, p < 0.05). Circadian features did not differ within any group. Temperature amplitude in groups B and C upon entry was negatively correlated with SAPS II (r = − 0.72 and − 0.84, p < 0.003) and APACHE II scores (r = − 0.70 and − 0.63, p < 0.003), respectively, as well as with the duration of ICU and hospital stay in group B (r = − 0.62 and − 0.64, p < 0.003) and the entry SOFA score in group C (r = − 0.82, p < 0.003). Increased periodicity of CBT was found for all patients upon exit relative to entry in the ICU. Different RQA features indicating periodic patterns of change of entry CBT were negatively correlated with severity of disease and length of ICU stay for all patients. Increased temperature rhythmicity during ICU entry was related to lower severity of disease and better clinical outcomes, whereas the more deterministic CBT patterns were found in less critically ill patients with shorter ICU stay.

Physiologic systems exhibit extraordinary complexity due to continuous non-linear interactions of numerous structural units and feedback loops, constituting the basic adaptation mechanism of the organism [1]. Non-linear is a term reflecting non-additive coupling of different subsystems. Recognition that physiologic time series contain hidden information related to such complexity has fueled growing interest in applying techniques from statistical physics to the study of living organisms. In this respect, we have previously shown that the analysis of continuously monitored temperature curves in critically ill patients, using sophisticated techniques from signal processing theory, was able to discriminate patients with systemic inflammatory response syndrome (SIRS), sepsis, and septic shock with an accuracy of 80% [2]. Similarly, Varela et al. employed temperature signals and applied different complexity features in patients with multiple organ failure. They were able to show that reduced complexity measures were associated with increased mortality [3, 4].
Physiologic time series are not only complex but also exhibit different rhythmic patterns in time, since they fluctuate with a period of approximately 24 h. Such rhythms are called circadian and reflect synchronization of internal rhythms with external timekeepers (e.g., the change between daylight and dark, meal times). Critically ill patients experience severe circadian deregulation associated with both systemic inflammation and the intensive care unit (ICU) environment [5,6,7]. The main circadian biomarkers are melatonin, cortisol, and temperature, whose internal rhythms are orchestrated by the "master" biological clock located in the hypothalamic suprachiasmatic nuclei (SCN) [5]. In a recent study, we examined the potential alterations of circadian rhythmicity of urine melatonin and cortisol excretion in critically ill patients who were admitted to the ICU with septic shock or developed septic shock during their ICU stay, in a prospective fashion and on successive occasions (septic shock and recovery) [8]. We found that septic shock induced inverse changes of melatonin and cortisol circadian rhythm profiles both within and between different groups of patients, depending on the timing of onset. Furthermore, reduced rhythmicity was correlated with the severity of disease and longer ICU stay. In this study, since circadian profiles of both melatonin and cortisol could be altered by different medications, sleep deprivation, or the ICU milieu, we decided to further evaluate core body temperature (CBT) rhythmic fluctuations in the same groups of patients and with the same study design as in our previous publication. CBT and melatonin-based measurements have been considered complementary in assessing circadian rhythms [5, 9]. Moreover, since no study so far has evaluated a potential link between different complexity measures and circadian features of CBT signals in critically ill patients, we implemented recurrence quantification analysis (RQA) as a novel method of data analysis [10, 11]. RQA quantifies the number and duration of recurrences of a dynamical system presented by its state-space trajectory. It has been successfully used in pilot projects in cardiology, evaluating heart rate and blood pressure regulation [12, 13]. Finally, we tried to identify potential correlations of circadian and complexity features with different clinical outcomes of interest, such as severity of disease upon entry to the ICU and both ICU and hospital length of stay. We hypothesized that altered circadian fluctuations would be significantly associated with reduced complexity of temperature curves, particularly in the more severely ill patients.

As described in our previous study, subjects were recruited among critically ill patients admitted to the ICU of Alexandroupolis University Hospital, Greece, between February 2015 and July 2017 [8]. The inclusion criteria were as follows: (1) patients admitted with septic shock or those who developed septic shock during their ICU stay according to the Third International Consensus Definitions for Sepsis and Septic Shock (microbiologically confirmed infection, Sequential Organ Failure Assessment (SOFA) score ≥ 2, hypotension despite adequate fluid resuscitation, plus need for vasopressor administration to maintain mean arterial pressure ≥ 65 mmHg, plus serum lactate > 2 mmol/L) [14], (2) age > 18 and < 80 years, and (3) ICU stay of more than 48 h.
Exclusion criteria were as follows: (1) history of neurological disorders; (2) history of cancer, autoimmune disease, or immune-suppressive therapy; and (3) history of recent brain injury, since such cases are commonly associated with temperature deregulation [3, 9]. Furthermore, and in accordance with Gazendam's study, fever (CBT ≥ 38.5 °C) and/or hypothermia (CBT ≤ 36.5 °C) were also considered exclusion criteria [9]. Mechanically ventilated patients were under stable ventilatory settings and deeply sedated with propofol and remifentanil during CBT measurements. Vasoactive drugs and antibiotics were administered when indicated. Finally, all patients were tube-fed continuously during the day but not during the night, as has been previously described [9]. Patients were allocated into three groups: patients who had septic shock at the time of ICU entry constituted group A (initial septic shock, n = 10); group B included critically ill patients who developed septic shock during ICU stay (in-hospital septic shock, n = 6). Patients from groups A and B were the same as in our previous study, but we included only those who were afebrile during CBT recordings. In addition, group C (n = 5) included vascular surgery and trauma patients as controls, since they did not develop septic shock during their ICU stay. None of these patients received hydrocortisone or non-steroidal anti-inflammatory drugs during the whole study period. Sedated patients had their eyes closed throughout the whole 24-h sampling period. To avoid artifacts from artificial light during the night, lights were turned off during the night hours except during the nursing rounds. The ICU environment allowed regular changes between night and daylight for awake subjects. All patients were followed until discharge from the hospital, to assess ICU and in-hospital length of stay and mortality.

Data collection and measures

Demographic, clinical, and biochemical data were collected from the patients' electronic medical records. Severity of disease upon admission was assessed during the first day in the ICU using both the Acute Physiology and Chronic Health Evaluation (APACHE II) score and the Simplified Acute Physiology Score II (SAPS II). Furthermore, daily severity of illness was evaluated with the SOFA score every morning during the whole ICU stay, while depth of sedation was assessed once daily with the Ramsay scale.

CBT recordings

CBT was recorded for 24 h at a rate of one sample every 5 min, and hourly averages were computed for each subject, starting between 9.00 and 10.00 a.m. Measurements were performed with a bladder temperature sensor connected to an indwelling catheter (Rüsch sensor series 400, Teleflex) [9]. Historically, rectal temperature has been preferred to measure a patient's CBT. However, as has been previously shown, rectal temperature significantly lags behind other core sites, such as the urinary bladder, during acute temperature alterations [15]. Moreover, Lefrant and colleagues found that in critically ill patients requiring a pulmonary artery catheter, the urinary bladder electronic thermometer was more reliable than the electronic rectal thermometer for measuring core temperature, with an accuracy of ± 0.4 °C [15]. Entry CBT recordings were performed for all patients within 24 h of admission, and exit recordings within 24 h before discharge from the ICU. Thus, for group A, 24-h CBT measurements occurred at enrollment during septic shock (entry phase) and within 24 h before discharge from the ICU (recovery/exit phase).
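As a side note on preprocessing, reducing the 5-min samples to the hourly averages used in all subsequent analyses is a one-liner in base R; the synthetic data and variable names below are purely illustrative, not the study's code:

```r
# Illustrative only: 288 five-minute CBT samples over 24 h
time_h <- seq(0, 24 - 1/12, by = 1/12)                 # sample times (hours)
cbt    <- 37 + 0.4 * cos(2 * pi * (time_h - 19) / 24) + rnorm(288, sd = 0.1)

hourly <- tapply(cbt, floor(time_h), mean)             # 24 hourly averages
```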
For group B, sampling occurred upon entry (entry phase), at the occurrence of septic shock once a diagnosis of infection was confirmed [14] (septic shock phase), and within 24 h before discharge from the ICU (recovery/exit phase). Similarly, for group C, CBT recordings were made upon entry and within 24 h before discharge from the ICU. During each 24-h study period, all patients were monitored for fever or hypothermia.

CBT curve analyses

The CBT curve analyses were performed as follows:

Circadian rhythm analysis

The circadian analysis shows whether CBT exhibits periodicities on a 24-h scale. Briefly, this technique fits a cosine function of a fixed anticipated period to the data, approximating the following equation using the least squares method for minimization [16]:

$$\mathrm{CBT}(t) = M + A \times \cos\left(\frac{2\pi t}{24} + \phi\right)$$

where M (mesor) is the mean level of CBT fluctuations, A is the amplitude, that is, the difference between peak and mean values, and φ is the acrophase, which reflects the time of the peak value in relation to midnight (e.g., acrophase = 0 at local midnight when the fitted period is 24 h). These three metrics are considered markers of circadian rhythmicity. The hourly CBT values were fit to this equation, and the three parameters were estimated. The single cosinor method is appropriate for modeling individual data when only one frequency is present; otherwise (presence of multiple periods, non-sinusoidal shape), the use of multiple-components analysis is recommended [16]. The methods are related to Fourier harmonic analysis, with Fourier analysis performed in the frequency domain and rhythmometric analysis in the time domain. Recently, the sigmoidally transformed cosine model has been proposed as a non-parametric statistic for the analysis of activity rhythms, since these data resemble more closely a square-wave pattern [17]. Since for some data there is no well-defined short interval of maximum values, the calculation of the acrophase might be inappropriate. On the contrary, different intervals of many hours may exist during which the data are "relatively high" or "relatively low." In this case, non-linear sigmoid transformation of the traditional cosine curves might better model biological rhythms, particularly CBT, since it can represent them more accurately through rectangular waves alternating between high and low values with long time intervals ("on" and "off" states). Nevertheless, in our case, we performed a cosinor analysis of CBT curves using the R package cosinor [18]. Briefly, the cosinor model was applied separately to each time series. In each case, it was first checked that the cosinor model estimation was valid and adequate before adding the model parameters to the analysis. The statistics implemented in the cosinor/cosinor2 packages were used for this purpose [18]. First, the rhythm detection test was performed to assess the significance of the estimated model for the single cosinor. It calculates an F ratio with respect to the estimated and observed values, the degrees of freedom, and the p value [19]. When p was higher than 0.05, the cosinor values were not employed further. All data passed the overall test for model significance. Subsequently, the level of correlation between model and real CBT data, as well as the percent rhythm, that is, the coefficient of determination obtained by squaring the correlation between the observed and estimated data, was measured [18, 19].
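The single-cosinor fit can also be written down in a few lines of base R by linearizing the cosine (the study itself used the cosinor/cosinor2 packages [18]; the function below is our own sketch, with illustrative data):

```r
# Minimal base-R sketch of the single-cosinor model
# CBT(t) = M + A*cos(2*pi*t/24 + phi), fit by least squares via
# CBT = M + b1*cos(wt) + b2*sin(wt), with A = sqrt(b1^2 + b2^2).
fit_cosinor <- function(cbt, t, period = 24) {
  w   <- 2 * pi / period
  fit <- lm(cbt ~ cos(w * t) + sin(w * t))       # linearized cosinor
  b   <- unname(coef(fit))
  amplitude <- sqrt(b[2]^2 + b[3]^2)             # peak minus mean level
  acrophase <- atan2(-b[3], b[2])                # radians, relative to t = 0
  peak_time <- (-acrophase / w) %% period        # clock hour of peak value
  # Zero-amplitude (rhythm detection) F test against the flat model
  p_rhythm  <- anova(lm(cbt ~ 1), fit)[2, "Pr(>F)"]
  list(mesor = b[1], amplitude = amplitude, peak_time = peak_time,
       R2 = summary(fit)$r.squared, p = p_rhythm)
}

# Example on synthetic hourly data peaking around 19.00
t   <- 0:23
cbt <- 37 + 0.4 * cos(2 * pi * (t - 19) / 24) + rnorm(24, sd = 0.1)
fit_cosinor(cbt, t)
```

The R2 returned by this fit corresponds to the percent rhythm described above, and the F test to the rhythm detection test.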
Recurrence quantification analysis

Recurrence is a term more general than periodicity, also including more irregular cyclicities, and corresponds to times at which a state of a dynamical system recurs, i.e., CBT at times i and j is very close. Starting from the visualization of recurrence, one may consider a square matrix in which columns and rows correspond to pairs of times, and recurrences are those (i, j) points with roughly similar CBT. Natural processes considered as deterministic dynamical systems can have a distinct recurrent behavior, with states coming close again after some time. Trajectories passing along nearby paths are also typical of non-linear chaotic systems. Such recurrence of a state at time i at a different time j is marked within a two-dimensional square matrix with ones and zeros (black and white dots in the plot), where both axes are time axes. The visual representation is called a recurrence plot (RP) [10, 11]. RPs exhibit characteristic large-scale and small-scale patterns. The former are denoted as typology and the latter as texture. The typology offers a global impression and is characterized by the following: abrupt changes in the dynamics or extreme events cause white areas or bands in RPs; random time series produce a homogeneous plot; and oscillating systems produce diagonally oriented, periodic recurrent structures. Closer inspection of the RPs reveals small-scale structures (the texture), which are single dots and diagonal lines, as well as vertical and horizontal lines (the combination of vertical and horizontal lines obviously forms rectangular clusters of recurrence points). Single, isolated recurrence points can occur if states are rare, if they do not persist for any length of time, or if they fluctuate heavily. A diagonal line occurs when a segment of the trajectory runs parallel to another segment, i.e., the trajectory visits the same region of the phase space at different times. A vertical (horizontal) line shows a state that does not change or changes very slowly, i.e., the state is trapped for some time. These small-scale structures are the basis of recurrence quantification analysis (RQA) [20]. RQA attempts to capture and quantify the amount and distribution of points on the RP. The basis of the RQA approach is phase-space reconstruction through time-delayed embedding. A phase space is a space in which all possible states of a system under study can be charted. If full determination of the state of a system requires N independent variables, then the phase space has N dimensions. The method of time-delayed embedding allows the reconstruction of phase-space profiles from a single, one-dimensional variable, in this case CBT, according to Takens' theorem [21]. Briefly, if a system comprises multiple interdependent variables and one has access only to a single observable x from the system (i.e., CBT), then the multidimensional dynamics of that system can be reconstructed from the single measured dimension by plotting the variable x against itself a certain number of times and at a certain time delay. This reconstructed variable is called the Takens vector; in our case, plotting was done 24 times with a time delay of 1 h. The following features were calculated from the matrix with the nonlinearTseries R package (https://cran.r-project.org/web/packages/nonlinearTseries/index.html), as described below: Recurrence (REC): percentage of recurrence points in a recurrence plot.
It is a measure of the density of recurrence points in the RP and corresponds to the correlation sum. Determinism (DET): percentage of recurrence points that form diagonal lines. Processes with uncorrelated or weakly correlated, stochastic, or chaotic behavior cause no or very short diagonals, whereas deterministic processes cause longer diagonals and fewer single, isolated recurrence points. DET is a measure of predictability in the system. In general, deterministic systems are often characterized by repeated similar state evolution, corresponding to local predictability. Laminarity (LAM): percentage of recurrence points that form vertical lines. LAM decreases if the RP consists of more single recurrence points than vertical structures. Length of the longest diagonal line (Lmax): this measure is related to the exponential divergence of the phase-space trajectory. The faster the trajectory segments diverge, the shorter the diagonal lines and the higher the measure DIV. Divergence (DIV): the inverse of Lmax, related to the largest positive Lyapunov exponent, which is a property of complex dynamic systems and characterizes the rate of separation of close trajectories. Briefly, dynamic systems with high Lyapunov exponents exhibit fast divergence, whereas chaotic systems have the highest exponents, reflecting the limited predictability of their state. (A minimal computational sketch of these features is given below.)

There were three main arms in the analysis of CBT circadian and RQA features: (1) to compare circadian and RQA parameters between different time points of measurement, both between and within study groups; (2) to evaluate which metrics correlate with different clinical outcomes of interest; and (3) to reveal whether there is a relation between recurrence within each 24-h period and 24-h circadian rhythmicity. A Kolmogorov–Smirnov test confirmed that CBT, age, and the different severity-of-disease scores followed a normal distribution. In this respect, one-way ANOVA was employed to detect differences between the three groups of patients. Regarding circadian and RQA features, as normality remained inconclusive considering the 3 groups involved, non-parametric statistical analysis via the Wilcoxon signed-rank (within groups) and Kruskal–Wallis (between groups) tests was used. The package corrplot was employed for an enhanced visualization of the correlation analysis, based on estimation of Pearson's r, between circadian and RQA features and different clinical variables, such as severity scores at entry, length of ICU and hospital stay, and mortality [22]. Positive correlations are displayed in blue and negative correlations in red. The color intensity and the size of the ellipse are proportional to the correlation. Due to multiple comparisons, and in order to protect against type I error (false-positive results), a post hoc Bonferroni correction was conducted, yielding an adjusted p value obtained by dividing the original p value (0.05) by the number of analyses performed. In our case, since we evaluated 3 circadian and 5 RQA features along with 7 clinical parameters, the corrected p value was 0.003 (0.05/15). Data are presented as mean ± standard deviation. All analyses were performed with R version 3.4.4.

Five of the 15 patients from group A and 5 of the 11 patients from group B, constituting the patient population that we studied in our previous investigation [8], were excluded from the analyses due to fever (CBT ≥ 38.5 °C).
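Before turning to the results, here is the promised from-scratch base-R sketch of a recurrence matrix together with REC and DET (the study used the nonlinearTseries package; the embedding dimension m, the recurrence threshold eps, and the minimum line length lmin below are illustrative assumptions of ours, not the study's settings):

```r
# Sketch only: recurrence matrix plus REC and DET for an hourly CBT series.
rqa_sketch <- function(x, m = 3, eps = 0.2, lmin = 2) {
  emb <- embed(x, m)                          # time-delay embedding, delay = 1 h
  R   <- as.matrix(dist(emb)) <= eps          # recurrence matrix (TRUE/FALSE)
  n   <- nrow(R)
  n_rec <- sum(R[upper.tri(R)])               # recurrence points (upper triangle)
  REC <- n_rec / sum(upper.tri(R))            # recurrence rate
  # Count points lying on diagonal lines of length >= lmin
  on_diag <- 0
  for (k in 1:(n - 1)) {
    d <- R[cbind(1:(n - k), (1 + k):n)]       # k-th upper off-diagonal
    r <- rle(as.numeric(d))                   # runs of consecutive recurrences
    on_diag <- on_diag + sum(r$lengths[r$values == 1 & r$lengths >= lmin])
  }
  DET <- if (n_rec > 0) on_diag / n_rec else NA  # determinism
  c(REC = REC, DET = DET)                     # LAM and Lmax follow the same logic
}

# Example on a noisy periodic 24-h series (illustrative)
t   <- 0:23
cbt <- 37 + 0.4 * cos(2 * pi * (t - 19) / 24) + rnorm(24, sd = 0.05)
rqa_sketch(cbt)
```

A strongly periodic series yields long diagonal runs and hence DET near 1, while an erratic series scatters isolated points and drives DET down, which is the intuition behind the group comparisons reported next.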
Controls from group C included three multiple-trauma patients and two subjects who underwent emergency vascular surgery for abdominal aortic aneurysm rupture. From group B, one patient died in hospital due to decompensated heart failure, and two patients, who had suffered multiple trauma injuries, died during the ICU stay with multiple organ failure. However, CBT recordings had been performed 1–2 days before death in the ICU. Since we did not perform any regression analysis to assess the potential impact of circadian deregulation or complexity pattern changes of CBT curves upon ICU entry on mortality, we cannot estimate the association between survival and our results. Nevertheless, entry APACHE II, SAPS II, and SOFA scores and the Ramsay scale, as well as length of hospital and ICU stay, did not differ between the three groups (Table 1). In patients constituting group A, septic shock was diagnosed by the attending physician before their transfer to the ICU, according to the Third International Consensus Definitions for Sepsis and Septic Shock (Table 1) [14]. Septic shock in patients from group B was diagnosed 7.2 ± 4.3 days after ICU admission and was attributed to ventilator-associated pneumonia (n = 4) and bloodstream infections (n = 2), according to published guidelines [23, 24]. No correlation was found between duration of illness prior to septic shock development and any circadian or RQA feature.

Table 1 Patients' characteristics

Correlations between estimated and observed CBT data

The implementation of the single cosinor model was found to exhibit a moderate to high correlation with real CBT data. Thus, the mean and standard deviation (mean ± SD) of the correlation values were as follows: (1) for group A, 0.66 ± 0.19; (2) for group B, 0.62 ± 0.19; and (3) for group C, 0.69 ± 0.19. Moreover, the percent rhythm (coefficient of determination) was as follows: for group A, 0.58 ± 0.19; for group B, 0.56 ± 0.15; and for group C, 0.59 ± 0.20.

CBT circadian rhythm profiles

The Wilcoxon and Kruskal–Wallis tests of the hourly averages across the 24 h of CBT recordings showed no significant alterations in any circadian feature within or between groups, except for amplitude. Thus, patients from group C (controls) exhibited higher CBT amplitude upon entry (0.45 ± 0.19) relative to both group A (0.28 ± 0.18, p = 0.041) and group B (0.32 ± 0.13, p = 0.042). Furthermore, patients from group A (initial septic shock) had reduced CBT amplitude upon entry compared to both group B (in-hospital septic shock, p = 0.054) and group C (controls, p = 0.041). All groups exhibited similar CBT peak times (acrophase) across the different periods of measurement (18.00–20.00, Fig. 1a, b). CBT amplitude in groups B and C upon entry was negatively correlated with SAPS II (r = − 0.72 and − 0.84) and APACHE II scores (r = − 0.70 and − 0.63), respectively, as well as with duration of ICU and hospital stay in group B (r = − 0.62 and − 0.64) and entry SOFA score in group C (r = − 0.82; Fig. 2a, b). For all correlations, p was less than 0.003. Such findings indicate that increased CBT circadian fluctuations during entry are related to lower severity of disease and better clinical outcomes, whereas septic shock did not seem to induce any significant deregulation of the rhythm profiles of CBT within either group A or group B.

Fig. 1 Longitudinal trends of mean core body temperature (CBT) values (y axis) per hour within 24 h during different time points of measurement (x axis) for patients from groups A and B.
Fig. 1 Longitudinal trends of mean core body temperature (CBT) values (y axis) per hour within 24 h at the different time points of measurement (x axis) for patients from groups A and B. a Linear line plots of mean CBT values, with standard deviations (SDs), in group A (initial septic shock, n = 10); peak time is around 19.00 at entry/septic shock and exit, whereas the onset of measurements is between 9.00 and 10.00. b Linear line plots of mean CBT values, with SDs, in group B (in-hospital septic shock, n = 6); similarly, peak time is around 18.00–20.00 at entry, septic shock onset, and exit, with the same onset of recordings as in group A. The value 0 corresponds to midnight.

Fig. 2 Corrplot (correlation matrix) of CBT circadian and RQA metrics upon entry in the ICU, along with clinical outcomes. a Corrplot in group B (in-hospital septic shock, n = 6). Positive correlations are displayed in blue and negative correlations in red; color intensity and ellipse size are proportional to the correlation coefficients. b Corrplot in group C (controls, n = 5).

CBT RQA profiles

RQA metrics did not differ between the three groups of patients except for LAM, which was significantly increased at exit relative to entry values for all groups of patients (n = 21, 0.92 ± 0.12 vs 0.83 ± 0.23, p = 0.034). Thus, exit CBT curves exhibited more periodic and predictable behavior. Regarding group A, both LAM and REC were significantly reduced at entry relative to exit (0.67 ± 0.13 vs 0.88 ± 0.07, p = 0.042, and 0.18 ± 0.06 vs 0.29 ± 0.16, p = 0.009, respectively). Furthermore, the entry SOFA score was negatively correlated with the LAM (r = − 0.67, p < 0.003) of CBT upon entry. Such findings indicate an increased probability of state recurrence and more vertical lines at exit, whereas upon entry there are more isolated points, reflecting decreased CBT periodicity during septic shock (Fig. 3).

Fig. 3 Recurrence plot (RP) of the CBT curves of a patient from group A (initial septic shock). a Entry RP. b Exit RP. Both x and y axes denote time (24 h of measurements). By simple inspection, upon entry there are fewer diagonal and vertical lines and more single recurrence dots, whereas at exit the combination of numerous vertical and horizontal lines forms rectangular clusters of recurrence points, indicating more periodicity of the CBT curves. c CBT longitudinal values over 24 h; blue corresponds to entry and red to exit. Measurements started at around 9.00 a.m. The trace at exit appears less smooth and more "erratic" than at entry; this might signal increased rhythmicity with many fluctuations upon exit from the ICU and is associated with the more periodic pattern of CBT change in the RQA analysis depicted in b. The Takens vector's index is temperature as a single variable describing the system's dynamics, projected in a two-dimensional phase space (see text for details).

Considering group B, the APACHE II score upon entry was negatively correlated with the Lmax (r = − 0.94), REC (r = − 0.92), and LAM (r = − 0.90) and positively correlated with the DIV (r = 0.84) measures of the entry CBT curves (Fig. 2a). In addition, DIV was positively correlated with SAPS II and the entry SOFA score (r = 0.82 and 0.63, respectively). Such findings indicate that the more severely ill patients exhibited more single recurrence points than vertical structures and high divergence in their phase space, which could reflect less periodic and more chaotic behavior.
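A Fig. 2-style correlation matrix can be drawn with the corrplot package named in the methods, assuming it is installed. The sketch below uses invented data and column names purely for illustration; only the plotting convention (blue positive, red negative, ellipse size tracking the coefficient) and the Bonferroni threshold mirror the description above.

# Sketch of a Fig. 2-style correlation matrix; data and column names are
# invented for illustration.
library(corrplot)

set.seed(3)
df <- data.frame(amplitude = rnorm(11), REC = rnorm(11), DET = rnorm(11),
                 LAM = rnorm(11), Lmax = rnorm(11), DIV = rnorm(11),
                 APACHE_II = rnorm(11), SAPS_II = rnorm(11), SOFA = rnorm(11))
M <- cor(df, method = "pearson")   # Pearson's r, as in the paper

# Ellipse glyphs: color encodes the sign, intensity and shape encode |r|
corrplot(M, method = "ellipse", type = "upper",
         col = colorRampPalette(c("red", "white", "blue"))(200))

# Bonferroni-corrected significance threshold described in the methods:
alpha_corrected <- 0.05 / 15   # 3 circadian + 5 RQA + 7 clinical = 15 analyses
alpha_corrected                # 0.00333..., reported as 0.003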
Moreover, CBT amplitude at entry was negatively correlated with DIV (r = − 0.73) and positively correlated with DET, Lmax, and LAM (r = 0.73, 0.70, and 0.78, respectively), meaning that increased CBT circadian rhythmicity is associated with a more deterministic and predictable CBT profile. Finally, both ICU and hospital outcome were found to correlate with different RQA features: ICU and hospital survival were negatively correlated with the DIV (r = − 0.82, − 0.61) and positively correlated with the REC (r = 0.61, 0.65), Lmax (r = 0.72, 0.66), and LAM (r = 0.73, 0.75) of the entry CBT signals, respectively (Fig. 2a). For all correlations, p was less than 0.003.

In group C, the entry SOFA score was negatively correlated with CBT amplitude, DET, and LAM values upon entry (r = − 0.92, − 0.87, and − 0.94, respectively), reflecting reduced rhythmicity and periodicity of the CBT curves in the more severely ill patients (Fig. 2b). Moreover, high DIV entry values were associated with increased ICU and hospital length of stay (r = 0.63 and 0.84, respectively). Finally, CBT amplitude upon entry was positively correlated with both DET and LAM entry values (r = 0.85 and 0.82, respectively). For all correlations, p was less than 0.003.

Circadian rhythms are disrupted by critical illness, patient care interactions, and unregulated light-dark patterns [5]. Such circadian deregulation usually manifests as a change in the amplitude of the 24-h cycle of different circadian biomarkers, such as melatonin and cortisol urine excretion or temperature curves, characterized by an almost flattened time series with reduced fluctuations. This loss of amplitude has been described as an index of impaired capacity of the organism for adaptation during stress [8, 25]. Usually, serum or urine melatonin and cortisol have been used as circadian biomarkers in different experimental and clinical studies. In addition, circadian output can be assessed through time-series profiles of CBT, as an expression of central circadian pacemaker function. Circadian changes in CBT are associated with inputs from the SCN to thermoregulatory centers that modulate the set point and thresholds for cutaneous vasodilatation, involving mainly mechanisms of heat loss under resting conditions. In humans, circadian rhythms of CBT peak in the late evening and reach their minima in the early morning, around 04.00. Daily temperature fluctuations can be affected by feeding, physical activity, and sleep, whereas with increasing age, both average body temperature and the amount of its daily variability tend to decrease [26].

A few authors have evaluated CBT circadian rhythm profiles, alone or in association with melatonin or cortisol, in different and heterogeneous groups of critically ill patients. Using a retrospective design, Tweedie et al. [27] characterized the CBT 24-h profiles of 15 ICU patients for 8 to 26 days. They reported that 80% of all patient days had a significant circadian rhythm, with substantially varying acrophases and normal amplitudes. However, the investigators did not determine whether the CBT parameters were related to patient condition or the ICU environment. Nuttall et al. [28] retrospectively explored the clinical significance of circadian rhythms in 137 patients with (n = 17) and without (n = 120) ICU psychosis, by comparing over 24 h the times of the temperature and urine output nadirs. They found that both groups had altered circadian rhythms and that, although all "patient days" had a significant rhythm, 83% of those days had abnormal cosinor-derived parameters.
Paul and Lemmer [29] prospectively measured tympanic temperature every hour, and plasma cortisol and melatonin levels every 2 h, for 24 h in 13 sedated ICU patients following surgery or respiratory failure and in 11 patients with brain injury. They found that the 24-h circadian profiles of all measured variables were significantly disturbed, with no physiological day-night rhythm in either group of patients relative to healthy controls, whereas circadian rhythm alterations were more pronounced in the patients with brain injuries. Additionally, although the investigators sampled each patient for 24 h and indexed each patient's illness severity with an APACHE II score for the study day, they did not report whether the patients developed sepsis or septic shock. Pina and colleagues [6] prospectively analyzed hourly CBT and 4-h interval urine cortisol and melatonin profiles in eight burn patients for 24 h in three sessions, occurring between ICU days 1–3, on day 10, and on days 20–30. They also used reference values from 14 healthy young controls. Circadian rhythms of all measured variables were abolished in all patients relative to controls. Finally, Gazendam and coworkers [9], in a recent investigation of circadian rhythm disruption in a general ICU population, studied CBT profiles over a 48-h period in 21 patients and found significant acrophase shifts in all cases (both advances and delays). Moreover, they showed that the APACHE III score was significantly predictive of circadian misplacement.

In this study, using the same patient population as our recent publication on melatonin and cortisol circadian deregulation during septic shock, we tried to evaluate for the first time the potential circadian disruption of CBT in patients who either were admitted to the ICU with septic shock or developed septic shock during the ICU stay. In addition, we included a control group of patients who did not become septic during ICU hospitalization. We demonstrated that an increased amplitude of CBT circadian fluctuations upon entry, in patients from group B and in controls, was related to lower severity of disease and better clinical outcomes, such as reduced ICU and hospital length of stay. Our findings are in accordance with those of Gazendam and colleagues regarding the correlation between circadian abnormalities upon ICU entry and severity of illness, although in our case it was the amplitude rather than the acrophase that exhibited the most significant associations. This could be attributed to the fact that in Gazendam et al.'s study most patients were recruited 20 days on average after ICU entry [9], whereas in our study the significant correlations concerned admission CBT amplitude values, which seem to be reduced early in the course of critical illness [25].

Regarding patients' inclusion criteria, we chose to recruit afebrile patients, since fever, fever-reducing medications, and hypothermia may all mask CBT circadian rhythmicity [9]. Sedation can also affect CBT regulation and consequently circadian rhythm profiles [2,3,4]. However, at ICU entry all patients were sedated and under mechanical ventilation, whereas no differences were found in any circadian feature between the entry and recovery phases, when all subjects were awake with spontaneous respiration. Consequently, a potential impact of sedatives on our results seems unlikely. Different timing of measurements between recording periods might also affect circadian features through a potential impact of the ICU environment.
However, the length of both ICU and hospital stay did not differ between the groups of patients. Furthermore, no correlations were found between the duration of illness before septic shock development and any circadian or RQA feature in patients from group B. Although the ICU milieu might cause masking effects on melatonin profiles, CBT rhythmicity is not significantly affected by the lack of different timekeepers or other external factors that are known to influence CBT daily fluctuations [9]. Thus, physical activity, and particularly the upright position, is nearly abolished; ambient temperature in the ICU remains more or less the same; and sleep, as has been previously described [30], is fragmented and almost evenly distributed over the whole day and night, limiting its effects on CBT rhythm profiles. However, we cannot exclude a potential impact of gradual restoration of sleep or activity before discharge on our findings. Nevertheless, the lack of significant CBT circadian alterations within all groups limits the potential for such effects. Finally, food intake might affect CBT; however, all patients were tube-fed during the day, based on the same nutrition protocols [9, 31].

Inflammatory stimuli can induce a state of stress in the central nervous system (CNS) through afferent peripheral neural signaling. For instance, different pro-inflammatory cytokines produced during the acute phase of sepsis may cross the blood-brain barrier at leaky points, reducing the spiking of SCN neurons. Since the SCN regulates the hypothalamic-preoptic thermoregulatory control center, a deregulation of the CBT circadian profile may take place, with reduced amplitude and/or a shift in acrophase [32]. In our study, patients admitted with septic shock exhibited lower CBT amplitude upon entry than controls and than those who developed septic shock during the ICU stay. Such findings could probably be attributed to high levels of pro-inflammatory cytokines during the onset of septic shock. In addition, CBT circadian features did not change significantly within any group, although a non-significant trend toward increased values upon exit from the ICU was noticed. According to previous studies, this lack of significant longitudinal changes could be due to intra-individual variability in thermoregulation [33, 34], to time-varying levels of corticotropin-releasing factor (CRF), which has been found to affect temperature fluctuations [35], or to the long time needed for restoration of circadian rhythmicity [36]. In this respect, Mundigler and colleagues [36] found that, during recovery, the melatonin circadian rhythm of septic patients showed a tendency toward a slightly restored circadian pattern in some individuals but no significant rhythm in any patient. Finally, the lack of significant changes in CBT circadian rhythms within all groups could also be associated with the very small sample size, leading to potentially false-negative results.

Thermoregulation can be considered a complex system, since it involves both core and peripheral temperature control mechanisms through multiple feedback loops. Loss of complexity during illness might be attributed to altered coupling between the system's components [1]. In this respect, sepsis and critical illness have been described as a manifestation of an "uncoupling of oscillators" responsible for a system's dynamics [1, 37,38,39,40,41].
Different measures for assessing the complexity of biological signals, such as entropy analysis, have been investigated in different populations of critically ill patients [38]. Among the different biosignals, temperature has so far been the least investigated. We [2, 39] and others [3, 4] have found that, in critically ill patients, low entropy values of temperature signals were associated with increased severity of disease, with an accuracy similar to that of SOFA. It has been proposed that the breakdown of long-range correlations and coupling within a physiologic system may lead to either a random/chaotic or a highly regular/periodic state, and for this reason different and multiple metrics should be used in parallel to assess the complexity of physiologic systems [1]. Non-linear methods for biological data analysis derived from chaos theory seem to be sensitive enough to uncover early phases of disease development; nevertheless, such calculations require large data sets.

In this study, we decided to estimate CBT complexity by applying, for the first time, a novel non-linear method for data analysis, recurrence quantification analysis (RQA). RQA has the advantage over other techniques of capturing the inherent dynamics of a complex system without the need for a long data series, while remaining relatively immune to noise [10, 11]. It has been used for evaluating heart rate variability in patients suffering from cardiovascular diseases, diabetes, and epilepsy [13, 42,43,44], and recently its adoption as an analytical method for both heart and respiratory rate signals has been proposed for earlier prediction of weaning outcome in the ICU [45].

In our study, we found that CBT time series were more periodic at exit and recovery than at entry to the ICU, for the whole study population. The more deterministic and periodic patterns of CBT, reflected in increased values of DET, LAM, Lmax, and REC in groups B and C upon ICU admission, correlated with high CBT amplitude. The more severely ill patients, with longer ICU and hospital stays, exhibited a more random pattern in their entry CBT signals, with increased DIV and reduced Lmax. Furthermore, in group A, an initially more random behavior was found to characterize the CBT recordings upon entry, whereas before exit more deterministic patterns seemed to occur. Similar changes were found in both groups B and C but did not reach statistical significance based on the Bonferroni criterion. Lack of sedation at exit might constitute a potential bias on our findings; however, one would expect increased rather than decreased randomness and divergence in awake versus sedated individuals [38, 39].

In conclusion, it seems that preserved circadian rhythmicity upon entry is associated with a more deterministic CBT profile, whereas circadian deregulation is correlated with a more random pattern of change, something that could signal an impaired capability of adaptation to stress. Nevertheless, our results need to be validated in different and larger data sets, both for standardization of the techniques involved and for their adoption for early and more accurate evaluation of the inherent dynamics of CBT during different states of stress. Moreover, RQA should be tested in association with other complexity measures in future studies.
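One such complementary complexity measure is the entropy analysis mentioned above for temperature signals. The following base-R sample entropy function is a simplified illustration, not the multiscale entropy pipeline of the cited studies; the template length m and tolerance r are assumed default values.

# Illustrative base-R sample entropy for a biosignal; simplified relative to
# the multiscale entropy used in the cited temperature studies.
sample_entropy <- function(x, m = 2, r = 0.2 * sd(x)) {
  n <- length(x)
  count_pairs <- function(mm) {
    # Templates of length mm, compared with the Chebyshev (max) distance
    tpl <- sapply(0:(mm - 1), function(k) x[(1:(n - mm + 1)) + k])
    d <- as.matrix(dist(tpl, method = "maximum"))
    (sum(d <= r) - nrow(d)) / 2        # matching pairs, self-matches excluded
  }
  B <- count_pairs(m)                  # pairs matching at template length m
  A <- count_pairs(m + 1)              # pairs still matching at length m + 1
  -log(A / B)                          # lower values = more regular signal
}

# A regular (periodic) signal scores lower than white noise
set.seed(4)
t_h <- seq(0, 96, by = 0.25)
c(regular = sample_entropy(cos(2 * pi * t_h / 24)),
  noisy   = sample_entropy(rnorm(length(t_h))))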
Other limitations of our study include the small sample size, which might be responsible for false-negative results (type II error), whereas the multiple comparisons between different variables probably increased the type I error (false-positive results). In any case, we believe that the adoption of a Bonferroni correction in our analyses has strengthened the accuracy and reproducibility of our findings. Furthermore, we consider our data set comparable with those of the majority of similar studies evaluating circadian deregulation, as well as "decomplexification," in critically ill patients [40]. Finally, the absence of fever or hypothermia during the CBT recordings constitutes a major limitation in terms of assessing temperature rhythmicity and complexity during critical illness and septic shock. Thus, we suggest that future studies should measure and compare circadian and complexity metrics of CBT curves in both febrile and afebrile patients, in order to evaluate the possible applicability of such methods for more accurate and earlier outcome prediction in critical illness.

In this study, we demonstrated that an increased amplitude of CBT circadian fluctuations upon ICU entry in patients who developed in-hospital septic shock (group B) and in controls (group C) is related to lower severity of disease and better clinical outcomes. Furthermore, more deterministic and periodic patterns of the CBT profiles upon ICU entry were correlated with high CBT rhythmicity and shorter ICU and hospital stays. Although correlation does not imply causality, the present study suggests that increased CBT randomness is related to flattened CBT curves in the most critically ill patients in the early phases of the disease. Nevertheless, increased intra-individual and inter-individual variability limits the generalization of our results to different groups of patients, and further investigations with larger sample sizes are required to investigate the pathophysiological and potential clinical implications of our findings.

The data sets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

APACHE II: Acute Physiology and Chronic Health Evaluation II
CBT: Core body temperature
CNS: Central nervous system
CRF: Corticotropin-releasing factor
DET: Determinism
DIV: Divergence
LAM: Laminarity
REC: Recurrence rate
RP: Recurrence plot
RQA: Recurrence quantification analysis
SAPS II: Simplified Acute Physiology Score II
SCN: Suprachiasmatic nuclei
SOFA: Sequential Organ Failure Assessment

Goldberger AL, Peng CK, Lipsitz LA (2002) What is physiologic complexity and how does it change with aging and disease? Neurobiol Aging 23(1):23–26
Papaioannou V, Chouvarda I, Maglaveras N, Pneumatikos IA (2012) Temperature variability analysis using wavelets and multiscale entropy in patients with systemic inflammatory response syndrome, sepsis and septic shock. Crit Care 16(2):R51. https://doi.org/10.1186/cc11255
Varela M, Jimenez L, Farina R (2003) Complexity analysis of the temperature curve: new information from body temperature. Eur J Appl Physiol 89:230–237
Varela M, Calvo M, Chana M, Gomez-Mestre I, Asensio R, Galdos P (2005) Clinical implications of temperature curve complexity in critically ill patients. Crit Care Med 33(12):2764–2771
Papaioannou V, Mebazaa A, Plaud B, Legrand M (2014) "Chronomics" in ICU: circadian aspects of immune response and therapeutic perspectives in the critically ill.
Intensive Care Med Exp 2(1):18
Pina G, Brun J, Tissot S, Claustrat B (2010) Long-term alteration of daily melatonin, 6-sulfatoxymelatonin, cortisol, and temperature profiles in burn patients: a preliminary report. Chronobiol Int 27(2):378–392
Verceles AC, Silhan L, Terrin M, Netzer G, Shanholtz C, Scharf SM (2012) Circadian rhythm disruption in severe sepsis: the effect of ambient light on urinary 6-sulfatoxymelatonin secretion. Intensive Care Med 38(5):804–810
Sertaridou E, Chouvarda I, Arvanitidis K, Filidou E, Kolios G, Pneumatikos I, Papaioannou V (2018) Melatonin and cortisol exhibit different circadian rhythm profiles during septic shock depending on timing of onset: a prospective observational study. Ann Intensive Care 8(1):118. https://doi.org/10.1186/s13613-018-0462-y
Gazendam JAC, Van Dongen HPA, Grant DA, Freedman NS, Zwaveling JH, Schwab RJ (2013) Altered circadian rhythmicity in patients in the ICU. Chest 144(2):483–489
Eckmann JP, Kamphorst SO, Ruelle D (1987) Recurrence plots of dynamical systems. Europhys Lett 5(9):973–977
Marwan N, Romano MC, Thiel M, Kurths J (2007) Recurrence plots for the analysis of complex systems. Phys Rep 438(5–6):237–329
Webber CL Jr, Zbilut JP (1994) Dynamical assessment of physiological systems and states using recurrence plot strategies. J Appl Physiol 76(2):965–973
Marwan N, Wessel N, Meyerfeldt U, Schirdewan A, Kurths J (2002) Recurrence plot based measures of complexity and its application to heart rate variability data. Phys Rev E 66(2):026702
Singer M, Deutschman CS, Seymour CW, Shankar-Hari M, Annane D, Bauer M, Bellomo R, Bernard GR, Chiche JD, Coopersmith CM, Hotchkiss RS, Levy MM, Marshall JC, Martin GS, Opal SM, Rubenfeld GD, van der Poll T, Vincent JL, Angus DC (2016) The third international consensus definitions for sepsis and septic shock (sepsis-3). JAMA 315(8):801–810
Lefrant JY, Muller L, Coussaye JE, Benbabaali M, Lebris C, Zeitoun N, Mari C, Saissi G, Ripart J, Eledjam JJ (2003) Temperature measurement in intensive care patients: comparison of urinary bladder, oesophageal, rectal, axillary, and inguinal methods versus pulmonary artery core method. Intensive Care Med 29:414–418
Fernández JR, Hermida RC, Mojón A (2009) Chronobiological analysis techniques. Application to blood pressure. Philos Trans A Math Phys Eng Sci 367(1887):431–445
Marler MR, Gehrman P, Martin JL, Ancoli-Israel S (2006) The sigmoidally transformed cosine curve: a mathematical model of circadian rhythms with symmetric non-sinusoidal shapes. Stat Med 25(22):3893–3904
Moutac (2017) Extended tools for cosinor analysis of rhythms. Package "cosinor2". https://cran.r-project.org/web/packages/cosinor/cosinor.pdf
Cornelissen G (2014) Cosinor-based rhythmometry. Theor Biol Med Model 11:16
Wendi D, Marwan N (2018) Extended recurrence plot and quantification for noisy continuous dynamical systems. Chaos 28(8):85722
Takens F (1981) Detecting strange attractors in turbulence. In: Rand D, Young LS (eds) Dynamical systems and turbulence, Warwick 1981, vol 898. Lecture Notes in Mathematics, Springer, Berlin, Heidelberg
Wei T, Simko V (2017) R package "corrplot": visualization of a correlation matrix (Version 0.84). https://github.com/taiyun/corrplot
O'Grady NP, Barie PS, Bartlett JG, Bleck T, Carroll K, Kalil AC, Linden P, Maki DG, Nierman D, Pasculle W, Masur H (2008) Guidelines for evaluation of new fever in critically ill adult patients: 2008 update from the American College of Critical Care Medicine and the Infectious Diseases Society of America.
Crit Care Med 36:1330–1349
Torres A, Niederman MS, Chastre J, Ewig S, Fernandez-Vandellos P, Hanberger H, Kollef M, Li Bassi G, Luna CM, Martin-Loeches I, Paiva JA, Read RC, Rigau D, Timsit JF, Welte T, Wunderink R (2018) Summary of the international clinical guidelines for the management of hospital-acquired and ventilator-acquired pneumonia. ERJ Open Res 4:00028–02018
McKenna HT, Reiss IKM, Martin DS (2017) The significance of circadian rhythms and dysrhythmias in critical illness. J Intensive Care Soc 18:121–129
Weinert D, Waterhouse J (2007) The circadian rhythm of core temperature: effects of physical activity and aging. Physiol Behav 90:246–256
Tweedie IE, Bell CF, Clegg A, Campbell IT, Minors DS, Waterhouse JM (1989) Retrospective study of temperature rhythms of intensive care patients. Crit Care Med 17(11):1159–1165
Nuttall GA, Kumar M, Murray MJ (1998) No difference exists in the alteration of circadian rhythm between patients with and without intensive care unit psychosis. Crit Care Med 26(8):1351–1355
Paul T, Lemmer B (2007) Disturbance of circadian rhythms in analgosedated intensive care unit patients with and without craniocerebral injury. Chronobiol Int 24(1):45–61
Freedman NS, Gazendam J, Levan L, Pack AI, Schwab RJ (2001) Abnormal sleep/wake cycles and the effect of environmental noise on sleep disruption in the intensive care unit. Am J Respir Crit Care Med 163(2):451–457
Schibler U, Ripperger J, Brown SA (2003) Peripheral circadian oscillators in mammals: time and food. J Biol Rhythms 18(3):250–260
Esquifino AI, Cano P, Jimenez-Ortega V, Fernandez-Mateos P, Cardinali DP (2007) Neuro-endocrine-immune correlates of circadian physiology: studies in experimental models of arthritis, ethanol feeding, aging, social isolation and calorie restriction. Endocrine 32:1–19
Lebelle PDJ, Prevot E (2001) The temperature rhythms delay of intensive care patients after surgery. Sleep 24:A200–A201
Gazendam JAC, Freedman NS (2002) The circadian rhythm of core body temperature in the intensive care unit. J Intensive Care Med 28:S153
Buwalda B, de Boer SF, Van Kalkeren AA, Koolhaas JM (1997) Physiological and behavioral effects of chronic intracerebroventricular infusion of corticotropin-releasing factor in the rat. Psychoneuroendocrinology 22(5):297–309
Mundigler G, Delle-Karth G, Koreny M, Zehetgruber M, Steindl-Munda P, Marktl W, Ferti L, Siostrzonek P (2002) Impaired circadian rhythm of melatonin secretion in sedated critically ill patients with severe sepsis. Crit Care Med 30:536–540
Pincus SM, Goldberger AL (1994) Physiological time-series: what does regularity quantify? Am J Physiol 266:1643–1645
Seely AJE, Macklem PT (2004) Complex systems and the technology of variability analysis. Crit Care 8:R367–R384
Papaioannou V, Chouvarda I, Maglaveras N, Baltopoulos G, Pneumatikos I (2013) Temperature multiscale entropy analysis: a promising marker for early prediction of mortality in septic patients. Physiol Meas 34:1449–1466
Godin PJ, Buchman TG (1996) Uncoupling of biological oscillators: a complementary hypothesis concerning the pathogenesis of multiple organ dysfunction syndrome. Crit Care Med 24(7):1107–1116
Goldstein B, Fiser DH, Kelly MM, Mickelsen D, Ruttimann U, Pollack MM (1998) Decomplexification in critical illness and injury: relationship between heart rate variability, severity of illness, and outcome. Crit Care Med 26(2):352–357
Mohebbi M, Ghassemian H (2011) Prediction of paroxysmal atrial fibrillation using recurrence plot-based features of the RR interval signal.
Physiol Meas 32:1147–1162
Javorka M, Trunkvalterova Z, Tonhajzerova I, Lazarova Z, Javorkova J, Javorka K (2008) Recurrences in heart rate dynamics are changed in patients with diabetes mellitus. Clin Physiol Funct Imaging 28(5):326–331
Acharya UR, Sree SV, Chattopadhyay S, Yu W, Ang PC (2011) Application of recurrence quantification analysis for the automated identification of epileptic EEG signals. Int J Neural Syst 21(3):199–211
Arcentales A, Giraldo BF, Caminal P, Benito S, Voss A (2011) Recurrence quantification analysis of heart rate variability and respiratory flow series in patients on weaning trials. Conf Proc IEEE Eng Med Biol Soc 2011:2724–2727

Intensive Care Unit, Alexandroupolis University Hospital, Democritus University of Thrace, Dragana, 68100, Alexandroupolis, Greece: Vasilios E. Papaioannou, Eleni N. Sertaridou & Ioannis N. Pneumatikos
Laboratory of Computing, Medical Informatics and Biomedical Imaging Technologies, Faculty of Medicine, Aristotle University of Thessaloniki, 54124, Thessaloniki, Greece: Ioanna G. Chouvarda
Laboratory of Pharmacology, Faculty of Medicine, Democritus University of Thrace, Dragana, 68100, Alexandroupolis, Greece: George C. Kolios

VP is the principal investigator who reviewed the literature and wrote the manuscript. ES performed the patient screening, enrollment, data collection, and analysis. IC supervised all the chronobiologic, complexity, and statistical analyses. GK reviewed and edited the manuscript. IP reviewed and edited the manuscript. All authors read and approved the final manuscript.

Correspondence to Eleni N. Sertaridou.

This study was approved by our Institutional Review Board. Informed consent was waived.

Papaioannou, V.E., Sertaridou, E.N., Chouvarda, I.G. et al. Determining rhythmicity and determinism of temperature curves in septic and non-septic critically ill patients through chronobiological and recurrence quantification analysis: a pilot study. ICMx 7, 53 (2019). https://doi.org/10.1186/s40635-019-0267-9

Septic shock
CommonCrawl
September 2021, 14(9): 3267-3284. doi: 10.3934/dcdss.2020335

Chaotic oscillations of linear hyperbolic PDE with variable coefficients and implicit boundary conditions

Qigui Yang 1 and Qiaomin Xiang 2
1. Department of Mathematics, South China University of Technology, Guangzhou, 510640, China
2. Department of Mathematics and Big Data, Foshan University, Foshan, 528000, China
* Corresponding author: Qigui Yang

Received March 2019; Revised November 2019; Early access April 2020; Published September 2021

In this paper, the chaotic oscillations of the initial-boundary value problem of a linear hyperbolic partial differential equation (PDE) with variable coefficients are investigated, where the boundary conditions at both ends are nonlinear implicit boundary conditions (IBCs). Two cases are considered separately: IBCs that can be expressed by general nonlinear boundary conditions (NBCs), and IBCs that cannot be expressed by explicit boundary conditions (EBCs). Finally, numerical examples verify the effectiveness of the theoretical predictions.

Keywords: Chaotic oscillation, hyperbolic PDE, variable coefficient, implicit boundary condition, nonlinear boundary condition.

Mathematics Subject Classification: Primary: 34C28, 35L70; Secondary: 35L05.

Citation: Qigui Yang, Qiaomin Xiang. Chaotic oscillations of linear hyperbolic PDE with variable coefficients and implicit boundary conditions. Discrete & Continuous Dynamical Systems - S, 2021, 14 (9) : 3267-3284. doi: 10.3934/dcdss.2020335
Figure 1. The spatiotemporal profiles of system (23) with $ (\alpha_1,\beta_1) = (0.1,1) $, $ (\alpha_2,\beta_2) = (0.5,1) $, $ x\in [0,1] $ and $ t\in [60,64] $: (a) $ w_{x}(x,t) $; (b) $ w_{t}(x,t) $.

Figure 2. The spatiotemporal profiles of system (23) with $ \gamma_1 = 1.1\pi $ and $ \gamma_2 = 0.4\pi $, $ x\in [0,1] $ and $ t\in [60,64] $: (a) $ w_{x}(x,t) $; (b) $ w_{t}(x,t) $.

R.G. Duran, J.I. Etcheverry, J.D. Rossi. Numerical approximation of a parabolic problem with a nonlinear boundary condition. Discrete & Continuous Dynamical Systems, 1998, 4 (3) : 497-506. doi: 10.3934/dcds.1998.4.497
Jesús Ildefonso Díaz, L. Tello. On a climate model with a dynamic nonlinear diffusive boundary condition. Discrete & Continuous Dynamical Systems - S, 2008, 1 (2) : 253-262. doi: 10.3934/dcdss.2008.1.253
Hongwei Zhang, Qingying Hu. Asymptotic behavior and nonexistence of wave equation with nonlinear boundary condition. Communications on Pure & Applied Analysis, 2005, 4 (4) : 861-869. doi: 10.3934/cpaa.2005.4.861
Jong-Shenq Guo. Blow-up behavior for a quasilinear parabolic equation with nonlinear boundary condition. Discrete & Continuous Dynamical Systems, 2007, 18 (1) : 71-84. doi: 10.3934/dcds.2007.18.71
Shijin Deng, Linglong Du, Shih-Hsien Yu. Nonlinear stability of Broadwell model with Maxwell diffuse boundary condition. Kinetic & Related Models, 2013, 6 (4) : 865-882. doi: 10.3934/krm.2013.6.865
Marek Fila, Kazuhiro Ishige, Tatsuki Kawakami. Convergence to the Poisson kernel for the Laplace equation with a nonlinear dynamical boundary condition. Communications on Pure & Applied Analysis, 2012, 11 (3) : 1285-1301. doi: 10.3934/cpaa.2012.11.1285
Patrick Winkert. Multiplicity results for a class of elliptic problems with nonlinear boundary condition. Communications on Pure & Applied Analysis, 2013, 12 (2) : 785-802. doi: 10.3934/cpaa.2013.12.785
Zuodong Yang, Jing Mo, Subei Li. Positive solutions of $p$-Laplacian equations with nonlinear boundary condition. Discrete & Continuous Dynamical Systems - B, 2011, 16 (2) : 623-636. doi: 10.3934/dcdsb.2011.16.623
Kazuhiro Ishige, Ryuichi Sato. Heat equation with a nonlinear boundary condition and uniformly local $L^r$ spaces. Discrete & Continuous Dynamical Systems, 2016, 36 (5) : 2627-2652. doi: 10.3934/dcds.2016.36.2627
G. Acosta, Julián Fernández Bonder, P. Groisman, J.D. Rossi. Numerical approximation of a parabolic problem with a nonlinear boundary condition in several space dimensions. Discrete & Continuous Dynamical Systems - B, 2002, 2 (2) : 279-294. doi: 10.3934/dcdsb.2002.2.279
Larissa V. Fardigola. Transformation operators in controllability problems for the wave equations with variable coefficients on a half-axis controlled by the Dirichlet boundary condition. Mathematical Control & Related Fields, 2015, 5 (1) : 31-53. doi: 10.3934/mcrf.2015.5.31
Takeshi Taniguchi. Exponential boundary stabilization for nonlinear wave equations with localized damping and nonlinear boundary condition. Communications on Pure & Applied Analysis, 2017, 16 (5) : 1571-1585. doi: 10.3934/cpaa.2017075
Jiayue Zheng, Shangbin Cui. Bifurcation analysis of a tumor-model free boundary problem with a nonlinear boundary condition. Discrete & Continuous Dynamical Systems - B, 2020, 25 (11) : 4397-4410. doi: 10.3934/dcdsb.2020103
Samia Challal, Abdeslem Lyaghfouri. The heterogeneous dam problem with leaky boundary condition. Communications on Pure & Applied Analysis, 2011, 10 (1) : 93-125. doi: 10.3934/cpaa.2011.10.93
Nicolas Van Goethem. The Frank tensor as a boundary condition in intrinsic linearized elasticity. Journal of Geometric Mechanics, 2016, 8 (4) : 391-411. doi: 10.3934/jgm.2016013
H. Beirão da Veiga. Vorticity and regularity for flows under the Navier boundary condition. Communications on Pure & Applied Analysis, 2006, 5 (4) : 907-918. doi: 10.3934/cpaa.2006.5.907
Wenzhen Gan, Peng Zhou. A revisit to the diffusive logistic model with free boundary condition. Discrete & Continuous Dynamical Systems - B, 2016, 21 (3) : 837-847. doi: 10.3934/dcdsb.2016.21.837
Jean-François Coulombel, Frédéric Lagoutière. The Neumann numerical boundary condition for transport equations. Kinetic & Related Models, 2020, 13 (1) : 1-32. doi: 10.3934/krm.2020001
Raffaela Capitanelli. Robin boundary condition on scale irregular fractals. Communications on Pure & Applied Analysis, 2010, 9 (5) : 1221-1234. doi: 10.3934/cpaa.2010.9.1221
Kei Fong Lam, Hao Wu. Convergence to equilibrium for a bulk–surface Allen–Cahn system coupled through a nonlinear Robin boundary condition. Discrete & Continuous Dynamical Systems, 2020, 40 (3) : 1847-1878. doi: 10.3934/dcds.2020096
CommonCrawl
Dark Energy Survey Year 1 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing (1708.01530)
DES Collaboration: T. M. C. Abbott, F. B. Abdalla, A. Alarcon, J. Aleksić, S. Allam, S. Allen, A. Amara, J. Annis, J. Asorey, S. Avila, D. Bacon, E. Balbinot, M. Banerji, N. Banik, W. Barkhouse, M. Baumer, E. Baxter, K. Bechtol, M. R. Becker, A. Benoit-Lévy, B. A. Benson, G. M. Bernstein, E. Bertin, J. Blazek, S. L. Bridle, D. Brooks, D. Brout, E. Buckley-Geer, D. L. Burke, M. T. Busha, D. Capozzi, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, R. Cawthon, C. Chang, N. Chen, M. Childress, A. Choi, C. Conselice, R. Crittenden, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, R. Das, T. M. Davis, C. Davis, J. De Vicente, D. L. DePoy, J. DeRose, S. Desai, H. T. Diehl, J. P. Dietrich, S. Dodelson, P. Doel, A. Drlica-Wagner, T. F. Eifler, A. E. Elliott, F. Elsner, J. Elvin-Poole, J. Estrada, A. E. Evrard, Y. Fang, E. Fernandez, A. Ferté, D. A. Finley, B. Flaugher, P. Fosalba, O. Friedrich, J. Frieman, J. García-Bellido, M. Garcia-Fernandez, M. Gatti, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, M. S. S. Gill, K. Glazebrook, D. A. Goldstein, D. Gruen, R. A. Gruendl, J. Gschwend, G. Gutierrez, S. Hamilton, W. G. Hartley, S. R. Hinton, K. Honscheid, B. Hoyle, D. Huterer, B. Jain, D. J. James, M. Jarvis, T. Jeltema, M. D. Johnson, M. W. G. Johnson, T. Kacprzak, S. Kent, A. G. Kim, A. King, D. Kirk, N. Kokron, A. Kovacs, E. Krause, C. Krawiec, A. Kremin, K. Kuehn, S. Kuhlmann, N. Kuropatkin, F. Lacasa, O. Lahav, T. S. Li, A. R. Liddle, C. Lidman, M. Lima, H. Lin, N. MacCrann, M. A. G. Maia, M. Makler, M. Manera, M. March, J. L. Marshall, P. Martini, R. G. McMahon, P. Melchior, F. Menanteau, R. Miquel, V. Miranda, D. Mudd, J. Muir, A. Möller, E. Neilsen, R. C. Nichol, B. Nord, P. Nugent, R. L. C. Ogando, A. Palmese, J. Peacock, H.V. Peiris, J. Peoples, W. J. Percival, D. Petravick, A. A. Plazas, A. Porredon, J. Prat, A. Pujol, M. M. Rau, A. Refregier, P. M. Ricker, N. Roe, R. P. Rollins, A. K. Romer, A. Roodman, R. Rosenfeld, A. J. Ross, E. Rozo, E. S. Rykoff, M. Sako, A. I. Salvador, S. Samuroff, C. Sánchez, E. Sanchez, B. Santiago, V. Scarpine, R. Schindler, D. Scolnic, L. F. Secco, S. Serrano, I. Sevilla-Noarbe, E. Sheldon, R. C. Smith, M. Smith, J. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, G. Tarle, D. Thomas, M. A. Troxel, D. L. Tucker, B. E. Tucker, S. A. Uddin, T. N. Varga, P. Vielzeuf, V. Vikram, A. K. Vivas, A. R. Walker, M. Wang, R. H. Wechsler, J. Weller, W. Wester, R. C. Wolf, B. Yanny, F. Yuan, A. Zenteno, B. Zhang, Y. Zhang, J. Zuntz
March 1, 2019 astro-ph.CO
We present cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg$^2$ of $griz$ imaging data from the first year of the Dark Energy Survey (DES Y1). We combine three two-point functions: (i) the cosmic shear correlation function of 26 million source galaxies in four redshift bins, (ii) the galaxy angular autocorrelation function of 650,000 luminous red galaxies in five redshift bins, and (iii) the galaxy-shear cross-correlation of luminous red galaxy positions and source galaxy shears.
To demonstrate the robustness of these results, we use independent pairs of galaxy shape, photometric redshift estimation and validation, and likelihood analysis pipelines. To prevent confirmation bias, the bulk of the analysis was carried out while blind to the true results; we describe an extensive suite of systematics checks performed and passed during this blinded phase. The data are modeled in flat $\Lambda$CDM and $w$CDM cosmologies, marginalizing over 20 nuisance parameters, varying 6 (for $\Lambda$CDM) or 7 (for $w$CDM) cosmological parameters including the neutrino mass density and including the 457 $\times$ 457 element analytic covariance matrix. We find consistent cosmological results from these three two-point functions, and from their combination obtain $S_8 \equiv \sigma_8 (\Omega_m/0.3)^{0.5} = 0.783^{+0.021}_{-0.025}$ and $\Omega_m = 0.264^{+0.032}_{-0.019}$ for $\Lambda$CDM; for $w$CDM, we find $S_8 = 0.794^{+0.029}_{-0.027}$, $\Omega_m = 0.279^{+0.043}_{-0.022}$, and $w=-0.80^{+0.20}_{-0.22}$ at 68% CL. The precision of these DES Y1 results rivals that from the Planck cosmic microwave background measurements, allowing a comparison of structure in the very early and late Universe on equal terms. Although the DES Y1 best-fit values for $S_8$ and $\Omega_m$ are lower than the central values from Planck ...

Dark Energy Survey Year 1 Results: Galaxy-Galaxy Lensing (1708.01537)
J. Prat, C. Sánchez, Y. Fang, D. Gruen, J. Elvin-Poole, N. Kokron, L. F. Secco, B. Jain, R. Miquel, N. MacCrann, M. A. Troxel, A. Alarcon, D. Bacon, G. M. Bernstein, J. Blazek, R. Cawthon, C. Chang, M. Crocce, C. Davis, J. De Vicente, J. P. Dietrich, A. Drlica-Wagner, O. Friedrich, M. Gatti, W. G. Hartley, B. Hoyle, E. M. Huff, M. Jarvis, M. M. Rau, R. P. Rollins, A. J. Ross, E. Rozo, E. S. Rykoff, S. Samuroff, E. Sheldon, T. N. Varga, P. Vielzeuf, J. Zuntz, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, K. Bechtol, A. Benoit-Lévy, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, S. Desai, H. T. Diehl, S. Dodelson, T. F. Eifler, E. Fernandez, B. Flaugher, P. Fosalba, J. Frieman, J. García-Bellido, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, D. A. Goldstein, R. A. Gruendl, J. Gschwend, G. Gutierrez, K. Honscheid, D. J. James, T. Jeltema, M. W. G. Johnson, M. D. Johnson, D. Kirk, E. Krause, K. Kuehn, S. Kuhlmann, O. Lahav, T. S. Li, M. Lima, M. A. G. Maia, M. March, J. L. Marshall, P. Martini, P. Melchior, F. Menanteau, J. J. Mohr, R. C. Nichol, B. Nord, A. A. Plazas, A. K. Romer, A. Roodman, M. Sako, E. Sanchez, V. Scarpine, R. Schindler, M. Schubnell, I. Sevilla-Noarbe, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, D. L. Tucker, V. Vikram, A. R. Walker, R. H. Wechsler, B. Yanny, Y. Zhang
Sept. 4, 2018 astro-ph.CO
We present galaxy-galaxy lensing measurements from 1321 sq. deg. of the Dark Energy Survey (DES) Year 1 (Y1) data. The lens sample consists of a selection of 660,000 red galaxies with high-precision photometric redshifts, known as redMaGiC, split into five tomographic bins in the redshift range $0.15 < z < 0.9$. We use two different source samples, obtained from the Metacalibration (26 million galaxies) and Im3shape (18 million galaxies) shear estimation codes, which are split into four photometric redshift bins in the range $0.2 < z < 1.3$.
We perform extensive testing of potential systematic effects that can bias the galaxy-galaxy lensing signal, including those from shear estimation, photometric redshifts, and observational properties. Covariances are obtained from jackknife subsamples of the data and validated with a suite of log-normal simulations. We use the shear-ratio geometric test to obtain independent constraints on the mean of the source redshift distributions, providing validation of those obtained from other photo-$z$ studies with the same data. We find consistency between the galaxy bias estimates obtained from our galaxy-galaxy lensing measurements and from galaxy clustering, therefore showing the galaxy-matter cross-correlation coefficient $r$ to be consistent with one, measured over the scales used for the cosmological analysis. The results in this work present one of the three two-point correlation functions, along with galaxy clustering and cosmic shear, used in the DES cosmological analysis of Y1 data, and hence the methodology and the systematics tests presented here provide a critical input for that study as well as for future cosmological analyses in DES and other photometric galaxy surveys.

Dark Energy Survey Year 1 Results: Galaxy clustering for combined probes (1708.01536)
J. Elvin-Poole, M. Crocce, A. J. Ross, T. Giannantonio, E. Rozo, E. S. Rykoff, S. Avila, N. Banik, J. Blazek, S. L. Bridle, R. Cawthon, A. Drlica-Wagner, O. Friedrich, N. Kokron, E. Krause, N. MacCrann, J. Prat, C. Sanchez, L. F. Secco, I. Sevilla-Noarbe, M. A. Troxel, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, J. Asorey, K. Bechtol, M. R. Becker, A. Benoit-Levy, G. M. Bernstein, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, D. Carollo, M. Carrasco Kind, J. Carretero, F. J. Castander, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, T. M. Davis, C. Davis, S. Desai, H. T. Diehl, J. P. Dietrich, S. Dodelson, P. Doel, T. F. Eifler, A. E. Evrard, E. Fernandez, B. Flaugher, P. Fosalba, J. Frieman, J. Garcia-Bellido, E. Gaztanaga, D. W. Gerdes, K. Glazebrook, D. Gruen, R. A. Gruendl, J. Gschwend, G. Gutierrez, W. G. Hartley, S. R. Hinton, K. Honscheid, J. K. Hoormann, B. Jain, D. J. James, M. Jarvis, T. Jeltema, M. W. G. Johnson, M. D. Johnson, A. King, K. Kuehn, S. Kuhlmann, N. Kuropatkin, O. Lahav, G. Lewis, T. S. Li, C. Lidman, M. Lima, H. Lin, E. Macaulay, M. March, J. L. Marshall, P. Martini, P. Melchior, F. Menanteau, R. Miquel, J. J. Mohr, A. Moller, R. C. Nichol, B. Nord, C. R. O'Neill, W.J. Percival, D. Petravick, A. A. Plazas, A. K. Romer, M. Sako, E. Sanchez, V. Scarpine, R. Schindler, M. Schubnell, E. Sheldon, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, N. E. Sommer, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, B. E. Tucker, D. L. Tucker, S. A. Uddin, V. Vikram, A. R. Walker, R. H. Wechsler, J. Weller, W. Wester, R. C. Wolf, F. Yuan, B. Zhang, J. Zuntz (DES Collaboration)
Aug. 28, 2018 astro-ph.CO
We measure the clustering of DES Year 1 galaxies that are intended to be combined with weak lensing samples in order to produce precise cosmological constraints from the joint analysis of large-scale structure and lensing correlations. Two-point correlation functions are measured for a sample of $6.6 \times 10^{5}$ luminous red galaxies selected using the \textsc{redMaGiC} algorithm over an area of $1321$ square degrees, in the redshift range $0.15 < z < 0.9$, split into five tomographic redshift bins. The sample has a mean redshift uncertainty of $\sigma_{z}/(1+z) = 0.017$.
We quantify and correct spurious correlations induced by spatially variable survey properties, testing their impact on the clustering measurements and covariance. We demonstrate the sample's robustness by testing for stellar contamination, for potential biases that could arise from the systematic correction, and for the consistency between the two-point auto- and cross-correlation functions. We show that the corrections we apply have a significant impact on the resultant measurement of cosmological parameters, but that the results are robust against arbitrary choices in the correction method. We find the linear galaxy bias in each redshift bin in a fiducial cosmology to be $b(z$=$0.24)=1.40 \pm 0.08$, $b(z$=$0.38)=1.61 \pm 0.05$, $b(z$=$0.53)=1.60 \pm 0.04$ for galaxies with luminosities $L/L_*>$$0.5$, $b(z$=$0.68)=1.93 \pm 0.05$ for $L/L_*>$$1$ and $b(z$=$0.83)=1.99 \pm 0.07$ for $L/L_*$$>1.5$, broadly consistent with expectations for the redshift and luminosity dependence of the bias of red galaxies. We show these measurements to be consistent with the linear bias obtained from tangential shear measurements.

Density split statistics: Cosmological constraints from counts and lensing in cells in DES Y1 and SDSS data (1710.05045)
D. Gruen, O. Friedrich, E. Krause, J. DeRose, R. Cawthon, C. Davis, J. Elvin-Poole, E. S. Rykoff, R. H. Wechsler, A. Alarcon, G. M. Bernstein, J. Blazek, C. Chang, J. Clampitt, M. Crocce, J. De Vicente, M. Gatti, M. S. S. Gill, W. G. Hartley, S. Hilbert, B. Hoyle, B. Jain, M. Jarvis, O. Lahav, N. MacCrann, T. McClintock, J. Prat, R. P. Rollins, A. J. Ross, E. Rozo, S. Samuroff, C. Sánchez, E. Sheldon, M. A. Troxel, J. Zuntz, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, K. Bechtol, A. Benoit-Lévy, E. Bertin, S. L. Bridle, D. Brooks, E. Buckley-Geer, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, A. Drlica-Wagner, E. Fernandez, B. Flaugher, P. Fosalba, J. Frieman, J. García-Bellido, E. Gaztanaga, T. Giannantonio, R. A. Gruendl, J. Gschwend, G. Gutierrez, K. Honscheid, D. J. James, T. Jeltema, K. Kuehn, N. Kuropatkin, M. Lima, M. March, J. L. Marshall, P. Martini, P. Melchior, F. Menanteau, R. Miquel, J. J. Mohr, A. A. Plazas, A. Roodman, E. Sanchez, V. Scarpine, M. Schubnell, I. Sevilla-Noarbe, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, M. E. C. Swanson, G. Tarle, D. Thomas, V. Vikram, A. R. Walker, J. Weller, Y. Zhang
July 24, 2018 astro-ph.CO
We derive cosmological constraints from the probability distribution function (PDF) of evolved large-scale matter density fluctuations. We do this by splitting lines of sight by density based on their count of tracer galaxies, and by measuring both gravitational shear around and counts-in-cells in overdense and underdense lines of sight, in Dark Energy Survey (DES) First Year and Sloan Digital Sky Survey (SDSS) data. Our analysis uses a perturbation theory model (see companion paper Friedrich et al.) and is validated using N-body simulation realizations and log-normal mocks. It allows us to constrain cosmology, bias and stochasticity of galaxies w.r.t. matter density and, in addition, the skewness of the matter density field. From a Bayesian model comparison, we find that the data weakly prefer a connection of galaxies and matter that is stochastic beyond Poisson fluctuations on <=20 arcmin angular smoothing scale.
The two stochasticity models we fit yield DES constraints on the matter density $\Omega_m=0.26^{+0.04}_{-0.03}$ and $\Omega_m=0.28^{+0.05}_{-0.04}$ that are consistent with each other. These values also agree with the DES analysis of galaxy and shear two-point functions (3x2pt) that only uses second moments of the PDF. Constraints on $\sigma_8$ are model dependent ($\sigma_8=0.97^{+0.07}_{-0.06}$ and $0.80^{+0.06}_{-0.07}$ for the two stochasticity models), but consistent with each other and with the 3x2pt results if stochasticity is at the low end of the posterior range. As an additional test of gravity, counts and lensing in cells allow us to compare the skewness $S_3$ of the matter density PDF to its LCDM prediction. We find no evidence of excess skewness in any model or data set, with better than 25 per cent relative precision in the skewness estimate from DES alone.

Calibration of the Logarithmic-Periodic Dipole Antenna (LPDA) Radio Stations at the Pierre Auger Observatory using an Octocopter (1702.01392)
The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, I. Katkov, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, Q.
Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlín, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello June 13, 2018 astro-ph.IM, astro-ph.HE An in-situ calibration of a logarithmic periodic dipole antenna with a frequency coverage of 30 MHz to 80 MHz is performed. Such antennas are part of a radio station system used for detection of cosmic ray induced air showers at the Engineering Radio Array of the Pierre Auger Observatory, the so-called Auger Engineering Radio Array (AERA). The directional and frequency characteristics of the broadband antenna are investigated using a remotely piloted aircraft (RPA) carrying a small transmitting antenna. The antenna sensitivity is described by the vector effective length relating the measured voltage with the electric-field components perpendicular to the incoming signal direction. 
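A side note on the vector effective length mentioned above: in the frequency domain it maps the two transverse electric-field components onto the measured voltage. The toy sketch below illustrates only that bookkeeping; the band, VEL amplitudes, and field spectra are invented placeholders, not AERA calibration values.

```python
import numpy as np

# Minimal sketch of the vector-effective-length (VEL) relation: the antenna
# output voltage is the inner product of the VEL with the field components
# transverse to the incoming direction, V(f) = H_h(f)*E_h(f) + H_m(f)*E_m(f)
# for the horizontal (h) and meridional (m) components. Toy values only.
freq = np.linspace(30e6, 80e6, 101)          # LPDA band: 30-80 MHz
H_h = 2.0 + 0.0 * freq                       # VEL components in metres (toy)
H_m = 0.5 + 0.0 * freq
E_h = 1e-4 * np.ones_like(freq)              # field spectra in V/m (toy)
E_m = 5e-5 * np.ones_like(freq)

V = H_h * E_h + H_m * E_m                    # measured voltage spectrum [V]
# With a calibrated VEL, the field is unfolded from the voltage; here we
# just evaluate the forward relation at one frequency.
print(f"|V| at 55 MHz: {V[np.argmin(np.abs(freq - 55e6))]:.2e} V")
```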
The horizontal and meridional components are determined with an overall uncertainty of $7.4^{+0.9}_{-0.3}$ % and $10.3^{+2.8}_{-1.7}$ %, respectively. The measurement is used to correct the simulated frequency and directional response of the antenna. In addition, the influence of the ground conductivity and permittivity on the antenna response is simulated. Both have a negligible influence given the ground conditions measured at the detector site. The overall uncertainties of the vector effective length components result in an uncertainty of $8.8^{+2.1}_{-1.3}$ % in the square root of the energy fluence for incoming signal directions with zenith angles smaller than $60^\circ$. Dark Energy Survey Year 1 Results: Cosmological Constraints from Cosmic Shear (1708.01538) M. A. Troxel, N. MacCrann, J. Zuntz, T. F. Eifler, E. Krause, S. Dodelson, D. Gruen, J. Blazek, O. Friedrich, S. Samuroff, J. Prat, L. F. Secco, C. Davis, A. Ferté, J. DeRose, A. Alarcon, A. Amara, E. Baxter, M. R. Becker, G. M. Bernstein, S. L. Bridle, R. Cawthon, C. Chang, A. Choi, J. De Vicente, A. Drlica-Wagner, J. Elvin-Poole, J. Frieman, M. Gatti, W. G. Hartley, K. Honscheid, B. Hoyle, E. M. Huff, D. Huterer, B. Jain, M. Jarvis, T. Kacprzak, D. Kirk, N. Kokron, C. Krawiec, O. Lahav, A. R. Liddle, J. Peacock, M. M. Rau, A. Refregier, R. P. Rollins, E. Rozo, E. S. Rykoff, C. Sánchez, I. Sevilla-Noarbe, E. Sheldon, A. Stebbins, T. N. Varga, P. Vielzeuf, M. Wang, R. H. Wechsler, B. Yanny, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, K. Bechtol, A. Benoit-Lévy, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, D. L. DePoy, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, E. Fernandez, B. Flaugher, P. Fosalba, J. García-Bellido, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, D. A. Goldstein, R. A. Gruendl, J. Gschwend, G. Gutierrez, D. J. James, T. Jeltema, M. W. G. Johnson, M. D. Johnson, S. Kent, K. Kuehn, S. Kuhlmann, N. Kuropatkin, T. S. Li, M. Lima, H. Lin, M. A. G. Maia, M. March, J. L. Marshall, P. Martini, P. Melchior, F. Menanteau, R. Miquel, J. J. Mohr, E. Neilsen, R. C. Nichol, B. Nord, D. Petravick, A. A. Plazas, A. K. Romer, A. Roodman, M. Sako, E. Sanchez, V. Scarpine, R. Schindler, M. Schubnell, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, D. L. Tucker, V. Vikram, A. R. Walker, J. Weller, Y. Zhang April 30, 2018 astro-ph.CO We use 26 million galaxies from the Dark Energy Survey (DES) Year 1 shape catalogs over 1321 deg$^2$ of the sky to produce the most significant measurement of cosmic shear in a galaxy survey to date. We constrain cosmological parameters in both the flat $\Lambda$CDM and $w$CDM models, while also varying the neutrino mass density. These results are shown to be robust using two independent shape catalogs, two independent photo-$z$ calibration methods, and two independent analysis pipelines in a blind analysis. We find a 3.5\% fractional uncertainty on $\sigma_8(\Omega_m/0.3)^{0.5} = 0.782^{+0.027}_{-0.027}$ at 68\% CL, which is a factor of 2.5 improvement over the fractional constraining power of our DES Science Verification results. In $w$CDM, we find a 4.8\% fractional uncertainty on $\sigma_8(\Omega_m/0.3)^{0.5} = 0.777^{+0.036}_{-0.038}$ and a dark energy equation-of-state $w=-0.95^{+0.33}_{-0.39}$.
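A quick arithmetic cross-check of the fractional uncertainties just quoted, using only numbers from this abstract (the symmetrization of the asymmetric errors is my own, illustrative choice):

```python
# S8 = sigma_8 * (Omega_m / 0.3)**0.5; central values and errors as quoted.
results = {"LCDM": (0.782, 0.027, 0.027), "wCDM": (0.777, 0.036, 0.038)}
for model, (central, err_plus, err_minus) in results.items():
    frac = 0.5 * (err_plus + err_minus) / central * 100.0
    print(f"{model}: S8 = {central} -> {frac:.1f}% fractional uncertainty")
# Prints ~3.5% for LCDM and ~4.8% for wCDM, matching the quoted numbers.
```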
We find results that are consistent with previous cosmic shear constraints in $\sigma_8$ -- $\Omega_m$, and see no evidence for disagreement of our weak lensing data with data from the CMB. Finally, we find no evidence preferring a $w$CDM model allowing $w\ne -1$. We expect further significant improvements with subsequent years of DES data, which will more than triple the sky coverage of our shape catalogs and double the effective integrated exposure time per galaxy. Density split statistics: joint model of counts and lensing in cells (1710.05162) O. Friedrich, D. Gruen, J. DeRose, D. Kirk, E. Krause, T. McClintock, E. S. Rykoff, S. Seitz, R. H. Wechsler, G. M. Bernstein, J. Blazek, C. Chang, S. Hilbert, B. Jain, A. Kovacs, O. Lahav, F. B. Abdalla, S. Allam, J. Annis, K. Bechtol, A. Benoit-Levy, E. Bertin, D. Brooks, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, C. Davis, S. Desai, H. T. Diehl, J. P. Dietrich, A. Drlica-Wagner, T. F. Eifler, P. Fosalba, J. Frieman, J. Garcia-Bellido, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, R. A. Gruendl, J. Gschwend, G. Gutierrez, K. Honscheid, D. J. James, M. Jarvis, K. Kuehn, N. Kuropatkin, M. Lima, M. March, J. L. Marshall, P. Melchior, F. Menanteau, R. Miquel, J. J. Mohr, B. Nord, A. A. Plazas, E. Sanchez, V. Scarpine, R. Schindler, M. Schubnell, I. Sevilla-Noarbe, E. Sheldon, M. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, M. A. Troxel, V. Vikram, J. Weller We present density split statistics, a framework that studies lensing and counts-in-cells as a function of foreground galaxy density, thereby providing a large-scale measurement of both 2-point and 3-point statistics. Our method extends our earlier work on trough lensing and is summarized as follows: given a foreground (low redshift) population of galaxies, we divide the sky into subareas of equal size but distinct galaxy density. We then measure lensing around uniformly spaced points separately in each of these subareas, as well as counts-in-cells statistics (CiC). The lensing signals trace the matter density contrast around regions of fixed galaxy density. Through the CiC measurements this can be related to the density profile around regions of fixed matter density. Together, these measurements constitute a powerful probe of cosmology, the skewness of the density field and the connection of galaxies and matter. In this paper we show how to model both the density split lensing signal and CiC from basic ingredients: a non-linear power spectrum, clustering hierarchy coefficients from perturbation theory and a parametric model for galaxy bias and shot-noise. Using N-body simulations, we demonstrate that this model is sufficiently accurate for a cosmological analysis on year 1 data from the Dark Energy Survey. DES Y1 Results: Validating cosmological parameter estimation using simulated Dark Energy Surveys (1803.09795) N. MacCrann, J. DeRose, R. H. Wechsler, J. Blazek, E. Gaztanaga, M. Crocce, E. S. Rykoff, M. R. Becker, B. Jain, E. Krause, T. F. Eifler, D. Gruen, J. Zuntz, M. A. Troxel, J. Elvin-Poole, J. Prat, M. Wang, S. Dodelson, A. Kravtsov, P. Fosalba, M. T. Busha, A. E. Evrard, D. Huterer, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, S. Avila, G. M. Bernstein, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, R. Cawthon, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, C. Davis, J. De Vicente, H. T. Diehl, P. Doel, J. Frieman, J. 
García-Bellido, D. W. Gerdes, R. A. Gruendl, G. Gutierrez, W. G. Hartley, D. Hollowood, K. Honscheid, B. Hoyle, D. J. James, T. Jeltema, D. Kirk, K. Kuehn, N. Kuropatkin, M. Lima, M. A. G. Maia, J. L. Marshall, F. Menanteau, R. Miquel, A. A. Plazas, A. Roodman, E. Sanchez, V. Scarpine, M. Schubnell, I. Sevilla-Noarbe, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, A. R. Walker, J. Weller March 26, 2018 astro-ph.CO We use mock galaxy survey simulations designed to resemble the Dark Energy Survey Year 1 (DES Y1) data to validate and inform cosmological parameter estimation. When similar analysis tools are applied to both simulations and real survey data, they provide powerful validation tests of the DES Y1 cosmological analyses presented in companion papers. We use two suites of galaxy simulations produced using different methods, which therefore provide independent tests of our cosmological parameter inference. The cosmological analysis we aim to validate is presented in DES Collaboration et al. (2017) and uses angular two-point correlation functions of galaxy number counts and weak lensing shear, as well as their cross-correlation, in multiple redshift bins. While our constraints depend on the specific set of simulated realisations available, for both suites of simulations we find that the input cosmology is consistent with the combined constraints from multiple simulated DES Y1 realizations in the $\Omega_m-\sigma_8$ plane. For one of the suites, we are able to show with high confidence that any biases in the inferred $S_8=\sigma_8(\Omega_m/0.3)^{0.5}$ and $\Omega_m$ are smaller than the DES Y1 $1-\sigma$ uncertainties. For the other suite, for which we have fewer realizations, we are unable to be this conclusive; we infer a roughly 70% probability that systematic biases in the recovered $\Omega_m$ and $S_8$ are sub-dominant to the DES Y1 uncertainty. As cosmological analyses of this kind become increasingly more precise, validation of parameter inference using survey simulations will be essential to demonstrate robustness. Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory (1612.07155) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. 
Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, I. Katkov, B. Keilhauer, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlín, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. 
Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Feb. 26, 2018 astro-ph.HE We present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above $5 \cdot 10^{18}$ eV, i.e.~the region of the all-particle spectrum above the so-called "ankle" feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism. The fit results suggest sources characterized by relatively low maximum injection energies, hard spectra and heavy chemical composition. We also show that uncertainties about physical quantities relevant to UHECR propagation and shower development have a non-negligible impact on the fit results. Dark Energy Survey Year 1 Results: Methodology and Projections for Joint Analysis of Galaxy Clustering, Galaxy Lensing, and CMB Lensing Two-point Functions (1802.05257) E. J. Baxter, Y. Omori, C. Chang, T. Giannantonio, D. Kirk, E. Krause, J. Blazek, L. Bleem, A. Choi, T. M. Crawford, S. Dodelson, T. F. Eifler, O. Friedrich, D. Gruen, G. P. Holder, B. Jain, M. Jarvis, N. MacCrann, A. Nicola, S. Pandey, J. Prat, C. L. Reichardt, S. Samuroff, C. Sánchez, L. F. Secco, E. Sheldon, M. A. Troxel, J. Zuntz, T. M. C. Abbott, F. B. Abdalla, J. Annis, S. Avila, K. Bechtol, B. A. Benson, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, R. Cawthon, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, C. Davis, J. De Vicente, D. L. DePoy, H. T. Diehl, P. Doel, J. Estrada, A. E. Evrard, B. Flaugher, P. Fosalba, J. Frieman, J. García-Bellido, E. Gaztanaga, D. W. Gerdes, R. A. Gruendl, J. Gschwend, G. Gutierrez, W. G. Hartley, D. Hollowood, B. Hoyle, D. J. James, S. Kent, K. Kuehn, N. Kuropatkin, O. Lahav, M. Lima, M. A. G. Maia, M. March, J. L. Marshall, P. Melchior, F. Menanteau, R. Miquel, A. A. Plazas, A. Roodman, E. S. Rykoff, E. Sanchez, R. Schindler, M. Schubnell, I. Sevilla-Noarbe, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, A. R. Walker, W. L. K. Wu, J. Weller Feb. 14, 2018 astro-ph.CO Optical imaging surveys measure both the galaxy density and the gravitational lensing-induced shear fields across the sky. Recently, the Dark Energy Survey (DES) collaboration used a joint fit to two-point correlations between these observables to place tight constraints on cosmology (DES Collaboration et al. 2017). In this work, we develop the methodology to extend the DES Collaboration et al. (2017) analysis to include cross-correlations of the optical survey observables with gravitational lensing of the cosmic microwave background (CMB) as measured by the South Pole Telescope (SPT) and Planck. Using simulated analyses, we show how the resulting set of five two-point functions increases the robustness of the cosmological constraints to systematic errors in galaxy lensing shear calibration. 
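To make the counting in the preceding sentence explicit, the five two-point functions are all pairings of galaxy density, galaxy shear, and CMB lensing convergence except the CMB-lensing auto-spectrum, which is not part of this joint analysis. A minimal sketch of that bookkeeping, with illustrative tracer labels of my own:

```python
from itertools import combinations_with_replacement

# All pairings of galaxy density (g), galaxy shear (gamma) and CMB lensing
# convergence (kCMB), minus the CMB-lensing auto-spectrum: 6 - 1 = 5.
tracers = ["g", "gamma", "kCMB"]
pairs = [p for p in combinations_with_replacement(tracers, 2)
         if p != ("kCMB", "kCMB")]
print(len(pairs), "two-point functions:")        # -> 5
for a, b in pairs:
    print(f"  <{a} x {b}>")
```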
Additionally, we show that contamination of the SPT+Planck CMB lensing map by the thermal Sunyaev-Zel'dovich effect is a potentially large source of systematic error for two-point function analyses, but show that it can be reduced to acceptable levels in our analysis by masking clusters of galaxies and imposing angular scale cuts on the two-point functions. The methodology developed here will be applied to the analysis of data from the DES, the SPT, and Planck in a companion work. Indication of anisotropy in arrival directions of ultra-high-energy cosmic rays through comparison to the flux pattern of extragalactic gamma-ray sources (1801.06160) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, N. Arsene, H. Asorey, P. Assis, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A.C. Cobos Cerutti, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, M.M. Freire, T. Fujii, A. Fuster, R. Gaïor, B. García, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, G. Morlino, M. Mostafá, A.L. 
Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L.A.S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, J. Poh, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. F. Soriano, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, M. Wiedeński, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Feb. 6, 2018 astro-ph.CO, astro-ph.HE A new analysis of the dataset from the Pierre Auger Observatory provides evidence for anisotropy in the arrival directions of ultra-high-energy cosmic rays on an intermediate angular scale, which is indicative of excess arrivals from strong, nearby sources. The data consist of 5514 events above 20 EeV with zenith angles up to 80 deg recorded before 2017 April 30. Sky models have been created for two distinct populations of extragalactic gamma-ray emitters: active galactic nuclei from the second catalog of hard Fermi-LAT sources (2FHL) and starburst galaxies from a sample that was examined with Fermi-LAT. Flux-limited samples, which include all types of galaxies from the Swift-BAT and 2MASS surveys, have been investigated for comparison. The sky model of cosmic-ray density constructed using each catalog has two free parameters, the fraction of events correlating with astrophysical objects and an angular scale characterizing the clustering of cosmic rays around extragalactic sources. A maximum-likelihood ratio test is used to evaluate the best values of these parameters and to quantify the strength of each model by contrast with isotropy. 
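An illustrative aside on the two-parameter likelihood-ratio test just described: the sketch below reproduces its logic on a synthetic one-dimensional "sky", with a correlating fraction f and a smearing scale theta as the free parameters. Everything here (bin count, source positions, event numbers) is invented for illustration and bears no relation to the Auger implementation.

```python
import numpy as np

# Toy version of the two-parameter anisotropy test: model density =
# f * (smeared source map) + (1 - f) * isotropy, fitted by maximising a
# likelihood ratio against the isotropic (f = 0) hypothesis.
rng = np.random.default_rng(1)
nbins = 180
source = np.zeros(nbins)
source[20] = source[90] = 1.0                    # two fake point sources

def smear(m, theta):
    """Smooth map m with a Gaussian kernel of width theta bins, normalised."""
    x = np.arange(nbins)
    kern = np.exp(-0.5 * ((x - nbins // 2) / theta) ** 2)
    s = np.convolve(m, kern / kern.sum(), mode="same")
    return s / s.sum()

truth = 0.10 * smear(source, 5) + 0.90 / nbins   # generate mock events
events = rng.choice(nbins, size=5000, p=truth)
counts = np.bincount(events, minlength=nbins)

def loglike(f, theta):
    dens = f * smear(source, theta) + (1.0 - f) / nbins
    return np.sum(counts * np.log(dens))

grid = [(f, t) for f in np.linspace(0.01, 0.30, 30) for t in (2, 5, 10, 20)]
f_hat, t_hat = max(grid, key=lambda p: loglike(*p))
ts = 2.0 * (loglike(f_hat, t_hat) - loglike(0.0, 5))
print(f"best fit: f = {f_hat:.2f}, theta = {t_hat} bins, TS = {ts:.1f}")
```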
It is found that the starburst model fits the data better than the hypothesis of isotropy with a statistical significance of 4.0 sigma, the highest value of the test statistic being for energies above 39 EeV. The three alternative models are favored against isotropy with 2.7-3.2 sigma significance. The origin of the indicated deviation from isotropy is examined and prospects for more sensitive future studies are discussed. Science with the Cherenkov Telescope Array (1709.07997) The Cherenkov Telescope Array Consortium: B.S. Acharya, I. Agudo, I. Al Samarai, R. Alfaro, J. Alfaro, C. Alispach, R. Alves Batista, J.-P. Amans, E. Amato, G. Ambrosi, E. Antolini, L.A. Antonelli, C. Aramo, M. Araya, T. Armstrong, F. Arqueros, L. Arrabito, K. Asano, M. Ashley, M. Backes, C. Balazs, M. Balbo, O. Ballester, J. Ballet, A. Bamba, M. Barkov, U. Barres de Almeida, J.A. Barrio, D. Bastieri, Y. Becherini, A. Belfiore, W. Benbow, D. Berge, E. Bernardini, M.G. Bernardini, M. Bernardos, K. Bernlöhr, B. Bertucci, B. Biasuzzi, C. Bigongiari, A. Biland, E. Bissaldi, J. Biteau, O. Blanch, J. Blazek, C. Boisson, J. Bolmont, G. Bonanno, A. Bonardi, C. Bonavolontà, G. Bonnoli, Z. Bosnjak, M. Böttcher, C. Braiding, J. Bregeon, A. Brill, A.M. Brown, P. Brun, G. Brunetti, T. Buanes, J. Buckley, V. Bugaev, R. Bühler, A. Bulgarelli, T. Bulik, M. Burton, A. Burtovoi, G. Busetto, R. Canestrari, M. Capalbi, F. Capitanio, A. Caproni, P. Caraveo, V. Cárdenas, C. Carlile, R. Carosi, E. Carquín, J. Carr, S. Casanova, E. Cascone, F. Catalani, O. Catalano, D. Cauz, M. Cerruti, P. Chadwick, S. Chaty, R.C.G. Chaves, A. Chen, X. Chen, M. Chernyakova, M. Chikawa, A. Christov, J. Chudoba, M. Cieślar, V. Coco, S. Colafrancesco, P. Colin, V. Conforti, V. Connaughton, J. Conrad, J.L. Contreras, J. Cortina, A. Costa, H. Costantini, G. Cotter, S. Covino, R. Crocker, J. Cuadra, O. Cuevas, P. Cumani, A. D'Aì, F. D'Ammando, P. D'Avanzo, D. D'Urso, M. Daniel, I. Davids, B. Dawson, F. Dazzi, A. De Angelis, R. de Cássia dos Anjos, G. De Cesare, A. De Franco, E.M. de Gouveia Dal Pino, I. de la Calle, R. de los Reyes Lopez, B. De Lotto, A. De Luca, M. De Lucia, M. de Naurois, E. de Oña Wilhelmi, F. De Palma, F. De Persio, V. de Souza, C. Deil, M. Del Santo, C. Delgado, D. della Volpe, T. Di Girolamo, F. Di Pierro, L. Di Venere, C. Díaz, C. Dib, S. Diebold, A. Djannati-Ataï, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, H. Drass, D. Dravins, G. Dubus, V.V. Dwarkadas, J. Ebr, C. Eckner, K. Egberts, S. Einecke, T.R.N. Ekoume, D. Elsässer, J.-P. Ernenwein, C. Espinoza, C. Evoli, M. Fairbairn, D. Falceta-Goncalves, A. Falcone, C. Farnier, G. Fasola, E. Fedorova, S. Fegan, M. Fernandez-Alonso, A. Fernández-Barral, G. Ferrand, M. Fesquet, M. Filipovic, V. Fioretti, G. Fontaine, M. Fornasa, L. Fortson, L. Freixas Coromina, C. Fruck, Y. Fujita, Y. Fukazawa, S. Funk, M. Füßling, S. Gabici, A. Gadola, Y. Gallant, B. Garcia, R. Garcia López, M. Garczarczyk, J. Gaskins, T. Gasparetto, M. Gaug, L. Gerard, G. Giavitto, N. Giglietto, P. Giommi, F. Giordano, E. Giro, M. Giroletti, A. Giuliani, J.-F. Glicenstein, R. Gnatyk, N. Godinovic, P. Goldoni, G. Gómez-Vargas, M.M. González, J.M. González, D. Götz, J. Graham, P. Grandi, J. Granot, A.J. Green, T. Greenshaw, S. Griffiths, S. Gunji, D. Hadasch, S. Hara, M.J. Hardcastle, T. Hassan, K. Hayashi, M. Hayashida, M. Heller, J.C. Helo, G. Hermann, J. Hinton, B. Hnatyk, W. Hofmann, J. Holder, D. Horan, J. Hörandel, D. Horns, P. Horvath, T. Hovatta, M. Hrabovsky, D. Hrupec, T.B. Humensky, M. Hütten, M. Iarlori, T. 
Inada, Y. Inome, S. Inoue, T. Inoue, Y. Inoue, F. Iocco, K. Ioka, M. Iori, K. Ishio, Y. Iwamura, M. Jamrozy, P. Janecek, D. Jankowsky, P. Jean, I. Jung-Richardt, J. Jurysek, P. Kaaret, S. Karkar, H. Katagiri, U. Katz, N. Kawanaka, D. Kazanas, B. Khélifi, D.B. Kieda, S. Kimeswenger, S. Kimura, S. Kisaka, J. Knapp, J. Knödlseder, B. Koch, K. Kohri, N. Komin, K. Kosack, M. Kraus, M. Krause, F. Krauß, H. Kubo, G. Kukec Mezek, H. Kuroda, J. Kushida, N. La Palombara, G. Lamanna, R.G. Lang, J. Lapington, O. Le Blanc, S. Leach, J.-P. Lees, J. Lefaucheur, M.A. Leigui de Oliveira, J.-P. Lenain, R. Lico, M. Limon, E. Lindfors, T. Lohse, S. Lombardi, F. Longo, M. López, R. López-Coto, C.-C. Lu, F. Lucarelli, P.L. Luque-Escamilla, E. Lyard, M.C. Maccarone, G. Maier, P. Majumdar, G. Malaguti, D. Mandat, G. Maneva, M. Manganaro, S. Mangano, A. Marcowith, J. Marín, S. Markoff, J. Martí, P. Martin, M. Martínez, G. Martínez, N. Masetti, S. Masuda, G. Maurin, N. Maxted, D. Mazin, C. Medina, A. Melandri, S. Mereghetti, M. Meyer, I.A. Minaya, N. Mirabal, R. Mirzoyan, A. Mitchell, T. Mizuno, R. Moderski, M. Mohammed, L. Mohrmann, T. Montaruli, A. Moralejo, D. Morcuende-Parrilla, K. Mori, G. Morlino, P. Morris, A. Morselli, E. Moulin, R. Mukherjee, C. Mundell, T. Murach, H. Muraishi, K. Murase, A. Nagai, S. Nagataki, T. Nagayoshi, T. Naito, T. Nakamori, Y. Nakamura, J. Niemiec, D. Nieto, M. Nikołajuk, K. Nishijima, K. Noda, D. Nosek, B. Novosyadlyj, S. Nozaki, P. O'Brien, L. Oakes, Y. Ohira, M. Ohishi, S. Ohm, N. Okazaki, A. Okumura, R.A. Ong, M. Orienti, R. Orito, J.P. Osborne, M. Ostrowski, N. Otte, I. Oya, M. Padovani, A. Paizis, M. Palatiello, M. Palatka, R. Paoletti, J.M. Paredes, G. Pareschi, R.D. Parsons, A. Pe'er, M. Pech, G. Pedaletti, M. Perri, M. Persic, A. Petrashyk, P. Petrucci, O. Petruk, B. Peyaud, M. Pfeifer, G. Piano, A. Pisarski, S. Pita, M. Pohl, M. Polo, D. Pozo, E. Prandini, J. Prast, G. Principe, D. Prokhorov, H. Prokoph, M. Prouza, G. Pühlhofer, M. Punch, S. Pürckhauer, F. Queiroz, A. Quirrenbach, S. Rainò, S. Razzaque, O. Reimer, A. Reimer, A. Reisenegger, M. Renaud, A.H. Rezaeian, W. Rhode, D. Ribeiro, M. Ribó, T. Richtler, J. Rico, F. Rieger, M. Riquelme, S. Rivoire, V. Rizi, J. Rodriguez, G. Rodriguez Fernandez, J.J. Rodríguez Vázquez, G. Rojas, P. Romano, G. Romeo, J. Rosado, A.C. Rovero, G. Rowell, B. Rudak, A. Rugliancich, C. Rulten, I. Sadeh, S. Safi-Harb, T. Saito, N. Sakaki, S. Sakurai, G. Salina, M. Sánchez-Conde, H. Sandaker, A. Sandoval, P. Sangiorgi, M. Sanguillon, H. Sano, M. Santander, S. Sarkar, K. Satalecka, F.G. Saturni, E.J. Schioppa, S. Schlenstedt, M. Schneider, H. Schoorlemmer, P. Schovanek, A. Schulz, F. Schussler, U. Schwanke, E. Sciacca, S. Scuderi, I. Seitenzahl, D. Semikoz, O. Sergijenko, M. Servillat, A. Shalchi, R.C. Shellard, L. Sidoli, H. Siejkowski, A. Sillanpää, G. Sironi, J. Sitarek, V. Sliusar, A. Slowikowska, H. Sol, A. Stamerra, S. Stanič, R. Starling, Ł. Stawarz, S. Stefanik, M. Stephan, T. Stolarczyk, G. Stratta, U. Straumann, T. Suomijarvi, A.D. Supanitsky, G. Tagliaferri, H. Tajima, M. Tavani, F. Tavecchio, J.-P. Tavernet, K. Tayabaly, L.A. Tejedor, P. Temnikov, Y. Terada, R. Terrier, T. Terzic, M. Teshima, V. Testa, S. Thoudam, W. Tian, L. Tibaldo, M. Tluczykont, C.J. Todero Peixoto, F. Tokanai, J. Tomastik, D. Tonev, M. Tornikoski, D.F. Torres, E. Torresi, G. Tosti, N. Tothill, G. Tovmassian, P. Travnicek, C. Trichard, M. Trifoglio, I. Troyano Pujadas, S. Tsujimoto, G. Umana, V. Vagelli, F. Vagnetti, M. Valentino, P. Vallania, L. Valore, C. 
van Eldik, J. Vandenbroucke, G.S. Varner, G. Vasileiadis, V. Vassiliev, M. Vázquez Acosta, M. Vecchi, A. Vega, S. Vercellone, P. Veres, S. Vergani, V. Verzi, G.P. Vettolani, A. Viana, C. Vigorito, J. Villanueva, H. Voelk, A. Vollhardt, S. Vorobiov, M. Vrastil, T. Vuillaume, S.J. Wagner, R. Wagner, R. Walter, J.E. Ward, D. Warren, J.J. Watson, F. Werner, M. White, R. White, A. Wierzcholska, P. Wilcox, M. Will, D.A. Williams, R. Wischnewski, M. Wood, T. Yamamoto, R. Yamazaki, S. Yanagita, L. Yang, T. Yoshida, S. Yoshiike, T. Yoshikoshi, M. Zacharias, G. Zaharijas, L. Zampieri, F. Zandanel, R. Zanin, M. Zavrtanik, D. Zavrtanik, A.A. Zdziarski, A. Zech, H. Zechlin, V.I. Zhdanov, A. Ziegler, J. Zorn Jan. 22, 2018 hep-ex, astro-ph.IM, astro-ph.HE The Cherenkov Telescope Array, CTA, will be the major global observatory for very high energy gamma-ray astronomy over the next decade and beyond. The scientific potential of CTA is extremely broad: from understanding the role of relativistic cosmic particles to the search for dark matter. CTA is an explorer of the extreme universe, probing environments from the immediate neighbourhood of black holes to cosmic voids on the largest scales. Covering a huge range in photon energy from 20 GeV to 300 TeV, CTA will improve on all aspects of performance with respect to current instruments. The observatory will operate arrays on sites in both hemispheres to provide full sky coverage and will hence maximize the potential for the rarest phenomena such as very nearby supernovae, gamma-ray bursts or gravitational wave transients. With 99 telescopes on the southern site and 19 telescopes on the northern site, flexible operation will be possible, with sub-arrays available for specific tasks. CTA will have important synergies with many of the new generation of major astronomical and astroparticle observatories. Multi-wavelength and multi-messenger approaches combining CTA data with those from other instruments will lead to a deeper understanding of the broad-band non-thermal properties of target sources. The CTA Observatory will be operated as an open, proposal-driven observatory, with all data available on a public archive after a pre-defined proprietary period. Scientists from institutions worldwide have combined together to form the CTA Consortium. This Consortium has prepared a proposal for a Core Programme of highly motivated observations. The programme, encompassing approximately 40% of the available observing time over the first ten years of CTA operation, is made up of individual Key Science Projects (KSPs), which are presented in this document. The Dark Energy Survey Data Release 1 (1801.03181) T. M. C. Abbott, F. B. Abdalla, S. Allam, A. Amara, J. Annis, J. Asorey, S. Avila, O. Ballester, M. Banerji, W. Barkhouse, L. Baruah, M. Baumer, K. Bechtol, M. R. Becker, A. Benoit-Lévy, G. M. Bernstein, E. Bertin, J. Blazek, S. Bocquet, D. Brooks, D. Brout, E. Buckley-Geer, D. L. Burke, V. Busti, R. Campisano, L. Cardiel-Sas, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, R. Cawthon, C. Chang, C. Conselice, G. Costa, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, R. Das, G. Daues, T. M. Davis, C. Davis, J. De Vicente, D. L. DePoy, J. DeRose, S. Desai, H. T. Diehl, J. P. Dietrich, S. Dodelson, P. Doel, A. Drlica-Wagner, T. F. Eifler, A. E. Elliott, A. E. Evrard, A. Farahi, A. Fausti Neto, E. Fernandez, D. A. Finley, M. Fitzpatrick, B. Flaugher, R. J. Foley, P. Fosalba, D. N. Friedel, J. Frieman, J. García-Bellido, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, M. S. S. Gill, K. Glazebrook, D. A. Goldstein, M. Gower, D. Gruen, R. A. Gruendl, J. Gschwend, R. R. Gupta, G. Gutierrez, S. Hamilton, W. G. Hartley, S. R. Hinton, J. M. Hislop, D. Hollowood, K. Honscheid, B. Hoyle, D. Huterer, B. Jain, D. J. James, T. Jeltema, M. W. G. Johnson, M. D. Johnson, S. Juneau, T. Kacprzak, S. Kent, G. Khullar, M. Klein, A. Kovacs, A. M. G. Koziol, E. Krause, A. Kremin, R. Kron, K. Kuehn, S. Kuhlmann, N. Kuropatkin, O. Lahav, J. Lasker, T. S. Li, R. T. Li, A. R. Liddle, M. Lima, H. Lin, P. López-Reyes, N. MacCrann, M. A. G. Maia, J. D. Maloney, M. Manera, M. March, J. Marriner, J. L. Marshall, P. Martini, T. McClintock, T. McKay, R. G. McMahon, P. Melchior, F. Menanteau, C. J. Miller, R. Miquel, J. J. Mohr, E. Morganson, J. Mould, E. Neilsen, R. C. Nichol, D. Nidever, R. Nikutta, F. Nogueira, B. Nord, P. Nugent, L. Nunes, R. L. C. Ogando, L. Old, K. Olsen, A. B. Pace, A. Palmese, F. Paz-Chinchón, H. V. Peiris, W. J. Percival, D. Petravick, A. A. Plazas, J. Poh, C. Pond, A. Porredon, A. Pujol, A. Refregier, K. Reil, P. M. Ricker, R. P. Rollins, A. K. Romer, A. Roodman, P. Rooney, A. J. Ross, E. S. Rykoff, M. Sako, E. Sanchez, M. L. Sanchez, B. Santiago, A. Saro, V. Scarpine, D. Scolnic, A. Scott, S. Serrano, I. Sevilla-Noarbe, E. Sheldon, N. Shipp, M.L. Silveira, R. C. Smith, J. A. Smith, M. Smith, M. Soares-Santos, F. Sobreira, J. Song, A. Stebbins, E. Suchyta, M. Sullivan, M. E. C. Swanson, G. Tarle, J. Thaler, D. Thomas, R. C. Thomas, M. A. Troxel, D. L. Tucker, V. Vikram, A. K. Vivas, A. R. Walker, R. H. Wechsler, J. Weller, W. Wester, R. C. Wolf, H. Wu, B. Yanny, A. Zenteno, Y. Zhang, J. Zuntz Jan. 9, 2018 astro-ph.CO, astro-ph.GA, astro-ph.SR, astro-ph.IM We describe the first public data release of the Dark Energy Survey, DES DR1, consisting of reduced single epoch images, coadded images, coadded source catalogs, and associated products and services assembled over the first three years of DES science operations. DES DR1 is based on optical/near-infrared imaging from 345 distinct nights (August 2013 to February 2016) by the Dark Energy Camera mounted on the 4-m Blanco telescope at Cerro Tololo Inter-American Observatory in Chile. We release data from the DES wide-area survey covering ~5,000 sq. deg. of the southern Galactic cap in five broad photometric bands, grizY. DES DR1 has a median delivered point-spread function of g = 1.12, r = 0.96, i = 0.88, z = 0.84, and Y = 0.90 arcsec FWHM, a photometric precision of < 1% in all bands, and an astrometric precision of 151 mas. The median coadded catalog depth for a 1.95" diameter aperture at S/N = 10 is g = 24.33, r = 24.08, i = 23.44, z = 22.69, and Y = 21.44 mag. DES DR1 includes nearly 400M distinct astronomical objects detected in ~10,000 coadd tiles of size 0.534 sq. deg. produced from ~39,000 individual exposures. Benchmark galaxy and stellar samples contain ~310M and ~80M objects, respectively, following a basic object quality selection. These data are accessible through a range of interfaces, including query web clients, image cutout servers, Jupyter notebooks, and an interactive coadd image visualization tool. DES DR1 constitutes the largest photometric data set to date at the achieved depth and photometric precision. Inferences on Mass Composition and Tests of Hadronic Interactions from 0.3 to 100 EeV using the water-Cherenkov Detectors of the Pierre Auger Observatory (1710.07249) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M.
Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. 
Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 19, 2017 astro-ph.HE We present a new method for probing the hadronic interaction models at ultra-high energy and extracting details about mass composition. This is done using the time profiles of the signals recorded with the water-Cherenkov detectors of the Pierre Auger Observatory. The profiles arise from a mix of the muon and electromagnetic components of air showers. Using the risetimes of the recorded signals we define a new parameter, which we use to compare our observations with predictions from simulations. We find, firstly, inconsistencies between our data and predictions over a greater energy range and with substantially more events than in previous studies. Secondly, by calibrating the new parameter with fluorescence measurements from observations made at the Auger Observatory, we can infer the depth of shower maximum for a sample of over 81,000 events extending from 0.3 EeV to over 100 EeV. Above 30 EeV, the sample is nearly fourteen times larger than that currently available from fluorescence measurements and extends the covered energy range by half a decade. The energy dependence of the average depth of shower maximum is compared to simulations and interpreted in terms of the mean of the logarithmic mass. We find good agreement with previous work and extend the measurement of the mean depth of shower maximum to greater energies than before, reducing significantly the statistical uncertainty associated with the inferences about mass composition. Search for High-energy Neutrinos from Binary Neutron Star Merger GW170817 with ANTARES, IceCube, and the Pierre Auger Observatory (1710.05839) ANTARES, IceCube, Pierre Auger, LIGO Scientific, Virgo Collaborations: A. Albert, M. Andre, M. Anghinolfi, M. Ardid, J.-J. Aubert, J. Aublin, T. Avgitas, B. Baret, J. Barrios-Marti, S. Basa, B. Belhorma, V. Bertin, S. Biagi, R. Bormuth, S. Bourret, M.C. Bouwhuis, H. Brânzaş, R. Bruijn, J. Brunner, J. Busto, A. Capone, L. Caramete, J.
Carr, S. Celli, R. Cherkaoui El Moursli, T. Chiarusi, M. Circella, J.A.B. Coelho, A. Coleiro, R. Coniglione, H. Costantini, P. Coyle, A. Creusot, A. F. Diaz, A. Deschamps, G. De Bonis, C. Distefano, I. Di Palma, A. Domi, C. Donzaud, D. Dornic, D. Drouhin, T. Eberl, I. El Bojaddaini, N. El Khayati, D. Elsasser, A. Enzenhofer, A. Ettahiri, F. Fassi, I. Felis, L.A. Fusco, P. Gay, V. Giordano, H. Glotin, T. Gregoire, R. Gracia Ruiz, K. Graf, S. Hallmann, H. van Haren, A.J. Heijboer, Y. Hello, J.J. Hernandez-Rey, J. Hossl, J. Hofestadt, G. Illuminati, C.W. James, M. de Jong, M. Jongen, M. Kadler, O. Kalekin, U. Katz, D. Kiessling, A. Kouchner, M. Kreter, I. Kreykenbohm, V. Kulikovskiy, C. Lachaud, R. Lahmann, D. Lef`evre, E. Leonora, M. Lotze, S. Loucatos, M. Marcelin, A. Margiotta, A. Marinelli, J.A. Martinez-Mora, R. Mele, K. Melis, T. Michael, P. Migliozzi, A. Moussa, S. Navas, E. Nezri, M. Organokov, G.E. Puavualacs, C. Pellegrino, C. Perrina, P. Piattelli, V. Popa, T. Pradier, L. Quinn, C. Racca, G. Riccobene, A. Sanchez-Losa, M. Salda na, I. Salvadori, D. F. E. Samtleben, M. Sanguineti, P. Sapienza, F. Schussler, C. Sieger, M. Spurio, Th. Stolarczyk, M. Taiuti, Y. Tayalati, A. Trovato, D. Turpin, C. Tonnis, B. Vallage, V. Van Elewyck, F. Versari, D. Vivolo, A. Vizzoca, J. Wilms, J.D. Zornoza, J. Zu niga, M. G. Aartsen, M. Ackermann, J. Adams, J. A. Aguilar, M. Ahlers, M. Ahrens, I. Al Samarai, D. Altmann, K. Andeen, T. Anderson, I. Ansseau, G. Anton, C. Arguelles, J. Auffenberg, S. Axani, H. Bagherpour, X. Bai, J. P. Barron, S. W. Barwick, V. Baum, R. Bay, J. J. Beatty, J. Becker Tjus, K.-H. Becker, S. BenZvi, D. Berley, E. Bernardini, D. Z. Besson, G. Binder, D. Bindig, E. Blaufuss, S. Blot, C. Bohm, M. Borner, F. Bos, D. Bose, S. Boser, O. Botner, E. Bourbeau, J. Bourbeau, F. Bradascio, J. Braun, L. Brayeur, M. Brenzke, H.-P. Bretz, S. Bron, J. Brostean-Kaiser, A. Burgman, T. Carver, J. Casey, M. Casier, E. Cheung, D. Chirkin, A. Christov, K. Clark, L. Classen, S. Coenders, G. H. Collin, J. M. Conrad, D. F. Cowen, R. Cross, M. Day, J. P. A. M. de Andre, C. De Clercq, J. J. DeLaunay, H. Dembinski, S. De Ridder, P. Desiati, K. D. de Vries, G. de Wasseige, M. de With, T. DeYoung, J. C. Diaz-Velez, V. di Lorenzo, H. Dujmovic, J. P. Dumm, M. Dunkman, E. Dvorak, B. Eberhardt, T. Ehrhardt, B. Eichmann, P. Eller, P. A. Evenson, S. Fahey, A. R. Fazely, J. Felde, K. Filimonov, C. Finley, S. Flis, A. Franckowiak, E. Friedman, T. Fuchs, T. K. Gaisser, J. Gallagher, L. Gerhardt, K. Ghorbani, W. Giang, T. Glauch, T. Glusenkamp, A. Goldschmidt, J. G. Gonzalez, D. Grant, Z. Griffith, C. Haack, A. Hallgren, F. Halzen, K. Hanson, D. Hebecker, D. Heereman, K. Helbing, R. Hellauer, S. Hickford, J. Hignight, G. C. Hill, K. D. Hoffman, R. Hoffmann, B. Hokanson-Fasig, K. Hoshina, F. Huang, M. Huber, K. Hultqvist, M. Hunnefeld, S. In, A. Ishihara, E. Jacobi, G. S. Japaridze, M. Jeong, K. Jero, B. J. P. Jones, P. Kalaczynski, W. Kang, A. Kappes, T. Karg, A. Karle, U. Katz, M. Kauer, A. Keivani, J. L. Kelley, A. Kheirandish, J. Kim, M. Kim, T. Kintscher, J. Kiryluk, T. Kittler, S. R. Klein, G. Kohnen, R. Koirala, H. Kolanoski, L. Kopke, C. Kopper, S. Kopper, J. P. Koschinsky, D. J. Koskinen, M. Kowalski, K. Krings, M. Kroll, G. Kruckl, J. Kunnen, S. Kunwar, N. Kurahashi, T. Kuwabara, A. Kyriacou, M. Labare, J. L. Lanfranchi, M. J. Larson, F. Lauber, M. Lesiak-Bzdak, M. Leuermann, Q. R. Liu, L. Lu, J. Lunemann, W. Luszczak, J. Madsen, G. Maggi, K. B. M. Mahn, S. Mancina, R. Maruyama, K. Mase, R. Maunu, F. 
McNally, K. Meagher, M. Medici, M. Meier, T. Menne, G. Merino, T. Meures, S. Miarecki, J. Micallef, G. Momente, T. Montaruli, R. W. Moore, M. Moulai, R. Nahnhauer, P. Nakarmi, U. Naumann, G. Neer, H. Niederhausen, S. C. Nowicki, D. R. Nygren, A. Obertacke Pollmann, A. Olivas, A. O'Murchadha, T. Palczewski, H. Pandya, D. V. Pankova, P. Peiffer, J. A. Pepper, C. Perez de los Heros, D. Pieloth, E. Pinat, M. Plum, D. Pranav, P. B. Price, G. T. Przybylski, C. Raab, L. Radel, M. Rameez, K. Rawlins, I. C. Rea, R. Reimann, B. Relethford, M. Relich, E. Resconi, W. Rhode, M. Richman, S. Robertson, M. Rongen, C. Rott, T. Ruhe, D. Ryckbosch, D. Rysewyk, T. Salzer, S. E. Sanchez Herrera, A. Sandrock, J. Sandroos, M. Santander, S. Sarkar, S. Sarkar, K. Satalecka, P. Schlunder, T. Schmidt, A. Schneider, S. Schoenen, S. Schoneberg, L. Schumacher, D. Seckel, S. Seunarine, J. Soedingrekso, D. Soldin, M. Song, G. M. Spiczak, C. Spiering, J. Stachurska, M. Stamatikos, T. Stanev, A. Stasik, J. Stettner, A. Steuer, T. Stezelberger, R. G. Stokstad, A. Stossl, N. L. Strotjohann, T. Stuttard, G. W. Sullivan, M. Sutherland, I. Taboada, J. Tatar, F. Tenholt, S. Ter-Antonyan, A. Terliuk, G. Tevsic, S. Tilav, P. A. Toale, M. N. Tobin, S. Toscano, D. Tosi, M. Tselengidou, C. F. Tung, A. Turcati, C. F. Turley, B. Ty, E. Unger, M. Usner, J. Vandenbroucke, W. Van Driessche, N. van Eijndhoven, S. Vanheule, J. van Santen, M. Vehring, E. Vogel, M. Vraeghe, C. Walck, A. Wallace, M. Wallraff, F. D. Wandler, N. Wandkowsky, A. Waza, C. Weaver, M. J. Weiss, C. Wendt, J. Werthebach, S. Westerhoff, B. J. Whelan, K. Wiebe, C. H. Wiebusch, L. Wille, D. R. Williams, L. Wills, M. Wolf, J. Wood, T. R. Wood, E. Woolsey, K. Woschnagg, D. L. Xu, X. W. Xu, Y. Xu, J. P. Yanez, G. Yodh, S. Yoshida, T. Yuan, M. Zoll, A. Aab, P. Abreu, M. Aglietta, I.F.M. Albuquerque, J.M. Albury, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Mu niz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, N. Arsene, H. Asorey, P. Assis, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Bohavcova, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A.C. Cobos Cerutti, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceicc ao, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Croninaltaffiliation Deceased, August 2016., S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, J.A. Day, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Diaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Feldbusch, F. Fenu, B. Fick, J.M. Figueira, A. Filipvcivc, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. Garcia, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Glas, C. Glaser, G. Golup, M. Gomez Berisso, P.F. Gomez Vitale, N. Gonzalez, A. Gorgi, M. Gottowik, A.F. 
Grilloaltaffiliation Deceased, February 2017., T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, V.M. Harvey, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Horandel, P. Horvath, M. Hrabovsky, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kaapa, K.H. Kampert, B. Keilhauer, N. Kemmerich, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. Lopez, A. Lopez Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Marics, G. Marsella, D. Martello, H. Martinez, O. Martinez Bravo, J.J. Masias Meza, H.J. Mathes, S. Mathys, J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, G. Morlino, M. Mostafa, A.L. Muller, G. Muller, M.A. Muller, S. Muller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Novzka, L.A. Nu nez, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pcekala, R. Pelayo, J. Pe na-Rodriguez, L.A.S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, J. Poh, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, G. Salina, F. Sanchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovanek, F.G. Schroder, S. Schroder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, R. vSmida, G.R. Snow, P. Sommers, S. Sonntag, J.F. Soriano, R. Squartini, D. Stanca, S. Stanivc, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Duran, T. Sudholz, T. Suomijarvi, A.D. Supanitsky, J. vSupik, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tome, G. Torralba Elipe, P. Travnicek, M. Trini, M. Tueros, R. Ulrich, M. Unger, M. Urban, J.F. Valdes Galicia, I. Vali no, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cardenas, R.A. Vazquez, D. Veberivc, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villase nor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, M. Wiedenski, L. Wiencke, H. Wilczynski, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello, B. P. Abbott, R. Abbott, T. D. Abbott, F. Acernese, K. Ackley, C. Adams, T. Adams, P. Addesso, R. X. Adhikari, V. B. Adya, C. 
Affeldt, M. Afrough, B. Agarwal, M. Agathos, K. Agatsuma, N. Aggarwal, O. D. Aguiar, L. Aiello, A. Ain, P. Ajith, B. Allen, G. Allen, A. Allocca, P. A. Altin, A. Amato, A. Ananyeva, S. B. Anderson, W. G. Anderson, S. V. Angelova, S. Antier, S. Appert, K. Arai, M. C. Araya, J. S. Areeda, N. Arnaud, K. G. Arun, S. Ascenzi, G. Ashton, M. Ast, S. M. Aston, P. Astone, D. V. Atallah, P. Aufmuth, C. Aulbert, K. AultONeal, C. Austin, A. Avila-Alvarez, S. Babak, P. Bacon, M. K. M. Bader, S. Bae, P. T. Baker, F. Baldaccini, G. Ballardin, S. W. Ballmer, S. Banagiri, J. C. Barayoga, S. E. Barclay, B. C. Barish, D. Barker, K. Barkett, F. Barone, B. Barr, L. Barsotti, M. Barsuglia, D. Barta, J. Bartlett, I. Bartos, R. Bassiri, A. Basti, J. C. Batch, M. Bawaj, J. C. Bayley, M. Bazzan, B. Becsy, C. Beer, M. Bejger, I. Belahcene, A. S. Bell, B. K. Berger, G. Bergmann, J. J. Bero, C. P. L. Berry, D. Bersanetti, A. Bertolini, J. Betzwieser, S. Bhagwat, R. Bhandare, I. A. Bilenko, G. Billingsley, C. R. Billman, J. Birch, R. Birney, O. Birnholtz, S. Biscans, S. Biscoveanu, A. Bisht, M. Bitossi, C. Biwer, M. A. Bizouard, J. K. Blackburn, J. Blackman, C. D. Blair, D. G. Blair, R. M. Blair, S. Bloemen, O. Bock, N. Bode, M. Boer, G. Bogaert, A. Bohe, F. Bondu, E. Bonilla, R. Bonnand, B. A. Boom, R. Bork, V. Boschi, S. Bose, K. Bossie, Y. Bouffanais, A. Bozzi, C. Bradaschia, P. R. Brady, M. Branchesi, J. E. Brau, T. Briant, A. Brillet, M. Brinkmann, V. Brisson, P. Brockill, J. E. Broida, A. F. Brooks, D. A. Brown, D. D. Brown, S. Brunett, C. C. Buchanan, A. Buikema, T. Bulik, H. J. Bulten, A. Buonanno, D. Buskulic, C. Buy, R. L. Byer, M. Cabero, L. Cadonati, G. Cagnoli, C. Cahillane, J. Calderon Bustillo, T. A. Callister, E. Calloni, J. B. Camp, M. Canepa, P. Canizares, K. C. Cannon, H. Cao, J. Cao, C. D. Capano, E. Capocasa, F. Carbognani, S. Caride, M. F. Carney, J. Casanueva Diaz, C. Casentini, S. Caudill, M. Cavaglià, F. Cavalier, R. Cavalieri, G. Cella, C. B. Cepeda, P. Cerda-Duran, G. Cerretani, E. Cesarini, S. J. Chamberlin, M. Chan, S. Chao, P. Charlton, E. Chase, E. Chassande-Mottin, D. Chatterjee, B. D. Cheeseboro, H. Y. Chen, X. Chen, Y. Chen, H.-P. Cheng, H. Chia, A. Chincarini, A. Chiummo, T. Chmiel, H. S. Cho, M. Cho, J. H. Chow, N. Christensen, Q. Chu, A. J. K. Chua, S. Chua, A. K. W. Chung, S. Chung, G. Ciani, R. Ciolfi, C. E. Cirelli, A. Cirone, F. Clara, J. A. Clark, P. Clearwater, F. Cleva, C. Cocchieri, E. Coccia, P.-F. Cohadon, D. Cohen, A. Colla, C. G. Collette, L. R. Cominsky, M. Constancio Jr., L. Conti, S. J. Cooper, P. Corban, T. R. Corbitt, I. Cordero-Carrion, K. R. Corley, N. Cornish, A. Corsi, S. Cortese, C. A. Costa, M. W. Coughlin, S. B. Coughlin, J.-P. Coulon, S. T. Countryman, P. Couvares, P. B. Covas, E. E. Cowan, D. M. Coward, M. J. Cowart, D. C. Coyne, R. Coyne, J. D. E. Creighton, T. D. Creighton, J. Cripe, S. G. Crowder, T. J. Cullen, A. Cumming, L. Cunningham, E. Cuoco, T. Dal Canton, G. Dalya, S. L. Danilishin, S. D'Antonio, K. Danzmann, A. Dasgupta, C. F. Da Silva Costa, V. Dattilo, I. Dave, M. Davier, D. Davis, E. J. Daw, B. Day, S. De, D. DeBra, J. Degallaix, M. De Laurentis, S. Deleglise, W. Del Pozzo, N. Demos, T. Denker, T. Dent, R. De Pietri, V. Dergachev, R. De Rosa, R. T. DeRosa, C. De Rossi, R. DeSalvo, O. de Varona, J. Devenson, S. Dhurandhar, M. C. Diaz, L. Di Fiore, M. Di Giovanni, T. Di Girolamo, A. Di Lieto, S. Di Pace, I. Di Palma, F. Di Renzo, Z. Doctor, V. Dolique, F. Donovan, K. L. Dooley, S. Doravari, I. Dorrington, R. Douglas, M. Dovale Alvarez, T. P. 
Downes, M. Drago, C. Dreissigacker, J. C. Driggers, Z. Du, M. Ducrot, P. Dupej, S. E. Dwyer, T. B. Edo, M. C. Edwards, A. Effler, H.-B. Eggenstein, P. Ehrens, J. Eichholz, S. S. Eikenberry, R. A. Eisenstein, R. C. Essick, D. Estevez, Z. B. Etienne, T. Etzel, M. Evans, T. M. Evans, M. Factourovich, V. Fafone, H. Fair, S. Fairhurst, X. Fan, S. Farinon, B. Farr, W. M. Farr, E. J. Fauchon-Jones, M. Favata, M. Fays, C. Fee, H. Fehrmann, J. Feicht, M. M. Fejer, A. Fernandez-Galiana, I. Ferrante, E. C. Ferreira, F. Ferrini, F. Fidecaro, D. Finstad, I. Fiori, D. Fiorucci, M. Fishbach, R. P. Fisher, M. Fitz-Axen, R. Flaminio, M. Fletcher, H. Fong, J. A. Font, P. W. F. Forsyth, S. S. Forsyth, J.-D. Fournier, S. Frasca, F. Frasconi, Z. Frei, A. Freise, R. Frey, V. Frey, E. M. Fries, P. Fritschel, V. V. Frolov, P. Fulda, M. Fyffe, H. Gabbard, B. U. Gadre, S. M. Gaebel, J. R. Gair, L. Gammaitoni, M. R. Ganija, S. G. Gaonkar, C. Garcia-Quiros, F. Garufi, B. Gateley, S. Gaudio, G. Gaur, V. Gayathri, N. Gehrels (deceased, February 2017), G. Gemme, E. Genin, A. Gennai, D. George, J. George, L. Gergely, V. Germain, S. Ghonge, Abhirup Ghosh, Archisman Ghosh, S. Ghosh, J. A. Giaime, K. D. Giardina, A. Giazotto, K. Gill, L. Glover, E. Goetz, R. Goetz, S. Gomes, B. Goncharov, G. Gonzalez, J. M. Gonzalez Castro, A. Gopakumar, M. L. Gorodetsky, S. E. Gossan, M. Gosselin, R. Gouaty, A. Grado, C. Graef, M. Granata, A. Grant, S. Gras, C. Gray, G. Greco, A. C. Green, E. M. Gretarsson, P. Groot, H. Grote, S. Grunewald, P. Gruning, G. M. Guidi, X. Guo, A. Gupta, M. K. Gupta, K. E. Gushwa, E. K. Gustafson, R. Gustafson, O. Halim, B. R. Hall, E. D. Hall, E. Z. Hamilton, G. Hammond, M. Haney, M. M. Hanke, J. Hanks, C. Hanna, M. D. Hannam, O. A. Hannuksela, J. Hanson, T. Hardwick, J. Harms, G. M. Harry, I. W. Harry, M. J. Hart, C.-J. Haster, K. Haughian, J. Healy, A. Heidmann, M. C. Heintze, H. Heitmann, P. Hello, G. Hemming, M. Hendry, I. S. Heng, J. Hennig, A. W. Heptonstall, M. Heurs, S. Hild, T. Hinderer, D. Hoak, D. Hofman, K. Holt, D. E. Holz, P. Hopkins, C. Horst, J. Hough, E. A. Houston, E. J. Howell, A. Hreibi, Y. M. Hu, E. A. Huerta, D. Huet, B. Hughey, S. Husa, S. H. Huttner, T. Huynh-Dinh, N. Indik, R. Inta, G. Intini, H. N. Isa, J.-M. Isac, M. Isi, B. R. Iyer, K. Izumi, T. Jacqmin, K. Jani, P. Jaranowski, S. Jawahar, F. Jimenez-Forteza, W. W. Johnson, D. I. Jones, R. Jones, R. J. G. Jonker, L. Ju, J. Junker, C. V. Kalaghatgi, V. Kalogera, B. Kamai, S. Kandhasamy, G. Kang, J. B. Kanner, S. J. Kapadia, S. Karki, K. S. Karvinen, M. Kasprzack, M. Katolik, E. Katsavounidis, W. Katzman, S. Kaufer, K. Kawabe, F. Kefelian, D. Keitel, A. J. Kemball, R. Kennedy, C. Kent, J. S. Key, F. Y. Khalili, I. Khan, S. Khan, Z. Khan, E. A. Khazanov, N. Kijbunchoo, Chunglee Kim, J. C. Kim, K. Kim, W. Kim, W. S. Kim, Y.-M. Kim, S. J. Kimbrell, E. J. King, P. J. King, M. Kinley-Hanlon, R. Kirchhoff, J. S. Kissel, L. Kleybolte, S. Klimenko, T. D. Knowles, P. Koch, S. M. Koehlenbeck, S. Koley, V. Kondrashov, A. Kontos, M. Korobko, W. Z. Korth, I. Kowalska, D. B. Kozak, C. Kramer, V. Kringel, B. Krishnan, A. Krolak, G. Kuehn, P. Kumar, R. Kumar, S. Kumar, L. Kuo, A. Kutynia, S. Kwang, B. D. Lackey, K. H. Lai, M. Landry, R. N. Lang, J. Lange, B. Lantz, R. K. Lanza, A. Lartaux-Vollard, P. D. Lasky, M. Laxen, A. Lazzarini, C. Lazzaro, P. Leaci, S. Leavey, C. H. Lee, H. K. Lee, H. M. Lee, H. W. Lee, K. Lee, J. Lehmann, A. Lenon, M. Leonardi, N. Leroy, N. Letendre, Y. Levin, T. G. F. Li, S. D. Linker, T. B. Littenberg, J. 
Liu, R. K. L. Lo, N. A. Lockerbie, L. T. London, J. E. Lord, M. Lorenzini, V. Loriette, M. Lormand, G. Losurdo, J. D. Lough, C. O. Lousto, G. Lovelace, H. Lück, D. Lumaca, A. P. Lundgren, R. Lynch, Y. Ma, R. Macas, S. Macfoy, B. Machenschalk, M. MacInnis, D. M. Macleod, I. Magaña Hernandez, F. Magaña-Sandoval, L. Magaña Zertuche, R. M. Magee, E. Majorana, I. Maksimovic, N. Man, V. Mandic, V. Mangano, G. L. Mansell, M. Manske, M. Mantovani, F. Marchesoni, F. Marion, S. Marka, Z. Marka, C. Markakis, A. S. Markosyan, A. Markowitz, E. Maros, A. Marquina, F. Martelli, L. Martellini, I. W. Martin, R. M. Martin, D. V. Martynov, K. Mason, E. Massera, A. Masserot, T. J. Massinger, M. Masso-Reid, S. Mastrogiovanni, A. Matas, F. Matichard, L. Matone, N. Mavalvala, N. Mazumder, R. McCarthy, D. E. McClelland, S. McCormick, L. McCuller, S. C. McGuire, G. McIntyre, J. McIver, D. J. McManus, L. McNeill, T. McRae, S. T. McWilliams, D. Meacher, G. D. Meadors, M. Mehmet, J. Meidam, E. Mejuto-Villa, A. Melatos, G. Mendell, R. A. Mercer, E. L. Merilh, M. Merzougui, S. Meshkov, C. Messenger, C. Messick, R. Metzdorff, P. M. Meyers, H. Miao, C. Michel, H. Middleton, E. E. Mikhailov, L. Milano, A. L. Miller, B. B. Miller, J. Miller, M. Millhouse, M. C. Milovich-Goff, O. Minazzoli, Y. Minenkov, J. Ming, C. Mishra, S. Mitra, V. P. Mitrofanov, G. Mitselmakher, R. Mittleman, D. Moffa, A. Moggi, K. Mogushi, M. Mohan, S. R. P. Mohapatra, M. Montani, C. J. Moore, D. Moraru, G. Moreno, S. R. Morriss, B. Mours, C. M. Mow-Lowry, G. Mueller, A. W. Muir, Arunava Mukherjee, D. Mukherjee, S. Mukherjee, N. Mukund, A. Mullavey, J. Munch, E. A. Muñiz, M. Muratore, P. G. Murray, K. Napier, I. Nardecchia, L. Naticchioni, R. K. Nayak, J. Neilson, G. Nelemans, T. J. N. Nelson, M. Nery, A. Neunzert, L. Nevin, J. M. Newport, G. Newton (deceased, December 2016), K. K. Y. Ng, T. T. Nguyen, D. Nichols, A. B. Nielsen, S. Nissanke, A. Nitz, A. Noack, F. Nocera, D. Nolting, C. North, L. K. Nuttall, J. Oberling, G. D. O'Dea, G. H. Ogin, J. J. Oh, S. H. Oh, F. Ohme, M. A. Okada, M. Oliver, P. Oppermann, Richard J. Oram, B. O'Reilly, R. Ormiston, L. F. Ortega, R. O'Shaughnessy, S. Ossokine, D. J. Ottaway, H. Overmier, B. J. Owen, A. E. Pace, J. Page, M. A. Page, A. Pai, S. A. Pai, J. R. Palamos, O. Palashov, C. Palomba, A. Pal-Singh, Howard Pan, Huang-Wei Pan, B. Pang, P. T. H. Pang, C. Pankow, F. Pannarale, B. C. Pant, F. Paoletti, A. Paoli, M. A. Papa, A. Parida, W. Parker, D. Pascucci, A. Pasqualetti, R. Passaquieti, D. Passuello, M. Patil, B. Patricelli, B. L. Pearlstone, M. Pedraza, R. Pedurand, L. Pekowsky, A. Pele, S. Penn, C. J. Perez, A. Perreca, L. M. Perri, H. P. Pfeiffer, M. Phelps, O. J. Piccinni, M. Pichot, F. Piergiovanni, V. Pierro, G. Pillant, L. Pinard, I. M. Pinto, M. Pirello, M. Pitkin, M. Poe, R. Poggiani, P. Popolizio, E. K. Porter, A. Post, J. Powell, J. Prasad, J. W. W. Pratt, G. Pratten, V. Predoi, T. Prestegard, M. Prijatelj, M. Principe, S. Privitera, G. A. Prodi, L. G. Prokhorov, O. Puncken, M. Punturo, P. Puppo, M. Pürrer, H. Qi, V. Quetschke, E. A. Quintero, R. Quitzow-James, F. J. Raab, D. S. Rabeling, H. Radkins, P. Raffai, S. Raja, C. Rajan, B. Rajbhandari, M. Rakhmanov, K. E. Ramirez, A. Ramos-Buades, P. Rapagnani, V. Raymond, M. Razzano, J. Read, T. Regimbau, L. Rei, S. Reid, D. H. Reitze, W. Ren, S. D. Reyes, F. Ricci, P. M. Ricker, S. Rieger, K. Riles, M. Rizzo, N. A. Robertson, R. Robie, F. Robinet, A. Rocchi, L. Rolland, J. G. Rollins, V. J. Roma, R. Romano, C. L. Romel, J. H. Romie, D. 
Rosińska, M. P. Ross, S. Rowan, A. Rüdiger, P. Ruggi, G. Rutins, K. Ryan, S. Sachdev, T. Sadecki, L. Sadeghian, M. Sakellariadou, L. Salconi, M. Saleem, F. Salemi, A. Samajdar, L. Sammut, L. M. Sampson, E. J. Sanchez, L. E. Sanchez, N. Sanchis-Gual, V. Sandberg, J. R. Sanders, B. Sassolas, B. S. Sathyaprakash, P. R. Saulson, O. Sauter, R. L. Savage, A. Sawadsky, P. Schale, M. Scheel, J. Scheuer, J. Schmidt, P. Schmidt, R. Schnabel, R. M. S. Schofield, A. Schönbeck, E. Schreiber, D. Schuette, B. W. Schulte, B. F. Schutz, S. G. Schwalbe, J. Scott, S. M. Scott, E. Seidel, D. Sellers, A. S. Sengupta, D. Sentenac, V. Sequino, A. Sergeev, D. A. Shaddock, T. J. Shaffer, A. A. Shah, M. S. Shahriar, M. B. Shaner, L. Shao, B. Shapiro, P. Shawhan, A. Sheperd, D. H. Shoemaker, D. M. Shoemaker, K. Siellez, X. Siemens, M. Sieniawska, D. Sigg, A. D. Silva, L. P. Singer, A. Singh, A. Singhal, A. M. Sintes, B. J. J. Slagmolen, B. Smith, J. R. Smith, R. J. E. Smith, S. Somala, E. J. Son, J. A. Sonnenberg, B. Sorazu, F. Sorrentino, T. Souradeep, A. P. Spencer, A. K. Srivastava, K. Staats, A. Staley, M. Steinke, J. Steinlechner, S. Steinlechner, D. Steinmeyer, S. P. Stevenson, R. Stone, D. J. Stops, K. A. Strain, G. Stratta, S. E. Strigin, A. Strunk, R. Sturani, A. L. Stuver, T. Z. Summerscales, L. Sun, S. Sunil, J. Suresh, P. J. Sutton, B. L. Swinkels, M. J. Szczepanczyk, M. Tacca, S. C. Tait, C. Talbot, D. Talukder, D. B. Tanner, M. Tapai, A. Taracchini, J. D. Tasson, J. A. Taylor, R. Taylor, S. V. Tewari, T. Theeg, F. Thies, E. G. Thomas, M. Thomas, P. Thomas, K. A. Thorne, E. Thrane, S. Tiwari, V. Tiwari, K. V. Tokmakov, K. Toland, M. Tonelli, Z. Tornasi, A. Torres-Forne, C. I. Torrie, D. Toyra, F. Travasso, G. Traylor, J. Trinastic, M. C. Tringali, L. Trozzo, K. W. Tsang, M. Tse, R. Tso, L. Tsukada, D. Tsuna, D. Tuyenbayev, K. Ueno, D. Ugolini, C. S. Unnikrishnan, A. L. Urban, S. A. Usman, H. Vahlbruch, G. Vajente, G. Valdes, N. van Bakel, M. van Beuzekom, J. F. J. van den Brand, C. Van Den Broeck, D. C. Vander-Hyde, L. van der Schaaf, J. V. van Heijningen, A. A. van Veggel, M. Vardaro, V. Varma, S. Vass, M. Vasuth, A. Vecchio, G. Vedovato, J. Veitch, P. J. Veitch, K. Venkateswara, G. Venugopalan, D. Verkindt, F. Vetrano, A. Vicere, A. D. Viets, S. Vinciguerra, D. J. Vine, J.-Y. Vinet, S. Vitale, T. Vo, H. Vocca, C. Vorvick, S. P. Vyatchanin, A. R. Wade, L. E. Wade, M. Wade, R. Walet, M. Walker, L. Wallace, S. Walsh, G. Wang, H. Wang, J. Z. Wang, W. H. Wang, Y. F. Wang, R. L. Ward, J. Warner, M. Was, J. Watchi, B. Weaver, L.-W. Wei, M. Weinert, A. J. Weinstein, R. Weiss, L. Wen, E. K. Wessel, P. Wessels, J. Westerweck, T. Westphal, K. Wette, J. T. Whelan, B. F. Whiting, C. Whittle, D. Wilken, D. Williams, R. D. Williams, A. R. Williamson, J. L. Willis, B. Willke, M. H. Wimmer, W. Winkler, C. C. Wipf, H. Wittel, G. Woan, J. Woehler, J. Wofford, K. W. K. Wong, J. Worden, J. L. Wright, D. S. Wu, D. M. Wysocki, S. Xiao, H. Yamamoto, C. C. Yancey, L. Yang, M. J. Yap, M. Yazback, Hang Yu, Haocun Yu, M. Yvert, A. Zadrożny, M. Zanolin, T. Zelenova, J.-P. Zendri, M. Zevin, L. Zhang, M. Zhang, T. Zhang, Y.-H. Zhang, C. Zhao, M. Zhou, Z. Zhou, S. J. Zhu, X. J. Zhu, M. E. Zucker, J. Zweizig The Advanced LIGO and Advanced Virgo observatories recently discovered gravitational waves from a binary neutron star inspiral. 
A short gamma-ray burst (GRB) that followed the merger of this binary was also recorded by the Fermi Gamma-ray Burst Monitor (Fermi-GBM), and the Anticoincidence Shield for the Spectrometer for the International Gamma-Ray Astrophysics Laboratory (INTEGRAL), indicating particle acceleration by the source. The precise location of the event was determined by optical detections of emission following the merger. We searched for high-energy neutrinos from the merger in the GeV--EeV energy range using the ANTARES, IceCube, and Pierre Auger Observatories. No neutrinos directionally coincident with the source were detected within $\pm500$ s around the merger time. Additionally, no MeV neutrino burst signal was detected coincident with the merger. We further carried out an extended search in the direction of the source for high-energy neutrinos within the 14-day period following the merger, but found no evidence of emission. We used these results to probe dissipation mechanisms in relativistic outflows driven by the binary neutron star merger. The non-detection is consistent with model predictions of short GRBs observed at a large off-axis angle. Muon Counting using Silicon Photomultipliers in the AMIGA detector of the Pierre Auger Observatory (1703.06193) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, L. del Peral, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. 
Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, L. Latronico, M. Lauscher, P. Lautridou, P. Lebrun, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, G. Müller, M.A. Muller, S. Müller, I. Naranjo, S. Navas, L. Nellen, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollant, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, M. Torri, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, A. Valbuena-Delgado, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 4, 2017 physics.ins-det, astro-ph.IM AMIGA (Auger Muons and Infill for the Ground Array) is an upgrade of the Pierre Auger Observatory designed to extend its energy range of detection and to directly measure the muon content of the cosmic ray primary particle showers. 
The array will be formed by an infill of surface water-Cherenkov detectors associated with buried scintillation counters employed for muon counting. Each counter is composed of three scintillation modules, with a 10 m$^2$ detection area per module. In this paper, a new generation of detectors, which replaces the current multi-pixel photomultiplier tube (PMT) with silicon photomultipliers (SiPMs), is proposed. The selection of the new device and its front-end electronics is explained. A method to calibrate the counting system, ensuring the performance of the detector, is detailed. This method has the advantage that it can be carried out at a remote site such as the one where the detectors are deployed. High efficiency, i.e. 98% for the highest tested overvoltage, combined with a low probability of accidental counting ($\sim$2%), shows promising performance for this new system. Cherenkov Telescope Array Contributions to the 35th International Cosmic Ray Conference (ICRC2017) (1709.03483) F. Acero, B.S. Acharya, V. Acín Portella, C. Adams, I. Agudo, F. Aharonian, I. Al Samarai, A. Alberdi, M. Alcubierre, R. Alfaro, J. Alfaro, C. Alispach, R. Aloisio, R. Alves Batista, J.-P. Amans, E. Amato, L. Ambrogi, G. Ambrosi, M. Ambrosio, J. Anderson, M. Anduze, E.O. Angüner, E. Antolini, L.A. Antonelli, V. Antonuccio, P. Antoranz, C. Aramo, M. Araya, C. Arcaro, T. Armstrong, F. Arqueros, L. Arrabito, M. Arrieta, K. Asano, A. Asano, M. Ashley, P. Aubert, C. B. Singh, A. Babic, M. Backes, S. Bajtlik, C. Balazs, M. Balbo, O. Ballester, J. Ballet, L. Ballo, A. Balzer, A. Bamba, R. Bandiera, P. Barai, C. Barbier, M. Barcelo, M. Barkov, U. Barres de Almeida, J.A. Barrio, D. Bastieri, C. Bauer, U. Becciani, Y. Becherini, J. Becker Tjus, W. Bednarek, A. Belfiore, W. Benbow, M. Benito, D. Berge, E. Bernardini, M.G. Bernardini, M. Bernardos, S. Bernhard, K. Bernlöhr, C. Bertinelli Salucci, B. Bertucci, M.-A. Besel, V. Beshley, J. Bettane, N. Bhatt, W. Bhattacharyya, S. Bhattachryya, B. Biasuzzi, G. Bicknell, C. Bigongiari, A. Biland, A. Bilinsky, R. Bird, E. Bissaldi, J. Biteau, M. Bitossi, O. Blanch, P. Blasi, J. Blazek, C. Boccato, C. Bockermann, C. Boehm, M. Bohacova, C. Boisson, J. Bolmont, G. Bonanno, A. Bonardi, C. Bonavolontà, G. Bonnoli, J. Borkowski, R. Bose, Z. Bosnjak, M. Böttcher, C. Boutonnet, F. Bouyjou, L. Bowman, V. Bozhilov, C. Braiding, S. Brau-Nogué, J. Bregeon, M. Briggs, A. Brill, W. Brisken, D. Bristow, R. Britto, E. Brocato, A.M. Brown, S. Brown, K. Brügge, P. Brun, P. Brun, F. Brun, L. Brunetti, G. Brunetti, P. Bruno, M. Bryan, J. Buckley, V. Bugaev, R. Bühler, A. Bulgarelli, T. Bulik, M. Burton, A. Burtovoi, G. Busetto, S. Buson, J. Buss, K. Byrum, A. Caccianiga, R. Cameron, F. Canelli, R. Canestrari, M. Capalbi, M. Capasso, F. Capitanio, A. Caproni, R. Capuzzo-Dolcetta, P. Caraveo, V. Cárdenas, J. Cardenzana, M. Cardillo, C. Carlile, S. Caroff, R. Carosi, A. Carosi, E. Carquín, J. Carr, J.-M. Casandjian, S. Casanova, E. Cascone, A.J. Castro-Tirado, J. Castroviejo Mora, F. Catalani, O. Catalano, D. Cauz, C. Celestino Silva, S. Celli, M. Cerruti, E. Chabanne, P. Chadwick, N. Chakraborty, C. Champion, A. Chatterjee, S. Chaty, R. Chaves, A. Chen, X. Chen, K. Cheng, M. Chernyakova, M. Chikawa, V.R. Chitnis, A. Christov, J. Chudoba, M. Cieślar, P. Clark, V. Coco, S. Colafrancesco, P. Colin, E. Colombo, J. Colome, S. Colonges, V. Conforti, V. Connaughton, J. Conrad, J.L. Contreras, R. Cornat, J. Cortina, A. Costa, H. Costantini, G. Cotter, B. Courty, S. Covino, G. 
Covone, P. Cristofari, S.J. Criswell, R. Crocker, J. Croston, C. Crovari, J. Cuadra, O. Cuevas, X. Cui, P. Cumani, G. Cusumano, A. D'Aì, F. D'Ammando, P. D'Avanzo, D. D'Urso, P. Da Vela, Ø. Dale, V.T. Dang, L. Dangeon, M. Daniel, I. Davids, B. Dawson, F. Dazzi, A. De Angelis, V. De Caprio, R. de Cássia dos Anjos, G. De Cesare, A. De Franco, F. De Frondat, E.M. de Gouveia Dal Pino, I. de la Calle, C. De Lisio, R. de los Reyes Lopez, B. De Lotto, A. De Luca, M. De Lucia, J.R.T. de Mello Neto, M. de Naurois, E. de Oña Wilhelmi, F. De Palma, F. De Persio, V. de Souza, J. Decock, C. Deil, P. Deiml, M. Del Santo, E. Delagnes, G. Deleglise, M. Delfino Reznicek, C. Delgado, J. Delgado Mengual, R. Della Ceca, D. della Volpe, M. Detournay, J. Devin, T. Di Girolamo, C. Di Giulio, F. Di Pierro, L. Di Venere, L. Diaz, C. Díaz, C. Dib, H. Dickinson, S. Diebold, S. Digel, A. Djannati-Ataï, M. Doert, A. Domínguez, D. Dominis Prester, I. Donnarumma, D. Dorner, M. Doro, J.-L. Dournaux, T. Downes, G. Drake, S. Drappeau, H. Drass, D. Dravins, L. Drury, G. Dubus, K. Dundas Morå, A. Durkalec, V. Dwarkadas, J. Ebr, C. Eckner, E. Edy, K. Egberts, S. Einecke, J. Eisch, F. Eisenkolb, T.R.N. Ekoume, C. Eleftheriadis, D. Elsässer, D. Emmanoulopoulos, J.-P. Ernenwein, P. Escarate, S. Eschbach, C. Espinoza, P. Evans, C. Evoli, M. Fairbairn, D. Falceta-Goncalves, A. Falcone, V. Fallah Ramazani, K. Farakos, E. Farrell, G. Fasola, Y. Favre, E. Fede, R. Fedora, E. Fedorova, S. Fegan, M. Fernandez-Alonso, A. Fernández-Barral, G. Ferrand, O. Ferreira, M. Fesquet, E. Fiandrini, A. Fiasson, M. Filipovic, D. Fink, J.P. Finley, C. Finley, A. Finoguenov, V. Fioretti, M. Fiorini, H. Flores, L. Foffano, C. Föhr, M.V. Fonseca, L. Font, G. Fontaine, M. Fornasa, P. Fortin, L. Fortson, N. Fouque, B. Fraga, F.J. Franco, L. Freixas Coromina, C. Fruck, D. Fugazza, Y. Fujita, S. Fukami, Y. Fukazawa, Y. Fukui, S. Funk, A. Furniss, M. Füßling, S. Gabici, A. Gadola, Y. Gallant, D. Galloway, S. Gallozzi, B. Garcia, A. Garcia, R. García Gil, R. Garcia López, M. Garczarczyk, D. Gardiol, F. Gargano, C. Gargano, S. Garozzo, M. Garrido-Ruiz, D. Gascon, T. Gasparetto, F. Gaté, M. Gaug, B. Gebhardt, M. Gebyehu, N. Geffroy, B. Genolini, A. Ghalumyan, A. Ghedina, G. Ghirlanda, P. Giammaria, F. Gianotti, B. Giebels, N. Giglietto, V. Gika, R. Gimenes, P. Giommi, F. Giordano, G. Giovannini, E. Giro, M. Giroletti, J. Gironnet, A. Giuliani, J.-F. Glicenstein, R. Gnatyk, N. Godinovic, P. Goldoni, J.L. Gómez, G. Gómez-Vargas, M.M. González, J.M. González, K.S. Gothe, D. Gotz, J. Goullon, T. Grabarczyk, R. Graciani, J. Graham, P. Grandi, J. Granot, G. Grasseau, R. Gredig, A.J. Green, T. Greenshaw, I. Grenier, S. Griffiths, A. Grillo, M.-H. Grondin, J. Grube, V. Guarino, B. Guest, O. Gueta, S. Gunji, G. Gyuk, D. Hadasch, L. Hagge, J. Hahn, A. Hahn, H. Hakobyan, S. Hara, M.J. Hardcastle, T. Hassan, T. Haubold, A. Haupt, K. Hayashi, M. Hayashida, H. He, M. Heller, J.C. Helo, F. Henault, G. Henri, G. Hermann, R. Hermel, J. Herrera Llorente, A. Herrero, O. Hervet, N. Hidaka, J. Hinton, N. Hiroshima, K. Hirotani, B. Hnatyk, J.K. Hoang, D. Hoffmann, W. Hofmann, J. Holder, D. Horan, J. Hörandel, M. Hörbe, D. Horns, P. Horvath, J. Houles, T. Hovatta, M. Hrabovsky, D. Hrupec, J.-M. Huet, G. Hughes, D. Hui, G. Hull, T.B. Humensky, M. Hussein, M. Hütten, M. Iarlori, Y. Ikeno, J.M. Illa, D. Impiombato, T. Inada, A. Ingallinera, Y. Inome, S. Inoue, T. Inoue, Y. Inoue, F. Iocco, K. Ioka, M. Ionica, M. Iori, A. Iriarte, K. Ishio, G.L. Israel, Y. Iwamura, C. Jablonski, A. 
Jacholkowska, J. Jacquemier, M. Jamrozy, P. Janecek, F. Jankowsky, D. Jankowsky, P. Jansweijer, C. Jarnot, P. Jean, C.A. Johnson, M. Josselin, I. Jung-Richardt, J. Jurysek, P. Kaaret, P. Kachru, M. Kagaya, J. Kakuwa, O. Kalekin, R. Kankanyan, A. Karastergiou, M. Karczewski, S. Karkar, H. Katagiri, J. Kataoka, K. Katarzyński, U. Katz, N. Kawanaka, L. Kaye, D. Kazanas, N. Kelley-Hoskins, B. Khélifi, D.B. Kieda, T. Kihm, S. Kimeswenger, S. Kimura, S. Kisaka, S. Kishida, R. Kissmann, W. Kluźniak, J. Knapen, J. Knapp, J. Knödlseder, B. Koch, J. Kocot, K. Kohri, N. Komin, A. Kong, Y. Konno, K. Kosack, G. Kowal, S. Koyama, M. Kraus, M. Krause, F. Krauß, F. Krennrich, P. Kruger, H. Kubo, V. Kudryavtsev, G. Kukec Mezek, S. Kumar, H. Kuroda, J. Kushida, P. Kushwaha, N. La Palombara, V. La Parola, G. La Rosa, R. Lahmann, K. Lalik, G. Lamanna, M. Landoni, D. Landriu, H. Landt, R.G. Lang, J. Lapington, P. Laporte, O. Le Blanc, T. Le Flour, P. Le Sidaner, S. Leach, A. Leckngam, S.-H. Lee, W.H. Lee, J.-P. Lees, J. Lefaucheur, M.A. Leigui de Oliveira, M. Lemoine-Goumard, J.-P. Lenain, G. Leto, R. Lico, M. Limon, R. Lindemann, E. Lindfors, L. Linhoff, A. Lipniacka, S. Lloyd, T. Lohse, S. Lombardi, F. Longo, M. Lopez, R. Lopez-Coto, T. Louge, F. Louis, M. Louys, F. Lucarelli, D. Lucchesi, P.L. Luque-Escamilla, E. Lyard, M.C. Maccarone, T. Maccarone, E. Mach, G.M. Madejski, G. Maier, A. Majczyna, P. Majumdar, M. Makariev, G. Malaguti, A. Malouf, S. Maltezos, D. Malyshev, D. Malyshev, D. Mandat, G. Maneva, M. Manganaro, S. Mangano, P. Manigot, K. Mannheim, N. Maragos, D. Marano, A. Marcowith, J. Marín, M. Mariotti, M. Marisaldi, S. Markoff, J. Martí, J.-M. Martin, P. Martin, L. Martin, M. Martínez, G. Martínez, O. Martínez, R. Marx, N. Masetti, P. Massimino, A. Mastichiadis, M. Mastropietro, S. Masuda, H. Matsumoto, N. Matthews, S. Mattiazzo, G. Maurin, N. Maxted, M. Mayer, D. Mazin, M.N. Mazziotta, L. Mc Comb, I. McHardy, C. Medina, A. Melandri, C. Melioli, D. Melkumyan, S. Mereghetti, J.-L. Meunier, T. Meures, M. Meyer, S. Micanovic, T. Michael, J. Michałowski, I. Mievre, J. Miller, I.A. Minaya, T. Mineo, F. Mirabel, J.M. Miranda, R. Mirzoyan, A. Mitchell, T. Mizuno, R. Moderski, M. Mohammed, L. Mohrmann, C. Molijn, E. Molinari, R. Moncada, T. Montaruli, I. Monteiro, D. Mooney, P. Moore, A. Moralejo, D. Morcuende-Parrilla, E. Moretti, K. Mori, G. Morlino, P. Morris, A. Morselli, F. Moscato, D. Motohashi, E. Moulin, S. Mueller, R. Mukherjee, P. Munar, C. Mundell, J. Mundet, T. Murach, H. Muraishi, K. Murase, A. Murphy, A. Nagai, N. Nagar, S. Nagataki, T. Nagayoshi, B.K. Nagesh, T. Naito, D. Nakajima, T. Nakamori, Y. Nakamura, K. Nakayama, D. Naumann, P. Nayman, D. Neise, L. Nellen, R. Nemmen, A. Neronov, N. Neyroud, T. Nguyen, T.T. Nguyen, T. Nguyen Trung, L. Nicastro, J. Nicolau-Kukliński, J. Niemiec, D. Nieto, M. Nievas-Rosillo, M. Nikołajuk, K. Nishijima, K.-I. Nishikawa, G. Nishiyama, K. Noda, L. Nogues, S. Nolan, D. Nosek, M. Nöthe, B. Novosyadlyj, S. Nozaki, F. Nunio, P. O'Brien, L. Oakes, C. Ocampo, J.P. Ochoa, R. Oger, Y. Ohira, M. Ohishi, S. Ohm, N. Okazaki, A. Okumura, J.-F. Olive, R.A. Ong, M. Orienti, R. Orito, A. Orlati, J.P. Osborne, M. Ostrowski, N. Otte, Z. Ou, E. Ovcharov, I. Oya, A. Ozieblo, M. Padovani, S. Paiano, A. Paizis, J. Palacio, M. Palatiello, M. Palatka, J. Pallotta, J.-L. Panazol, D. Paneque, M. Panter, R. Paoletti, M. Paolillo, A. Papitto, A. Paravac, J.M. Paredes, G. Pareschi, R.D. Parsons, P. Paśko, S. Pavy, A. Pe'er, M. Pech, G. Pedaletti, P. Peñil Del Campo, A. Perez, M.A. 
Pérez-Torres, L. Perri, M. Perri, M. Persic, A. Petrashyk, S. Petrera, P.-O. Petrucci, O. Petruk, B. Peyaud, M. Pfeifer, G. Piano, Q. Piel, D. Pieloth, F. Pintore, C. Pio García, A. Pisarski, S. Pita, L. Pizarro, Ł. Platos, M. Pohl, V. Poireau, A. Pollo, J. Porthault, J. Poutanen, D. Pozo, E. Prandini, P. Prasit, J. Prast, K. Pressard, G. Principe, D. Prokhorov, H. Prokoph, M. Prouza, G. Pruteanu, E. Pueschel, G. Pühlhofer, I. Puljak, M. Punch, S. Pürckhauer, F. Queiroz, J. Quinn, A. Quirrenbach, I. Rafighi, S. Rainò, P.J. Rajda, R. Rando, R.C. Rannot, S. Razzaque, I. Reichardt, O. Reimer, A. Reimer, A. Reisenegger, M. Renaud, T. Reposeur, B. Reville, A.H. Rezaeian, W. Rhode, D. Ribeiro, M. Ribó, M.G. Richer, T. Richtler, J. Rico, F. Rieger, M. Riquelme, P.R. Ristori, S. Rivoire, V. Rizi, J. Rodriguez, G. Rodriguez Fernandez, J.J. Rodríguez Vázquez, G. Rojas, P. Romano, G. Romeo, M. Roncadelli, J. Rosado, S. Rosen, S. Rosier Lees, J. Rousselle, A.C. Rovero, G. Rowell, B. Rudak, A. Rugliancich, J.E. Ruíz del Mazo, W. Rujopakarn, C. Rulten, F. Russo, O. Saavedra, S. Sabatini, B. Sacco, I. Sadeh, E. Sæther Hatlen, S. Safi-Harb, V. Sahakian, S. Sailer, T. Saito, N. Sakaki, S. Sakurai, D. Salek, F. Salesa Greus, G. Salina, D. Sanchez, M. Sánchez-Conde, H. Sandaker, A. Sandoval, P. Sangiorgi, M. Sanguillon, H. Sano, M. Santander, A. Santangelo, E.M. Santos, A. Sanuy, L. Sapozhnikov, S. Sarkar, K. Satalecka, Y. Sato, F.G. Saturni, R. Savalle, M. Sawada, S. Schanne, E.J. Schioppa, S. Schlenstedt, T. Schmidt, J. Schmoll, M. Schneider, H. Schoorlemmer, P. Schovanek, A. Schulz, F. Schussler, U. Schwanke, J. Schwarz, T. Schweizer, S. Schwemmer, E. Sciacca, S. Scuderi, M. Seglar-Arroyo, A. Segreto, I. Seitenzahl, D. Semikoz, O. Sergijenko, N. Serre, M. Servillat, K. Seweryn, K. Shah, A. Shalchi, M. Sharma, R.C. Shellard, I. Shilon, L. Sidoli, M. Sidz, H. Siejkowski, J. Silk, A. Sillanpää, D. Simone, B.B. Singh, G. Sironi, J. Sitarek, P. Sizun, V. Sliusar, A. Slowikowska, A. Smith, D. Sobczyńska, A. Sokolenko, H. Sol, G. Sottile, W. Springer, O. Stahl, A. Stamerra, S. Stanič, R. Starling, D. Staszak, Ł. Stawarz, R. Steenkamp, S. Stefanik, C. Stegmann, S. Steiner, C. Stella, M. Stephan, R. Sternberger, M. Sterzel, B. Stevenson, M. Stodulska, M. Stodulski, T. Stolarczyk, G. Stratta, U. Straumann, R. Stuik, M. Suchenek, T. Suomijarvi, A.D. Supanitsky, T. Suric, I. Sushch, P. Sutcliffe, J. Sykes, M. Szanecki, T. Szepieniec, G. Tagliaferri, H. Tajima, K. Takahashi, H. Takahashi, M. Takahashi, L. Takalo, S. Takami, J. Takata, J. Takeda, T. Tam, M. Tanaka, T. Tanaka, Y. Tanaka, S. Tanaka, C. Tanci, M. Tavani, F. Tavecchio, J.-P. Tavernet, K. Tayabaly, L.A. Tejedor, F. Temme, P. Temnikov, Y. Terada, J.C. Terrazas, R. Terrier, D. Terront, T. Terzic, D. Tescaro, M. Teshima, V. Testa, S. Thoudam, W. Tian, L. Tibaldo, A. Tiengo, D. Tiziani, M. Tluczykont, C.J. Todero Peixoto, F. Tokanai, M. Tokarz, K. Toma, J. Tomastik, A. Tonachini, D. Tonev, M. Tornikoski, D.F. Torres, E. Torresi, G. Tosti, T. Totani, N. Tothill, F. Toussenel, G. Tovmassian, N. Trakarnsirinont, P. Travnicek, C. Trichard, M. Trifoglio, I. Troyano Pujadas, M. Tsirou, S. Tsujimoto, T. Tsuru, Y. Uchiyama, G. Umana, M. Uslenghi, V. Vagelli, F. Vagnetti, M. Valentino, P. Vallania, L. Valore, A.M. Van den Berg, W. van Driel, C. van Eldik, B. van Soelen, J. Vandenbroucke, J. Vanderwalt, G.S. Varner, G. Vasileiadis, V. Vassiliev, J.R. Vázquez, M. Vázquez Acosta, M. Vecchi, A. Vega, P. Veitch, P. Venault, C. Venter, S. Vercellone, P. Veres, S. Vergani, V. 
Verzi, G.P. Vettolani, C. Veyssiere, A. Viana, J. Vicha, C. Vigorito, J. Villanueva, P. Vincent, J. Vink, F. Visconti, V. Vittorini, H. Voelk, V. Voisin, A. Vollhardt, S. Vorobiov, I. Vovk, M. Vrastil, T. Vuillaume, S.J. Wagner, R. Wagner, P. Wagner, S.P. Wakely, T. Walstra, R. Walter, M. Ward, J.E. Ward, D. Warren, J.J. Watson, N. Webb, P. Wegner, O. Weiner, A. Weinstein, C. Weniger, F. Werner, H. Wetteskind, M. White, R. White, A. Wierzcholska, S. Wiesand, R. Wijers, P. Wilcox, A. Wilhelm, M. Wilkinson, M. Will, D.A. Williams, M. Winter, P. Wojcik, D. Wolf, M. Wood, A. Wörnlein, T. Wu, K.K. Yadav, C. Yaguna, T. Yamamoto, H. Yamamoto, N. Yamane, R. Yamazaki, S. Yanagita, L. Yang, D. Yelos, T. Yoshida, M. Yoshida, S. Yoshiike, T. Yoshikoshi, P. Yu, D. Zaborov, M. Zacharias, G. Zaharijas, A. Zajczyk, L. Zampieri, F. Zandanel, R. Zanin, R. Zanmar Sanchez, D. Zaric, M. Zavrtanik, D. Zavrtanik, A.A. Zdziarski, A. Zech, H. Zechlin, V.I. Zhdanov, A. Ziegler, J. Ziemann, K. Ziętara, A. Zink, J. Ziółkowski, V. Zitelli, A. Zoli, J. Zorn Oct. 3, 2017 astro-ph.HE List of contributions from the Cherenkov Telescope Array Consortium presented at the 35th International Cosmic Ray Conference, July 12-20 2017, Busan, Korea. Spectral Calibration of the Fluorescence Telescopes of the Pierre Auger Observatory (1709.01537) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. 
Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello We present a novel method to measure precisely the relative spectral response of the fluorescence telescopes of the Pierre Auger Observatory. We used a portable light source based on a xenon flasher and a monochromator to measure the relative spectral efficiencies of eight telescopes in steps of 5 nm from 280 nm to 440 nm. Each point in a scan had approximately 2 nm FWHM out of the monochromator. 
Different sets of telescopes in the observatory have different optical components, and the eight telescopes measured represent two each of the four combinations of components represented in the observatory. We made an end-to-end measurement of the response from different combinations of optical components, and the monochromator setup allowed for more precise and complete measurements than our previous multi-wavelength calibrations. We find an overall uncertainty in the calibration of the spectral response of most of the telescopes of 1.5% for all wavelengths; the six oldest telescopes have larger overall uncertainties of about 2.2%. We also report changes in physics measurables due to the change in calibration, which are generally small. The Pierre Auger Observatory: Contributions to the 35th International Cosmic Ray Conference (ICRC 2017) (1708.06592) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, M.M. Freire, T. Fujii, A. Fuster, R. Gaïor, B. García, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, G. 
Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, G. Morlino, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, J. Poh, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, R.C. Shellard, G. Sigl, G. Silli, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. F. Soriano, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, M. Wiedeński, L. Wiencke, H. Wilczyński, T. Winchen, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 2, 2017 astro-ph.CO, astro-ph.IM, astro-ph.HE Contributions of the Pierre Auger Collaboration to the 35th International Cosmic Ray Conference (ICRC 2017), 12-20 July 2017, Bexco, Busan, Korea. Galaxy bias from galaxy-galaxy lensing in the DES Science Verification Data (1609.08167) J. Prat, C. Sánchez, R. Miquel, J. Kwan, J. Blazek, C. Bonnett, A. Amara, S. L. Bridle, J. Clampitt, M. Crocce, P. Fosalba, E. Gaztanaga, T. Giannantonio, W. G. Hartley, M. Jarvis, N. MacCrann, W.J. Percival, A. J. Ross, E. Sheldon, J. Zuntz, T. M. C. Abbott, F. B. Abdalla, J. Annis, A. Benoit-Lévy, E. Bertin, D. Brooks, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, L. N. da Costa, D. L. DePoy, S. Desai, H. T. Diehl, P. Doel, T. F. Eifler, A. E. Evrard, A. Fausti Neto, B. Flaugher, J. Frieman, D. W. Gerdes, D. Gruen, R. A. Gruendl, G. Gutierrez, K. Honscheid, D. J. James, K. Kuehn, N. Kuropatkin, O. Lahav, M. Lima, J. L. Marshall, P. Melchior, F. Menanteau, B. Nord, A. A. Plazas, K. Reil, A. K. Romer, A. Roodman, E. Sanchez, V. Scarpine, M. Schubnell, I. Sevilla-Noarbe, R. C. 
Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, A. R. Walker Sept. 26, 2017 astro-ph.CO We present a measurement of galaxy-galaxy lensing around a magnitude-limited ($i_{AB} < 22.5$) sample of galaxies from the Dark Energy Survey Science Verification (DES-SV) data. We split these lenses into three photometric-redshift bins from 0.2 to 0.8, and determine the product of the galaxy bias $b$ and cross-correlation coefficient between the galaxy and dark matter overdensity fields $r$ in each bin, using scales above 4 Mpc/$h$ comoving, where we find the linear bias model to be valid given our current uncertainties. We compare our galaxy bias results from galaxy-galaxy lensing with those obtained from galaxy clustering (Crocce et al. 2016) and CMB lensing (Giannantonio et al. 2016) for the same sample of galaxies, and find our measurements to be in good agreement with those in Crocce et al. (2016), while, in the lowest redshift bin ($z\sim0.3$), they show some tension with the findings in Giannantonio et al. (2016). We measure $b\cdot r$ to be $0.87\pm 0.11$, $1.12 \pm 0.16$ and $1.24\pm 0.23$, respectively for the three redshift bins of width $\Delta z = 0.2$ in the range $0.2<z <0.8$, defined with the photometric-redshift algorithm BPZ. Using a different code to split the lens sample, TPZ, leads to changes in the measured biases at the 10-20\% level, but it does not alter the main conclusion of this work: when comparing with Crocce et al. (2016) we do not find strong evidence for a cross-correlation parameter significantly below one in this galaxy sample, except possibly at the lowest redshift bin ($z\sim 0.3$), where we find $r = 0.71 \pm 0.11$ when using TPZ, and $0.83 \pm 0.12$ with BPZ. Observation of a Large-scale Anisotropy in the Arrival Directions of Cosmic Rays above $8 \times 10^{18}$ eV (1709.07321) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. 
Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, I. Katkov, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlín, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, B. Revenu, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. 
Zuccarello Sept. 21, 2017 astro-ph.HE Cosmic rays are atomic nuclei arriving from outer space that reach the highest energies observed in nature. Clues to their origin come from studying the distribution of their arrival directions. Using $3 \times 10^4$ cosmic rays above $8 \times 10^{18}$ electron volts, recorded with the Pierre Auger Observatory from a total exposure of 76,800 square kilometers steradian year, we report an anisotropy in the arrival directions. The anisotropy, detected at more than the 5.2$\sigma$ level of significance, can be described by a dipole with an amplitude of $6.5_{-0.9}^{+1.3}$% towards right ascension $\alpha_{d} = 100 \pm 10$ degrees and declination $\delta_{d} = -24_{-13}^{+12}$ degrees. That direction indicates an extragalactic origin for these ultra-high energy particles. Dark Energy Survey Year 1 Results: Multi-Probe Methodology and Simulated Likelihood Analyses (1706.09359) E. Krause, T. F. Eifler, J. Zuntz, O. Friedrich, M. A. Troxel, S. Dodelson, J. Blazek, L. F. Secco, N. MacCrann, E. Baxter, C. Chang, N. Chen, M. Crocce, J. DeRose, A. Ferte, N. Kokron, F. Lacasa, V. Miranda, Y. Omori, A. Porredon, R. Rosenfeld, S. Samuroff, M. Wang, R. H. Wechsler, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, K. Bechtol, A. Benoit-Levy, G. M. Bernstein, D. Brooks, D. L. Burke, D. Capozzi, M. Carrasco Kind, J. Carretero, C. B. D'Andrea, L. N. da Costa, C. Davis, D. L. DePoy, S. Desai, H. T. Diehl, J. P. Dietrich, A. E. Evrard, B. Flaugher, P. Fosalba, J. Frieman, J. Garcia-Bellido, E. Gaztanaga, T. Giannantonio, D. Gruen, R. A. Gruendl, J. Gschwend, G. Gutierrez, K. Honscheid, D. J. James, T. Jeltema, K. Kuehn, S. Kuhlmann, O. Lahav, M. Lima, M. A. G. Maia, M. March, J. L. Marshall, P. Martini, F. Menanteau, R. Miquel, R. C. Nichol, A. A. Plazas, A. K. Romer, E. S. Rykoff, E. Sanchez, V. Scarpine, R. Schindler, M. Schubnell, I. Sevilla-Noarbe, M. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. L. Tucker, V. Vikram, A. R. Walker, J. Weller June 28, 2017 astro-ph.CO We present the methodology for and detail the implementation of the Dark Energy Survey (DES) 3x2pt DES Year 1 (Y1) analysis, which combines configuration-space two-point statistics from three different cosmological probes: cosmic shear, galaxy-galaxy lensing, and galaxy clustering, using data from the first year of DES observations. We have developed two independent modeling pipelines and describe the code validation process. We derive expressions for analytical real-space multi-probe covariances, and describe their validation with numerical simulations. We stress-test the inference pipelines in simulated likelihood analyses that vary 6-7 cosmology parameters plus 20 nuisance parameters and precisely resemble the analysis to be presented in the DES 3x2pt analysis paper, using a variety of simulated input data vectors with varying assumptions. We find that any disagreement between pipelines leads to changes in assigned likelihood $\Delta \chi^2 \le 0.045$ with respect to the statistical error of the DES Y1 data vector. We also find that angular binning and survey mask do not impact our analytic covariance at a significant level. We determine lower bounds on scales used for analysis of galaxy clustering (8 Mpc$~h^{-1}$) and galaxy-galaxy lensing (12 Mpc$~h^{-1}$) such that the impact of modeling uncertainties in the non-linear regime is well below statistical errors, and show that our analysis choices are robust against a variety of systematics. 
These tests demonstrate that we have a robust analysis pipeline that yields unbiased cosmological parameter inferences for the flagship 3x2pt DES Y1 analysis. We emphasize that the level of independent code development and subsequent code comparison as demonstrated in this paper is necessary to produce credible constraints from increasingly complex multi-probe analyses of current data. Multi-resolution anisotropy studies of ultrahigh-energy cosmic rays detected at the Pierre Auger Observatory (1611.06812) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, R.J. Barreira Luz, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, P. Gouffon, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, I. Katkov, B. Keilhauer, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. 
Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlín, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, M. Torri, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello June 20, 2017 astro-ph.HE We report a multi-resolution search for anisotropies in the arrival directions of cosmic rays detected at the Pierre Auger Observatory with local zenith angles up to $80^\circ$ and energies in excess of 4 EeV ($4 \times 10^{18}$ eV). This search is conducted by measuring the angular power spectrum and performing a needlet wavelet analysis in two independent energy ranges. Both analyses are complementary since the angular power spectrum achieves a better performance in identifying large-scale patterns while the needlet wavelet analysis, considering the parameters used in this work, presents a higher efficiency in detecting smaller-scale anisotropies, potentially providing directional information on any observed anisotropies. No deviation from isotropy is observed on any angular scale in the energy range between 4 and 8 EeV. Above 8 EeV, an indication for a dipole moment is captured; while no other deviation from isotropy is observed for moments beyond the dipole one. The corresponding $p$-values obtained after accounting for searches blindly performed at several angular scales, are $1.3 \times 10^{-5}$ in the case of the angular power spectrum, and $2.5 \times 10^{-3}$ in the case of the needlet analysis. 
While these results are consistent with previous reports making use of the same data set, they provide extensions of the previous works through the thorough scans of the angular scales. Cosmology from Cosmic Shear with DES Science Verification Data (1507.05552) The Dark Energy Survey Collaboration, T. Abbott, F. B. Abdalla, S. Allam, A. Amara, J. Annis, R. Armstrong, D. Bacon, M. Banerji, A. H. Bauer, E. Baxter, M. R. Becker, A. Benoit-Lévy, R. A. Bernstein, G. M. Bernstein, E. Bertin, J. Blazek, C. Bonnett, S. L. Bridle, D. Brooks, C. Bruderer, E. Buckley-Geer, D. L. Burke, M. T. Busha, D. Capozzi, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, C. Chang, J. Clampitt, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, R. Das, D. L. DePoy, S. Desai, H. T. Diehl, J. P. Dietrich, S. Dodelson, P. Doel, A. Drlica-Wagner, G. Efstathiou, T. F. Eifler, B. Erickson, J. Estrada, A. E. Evrard, A. Fausti Neto, E. Fernandez, D. A. Finley, B. Flaugher, P. Fosalba, O. Friedrich, J. Frieman, C. Gangkofner, J. Garcia-Bellido, E. Gaztanaga, D. W. Gerdes, D. Gruen, R. A. Gruendl, G. Gutierrez, W. Hartley, M. Hirsch, K. Honscheid, E. M. Huff, B. Jain, D. J. James, M. Jarvis, T. Kacprzak, S. Kent, D. Kirk, E. Krause, A. Kravtsov, K. Kuehn, N. Kuropatkin, J. Kwan, O. Lahav, B. Leistedt, T. S. Li, M. Lima, H. Lin, N. MacCrann, M. March, J. L. Marshall, P. Martini, R. G. McMahon, P. Melchior, C. J. Miller, R. Miquel, J. J. Mohr, E. Neilsen, R. C. Nichol, A. Nicola, B. Nord, R. Ogando, A. Palmese, H.V. Peiris, A. A. Plazas, A. Refregier, N. Roe, A. K. Romer, A. Roodman, B. Rowe, E. S. Rykoff, C. Sabiu, I. Sadeh, M. Sako, S. Samuroff, C. Sánchez, E. Sanchez, H. Seo, I. Sevilla-Noarbe, E. Sheldon, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, J. Thaler, D. Thomas, M. A. Troxel, V. Vikram, A. R. Walker, R. H. Wechsler, J. Weller, Y. Zhang, J. Zuntz May 3, 2017 astro-ph.CO We present the first constraints on cosmology from the Dark Energy Survey (DES), using weak lensing measurements from the preliminary Science Verification (SV) data. We use 139 square degrees of SV data, which is less than 3\% of the full DES survey area. Using cosmic shear 2-point measurements over three redshift bins we find $\sigma_8 (\Omega_{\rm m}/0.3)^{0.5} = 0.81 \pm 0.06$ (68\% confidence), after marginalising over 7 systematics parameters and 3 other cosmological parameters. We examine the robustness of our results to the choice of data vector and systematics assumed, and find them to be stable. About $20$\% of our error bar comes from marginalising over shear and photometric redshift calibration uncertainties. The current state-of-the-art cosmic shear measurements from CFHTLenS are mildly discrepant with the cosmological constraints from Planck CMB data; our results are consistent with both datasets. Our uncertainties are $\sim$30\% larger than those from CFHTLenS when we carry out a comparable analysis of the two datasets, which we attribute largely to the lower number density of our shear catalogue. We investigate constraints on dark energy and find that, with this small fraction of the full survey, the DES SV constraints make negligible impact on the Planck constraints. The moderate disagreement between the CFHTLenS and Planck values of $\sigma_8 (\Omega_{\rm m}/0.3)^{0.5}$ is present regardless of the value of $w$. Prospects for CTA observations of the young SNR RX J1713.7-3946 (1704.04136) The CTA Consortium: F. Acero, R. Aloisio, J. Amans, E. Amato, L.A. Antonelli, C. 
Aramo, T. Armstrong, F. Arqueros, K. Asano, M. Ashley, M. Backes, C. Balazs, A. Balzer, A. Bamba, M. Barkov, J.A. Barrio, W. Benbow, K. Bernlöhr, V. Beshley, C. Bigongiari, A. Biland, A. Bilinsky, E. Bissaldi, J. Biteau, O. Blanch, P. Blasi, J. Blazek, C. Boisson, G. Bonanno, A. Bonardi, C. Bonavolontà, G. Bonnoli, C. Braiding, S. Brau-Nogué, J. Bregeon, A.M. Brown, V. Bugaev, A. Bulgarelli, T. Bulik, M. Burton, A. Burtovoi, G. Busetto, M. Böttcher, R. Cameron, M. Capalbi, A. Caproni, P. Caraveo, R. Carosi, E. Cascone, M. Cerruti, S. Chaty, A. Chen, X. Chen, M. Chernyakova, M. Chikawa, J. Chudoba, J. Cohen-Tanugi, S. Colafrancesco, V. Conforti, J.L. Contreras, A. Costa, G. Cotter, S. Covino, G. Covone, P. Cumani, G. Cusumano, F. D'Ammando, D. D'Urso, M. Daniel, F. Dazzi, A. De Angelis, G. De Cesare, A. De Franco, F. De Frondat, E.M. de Gouveia Dal Pino, C. De Lisio, R. de los Reyes Lopez, B. De Lotto, M. de Naurois, F. De Palma, M. Del Santo, C. Delgado, D. della Volpe, T. Di Girolamo, C. Di Giulio, F. Di Pierro, L. Di Venere, M. Doro, J. Dournaux, D. Dumas, V. Dwarkadas, C. Díaz, J. Ebr, K. Egberts, S. Einecke, D. Elsässer, S. Eschbach, D. Falceta-Goncalves, G. Fasola, E. Fedorova, A. Fernández-Barral, G. Ferrand, M. Fesquet, E. Fiandrini, A. Fiasson, M.D. Filipovíc, V. Fioretti, L. Font, G. Fontaine, F.J. Franco, L. Freixas Coromina, Y. Fujita, Y. Fukui, S. Funk, A. Förster, A. Gadola, R. Garcia López, M. Garczarczyk, N. Giglietto, F. Giordano, A. Giuliani, J. Glicenstein, R. Gnatyk, P. Goldoni, T. Grabarczyk, R. Graciani, J. Graham, P. Grandi, J. Granot, A.J. Green, S. Griffiths, S. Gunji, H. Hakobyan, S. Hara, T. Hassan, M. Hayashida, M. Heller, J.C. Helo, J. Hinton, B. Hnatyk, J. Huet, M. Huetten, T.B. Humensky, M. Hussein, J. Hörandel, Y. Ikeno, T. Inada, Y. Inome, S. Inoue, T. Inoue, Y. Inoue, K. Ioka, M. Iori, J. Jacquemier, P. Janecek, D. Jankowsky, I. Jung, P. Kaaret, H. Katagiri, S. Kimeswenger, S. Kimura, J. Knödlseder, B. Koch, J. Kocot, K. Kohri, N. Komin, Y. Konno, K. Kosack, S. Koyama, M. Kraus, H. Kubo, G. Kukec Mezek, J. Kushida, N. La Palombara, K. Lalik, G. Lamanna, H. Landt, J. Lapington, P. Laporte, S. Lee, J. Lees, J. Lefaucheur, J.-P. Lenain, G. Leto, E. Lindfors, T. Lohse, S. Lombardi, F. Longo, M. Lopez, F. Lucarelli, P.L. Luque-Escamilla, R. López-Coto, M.C. Maccarone, G. Maier, G. Malaguti, D. Mandat, G. Maneva, S. Mangano, A. Marcowith, J. Martí, M. Martínez, G. Martínez, S. Masuda, G. Maurin, N. Maxted, C. Melioli, T. Mineo, N. Mirabal, T. Mizuno, R. Moderski, M. Mohammed, T. Montaruli, A. Moralejo, K. Mori, G. Morlino, A. Morselli, E. Moulin, R. Mukherjee, C. Mundell, H. Muraishi, K. Murase, S. Nagataki, T. Nagayoshi, T. Naito, D. Nakajima, T. Nakamori, R. Nemmen, J. Niemiec, D. Nieto, M. Nievas-Rosillo, M. Nikołajuk, K. Nishijima, K. Noda, L. Nogues, D. Nosek, B. Novosyadlyj, S. Nozaki, Y. Ohira, M. Ohishi, S. Ohm, A. Okumura, R.A. Ong, R. Orito, A. Orlati, M. Ostrowski, I. Oya, M. Padovani, J. Palacio, M. Palatka, J.M. Paredes, S. Pavy, A. Pe'er, M. Persic, P. Petrucci, O. Petruk, A. Pisarski, M. Pohl, A. Porcelli, E. Prandini, J. Prast, G. Principe, M. Prouza, E. Pueschel, G. Pühlhofer, A. Quirrenbach, M. Rameez, O. Reimer, M. Renaud, M. Ribó, J. Rico, V. Rizi, J. Rodriguez, G. Rodriguez Fernandez, J.J. Rodríguez Vázquez, P. Romano, G. Romeo, J. Rosado, J. Rousselle, G. Rowell, B. Rudak, I. Sadeh, S. Safi-Harb, T. Saito, N. Sakaki, D. Sanchez, P. Sangiorgi, H. Sano, M. Santander, S. Sarkar, M. Sawada, E.J. Schioppa, H. Schoorlemmer, P. Schovanek, F. 
Schussler, O. Sergijenko, M. Servillat, A. Shalchi, R.C. Shellard, H. Siejkowski, A. Sillanpää, D. Simone, V. Sliusar, H. Sol, S. Stanič, R. Starling, Ł. Stawarz, S. Stefanik, M. Stephan, T. Stolarczyk, M. Szanecki, T. Szepieniec, G. Tagliaferri, H. Tajima, M. Takahashi, J. Takeda, M. Tanaka, S. Tanaka, L.A. Tejedor, I. Telezhinsky, P. Temnikov, Y. Terada, D. Tescaro, M. Teshima, V. Testa, S. Thoudam, F. Tokanai, D.F. Torres, E. Torresi, G. Tosti, C. Townsley, P. Travnicek, C. Trichard, M. Trifoglio, S. Tsujimoto, V. Vagelli, P. Vallania, L. Valore, W. van Driel, C. van Eldik, J. Vandenbroucke, V. Vassiliev, M. Vecchi, S. Vercellone, S. Vergani, C. Vigorito, S. Vorobiov, M. Vrastil, M.L. Vázquez Acosta, S.J. Wagner, R. Wagner, S.P. Wakely, R. Walter, J.E. Ward, J.J. Watson, A. Weinstein, M. White, R. White, A. Wierzcholska, P. Wilcox, D.A. Williams, R. Wischnewski, P. Wojcik, T. Yamamoto, H. Yamamoto, R. Yamazaki, S. Yanagita, L. Yang, T. Yoshida, M. Yoshida, S. Yoshiike, T. Yoshikoshi, M. Zacharias, L. Zampieri, R. Zanin, M. Zavrtanik, D. Zavrtanik, A. Zdziarski, A. Zech, H. Zechlin, V. Zhdanov, A. Ziegler, J. Zorn April 13, 2017 astro-ph.HE We perform simulations for future Cherenkov Telescope Array (CTA) observations of RX~J1713.7$-$3946, a young supernova remnant (SNR) and one of the brightest sources ever discovered in very-high-energy (VHE) gamma rays. Special attention is paid to explore possible spatial (anti-)correlations of gamma rays with emission at other wavelengths, in particular X-rays and CO/H{\sc i} emission. We present a series of simulated images of RX J1713.7$-$3946 for CTA based on a set of observationally motivated models for the gamma-ray emission. In these models, VHE gamma rays produced by high-energy electrons are assumed to trace the non-thermal X-ray emission observed by {\it XMM-Newton}, whereas those originating from relativistic protons delineate the local gas distributions. The local atomic and molecular gas distributions are deduced by the NANTEN team from CO and H{\sc i} observations. Our primary goal is to show how one can distinguish the emission mechanism(s) of the gamma rays (i.e., hadronic vs leptonic, or a mixture of the two) through information provided by their spatial distribution, spectra, and time variation. This work is the first attempt to quantitatively evaluate the capabilities of CTA to achieve various proposed scientific goals by observing this important cosmic particle accelerator.
Machine Learning. Literally.

Arguably, the most successful application of machine learning is largely unknown to most practitioners. Appropriately, it literally involves machines that learn.

What's the most successful application of machine learning? Ten different experts would probably give ten different answers. Likely, none of those answers would coincide with mine. In fact, I believe that the most successful application of machine learning is largely unknown to most practitioners. Moreover, the application I have in mind is rather the oddball. It uses adaptive, purely online prediction algorithms, and has no explicit underlying statistical or geometric model. Yet, it solves a non-trivial problem of processing samples with both temporal and spatial structure. It is undoubtedly successful: it achieves, in practice, predictions with an accuracy of over 99%. It is widely used. The number of products that make use of it is easily in the billions. It has a significant role in the overall technological advances of the last few decades. And what really gives it the edge is the fact that it literally involves machines that learn. Poetic justice in action.

I'm talking about branch predictors. Those are components built into the hardware of most modern high-performance CPUs, which constantly predict what the currently running program is about to do next. They greatly improve both the latency and throughput of CPUs, and are central to the architectural design of modern processors.

Do they really deserve to be included under the title "machine learning"? I mean - ok, those are indeed prediction algorithms, but they are very specific, and usually described in terms of logical gates and registers. It doesn't feel like ML. But I argue they most definitely are. I'd even go further and say they are an excellent example of "machine learning done right". At their core, they solve a problem of learning a policy in the context of a Markov decision process, so they fall under the umbrella of reinforcement learning. But this can be easily missed, since those algorithms incorporate so much domain-specific knowledge, and employ bags of custom tweaks and optimizations - to the point they're barely recognizable by statisticians. That's actually a good thing, and a sharp and refreshing contrast to the unguided "plug-and-play" approach to ML which is way too common for anyone's good.

They are a great case study. I can personally testify that the ideas they present are applicable in very general settings, and were a direct inspiration for some work I've done in "traditional machine learning". The lessons they teach are especially useful for integrating learning algorithms in embedded or real-time systems. In this context, branch predictors are extreme: they are usually required to make a prediction within a few instruction cycles (possibly just 1) - as real-time as it gets. And their designs go to great lengths to use the least resources possible.

CPUs predict other things aside from branches (e.g. for prefetching, or for cache replacement). But all-in-all, branch predictions are likely the most important predictions CPUs make, and their effectiveness is crucial for instruction-level parallelism, which is a major driving force behind high-performance computing.

The following post starts with a gentle introduction to branch-prediction, and then moves on to discuss the prediction algorithms involved:

Domain Knowledge
What is Branch Prediction?
Computer Architecture
Control Structures
Target Prediction
Branches and Targets
Subroutine Return Stack
Outcome Prediction
Non-Stationary Bias
Correlating Predictors
Shared Biases
Ensembles and Hierarchies

1. Domain Knowledge

1.1. What is Branch Prediction?

Central processing units, or shortly CPUs, are the computational engines of computers. Schematically, a CPU is a gadget that executes instructions; an implementation in hardware of an interpreter for some programming language. The specific language a given CPU interprets is referred to as the CPU's ISA (an acronym of "instruction set architecture"), and its human-friendly version (in addition to some fluff) is called assembly language. In principle, when the CPU starts to run a program it first loads the program into the RAM. From that point onwards, any of the instructions in the program has an address that marks its location. The memory is logically linear, and there is a well-defined total ordering over the addresses. The CPU then goes to the address of the first instruction (known as the "entry point" of the program), and sequentially executes the instructions from there. The placeholder for the address of the next instruction to execute is usually denoted PC (it stands for "program counter", but the name and details can be ignored for the purpose of this post).

Of course, there has to be a way to break this sequential modus operandi, or else some essential algorithmic building-blocks such as iterations and recursions would not be expressible, and the CPU could only run very boring programs. In most ISAs, this is done by special instructions called jumps. Those instructions have the effect that after executing them, the CPU doesn't go to the following instruction, but instead goes to some other location in memory (whose address is an argument of the jump), and starts sequentially executing instructions from there. Some jump instructions are unconditional, and when encountered, cause the CPU to simply jump to a remote instruction - no questions asked. But some jumps are conditional, and in addition to the target destination for the jump, they require an additional argument, which is a predicate. When the CPU executes a conditional jump instruction, it first tests the predicate, and only if it is true does the CPU jump. Otherwise, it just keeps going sequentially by executing the following instructions as usual. Conditional jump instructions are called branches.

[Figure: branches - an explanation for "visual thinkers" :)]

There are some caveats in the story above, which anyone with an interest in high-performance computing should very much care about - but they are outside the scope of this post. Most importantly, there could be other types of predicated instructions except jumps (e.g. CMOVxx in x86), and they often provide an alternative way to mitigate some of the issues associated with branches (e.g. via "if conversions"). There are delicate trade-offs involved here, that come into play both in latency-guided optimizations for CPUs (as in search algorithms), and throughput-guided optimizations for SIMD devices (e.g. most GPUs and APUs).

Most modern CPUs have on the chip hardwired algorithms to predict the expected results of jump instructions before executing them. Those are branch-predictors. The term is a misnomer, since they're used to predict unconditional jumps as well - but that's life.
In fact, in principle they are applied to ALL instructions, and for each, perform a threefold prediction: (1) First, they predict whether an instruction is a jump, based solely on its address, without decoding the instruction. (2) Then, if the instruction is predicted to be a jump, they go on to predict whether the jump is going to be taken. Unconditional jumps are always taken, but for conditional jumps, the outcome of their associated predicate requires a prediction. The predictors must guess the outcome of the predicate without evaluating it - since at the time of the prediction, the needed information for its evaluation is still unknown. (3) Finally, if the instruction seems to be a taken-jump, its target address needs to be predicted. That's because the address itself is often a result of a computation, or stored in a remote location (e.g. the main memory) - and at the time of the prediction is unknown.

Usually, "branch prediction" refers to steps 2 and 3 above (step 1 is often redundant in practice, as we shall later see). It is worth noting that "outcome prediction" and "target prediction" are two very different problems, and are solved by different methods. But first, there's an open issue to tackle. So far, I explained what branch-prediction is, but said nothing about what it is good for. To understand the motivation and importance of this mechanism, a discussion about the architecture of modern CPUs is unavoidable.

1.2. Computer Architecture

There is really no way to do justice to this topic in just a few paragraphs (in a few books, perhaps). The following is a "big-picture" explanation, with simplifications that could trigger seizures and twitches in knowledgeable readers. So be warned.

Processors break programs into very small tasks, and execute them in discrete time steps (known as "cycles"). My laptop's CPU has a clock-rate of 2.2GHz, which means that its cores execute 2,200,000,000 cycles per second. That's only one of the factors that determine how fast my CPU can execute a given program. A fuller picture is given by a simple formula, known as the "Iron Law" of processor performance:

$$\frac{\text{Time}}{\text{Program}}=\frac{\text{Instructions}}{\text{Program}}\times\frac{\text{Cycles}}{\text{Instruction}}\times\frac{\text{Time}}{\text{Cycle}}$$

The clock rate gives the value for the last factor, but in recent years, most of the performance gains in CPUs were obtained by improving the second factor. The key to squeezing more instructions into each cycle is parallelism. If an average instruction takes 3 cycles to execute, but the processor can execute 6 instructions simultaneously - then the resulting throughput is 2 instructions per cycle. Note that we deal here with in-core parallelism (in contrast to multi-core parallelism), and specifically, with instruction-level parallelism (ILP). Other forms of in-core parallelism (e.g. data-level parallelism and thread-level parallelism) are mostly incidental to branch-prediction.

The simplest way to achieve ILP is by employing pipelines. These are pretty much the computational analogue of manufacturing assembly lines: the CPU breaks an execution of an instruction into a sequence of stages, and at any given time it can handle many instructions simultaneously, given that each is in a different stage. Now, ideally, at any given cycle the CPU completes 1 instruction, even though any given instruction takes multiple cycles to complete.
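To see what such throughput gains mean in the Iron Law's terms, here is a worked example with assumed, illustrative numbers (not measurements of any particular chip):

$$\frac{\text{Time}}{\text{Program}} = 10^{9}\ \text{instructions} \times 0.5\ \frac{\text{cycles}}{\text{instruction}} \times \frac{1}{2.2\times 10^{9}}\ \frac{\text{seconds}}{\text{cycle}} \approx 0.23\ \text{seconds}$$

Halving the cycles-per-instruction factor halves the running time - and that factor is exactly what pipelining and the other ILP mechanisms below attack.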
Real CPUs usually implement much more sophisticated mechanisms of ILP, by introducing duplicated functional units that allow the processor to deal with parallel instructions that are in the same stage. Additionally, CPUs may reorder the instructions, when later quick-to-execute instructions are independent of earlier slow-to-execute instructions. This requires real-time management (in hardware) of dependencies between different instructions - which may get pretty complicated pretty fast. There are two major approaches to accomplish this: nowadays superscalar processors are prevalent, but until not so long ago, a different variation of the idea - called VLIW (for Very Long Instruction Word) - was what all the cool kids got excited about. I won't get into any of these right now.

Anyway, pipelines in practice can get rather long (10-30 stages). Yet, it's helpful to simply think about an ideal 6-stage pipeline, with the following stages:

F: Fetching the next instruction to be executed from memory.
D: Decoding the instruction (here possible dependency issues are dealt with).
I: Issuing, by forwarding the instruction to the appropriate unit, possibly via bypassing (+reading the register file).
X: Executing the instruction (i.e. in the ALUs' gates).
M: Memory stage, where memory access is done.
W: Writeback, where results are written in the register file.

After an instruction is fetched, the CPU normally goes and fetches the next instruction into the pipeline. That's where things get messy for branches: if the last instruction was a branch, the CPU doesn't know which instruction it should fetch next until it makes a lot of progress with the jump instruction. Even if it was an unconditional branch, the CPU won't know the destination address before it goes through the D stage if the destination is fixed, or the X stage if the destination is variable. If the CPU had no choice but to wait this long before fetching the next instruction, the overall execution speed of programs would drop drastically (in long-pipelined superscalars, a slowdown by a factor of 5(!) or so wouldn't be surprising).

Branch predictors to the rescue! It turns out that programs naturally feature predictable patterns that allow CPUs to accurately guess, for conditional jumps, both the destination and the evaluation of the condition, and keep fetching instructions continuously. If down the road the CPU discovers it guessed wrong, it undoes all the incorrect work it mistakenly did, and starts again. That's an example of speculative execution, and CPUs do this quite a lot. When they speculate wrong, it's painful and expensive, so branch predictors must be very accurate, making such events rare. Actually, in modern CPUs an accuracy of less than 98%-99% is considered impractical.

1.3. Control Structures

Branch prediction is possible because the algorithmic constructs in which branches appear provide regularity and contextual information that make them predictable. We shall now briefly explore the predictability of some common constructs: the if–then–else construct, loops (and nested loops) and function calls. We shall also discuss indirect jumps, which are often used to implement polymorphism (as in C++'s virtual functions) and switch-statements (via jump-tables). First, loops.
Whether the code in the fancy high-level language-du-jour expresses a "while loop", a "for-each enumeration", a "map high-order function" or any other fashionable abstraction - in the end, it all looks about the same when it comes down to actual instructions:

loop_label:
    ...             ; code
    je loop_label   ; Goto loop_label if some condition holds.
    ...             ; more code

In many ways, loop branches are the most easily predictable jumps. Short nested loops commonly obey a pattern in which their branch is taken a fixed number of times, after which it is not taken once (and so on):

while (long_time) {
    for (size_t i = 0; i < 2; ++i) { /*...*/ }  // Yes, Yes, No, Yes, Yes, No, ....
}

And for long loops, the simple strategy of always predicting "taken" achieves a low error rate. But to take advantage of those regularities, the predictor needs a way to identify that a branch instruction is actually part of a loop. Sometimes the ISA has special instructions to deal with loops (e.g. LOOPxx in x86), which immediately solves this problem (at least after the first time the instruction is decoded). But even if a loop is implemented using regular jumps (either due to lack of support from the ISA, or just because the compiler decided so), it can still be easily identified: it involves backwards jumps. A naive static branch predictor could simply predict that backward-jumps are always taken.

All that presumed that the destination of the jump is known. Luckily, the destinations of loop-related branches are almost always constants, thus cacheable. After the first time a loop's branch is encountered, a good strategy for target prediction is to predict the previously taken destination.

Forward-branches, on the other hand, are usually part of an if–then–else construct. So reason (and empirical tests) suggests they will be taken about 50% of the time. That's not encouraging, and the best static strategy available here would be to always predict "not taken" (since it's much easier than predicting "taken", which then requires a prediction for the destination as well). Two things can help here. First is the fact that even if–then–else constructs commonly show some regularity that can be exploited for prediction, as in the following example:

for (size_t i = 0; i < N; ++i) {
    if (i%3 == 0) { /*...*/ }  // Yes, No, No, Yes, No, No, ....
}

That's an example of temporal patterns. Secondly, conditional blocks are often dependent, and once some previous conditions are known, other conditions may become highly predictable. Those are spatial patterns:

if (animal.subspecies() == Subspecies::DOG) { /*...*/ }
if (animal.sound() == Sounds::WOOF) { /*...*/ }  // Always true when the previous condition holds.

In contrast to if–then–elses and loops, function calls usually involve unconditional jumps - so they require only a target prediction. Moreover, most CPUs have specialized instructions for functions (e.g. CALL and RET in x86), which makes their related jumps easily identifiable. So it all seems pretty easy, and indeed it partially is: target prediction for function calls is as easy as it gets. The problem starts when the function returns. By design, functions encapsulate pieces of reusable code and are typically called from many different locations. So when the CPU encounters a RET instruction, the last used target address is useless for predicting the next target address. We later see how CPUs cope with this.
There are also situations in which the target prediction for the function call itself is non-trivial, for example, when the function is polymorphic:

class Animal {
    virtual void TakeAShower() = 0;
};
class Dog : public Animal {
    virtual void TakeAShower() { throw NoFreakingWay(); }
};
class Cat : public Animal {
    virtual void TakeAShower() { assert(m_already_clean == true); }
};

std::vector<std::unique_ptr<Animal>> animals;
animals[0]->TakeAShower();

In practice, such function calls involve an indirect jump (also known as "jump register"). Another common usage of indirect jumps is the implementation of jump-tables. Trampolines, which are thunks arranged in the procedure linkage tables used by shared libraries that are compiled as position-independent code, may also be implemented using indirect jumps - but they are still perfectly predictable, since the final address of the actual function is constant throughout the execution.

As a final note, it's useful to know that it's possible to profile code for branch mispredictions using valgrind. This profiling is rough and conservative: since the actual branch-predictors in the CPU can't be queried, valgrind will simulate a branch-predictor while running the code, and it will do so by using prediction methods which are almost certainly less accurate than those actually used by the CPU. The syntax is:

valgrind --tool=cachegrind --branch-sim=yes program

2. Target Prediction

2.1. Branches and Targets

At the beginning of any given cycle, a guess for the location of the next instruction to be fetched must be available. Usually this prediction is easy - at least in theory: if the instruction to be fetched is not a jump, the location of the next instruction is simply the location of the following instruction. That's in theory. And while in theory, theory and practice are the same, in practice - well… In this case there are two issues that complicate even the simplest of cases, where no jumps are involved.

Firstly, the CPU does not know in general whether the current instruction is a jump or not until it decodes it. So even if the procedure of dealing with non-jumps was easy, knowing when to apply it - is not. By itself, that's not a big problem. The CPU can maintain a cache of addresses that contain jumps, and use it to test whether the instruction it just fetched is a jump. Obviously this cache mechanism would have to be implemented cleverly, since this check must be doable within a single cycle and its space usage is stringently constrained - but that's also not a problem. The standard data-structure in such cases is a narrow table that is indexed by some of the address's LSBs, whose entries contain "tags" taken from the MSBs of the address of the last seen jump instruction that was mapped to the entry.

But then, there's the second issue: the address of an instruction does not provide enough information for accessing its sequential instruction. Not without decoding it. That's because in many ISAs (even of RISC processors), instructions have a variable length. The naive assumption that instructions are laid out in memory such that their addresses form an arithmetic progression is not true in general. For those reasons, CPUs often perform a partial decoding of instructions in the iCache, before fetching them. This solves both problems.
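As a software analogue, here is a minimal sketch of such a tagged, direct-mapped lookup, extended with a stored target address so that it doubles as the Branch Target Buffer discussed below (the table size and the index/tag bit split are illustrative assumptions, not a real design):

#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>

// A direct-mapped, tagged table: indexed by the address's LSBs, each entry
// stores a tag from the remaining MSBs plus the last taken target address.
class JumpTargetCache {
public:
    // Returns the cached target if this address was seen before, else nothing.
    std::optional<uint64_t> lookup(uint64_t address) const {
        const Entry& e = entries_[index(address)];
        if (e.valid && e.tag == tag(address)) return e.target;
        return std::nullopt;  // miss or tag mismatch: no prediction available
    }

    // Record the taken target of a jump at this address.
    void record(uint64_t address, uint64_t taken_target) {
        entries_[index(address)] = Entry{true, tag(address), taken_target};
    }

private:
    struct Entry { bool valid = false; uint64_t tag = 0; uint64_t target = 0; };
    static constexpr std::size_t kEntries = 1024;  // assumed size, power of two
    static std::size_t index(uint64_t a) { return a & (kEntries - 1); }  // LSBs
    static uint64_t tag(uint64_t a) { return a >> 10; }  // the remaining MSBs
    std::array<Entry, kEntries> entries_{};
};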
Partial decoding has other advantages as well, since there are other types of useful "meta information" that even a partial decoding of instructions may provide - such as whether the instruction is a direct branch, an indirect branch, a function call or a return instruction. Additionally there are more advantages, unrelated to branch-prediction, which won't be discussed here (e.g. better cache-utilization in μops-based microarchitectures). Once this is done, and the CPU can identify jump instructions as soon as they're fetched, target prediction is almost exclusively done by a form of caching, using a data structure called Branch Target Buffer (BTB). The BTB works by tagged-indexing as described earlier (almost: usually it is set-associative), only that it holds an associated value for each of its entries: the target address from the last time the corresponding jump was taken.

2.2. Subroutine Return Stack

Target prediction using the BTB works well in many common cases, including jump-tables and switch-statements with a repeatedly used case, and indirect polymorphic function calls via a pointer which usually points to objects of the same class. But one very common case in which BTBs perform poorly is returning from functions. This is an unconditional jump, so only the target prediction plays a role here - but since functions are commonly called from many different places, the return address is constantly changing. To deal with this case, return instructions are treated differently, using a Subroutine Return Stack. This is a special stack, maintained by the CPU, in which the locations of function calls are stored. This stack is bounded, and when it overflows, the oldest entries are dismissed. This means that long chains of function-calls may reduce performance due to target-mispredictions.

3. Outcome Prediction

3.1. Non-Stationary Bias

The most basic temporal pattern a specific branch may demonstrate is a simple bias. While a general forward branch has about a 50% chance of being taken, the branch associated with, say, the condition 1!=0 will always be taken. Of course, most branches are not as degenerate, but most are nevertheless biased. Going by the book, this is basically an estimation of a Bernoulli distribution, and the naive way to learn the bias of a branch is by counting the times it was encountered and taken and the times it was encountered but not taken, and using the ratio as an estimation of the bias (possibly after applying Laplace smoothing).

This idea has two major flaws. Firstly, it's expensive. Keep in mind that those algorithms should be implemented in the CPU and are constrained to execute within 1 cycle. Maintaining 2 counters, and requiring comparison, addition and division, is just too much. Secondly, and even more importantly, the distribution of a branch's behaviour is usually non-stationary. When the code with the given branch is executed in a certain context, the branch may be biased in a very different way than when the code is executed in some other context.

A more appropriate solution is caching: simply saving the last evaluation of the branch, and using it as the next prediction. This requires just 1 bit of storage, no computations, and accounts for non-stationarity. Equivalently, this idea can also be thought of as using a finite state machine (FSM) with 2 states:

[Figure: a 2-state FSM]

While such an FSM requires just 1 bit to implement, which is really good - it suffers from some real drawbacks.
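Before turning to those drawbacks, here is a minimal software sketch of this 1-bit "cache the last outcome" scheme (the table size and LSB indexing are illustrative assumptions; hardware indexes by address bits directly):

#include <array>
#include <cstddef>
#include <cstdint>

// A 1-bit predictor: each entry simply caches the last outcome of the
// branches that map to it, and predicts that outcome again.
class OneBitPredictor {
public:
    // Predict: true means "taken", i.e. repeat the last recorded outcome.
    bool predict(uint64_t branch_address) const {
        return last_taken_[index(branch_address)];
    }

    // Update: overwrite the cached outcome with what actually happened.
    void update(uint64_t branch_address, bool taken) {
        last_taken_[index(branch_address)] = taken;
    }

private:
    static constexpr std::size_t kEntries = 4096;  // assumed, power of two
    static std::size_t index(uint64_t a) { return a & (kEntries - 1); }
    std::array<bool, kEntries> last_taken_{};  // all entries start as "not taken"
};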
For example, it performs poorly on repeated short loops, which are a common pattern (it always gets the last branch wrong, and then the first branch again upon reentry - and for short loops, that means a high misprediction ratio). It also performs poorly in some other common scenarios, such as small biases (since whenever the predicate evaluates differently than the last time - which is often - two mispredictions will occur). A generalization of this idea, which overcomes some of those issues, is using larger state machines. For technical reasons, a 4-state FSM is the most common in commercial CPUs. The 4 states a branch can be in are usually referred to as "Strongly True", "Weakly True", "Strongly False" and "Weakly False". A branching predicate that is either in a "Strongly True" or in a "Weakly True" state is predicted as "True" (and the same goes for "False"), but the transition model leads to fewer mistakes in many common situations (e.g. now a nested loop with a short inner loop has 1 misprediction per outer iteration instead of 2). There are two common variations of such FSMs:

[Figure: two common 4-state FSM variants, shown side by side]

The right FSM is sometimes called 2BC (for 2-bits counter), since it acts as a counter with 2-bit saturation. In a sense, it keeps a balance of "what recently happened more" (with a quota), and uses it for prediction. The idea in the left FSM is to enter a new prediction state via a strong state. It basically treats consecutive mistakes as a strong signal. The "weak" and "strong" labels can be seen as a "confidence score" for the prediction. Those two FSMs are very simple and almost identical. But they are a manifestation of two different ideas (a common situation). The right FSM has a strong resemblance to adaptive control algorithms, while the left FSM is the purest form of a state-space predictor.

3.2. Correlating Predictors

Estimation of the (time-varying) bias of a branch is just the very beginning. The next step is to deal with temporal and spatial patterns. For example, a branch that is associated with the condition i%3==0 inside a loop that enumerates using i will have the pattern FFTFFTFFTFFT... which is highly predictable. In order to track such patterns, CPUs use a special data-structure. The first element of the data-structure is called the "Branch History Register" (BHR), which, as its name suggests, is a register (a shift-register, actually) that holds the last K evaluations of the branching condition. The BHR is used as an index into the second element of this data-structure, called the "Branch History Table" (BHT), which has $2^K$ entries in it. Each entry keeps track of the local bias estimated for its associated pattern. This is done using an FSM as discussed in the previous section. This means that typically, each entry of the BHT takes 2 bits.

Of course, a program has many branches, not just one, and the different branches are not independent. The dependencies between different branches are referred to as "spatial patterns", and they usually occur due to branches that are near each other in the code, e.g.:

if (x > 0) { /*...*/ }
if (x >= 0) { /*...*/ }

The second branch is obviously dependent on the first one. The simplest way to take this into account is to make the BHR global. So now the recent evaluations of the last K predicates (regardless of whether they are multiple evaluations of the same branch, or separate evaluations of different branches) are used. A better approach is to use a local branch-predictor for each branch.
This can be done by maintaining M separate BHTs, and using the address of the predicted branch to choose the BHT that learns this branch. Then the BHR is used to index into the chosen BHT as usual. The result is called a "Two-Level Branch-Predictor". Of course, it is not possible to keep a BHT for every possible instruction of a given program (that would require way too much space). Instead, some of the LSBs of the address are used as an index to the correct BHT, and only about 100 BHTs are used. This can be taken further, by maintaining a separate BHR for each branch. Such a table of branch-history registers is called a "Pattern History Table" (PHT), and the resulting predictor is called a "Generalized Two-Level Branch-Predictor".

[Figure: a generalized two-level branch-predictor]

Note, this time each branch is predicted using its own local history - and spatial patterns are not taken into account. There exist several other variations on this theme. For example, the Pentium Pro (circa 1995) used a 2-bit BHR and 4 BHTs whose entries maintained 2-bit counters.

3.3. Shared Biases

The two-level branch-predictors described above are wasteful: for most branches, most patterns are never encountered, and most of the dedicated bits in the BHTs remain unused. To mitigate this, CPUs often reuse the same structure of bias-estimators (the FSMs from the first section) for all the branches. That allows for the implementation of branch-predictors that can deal with very long patterns using 2- or 3-bit FSMs, and requires only a few kilobits of storage. Such shared-predictors come in two flavours. The first one, sometimes called PShare (for "private history, shared FSMs"), is similar to the generalized two-level predictor from earlier in that it considers the temporal patterns of each branch separately, and does not take into account spatial patterns involving several different branches. The idea is to "mesh together" the history-pattern of the current branch with some bits of its address (usually simply by xoring) to index into a BHT. Since each entry of the BHT takes 2 or 3 bits, it can be large, and collisions are rare. The second flavour, GShare (for "global history, shared FSMs", of course), uses a single global BHR instead of per-branch history registers. So it exploits spatial patterns, but is not very efficient when it comes to local temporal patterns (especially long ones).

3.4. Ensembles and Hierarchies

What really brings branch predictors to their best performance is the use of stacking. This allows them to exploit both private histories and spatial patterns as the context dictates. Combining different types of predictors has many other advantages as well: CPUs can assign simple predictors that perform well on most cases to most branches, but keep some complicated or specialized predictors and assign them to hard-to-predict branches. The simplest form is a Tournament Predictor, in which the CPU has both a PShare predictor and a GShare predictor, together with a third meta-predictor (often referred to as "a choice-predictor") that decides, for each branch, whether it should be predicted using the PShare or the GShare predictor. The choice-predictor is essentially just a BHT whose entries are 2- or 3-bit FSMs, one per local branch, that decide how their corresponding branch should be predicted.

[Figure: a tournament predictor]

Special care should be taken with the training of tournament predictors.
The choice predictor is basically a "bias estimation" finite-state-machine per branch, and at each iteration only 1 concrete predictor (either the PShare or the GShare) is updated based on the actual branch outcome, while the choice-predictor itself is updated based on the success of the concrete predictor (so if both are right or both are wrong, nothing changes - but if one is right and the other is wrong, a bias is strengthened). The catch is that since the true outcome is known only late in the pipeline, and in the meantime more predictions are usually made - the global predictor is speculatively updated, but restored later on mispredictions. The idea originated with the Alpha 21264 processor (i.e. it's been around since 1996), and it's still more-or-less the prevalent design of branch predictors. But newer CPUs take things further.

The main addition introduced by modern predictors is a hierarchical architecture in which specialized predictors can be integrated (e.g. a special counter-based predictor for loops). Hierarchical predictors take advantage of the fact that typically many branches display simple and regular behaviour: they almost always have the same outcome. So for those branches, a very simple branch-predictor can be used (i.e. a 1-bit cache). Since such predictors are very cheap, many of them can be maintained simultaneously, and many different branches can be tracked. More complicated branches are dealt with using more complex predictors (e.g. 2-bit FSMs), which are more expensive but rarer, and only the most complicated branches are predicted using a tournament predictor, which now is required to deal with much fewer branches, hence can exploit more complex patterns.

This is, for example, the type of prediction algorithm that is implemented in Intel's Pentium M CPU (from 2003). By default, it predicts branches using 2-bit FSMs. But it also maintains a PShare and a GShare predictor. Each branch is mapped to all 3 predictors using its address's LSBs, and a tag-check in the PShare and GShare predictors (using some of its address's MSBs) is used to determine if they should handle it. Priority is given to the GShare, then (if the branch is not in the GShare predictor) to the PShare, and then (if the branch is not in the PShare predictor), the 2-bit FSM from the BHT is used for prediction. A branch that is predicted poorly is "moved up" the hierarchy. Intel's Core i7 also implements a hierarchical predictor, and incorporates a specialized counter-based loop predictor in it. Those are the types of predictors that finally achieve an accuracy of over 99%.

The problems imposed by branches are a fundamental obstacle in the way of high-performance computing. Luckily, learning algorithms offer a very practical solution. CPUs use a combination of custom data-structures, cheap hashing techniques and finite-state machines to implement suitable algorithms, instead of "default" solutions such as Q-Learning, Value and Policy iterations or Hidden Markov models. By doing so, they achieve a stunning accuracy under the most stringent constraints. Variations of those methods are highly applicable for integrating learning algorithms in general embedded systems or real-time computing, or for speeding up the training time of expensive models.
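To tie the outcome-prediction machinery together, here is a compact software sketch of a GShare-style predictor built from 2-bit saturating counters, in the spirit of the designs above (the table size, history length and xor hashing are illustrative assumptions, not any specific CPU's implementation):

#include <array>
#include <cstddef>
#include <cstdint>

// GShare sketch: a global history register is xored with the branch address
// to index a shared table of 2-bit saturating counters (the "2BC" FSMs).
class GSharePredictor {
public:
    // Counter values 2 and 3 mean "weakly/strongly taken".
    bool predict(uint64_t branch_address) const {
        return counters_[index(branch_address)] >= 2;
    }

    void update(uint64_t branch_address, bool taken) {
        uint8_t& c = counters_[index(branch_address)];
        if (taken && c < 3) ++c;   // saturate at "strongly taken"
        if (!taken && c > 0) --c;  // saturate at "strongly not taken"
        // Shift the actual outcome into the global history register (BHR).
        history_ = ((history_ << 1) | (taken ? 1 : 0)) & (kEntries - 1);
    }

private:
    static constexpr std::size_t kEntries = 1 << 12;  // 4096 counters, 2 bits each
    std::size_t index(uint64_t address) const {
        return (address ^ history_) & (kEntries - 1);  // "mesh" address and history
    }
    uint64_t history_ = 0;                      // global branch history
    std::array<uint8_t, kEntries> counters_{};  // start at "strongly not taken"
};

A tournament design would keep two such tables - one indexed by a private, per-branch history - plus a choice table of the same 2-bit counters to arbitrate between them.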
Learning-based synchronous approach from forwarding nodes to reduce the delay for Industrial Internet of Things

Minrui Wu1, Yanhui Wu2, Xiao Liu1, Ming Ma3, Anfeng Liu ORCID: orcid.org/0000-0001-5190-47611 & Ming Zhao4

The Industrial Internet of Things (IIoTs) is creating a new world which incorporates machine learning, sensor data, and machine-to-machine (M2M) communications. In IIoTs, the length of the transmission delay is one of the pivotal performance metrics, because delayed communication will cause heavy losses to industrial applications. In this paper, a learning-based synchronous (LS) approach from forwarding nodes is proposed to reduce the delay for IIoTs. In an asynchronous Media Access Control protocol, when senders need to send data, they always have to wait for their corresponding receiver to wake up. Thus, the delay here is greater than in a synchronous network. However, the synchronization cost of the whole network is enormous, and synchronization is difficult to maintain. Therefore, the LS mechanism uses a partial synchronization approach to reduce synchronization costs while effectively reducing delay. In the LS approach, instead of synchronizing the nodes of the entire network, only sender nodes and part of the nodes in their forwarding node set are synchronized, by self-learning methods, and accurate synchronization is not required; thus, the delay can be effectively reduced at low cost. Secondly, the nodes near the sink maintain the original duty cycle, while the nodes in the regions away from the sink use their remaining energy to perform synchronization operations, so as not to damage the network lifetime. Finally, because the synchronization in this paper is based on different synchronization periods among different nodes, it can improve network performance by reducing conflicts between simultaneous data transmissions. The theoretical analysis results show that compared with the previous approach FFSC, the LS approach can reduce the end-to-end delay by 5.13–11.64% and increase the energy efficiency by 14.29–17.53% under the same lifetime, with a more balanced energy utilization.

The Internet of Things (IoT) has been regarded as the third wave of information technology [1,2,3,4,5,6,7,8,9,10,11,12]. Based on IoT technologies, the Industrial Internet of Things (IIoTs) [13] incorporates machine learning and big data technology, sensor data, and the machine-to-machine (M2M) communications that have existed in industrial areas for years [14,15,16]. In IIoTs, huge numbers of wireless sensor nodes [17,18,19,20,21,22,23,24] can be deployed more conveniently than in previous wired networks to monitor the whole process of industrial production from many angles, in real time or semi-real time. In this way, IIoTs is creating a new world for industrial manufacturing, where workers or managers can manage their industrial manufacturing in more informed ways and can make more opportune and better informed decisions. Many advanced IIoTs have been successfully applied in industrial manufacturing. Wireless sensor-based networks [24,25,26,27] are one of the important components of IIoTs, in which each node senses manufacturing information from the surrounding environment and helps in routing to the data center (DC) or sink by forwarding the data of other nodes [27,28,29,30].
One of the most important and challenging issues is delay [4, 6, 24, 29, 31]; in particular, end-to-end data delay is very important for industrial manufacturing [32], because there are often emergency situations that require workers to take corresponding emergency measures. Therefore, it is important to quickly send the perceived information to the sink, and it is very important to study how to reduce the delay effectively for IIoTs, edge networks, and social networks as well as cloud computing [33,34,35]; there is also some related research work [4, 6, 24, 29, 32, 36,37,38].

In wireless sensor-based networks, the data of nodes is usually routed to the sink via multi-hop relay. When a sender has data to send, it generally chooses nodes which are within its own sending range and closer to the sink than itself as forwarding nodes (FNs) [4]. FNs denote the sets of candidate forwarding nodes of the senders. In asynchronous wireless sensor-based networks, each node adopts a periodic awake/sleep mode to save energy, because the consumption of a node in the awake state is 100 to 1000 times as much as in the sleep state [4]. This method is called duty cycling. In order to save energy, nodes should be left in the sleep state as much as possible. But when nodes are in the sleep state, they cannot receive or send data, which increases the delay of data transmission. When senders have data that needs to be sent to the sink, their forwarding nodes may be in the sleep state. Therefore, senders need to wait for their FNs to awake before they can transmit data. In general, due to the inherent resource constraints of sensor nodes, the duty cycle of each node is relatively small, and its sleeping time is longer than its waking time in order to save energy. This means that the time that nodes spend in the sleep state is longer than in the awake state, while the time that nodes spend in the awake state is generally longer than the time it takes to transmit a packet, so that a packet can complete its transmission within one cycle. Obviously, when the sender has only one forwarding node, the expected length of time it must wait for the receiver to wake up before transmitting data (called sleep delay) is half the length of the receiver's sleeping time. Thus, in a network with a duty cycle of 0.2, the expected time that the sender waits for the receiver to wake up is twice the time it takes to transmit a data packet. Therefore, sleep delay is a major component of end-to-end delay. The time required to send data packets is needed in any network and cannot be reduced; thus, to reduce the end-to-end delay of packets, the most critical thing is to reduce the sleep delay of each hop's transmission.

The most direct way to reduce the sleep delay is to increase the duty cycle of nodes. If the duty cycle of a node is increased to 1, then its sleep delay is reduced to 0. At present, there is some research on reducing the delay by adjusting the duty cycle of nodes [4, 32]. But in general, this method of increasing the duty cycle of nodes is difficult to apply in practice because it greatly increases the energy consumption of nodes. Another way to reduce sleep delay is to synchronize the network so that nodes wake up and sleep at the same time and can communicate with each other when they are awake [4, 32].
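As a worked version of the sleep-delay arithmetic above (assuming a cycle of length $T$, duty cycle $\delta$, a single forwarding node, and an awake slot just long enough for one packet transmission, $t_{\text{tx}} \approx \delta T$):

$$E[\text{sleep delay}]=\frac{(1-\delta)T}{2},\qquad \delta=0.2\ \Rightarrow\ E[\text{sleep delay}]=0.4\,T=2\times 0.2\,T\approx 2\,t_{\text{tx}}$$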
Although a synchronous network has no sleep delay, its disadvantage is that every node in the network needs to be synchronized, and maintaining synchronization in a large-scale network requires a great deal of communication overhead. Moreover, as time goes on, the clocks of different nodes drift apart, so synchronization must be maintained constantly, which is also very costly. In current research, then, reducing network delay generally reduces network lifetime. In view of the deficiencies of previous research, a learning-based synchronous (LS) approach from forwarding nodes is proposed to reduce the delay for IIoTs without reducing the network lifetime. The main contributions of this paper are as follows: An LS approach from forwarding nodes is proposed to reduce the delay for IIoTs. In the LS approach, for each sender, a subset of its forwarding nodes called candidate relays is selected, consisting of the forwarding nodes that are closer to the sink than the unselected ones. The sender synchronizes its duty cycle with its candidate relays by self-learning. Therefore, when the sender is awake, its candidate relays are also awake, which greatly reduces the sleep delay. In addition, the data forwarding of senders is carried out entirely through their candidate relays, so the distance of each hop on the path from the sender to the sink is longer than under previous strategies that select awake nodes from all forwarding nodes. Therefore, the number of hops can be reduced, and with it the end-to-end delay. Moreover, the LS approach does not require all nodes in the network to remain synchronized; only the synchronization between senders and their candidate relays is required, and no strict synchronization accuracy is demanded, so it is easy to implement. In the LS approach, the network lifetime is longer. In sensor-based networks, the energy consumption of the nodes near the sink is high, while the nodes far away from the sink consume less energy and have residual energy. Therefore, in the LS approach, within one hop of the sink the nodes operate exactly as in the previous strategy, but nodes whose distance to the sink is greater than one hop start self-learning to synchronize with their candidate relays, so they can use just their residual energy to keep that synchronization. Thus, this approach can effectively utilize the network energy without reducing the network lifetime. The effectiveness of the LS approach is evaluated through extensive theoretical analysis, and we demonstrate that, compared with the previous approach, the LS approach has two critical advantages: (a) it significantly reduces the end-to-end delay of data transmission; specifically, under the same lifetime, the LS approach can reduce end-to-end delay by about 5.13 to 11.64% compared to previous methods; and (b) it takes full advantage of the residual energy to maintain the synchronization, and the network energy utilization ratio can be enhanced by as much as 15.91% on average. The rest of this paper is organized as follows. In Section 2, related work is reviewed. The system model and problem statement are described in Section 3. Section 4 elaborates on the design of the learning-based synchronous (LS) approach from forwarding nodes for IIoTs. The performance analysis of the LS approach is provided in Section 5. Section 6 provides the simulation results.
Finally, we conclude in Section 7. A large number of sensor nodes are usually deployed in industrial manufacturing, forming the so-called Industrial Internet of Things (IIoTs). In such application scenarios, sensory data are valid only if they reach the sink within a specified time threshold. In particular, alarm information obtained by sensor nodes needs to be delivered as soon as possible; otherwise, it will cause great damage and disaster to industrial manufacturing [4, 6, 24, 29, 32, 36, 38,39,40,41]. There is therefore a large body of research on reducing delay [2]. This paper divides the work into the following categories: Routing-based methods for reducing delay. Because of the limited communication capacity and communication range of sensor nodes, these methods generally use multi-hop routing to send data to the sink via several relays. In general, a node has one or more forwarding nodes closer to the sink than itself, so selecting the right forwarding nodes can effectively reduce delay. If the forwarding node nearest to the sink is selected as the relay each time, the per-hop progress toward the sink is maximized, so the number of hops on the transmission path is minimal. Because each forwarding step adds delay, reducing the number of forwards effectively reduces the delay; this is the earliest proposal of this kind, the shortest-routing method. Building on the shortest-routing method, optimization methods that integrate multiple metrics were proposed. The general characteristic of such routing methods is that the relay node is selected based on the composite value of multiple performance indexes, and the forwarding node with the highest composite value is chosen as the relay. If the requirement on delay is strict, the weight of delay is large; for example, when only delay is considered, the shortest-routing method is appropriate. If energy consumption and delay are considered together, forwarding nodes with high residual energy that are closer to the sink are selected as relays, so that energy consumption is balanced and the delay remains small. Such methods generally assume that nodes are always awake, so that whenever a sender has data to transmit, all its forwarding nodes are available. In fact, however, most sensor networks adopt periodic awake/sleep rotation to save energy, so these methods do not apply to sleep-wake cycling sensor networks. For such networks, Naveen and Kumar [42] proposed an approach to relay node selection whose goal is to select the appropriate relay from the forwarding nodes so as to minimize delay. Their approach takes into account that when the sender selects the relay node, the forwarding nodes closer to the sink may be asleep. At this point, the sender has two choices: wait for a node closer to the sink to wake up, or select an already-awake forwarding node that is not as near the sink. If the sender waits for a node near the sink to wake up, the number of hops required for routing decreases, but the delay of this hop increases as a result of the waiting.
If the currently awake node is selected as the relay instead, the number of hops increases, so the total delay is not necessarily small either. Naveen and Kumar [42] observe that this relay selection can be cast as an asset-selling problem: a seller wants to sell his product, buyers arrive at random times with stochastic prices, and if the seller does not close a deal, the buyer leaves and never comes back; the next buyer may offer a lower price than the previous one. Relay selection is similar: when a forwarding node wakes up, the sender must decide whether to select it as the relay, and if it decides to keep waiting, the next node to wake up may be farther from the sink than the previous ones. Naveen and Kumar convert the forwarding node's distance from the sink and the waiting time into a reward, then select the node with the highest reward as the relay. A Fast data collection for nodes Faraway from the sink and Slow data collection for nodes Close to the sink (FFSC) approach is proposed in Ref. [4] to achieve energy efficiency while minimizing end-to-end delay. In the FFSC approach, nodes in the region far from the sink adopt the strategy of forwarding immediately whenever a forwarding node is available, so the remaining energy of nodes in this region can be fully utilized. Nodes closer to the sink select optimized relay nodes before forwarding, so as to save energy and maintain a long network lifetime. As a whole, the FFSC approach improves energy utilization and reduces network delay [4]. The advantages of routing-based delay optimization are its versatility and wide range of applications; its deficiency is that the design is more complex and the achievable performance is limited. The main source of delay in such methods is that nodes are not synchronized, so the forwarding nodes are not necessarily awake when the sender has data to transmit, which results in longer delay. Solutions based on the duty cycle. As discussed above, if nodes are always awake, there is no sleep delay when the sender has data to transmit; but because awake nodes consume much more energy, simply increasing the duty cycle is not an effective option. Chen et al. [32] proposed reducing delay by varying the duty cycle. They observed that the energy consumption of nodes near the sink is high, while nodes far from the sink consume little energy and retain a great deal of residual energy. Based on this observation, nodes far from the sink adopt a large duty cycle to reduce delay, while nodes near the sink adopt a small duty cycle to preserve network lifetime, thereby reducing the overall delay while maintaining a long lifetime.
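The idea behind such a distance-based scheme can be sketched in a few lines; the hop radius and the two duty-cycle values below are illustrative assumptions of ours, not parameters from [32]:

```python
# Illustrative sketch of a distance-based duty-cycle assignment in the
# spirit of Chen et al. [32]: nodes far from the sink (which have
# residual energy) get a larger duty cycle, while nodes in the hotspot
# near the sink keep a small one. The hop radius r and both duty-cycle
# values are assumed for illustration only.

def assign_duty_cycle(distance_to_sink, r=80.0, near_dc=0.1, far_dc=0.3):
    """Small duty cycle inside the one-hop hotspot, larger outside it."""
    return near_dc if distance_to_sink <= r else far_dc

for d in (40.0, 80.0, 200.0, 400.0):
    print(f"distance {d:>5.0f} m -> duty cycle {assign_duty_cycle(d)}")
```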
Lee [43] also proposed an approach that adjusts the duty cycle of nodes according to their traffic, called Adaptable Wakeup Period (AWP) [43]. They found that many network performance measures are related to the duty cycle: for example, if the duty cycle increases, the successful delivery ratio rises and retransmission counts decrease. Based on this observation, to save energy, when a node's traffic is small its duty cycle is reduced, so that energy is saved without degrading network performance; when a node's traffic is large, its duty cycle is increased to meet the performance requirements of the network. Thus, compared with using the same duty cycle (SDC) across the whole network, the AWP approach reduces the energy consumption of the network without reducing network performance. Approaches based on synchronous networks [44]. The description above mainly concerns asynchronous networks. The other type is the synchronous network, in which every node follows the same clock frequency, so that nodes wake up only for the time they need to work and sleep otherwise. In such methods, nodes only need to be awake during their working slots, which saves energy effectively. However, these methods require precise synchronization, which limits the scalability of the system. Because synchronization in a large-scale network is a very difficult task, most wireless sensor network applications still operate asynchronously: each node selects its own working slots independently without synchronization, which increases applicability. In general, though, the performance of asynchronous wireless sensor networks is not as good as that of synchronous ones. Communication methods based on competition. The TDMA-based Media Access Control (MAC) protocol is a contention-free (non-competitive) communication protocol [23]. In such a protocol, the system assigns an appropriate working slot to each node; each node wakes up in its planned slot, performs the corresponding data operations, and then returns to sleep. This is a very energy-saving and efficient scheme. Here, delay usually refers to the number of time slots needed to collect one round of data from the network. In Ref. [45], the authors proposed a centralized scheduling algorithm whose delay bound is 23R + ∆ + 18 time slots, where R is the network radius and ∆ is the maximum node degree. The distributed scheduling method proposed by Yu et al. [46] can generate conflict-free schedules with a delay upper bound of 24D + 6∆ + 16 time slots, where D is the network diameter. Xu et al. [47] gave a theoretical proof that their algorithm generates an aggregation schedule with a delay upper bound of 16R + ∆ − 14 time slots. This approach is often used for data fusion in wireless sensor networks, especially in networks where n packets are fused into one packet. Its deficiency is that the operations of the nodes must be determined in advance: once the node time slots are fixed, the entire network must follow the planned sequence of operations and cannot change. Therefore, this approach cannot be applied to networks whose data operations cannot be determined in advance. In industrial applications, sensor nodes often need to react to unexpected events, so their working slots cannot be fixed beforehand, and in most cases this method cannot be adopted. In conclusion, the main cause of delay is the asynchronous operation of nodes, which leaves the forwarding nodes in the sleep state when the sender has data to transmit.
Although the delay is effectively reduced in a synchronous network, the maintenance cost of synchronous networks has likewise limited the development of their applications. Therefore, this paper proposes a partial synchronization scheme, which effectively reduces both the cost and the operational difficulty of synchronization while effectively reducing the delay. The partial synchronization between a sender and its forwarding set is what distinguishes this scheme from similar ones. The system model and problem statement. The network model. In this paper, the system model is based on duty-cycled wireless sensor networks [4, 32]. This kind of network adopts duty cycling, a method common in WSNs, in which nodes alternate between active and sleep modes; a predefined wakeup interval is the basis of this alternation. Combined with opportunistic routing based on an asynchronous protocol, nodes wake up at different moments and each node has multiple forwarders. Each node has a communication range, which we model as a circular region of radius r centered at the sender. Other nodes in this circle have the chance to become the forwarding nodes of the sender. When a sender wants to transmit a packet, at least one of its corresponding candidate forwarders should be in active mode; otherwise, the sender has to wait for a receiver to wake up. On its way to the sink, a packet may be forwarded by multiple relay nodes, which is determined by the metric for forwarder selection. Because the monitoring targets are randomly and uniformly distributed over the entire network, the probability of a target being perceived is equal for every node; similarly, every node is equally likely to generate data. Network security issues such as privacy and attacks are not considered in this paper [48, 49]. As described above, each node has a duty cycle, waking and sleeping periodically. A node can neither receive nor send packets while in sleep mode; beyond that, in the sleep state it consumes less energy than in the active state. In addition to transmission, each sender performs periodic self-learning; after self-learning, the sender starts communication. Thus, the communication duration consists of two main parts: active periods and sleep periods. During a self-learning duty cycle, nodes stay active throughout and detect the state of their corresponding forwarding nodes. Accordingly, we denote the nodes' duty cycle by C and use the following equations: $$ {C}_{\mathrm{COM}}=\left({D}_{\mathrm{LEA}}+{D}_{\mathrm{COM}}^A\right)/\left({D}_{\mathrm{LEA}}+{D}_{\mathrm{COM}}^A+{D}_{\mathrm{COM}}^S\right) $$ $$ {C}_{\mathrm{LEA}}={D}_{\mathrm{LEA}}^A/{D}_{\mathrm{LEA}}=1 $$ where DCOM is the duration of communication and \( {\mathrm{D}}_{\mathrm{COM}}^A \) and \( {\mathrm{D}}_{\mathrm{COM}}^S \) are the periods the nodes spend in active and sleep modes, respectively. DLEA is the learning duration of the nodes; because nodes are always active during learning, DLEA equals \( {D}_{\mathrm{LEA}}^A \) and CLEA is 1. The calculation of power consumption mainly comprises three aspects: the power consumed during the learning duration, the power consumed in receiving or transmitting, and the power required while nodes are in sleep mode. Power consumption is tied to the duty cycle.
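A small sketch evaluating the communication duty cycle defined above; the three durations are assumed example values, not parameters from the paper:

```python
# Evaluate the communication duty cycle C_COM from the expression above.
# D_LEA, D_COM_active, and D_COM_sleep are assumed example durations.

def communication_duty_cycle(d_lea, d_com_active, d_com_sleep):
    """C_COM = (D_LEA + D^A_COM) / (D_LEA + D^A_COM + D^S_COM)."""
    return (d_lea + d_com_active) / (d_lea + d_com_active + d_com_sleep)

# During self-learning nodes stay active, so C_LEA = D^A_LEA / D_LEA = 1.
c_com = communication_duty_cycle(d_lea=2.0, d_com_active=1.0, d_com_sleep=7.0)
print(f"C_COM = {c_com:.2f}")  # 0.30 for these example durations
```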
The main system parameters are listed in Table 1 [4, 42]; other parameters are explained where they appear in equations. Table 1 System parameters. Each packet is sent to the sink after being forwarded by multiple relay nodes. Regarding packet transmission, the main objective of this work is to propose a method that minimizes delay and maximizes network lifetime. These indicators are explained below. The effective energy utilization. There are n nodes in the network. E_U is the energy utilization, the ratio of the energy used by the whole network to the total initial energy in the network. The energy utilization can be expressed by the equation below; maximizing it means improving the efficient utilization of network energy, that is, maximizing the ratio of energy consumption to the initial energy in the network. $$ \max \left({E}_U\right)=\max \left[\left({\sum}_{1\le i\le n}{\varepsilon}_i\right)/\left({\sum}_{1\le i\le n}{E}_O^i\right)\right] $$ As shown in the equation, ε_i is the energy consumption of the ith node and \( {E}_O^i \) represents the initial energy of node i. The maximization of network lifetime. Network lifetime is defined as the period from the beginning until the first node dies from energy drain. Thus, maximizing the lifetime of the first node to exhaust its energy maximizes the network lifetime. Here, the initial energy of node i is \( {E}_O^i \), and the energy consumption of node i is ε_i per unit time. Accordingly, this requirement is captured by the following equation. $$ \max \left({T}_N\right)=\max \left({\min}_{1\le i\le n}\left(\frac{E_O^i}{\varepsilon_i}\right)\right) $$ The minimization of end-to-end delay. The end-to-end transmission delay is the time the nodes take to deliver a packet from the sender to the sink. Assume the packet passes through k nodes; let Y_E2E represent the total delay and y_{i−1,i} the delay of the transmission from the (i−1)th to the ith node. Then the corresponding equation is $$ \operatorname{Min}\left({Y}_{E2E}^{v_i}\right)=\min \left(\sum {y}_{v_j,{v}_k}\right)\mid \forall {v}_j,{v}_k\in {\mathrm{\mathbb{P}}}_{v_i}\ \mathrm{and}\ {v}_k\ \mathrm{is}\ \mathrm{forwarding}\ \mathrm{node}\ \mathrm{of}\ {v}_j $$ where \( {\mathbb{P}}_{v_i} \) represents the routing path from node v_i to the sink and \( {v}_j\in {\mathbb{P}}_{v_i} \) expresses that v_j is a node on \( {\mathbb{P}}_{v_i} \). In conclusion, the objectives of our research are $$ \left\{\begin{array}{c}\max \left({E}_U\right)=\max \left[\left({\sum}_{1\le i\le n}{\varepsilon}_i\right)/\left({\sum}_{1\le i\le n}{E}_O^i\right)\right]\\ {}\max \left({T}_N\right)=\max \left({\min}_{1\le i\le n}\left(\frac{E_O^i}{\varepsilon_i}\right)\right)\\ {}\operatorname{Min}\left({Y}_{E2E}^{v_i}\right)=\min \left(\sum {y}_{v_j,{v}_k}\right)\end{array}\right. $$
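The three objectives can be evaluated directly from per-node quantities; a minimal sketch with assumed illustrative numbers (not data from the paper):

```python
# Evaluate the three objectives above for a toy network: energy
# utilization E_U, network lifetime T_N, and one path's end-to-end
# delay. All per-node numbers are assumed illustrative values.

initial_energy = [0.5, 0.5, 0.5, 0.5]      # E_O^i per node (J)
consumed_total = [0.42, 0.30, 0.18, 0.09]  # energy used so far (J)
power = [0.020, 0.015, 0.009, 0.005]       # eps_i per unit time (J/s)
per_hop_delay = [0.12, 0.15, 0.11]         # y_{j,k} along one path (s)

e_u = sum(consumed_total) / sum(initial_energy)            # utilization
t_n = min(e0 / p for e0, p in zip(initial_energy, power))  # lifetime
y_e2e = sum(per_hop_delay)                                 # path delay

print(f"E_U = {e_u:.2f}, T_N = {t_n:.1f} time units, Y_E2E = {y_e2e:.2f} s")
```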
Design of LS approach. Research motivation. Opportunistic Routing in Wireless sensor networks (ORW) is a routing protocol that works on top of an asynchronous MAC protocol [50]. In this protocol, each sender has multiple candidate forwarding nodes, and every forwarding node has the chance to receive the packet when the sender has data to transmit. When the first forwarding node receives the packet and sends an acknowledgement (ACK) to the sender, the sender can finish its transmission. ORW uses a metric called Expected Duty Cycled Wakeups (EDC) to select relay nodes. EDC measures the time from the sender transmitting a packet until the packet reaches the sink, and it does not consider the location of relays. There is thus a probability that a node far from the sink is selected as a forwarder, which may lengthen the packet delay. In the FFSC approach, nodes closer to the sink select optimized relay nodes before forwarding, while nodes far from the sink adopt the strategy of forwarding immediately whenever a forwarding node is available. However, because nodes are not synchronized, the sender's wait is longer, which leads to longer delay. We are therefore committed to designing a better way to reduce delay. Synchronization can reduce delay. As can be seen in Fig. 1, without synchronization the expected delay in the FFSC approach is half of the sleep time. If the duty cycles of the corresponding nodes are roughly synchronized before the data are sent, the sender does not need to wait for the receiver to wake up, or only needs to wait a shorter time than without synchronization. As Fig. 2 shows, after self-learning, the synchronization of the sender and its receivers is complete, so when sender A transmits data, one of its candidate receivers is also awake, and the delay is reduced. Synchronization is difficult to implement: the trouble is that maintaining a wide range of synchronization consumes a lot of energy. But the approach presented in this paper does not require synchronization of all nodes in the entire network, only synchronization within the forwarding range of the sender, so it is easy to implement. Furthermore, nodes closer to the sink do not need to maintain synchronization, so they need no additional energy consumption, while nodes far from the sink have residual energy; using this part of the energy for synchronization, the delay can be reduced without affecting the network lifetime. Duplicate packets waste energy, but this problem is unavoidable when reducing delay as much as possible: because multiple forwarding nodes may be awake at the same time, duplicate packets are generated whether the forwarding nodes relay the same packet onward or return ACKs to the sender. In the approach proposed in this paper, the selection of nodes is optimized, and for each sender only the forwarding nodes within a small range are synchronized, to avoid aggravating the duplicate-packet problem compared with other approaches. Fig. 1 Timeline of FFSC. Fig. 2 Timeline of LS. LS approach. In this paper, the forwarder set of each node is defined as the region formed by the intersection of a circle and an arc through the center of the circle. As shown in Fig. 3, the distance between the sink and the sender node B is l, and the radius of the circle equals the communication range of node B. To avoid selecting forwarders farther from the sink, a constraint requires every candidate relay node to be at least as close to the sink as node B. Fig. 3 Optional range of forwarder set. The area of this region can be calculated by the equation $$ {\vartheta}_0={l}^2\bullet {\cos}^{-1}\left(1-\frac{r^2}{2{l}^2}\right)+{r}^2\bullet {\cos}^{-1}\frac{r}{2l}-l\bullet r\bullet \sin \left({\cos}^{-1}\frac{r}{2l}\right) $$
where the first part of Eq. (7) calculates the area of the sector ACD, the second part the area of the sector BCD, and the third the area of the quadrilateral ACBD. Based on this model, the selection of relay nodes and the determination of the duty cycle are driven by self-learning. First, we define the nodes in the shaded portion of Fig. 4 as the candidate relays. Fig. 4 Optional range of forwarder set after re-screening. Assume that the density of relays is ρ. The area of the shaded portion can be expressed as $$ \vartheta ={\vartheta}_{\mathrm{fanAEF}}+{\vartheta}_{\mathrm{fanBEF}}-{\vartheta}_{\blacksquare \mathrm{AEBF}}=\left({\cos}^{-1}\frac{{\left(l-x\right)}^2+{l}^2-{r}^2}{2\left(l-x\right)l}\right)\cdotp {\left(l-x\right)}^2+\left({\cos}^{-1}\frac{l^2+{r}^2-{\left(l-x\right)}^2}{2 lr}\right)\cdotp {r}^2- lr\cdotp \kern0.5em \sin \left({\cos}^{-1}\frac{l^2+{r}^2-{\left(l-x\right)}^2}{2 lr}\right) $$ Thus, the number of forwarders in the forwarder set of node B (denoted σ) is $$ \sigma =\rho \bullet \vartheta $$ The sender stays active during the self-learning cycle and sends preambles repeatedly to detect the state of the candidate relays in its forwarder set. After sensing their duty cycles, we need to determine a period of time in which more nodes are active. If the sender's active period nearly coincides with the period in which the majority of forwarders are active, there is no need to change the sender's duty cycle temporarily. Beyond that, the distance between forwarders and the sink also needs to be taken into account. In short, we should change the duty cycle of the sender so as to synchronize it with the forwarder nearest to the sink; this makes the one-hop distance longer and reduces delay. Meanwhile, each node records the number of its corresponding nodes participating in the cycle adjustment. After the sender has adjusted its duty cycle, the relay nodes affected by it also have to adjust their cycles. Another method is to keep the original start time of a node's cycle constant and change the main duty cycle, which does not affect the work of the related nodes; certainly, this method leads to fragmentation of the cycle, so adjustment and consolidation are also necessary at the end. Over several self-learning cycles, the first task is to divide the area from the sender to the sink into multiple regions of width x, as shown in Fig. 5. The distance between any two regions that need to be synchronized is d: the duty cycles of the nodes near sender node A are synchronized with the nodes in the next region closer to the sink, and then the nodes in that area repeat the operation, until the region containing the sink is reached. Synchronization between any two regions is completed within these multiple self-learning cycles. Note that while region A is learning from region B, region B runs its normal duty cycle, so region B's learning toward region C should begin only after region A's learning is complete. Alternatively, while region A is learning from region B and region C is learning from region D, region C could learn from region B at the same time; one drawback of this variant is that it increases the probability of ACK conflicts. Fig. 5 Range of self-learning.
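A small numeric sketch of the forwarder-set geometry of Eqs. (7) and (9); the distance l, range r, and node density ρ are assumed values, and the area formula below is the form of Eq. (7) as reconstructed above (valid for r ≤ 2l):

```python
import math

def forwarder_region_area(l, r):
    """Area of the forwarder region of Eq. (7): the intersection of the
    sender's communication disk (radius r) with the disk of radius l
    around the sink, where l is the sender-to-sink distance."""
    return (l**2 * math.acos(1.0 - r**2 / (2.0 * l**2))
            + r**2 * math.acos(r / (2.0 * l))
            - l * r * math.sin(math.acos(r / (2.0 * l))))

l, r, rho = 100.0, 80.0, 0.002   # assumed distance, range, node density
area = forwarder_region_area(l, r)
sigma = rho * area               # expected forwarder count, Eq. (9)
print(f"region area = {area:.0f} m^2, expected forwarders = {sigma:.1f}")
```

With these assumed values the region covers roughly 8300 m^2, giving about 17 candidate forwarders.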
Meanwhile, because there is more than one sender, the candidate forwarding node sets of different senders may overlap, that is, their intersection may be non-empty. In this case, the solution we take is as follows: for any two senders, if the intersection of their forwarder sets is large, we synchronize both senders with the shared forwarding nodes; if the intersection is small, we divide it into two parts and synchronize each sender with its own part. In short, the number of forwarding nodes of any sender cannot be too small. In a complete synchronization, a self-learning duration is DLEA and the duty cycle of each relay node is cA/c. Assuming that a self-learning cycle can be divided into several time slots of length c, there are DLEA/c time slots, and the probability of any node waking up in each slot is cA/c. In this setting, each node has σ forwarders, and before a node sends a data packet, it sends the preamble at a certain frequency (denoted V) in the self-learning state. It is assumed that the average distance between any two nodes in the two regions is d and that a node should wait at least 2Vd after the delivery of a packet before sending the probe packet. When a node receives a probe packet, it sends an ACK back to the sender. In addition to the ACK conflict problem, there is no guarantee that the ACKs received by the sender are not returned by nodes that have already returned an ACK before; therefore, estimating the time spent in self-learning is a probability problem. The whole process of the LS approach is described in Algorithm 1. Based on the description above, there are DLEA/c time slots, where DLEA is the duration of self-learning and c is the length of a time slot. When multiple nodes in the self-learning area send probe packets at similar times, collisions easily occur on the sending and receiving channels. Thus, in addition to the time interval of each node that sends probe packets, it is necessary to restrict the interval at which each node sends a probe packet, by means of Carrier Sense Multiple Access with Collision Avoidance. When a node is ready to send a probe packet, it listens for a co-channel carrier in the medium. If there is none, the channel is idle and the node goes directly into the data-transmission state; if a carrier exists, the node backs off for a random period of time before sensing the channel again. Here, we assume that self-learning starts at the beginning of a time slot. The duration of a self-learning cycle can be calculated as Eq. (10), where ϱ is a system parameter. $$ {D}_{\mathrm{LEA}}={\sum}_{i=1}^{\rho \vartheta}{\sum}_{j=1}^i2 Vd\varrho \left[\frac{c^A}{c}\bullet {\left(1-\frac{c^A}{c}\right)}^{\rho \vartheta -1}+\frac{c^A}{c}\bullet {\sum}_{k=1}^{\rho \vartheta -1}{C}_{\rho \vartheta -1}^k\bullet {\left(1-\frac{c^A}{c}\right)}^k\right] $$ Here, the probability of any node waking up in each slot is cA/c and the number of forwarders in the forwarder set of node i is ρϑ. In Eq. (10), \( \frac{c^A}{c}\bullet {\left(1-\frac{c^A}{c}\right)}^{\rho \vartheta -1} \) represents the situation in which the sender wakes up in a slot but all of its forwarders wake up in other slots, and \( \frac{c^A}{c}\bullet {\sum}_{k=1}^{\rho \vartheta -1}{C}_{\rho \vartheta -1}^k\bullet {\left(1-\frac{c^A}{c}\right)}^k \) represents the situation in which the sender wakes up in a slot and most of its forwarders wake up in the same slot, except for k forwarders, k = 1, 2, ⋯, ρϑ − 1.
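Taking Eq. (10) at face value, it can be evaluated numerically; the sketch below is a literal transcription of the formula under assumed parameter values, not the authors' implementation:

```python
import math

def self_learning_duration(sigma, p_awake, v, d, varrho):
    """Literal evaluation of Eq. (10) for the self-learning duration.
    sigma = number of forwarders (rho * theta), p_awake = c^A / c,
    v = preamble frequency, d = inter-region distance, varrho = system
    parameter. All numeric values used below are assumed."""
    bracket = (p_awake * (1 - p_awake) ** (sigma - 1)
               + p_awake * sum(math.comb(sigma - 1, k) * (1 - p_awake) ** k
                               for k in range(1, sigma)))
    return sum(2 * v * d * varrho * bracket
               for i in range(1, sigma + 1) for _ in range(1, i + 1))

print(f"D_LEA ~ {self_learning_duration(8, 0.2, 1.0, 20.0, 0.01):.2f}")
```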
Because the whole self-learning process includes simultaneous learning among multiple regions, the time required to complete self-learning within a certain range (denoted \( {D}_{\mathrm{LEA}}^{\mathrm{TOT}} \)) is the maximum of the self-learning times of the regions. Performance analysis of LS approach. Delay analysis. After the completion of self-learning, all nodes in any two regions separated by d keep their duty cycles synchronized. Compared with other methods, in which nodes must wait for their candidate forwarders to wake up before sending packets because the duty cycles are not necessarily synchronous, this approach optimizes performance in this respect, effectively reducing the sender's wait. Figure 6 shows one possible multi-hop path of a packet from sender node A to the sink. Fig. 6 Multi-hop path. Theorem 1 The expected vertical distance from a sender node to its forwarder is $$ {d}_x={\int}_0^x\frac{2x\left(d-x\right)}{\vartheta_x}\bullet {\cos}^{-1}\left(\frac{{\left(d-x\right)}^2+{d}^2-{x}^2}{2\left(d-x\right)d}\right) dx $$ Proof According to [4], the probability density function of the vertical distance from a sender node to its forwarder is \( {\mathcal{F}}_d(x)=\frac{2\left(d-x\right)}{\vartheta_x}\bullet {\cos}^{-1}\left(\frac{{\left(d-x\right)}^2+{d}^2-{x}^2}{2\left(d-x\right)d}\right) \). The expected d_x can be calculated by the formula \( {d}_x={\int}_0^xx\bullet {\mathcal{F}}_d(x) dx \). ■ Although the duty cycles of most of the nodes in the forwarder set are synchronized with the sender's, we prefer to select the relay node closer to the sink; there is therefore a balance to be struck between the minimum delay and the maximum vertical distance. Theorem 2 When packet forwarding is based on the LS approach, the duty cycles of nodes and their forwarders are almost synchronous. When the distance from a node to the sink is L, the single-hop and end-to-end delays can be calculated from Eqs. (12) and (13). $$ {Y}_{i-1,i}={\sum}_{i=0}^{\frac{D_{\mathrm{COM}}}{c}-2}i{\left(1-\frac{c}{D_{\mathrm{COM}}}\right)}^{i\rho \vartheta}\left[1-{\left(1-\frac{c}{D_{\mathrm{COM}}}\right)}^{\rho \vartheta}\right]+\left(\frac{D_{\mathrm{COM}}}{c}-1\right){\left(1-\frac{c}{D_{\mathrm{COM}}}\right)}^{\left(\frac{D_{\mathrm{COM}}}{c}-1\right)\rho \vartheta} $$ $$ {Y}_{\mathrm{ETE}}=\left[\frac{L}{d_x}\right]{Y}_{i-1,i} $$ Proof Y_{i−1,i} is the average delay of the transmission from the (i−1)th to the ith node. According to [4], the duty cycle of each node in transmission is c/DCOM, and the probability of the delay being i slots is \( {\left(1-\frac{c}{D_{\mathrm{COM}}}\right)}^{i\rho \vartheta}\left[1-{\left(1-\frac{c}{D_{\mathrm{COM}}}\right)}^{\rho \vartheta}\right] \), where \( i=0,1,\cdots, \frac{D_{\mathrm{COM}}}{c}-2 \). Here, the vertical distance is greater than or equal to x, so the expected hop count is \( \left\lceil \frac{L}{d_x}\right\rceil \). The total delay is then obtained from the single-hop delay and the hop count. ■ Based on this formula, Fig. 7 shows the expected delay under different numbers of nodes. As the diagram shows, the overall trend is that the more forwarders there are, the shorter the delay. Fig. 7 The expected delay under different number of nodes. And as shown in Fig. 8, the single-hop delay decreases as the single-hop distance increases, indicating that a longer hop distance leads to a shorter delay.
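The trend in Fig. 7 can be reproduced directly from the single-hop delay of Theorem 2; in this minimal sketch the slot count and forwarder counts are assumed values:

```python
def single_hop_delay(n_slots, sigma):
    """Expected single-hop delay (in slots) per Theorem 2, where
    n_slots = D_COM / c and sigma = rho * theta forwarders per sender.
    Parameter values used below are assumed for illustration."""
    p_sleep = 1.0 - 1.0 / n_slots          # (1 - c / D_COM)
    hit = 1.0 - p_sleep ** sigma           # some forwarder awake in a slot
    expected = sum(i * p_sleep ** (i * sigma) * hit
                   for i in range(n_slots - 1))   # i = 0 .. n_slots - 2
    expected += (n_slots - 1) * p_sleep ** ((n_slots - 1) * sigma)
    return expected

for sigma in (2, 4, 8, 16):
    print(f"{sigma:>2} forwarders -> expected wait "
          f"{single_hop_delay(10, sigma):.2f} slots")
```

The expected wait shrinks as the number of forwarders grows, which matches the trend described for Fig. 7.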
At the same time, as the duty cycle increases, the delay decreases. Fig. 8 The single hop delay under different single hop distance. As Fig. 9 shows, the one-hop delay increases with the distance L. Meanwhile, the distance between a sender and its forwarder is also reflected in the one-hop delay: the longer the one-hop distance, the longer the one-hop delay. Fig. 9 The single hop delay under different distance. Energy consumption analysis. The network lifetime is determined by energy consumption, and the selection of forwarders is also related to residual energy. In the LS approach, nodes have four types of states: packet transmission, packet reception, self-learning, and low-power listening. Generally, when the number of packets in the network is large, the nodes consume substantial energy, because packet transmission and reception are the main energy-consuming operations; beyond that, in this approach the self-learning process is also an operation that consumes a lot of energy. The overhead of a time slot can be divided into three parts: time with data packet transmission, listening time, and sleeping time; these are included in the calculation of the four energy-consumption components above. The total energy consumption can be calculated by the following equation $$ {E}_U={E}_T{N}_T+{E}_R{N}_R+{E}_{\mathrm{LPL}}{D}_{\mathrm{COM}}+{E}_{\mathrm{LEA}}{D}_{\mathrm{LEA}} $$ where E_U is the total energy consumption, E_T is the energy consumption of one packet transmission, E_LPL is the energy consumption of low-power listening, and E_LEA is the energy consumption of self-learning. N_T and N_R are the numbers of packets transmitted and received, respectively; D_COM and D_LEA are the durations of communication and self-learning. According to [4], assuming the network radius is D, the transmission range of each node is r, and the distance between the sender and the sink is l, N_T and N_R can be calculated by the following equations, in which i is an integer: $$ {N}_R=\frac{r\left(i+1\right)i}{2l}+\left(i+1\right)\mid ir+l<D $$ $$ {N}_T={N}_R+1 $$ The energy consumption of sending one packet consists of the actual packet transmission and the related preamble transmissions; ε_T D_P represents the energy consumption of the packet transmission itself. Preamble transmission is a periodic operation throughout the transmission, which informs the relay node of the packet's arrival. The average energy consumption of one packet transmission can be expressed as $$ {E}_T={\varepsilon}_T{D}_P+\left({\varepsilon}_T{D}_B+{\varepsilon}_R{D}_{\mathrm{ACK}}\right)\frac{\left(1-\frac{c^A}{c}\right){D}_{\mathrm{COM}}}{2\left({D}_B+{D}_{\mathrm{ACK}}\right)} $$ The second part of Eq. (17) represents the power consumption of periodic preamble transmission. The energy consumption of packet reception can be calculated as $$ {E}_R={\varepsilon}_R{D}_P+{\varepsilon}_T{D}_{\mathrm{ACK}}+{\varepsilon}_R{D}_B $$ where ε_R D_P represents the power consumption of the packet reception, ε_T D_ACK is the power consumption of transmitting the acknowledgment message, and ε_R D_B represents the power consumption of the preamble reception.
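A sketch of the packet-count part of this energy model, Eqs. (15) and (16); we read the constraint as taking i to be the largest integer with i·r + l < D, and the network dimensions below are assumed values:

```python
def relay_load(D, r, l):
    """N_R and N_T per Eqs. (15)-(16), reading i as the largest integer
    with i*r + l < D (network radius D, hop range r, sender-sink
    distance l). This interpretation of i is our assumption."""
    i = int((D - l) / r)
    if i * r + l >= D:          # enforce the strict inequality
        i -= 1
    n_r = r * (i + 1) * i / (2 * l) + (i + 1)
    return n_r, n_r + 1

n_r, n_t = relay_load(D=500.0, r=80.0, l=100.0)
print(f"N_R = {n_r:.1f}, N_T = {n_t:.1f}")   # 13.0 and 14.0 here
```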
The energy consumption of self-learning can be expressed by the following equation: $$ {E}_{\mathrm{LEA}}={\varepsilon}_{\mathrm{LEA}}{D}_{\mathrm{LEA}}={D}_{\mathrm{LEA}}\left({\varepsilon}_R+{\varepsilon}_T+{\varepsilon}_R\right) $$ The energy consumed for low-power listening by a node L meters from the sink (denoted \( {E}_{\mathrm{LPL}}^L \)) can be calculated as $$ {E}_{\mathrm{LPL}}^L={\varepsilon}_S\left(1-\frac{c^A}{c}\right)+{\varepsilon}_R\frac{c^A}{c}-{\psi}_T^L-{\psi}_R^L $$ The process of low-power listening consists of two parts, sleeping and listening, corresponding to the first and second terms of Eq. (20). Because the energy consumption of packet transmission and reception is already counted in the preceding equations, \( {\psi}_T^L \) and \( {\psi}_R^L \) are subtracted in the calculation of \( {E}_{\mathrm{LPL}}^L \). According to [5], \( {\psi}_T^L \) and \( {\psi}_R^L \) can be expressed by Eqs. (21) and (22), respectively: $$ {\psi}_T^L=\left\{\left[\frac{T_{\mathrm{COM}}\left(1-\frac{c^A}{c}\right)}{2}+{D}_B+{D}_{\mathrm{ACK}}\right]{\varepsilon}_S+{\varepsilon}_R{D}_B\right\}\frac{N_T^L}{T_{\mathrm{COM}}} $$ $$ {\psi}_R^L=\left[{\varepsilon}_R{D}_B+\left({D}_P+{D}_{\mathrm{ACK}}\right){\varepsilon}_S\right]\frac{N_R^L}{T_{\mathrm{COM}}} $$ Based on the analysis above, the energy consumption of nodes x meters away from their corresponding relay nodes can be expressed as $$ {E}_U^x={E}_T^x{N}_T+{E}_R^x{N}_R+{E}_{\mathrm{LPL}}{D}_{\mathrm{COM}}^x+{E}_{\mathrm{LEA}}^x{D}_{\mathrm{LEA}}^x $$ Because the range of the forwarder set is x, and the average distance from the sender to the region of relay nodes is d, the total energy consumption can be calculated as follows: $$ {E}_U=\left[\frac{L}{d}\right]\left({E}_T^x{N}_T+{E}_R^x{N}_R+{E}_{\mathrm{LPL}}{D}_{\mathrm{COM}}+{E}_{\mathrm{LEA}}^x{D}_{\mathrm{LEA}}\right) $$ Theorem 3 In the approach presented in this paper, suppose the initial energy of node i is \( {E}_O^i \) and there are n nodes in the network; then the network lifetime can be expressed as $$ {T}_N=\frac{E_O^i}{\max_{1\le i\le n}\left({E}_T{N}_T^i+{E}_R{N}_R^i+{E}_{\mathrm{LPL}}{D}_{\mathrm{COM}}+{E}_{\mathrm{LEA}}{D}_{\mathrm{LEA}}\right)} $$ Proof The network lifetime is defined as the period before the node with the largest energy consumption dies from energy drain; in other words, the network lifetime is the ratio of the initial energy to the maximum per-node energy consumption. \( {E}_O^i \) is the initial energy of the ith node, and the largest per-node energy consumption is \( {\max}_{1\le i\le n}\left({E}_T{N}_T^i+{E}_R{N}_R^i+{E}_{\mathrm{LPL}}{D}_{\mathrm{COM}}+{E}_{\mathrm{LEA}}{D}_{\mathrm{LEA}}\right) \). ■ Based on Eq. (24), Fig. 10 shows the energy consumption under different x values. It can be observed that the larger x is, the smaller the energy consumption, because the hop count decreases as x increases. Meanwhile, as the distance between the sender and the sink increases, the energy consumption of nodes also decreases. Fig. 10 The energy consumption under different x values. As shown in Fig. 11, the trend of energy consumption with total distance from the sink is the same as in Fig. 10. In the approach proposed in this paper, each node that becomes a sender is treated as the center of the network, and in packet transmission the relay selected at each forwarding step always follows the principle of choosing nodes closer to the sink. Thus, the closer a node is to the sink, the more energy it consumes, as can be observed from Fig. 11. Fig. 11 The energy consumption under different duty cycle.
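Theorem 3 reduces to a one-line computation once the per-node consumption terms are known; the consumption figures below are assumed example numbers, not values from the paper:

```python
def network_lifetime(e_init, per_node_consumption):
    """Network lifetime per Theorem 3: the initial energy divided by the
    largest per-unit-time consumption among all nodes. The consumption
    values passed below are assumed example numbers."""
    return e_init / max(per_node_consumption)

# E_T*N_T + E_R*N_R + E_LPL*D_COM + E_LEA*D_LEA evaluated per node (J/s);
# the node nearest the sink carries the heaviest relay load (0.025).
consumption = [0.018, 0.025, 0.012, 0.007]
print(f"T_N = {network_lifetime(0.5, consumption):.1f} time units")  # 20.0
```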
Simulation results. The performance of the LS approach is evaluated by simulation in this chapter. The reason for comparing against the FFSC approach is that the main difference between it and the protocol we propose is the self-learning stage, and self-learning is the innovation of our protocol. The evaluation covers the following main aspects: transmission delay, energy consumption, and network lifetime. In addition, we analyze the effect of several parameters, such as the node density, the radius of the communication range, and the duty cycle. Unless stated otherwise, the transmission range of a node (denoted r) is set to 80 m. In this simulation, the location of each node is randomly generated, and the forwarding node set of the sender is selected by the method in Section 4.2; n sensors are deployed, with n = 700 unless stated otherwise. The analysis results show that, compared with the FFSC approach, the LS approach can reduce the total transmission delay by 5.13–11.64% and increase the energy efficiency by 14.29–17.53%, while guaranteeing that the network lifetime is not shorter than under the other approaches. The values of the main simulation parameters are listed in Table 2. Table 2 Simulation parameters. Transmission delay. Figures 12 and 13 show the total transmission delay of LS, FFSC, and ORW under different distances from the sink. All nodes have the same duty cycle in the ORW approach, so the total delay is mainly related to the distance between the sender node and the sink: as the first curve in Fig. 12 shows, the greater the distance, the greater the total delay. Meanwhile, because ORW does not consider the locations of nodes, a forwarder far from the sink may be selected, so the total delay of the ORW approach is clearly longer than that of the other two approaches. Similarly, in the FFSC approach, the communication duty cycle of every node is the same. The difference between ORW and FFSC lies in the forwarder selection: taking the locations of nodes into account, candidate forwarders are confined to a range closer to the sink. Thus, compared to ORW, the total delay of the FFSC approach is shorter. Unlike the two approaches above, LS improves on FFSC, mainly through self-learning: in the self-learning cycle, the duty cycles of the sender and its corresponding forwarder set are synchronized, which successfully reduces the total transmission delay. Compared with FFSC, the LS approach can reduce the transmission delay by 5.13–11.64%, while the network lifetimes of the two protocols are the same. It can thus be seen that the approach proposed in this paper outperforms the other approaches in terms of transmission delay. Figs. 12–13 The total delay under different distance from sink. Energy efficiency and network lifetime. The energy utilization reflects the performance of the protocol and also affects the selection of forwarding nodes; meanwhile, the network lifetime is related to the energy consumption of nodes. In this section, the performance of the LS approach in energy consumption and network lifetime is analyzed. Figures 14 and 15 show the energy consumption of the LS and FFSC approaches under different distances from the sink.
Here, the vertical distance between a sender and its forwarder has to be greater than or equal to x, where x is related to other parameters of the network. As can be seen in the figures, the longer the distance from the sink, the less energy is consumed. Compared with the FFSC approach, LS consumes energy efficiently in non-hotspot areas. Comparing the two pairs of curves in Figs. 14 and 15, the maximum energy consumption of LS and FFSC is relatively close, which is partly due to the large energy consumption during the self-learning period. The bell shape appears in Fig. 14 because no self-learning is needed before transmission when the sender is close to the sink, so at the beginning of the curve the energy consumption of the two approaches is similar; as the distance from the sink increases, the partial synchronization achieved during the self-learning period effectively reduces the energy consumed during transmission, forming the bell shape in Fig. 14. Figs. 14–15 The energy consumption under different distance from sink. The trends of the curves in Figs. 14 and 15 confirm that the closer a node is to the sink, the more energy it consumes; this trend is the same as in Fig. 11. Figure 16 shows a column diagram of residual energy under different radii of the transmission range. Compared with the FFSC approach, LS can increase the efficiency of energy utilization by 14.29–17.53%, as observed in the figure. In a word, under the LS approach the energy utilization reaches a better balance; especially in areas farther from the sink, the energy efficiency of nodes improves. Fig. 16 The residual energy under different transmission range. Figures 17 and 18 show the network lifetime of the FFSC and LS approaches under different numbers of first-dying nodes. Compared to the FFSC approach, although the efficiency of energy utilization during transmission improves, the network lifetime of the approach proposed in this paper is unchanged, because the energy saved during transmission relative to FFSC is almost offset by the energy consumed during the self-learning period. Figs. 17–18 The network lifetime under different transmission range. Effects of other parameters on the performance. In this section, we analyze how other network factors affect the performance of the LS approach. The length of a duty cycle is an important factor influencing the end-to-end delay; we generally want a well-matched duty cycle to raise the probability that the sender and its candidate relays are active at the same time. Figures 19 and 20 show the total transmission delay under different duty cycles. As they show, compared with the other approaches, the total delay of the approach proposed in this paper is reduced, which also supports our idea; at the same time, the larger the duty cycle, the shorter the total delay, matching the conclusion obtained from Fig. 8. In addition to the duty cycle, the number of nodes is also a factor of interest. Figures 21 and 22 show the total delay of the different approaches under different numbers of nodes. Compared with ORW and FFSC, the LS approach achieves a smaller total delay, because in the ORW and FFSC approaches the nodes in the forwarder set are not synchronized with the corresponding sender.
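The role of synchronization in this comparison can be illustrated with a small Monte Carlo sketch of ours; the duty cycle and trial count are assumed, unsynchronized forwarders wake at uniformly random offsets, and synchronized candidate relays are treated as awake together with the sender:

```python
import random

def mean_wait(sigma, duty_cycle=0.2, synchronized=False, trials=20000):
    """Monte Carlo estimate of the sender's wait (in cycle units) for
    the first of sigma forwarders to be awake. Unsynchronized forwarders
    wake at uniformly random offsets; synchronized ones wake with the
    sender. Duty cycle and trial count are assumed illustrative values."""
    if synchronized:
        return 0.0                   # a candidate relay is already awake
    total = 0.0
    for _ in range(trials):
        # wait until each forwarder is next awake (0 if already awake)
        waits = [max(0.0, random.random() - duty_cycle)
                 for _ in range(sigma)]
        total += min(waits)
    return total / trials

for sigma in (1, 2, 4, 8):
    print(f"{sigma} forwarders: mean wait {mean_wait(sigma):.3f} cycles "
          f"(synchronized: {mean_wait(sigma, synchronized=True):.3f})")
```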
So if the number of nodes increases, the sender's wait before its forwarding nodes wake up decreases, and the per-hop transmission delay decreases with it; this trend is the same as in Fig. 7. In the LS approach, thanks to the synchronization between the sender and its candidate forwarding nodes, the sender does not need to wait for a receiver to wake up, or only needs to wait a shorter time than without synchronization. The parameter d mentioned in the previous section also affects the transmission delay. In general, the longer the one-hop distance, the fewer the forwarders a packet passes through. As can be seen in Fig. 23, the total transmission delay of the LS approach is lower than FFSC's, and as d grows, the total delay decreases. Conversely, when using the LS approach, the one-hop delay grows with x, as can be seen in Fig. 24. Figs. 23–24 The total delay under different x. Reducing delay is important for the Industrial Internet of Things (IIoTs). However, previous strategies often sacrifice other network performance to reduce delay; in general, the reduction of delay is bought with a certain amount of extra energy consumption and a shorter network lifetime. Our LS scheme is a novel approach that effectively reduces delay without reducing the network lifetime. Its main innovation is the partial synchronization strategy that keeps the sender synchronized with its forwarding nodes, greatly reducing the difficulty of maintaining synchronization while effectively reducing the delay. Moreover, this synchronization is implemented by a self-learning method and is strongly robust. In addition, the synchronization takes full advantage of the remaining energy in the network, improving energy utilization without reducing the network lifetime. The LS approach proposed in this paper has a wide range of applicability: it can be applied not only to IIoTs but also to IoT-based edge networks, which is what we will research in the future.
Abbreviations
ACK: Acknowledgement
AWP: Adaptable Wakeup Period
EDC: Expected Duty Cycled Wakeups
FFSC: Fast data collection for nodes Faraway from the sink and Slow data collection for nodes Close to the sink
FNs: The set of candidate forwarding nodes of a sender
IIoTs: The Industrial Internet of Things
IoT: Internet of Things
LS: Learning-based synchronous
M2M: Machine-to-machine
MAC: Media Access Control
ORW: Opportunistic Routing in Wireless sensor networks
SDC: Same duty cycle
TDMA: Time division multiple access
WSNs: Wireless sensor networks
Xiao F., Sha L. T., Yuan Z. P., Wang R. C. VulHunter: a discovery for unknown bugs based on analysis for known patches in Industry Internet of Things, IEEE Transactions on Emerging Topics in Computing, 2017, 1-13. H Zhu, F Xiao, L Sun, R Wang, P Yang, R-TTWD: robust device-free through-the-wall detection of moving human with WiFi. IEEE Journal on Selected Areas in Communications 35(5), 1090–1103 (2017) A Liu, Z Chen, N Xiong, An adaptive virtual relaying set scheme for loss-and-delay sensitive WSNs. Inf. Sci. 424, 118–136 (2018) Y Liu, A Liu, Y Hu, Z Li, YJ Choi, H Sekiya, J Li, FFSC: an energy efficiency communications approach for delay minimizing in Internet of things. IEEE Access 4, 3775–3793 (2016) Liu, X., Zhao, S., Liu, A., Xiong, N., & Vasilakos, A. V. Knowledge-aware proactive nodes selection approach for energy management in Internet of things.
Novosibirsk State University (Russia), lecture series "Geometry, Topology, and their Applications"

Proof of the Toponogov Conjecture on complete convex planes.
Speaker: W. Klingenberg (Durham, England)
Time: December 5, 2022 (17:00 Beijing time, 16:00 Novosibirsk time)
To join the Zoom meeting: https://us02web.zoom.us/j/82010396279?pwd=cTQrTFdqeXFvUW9tK3RzV2lseDZudz09
Abstract: In joint work with B. Guilfoyle, we prove that complete convex embedded surfaces homeomorphic to R^2 must have at least one umbilic point. Victor Andreevich Toponogov proved this under an additional assumption on the growth of the mean curvature of the surface; his ingenious argument uses comparison with round spheres. We deal with the problem in the quotient space of surfaces in R^3 modulo parallelism. It turns out that this reformulation allows applying the Riemann-Roch theorem, where the analytic index counts the number of umbilics in the equivalence class.

Magnetic Curvature and Closed Magnetic Geodesics.
Speaker: V. Assenza (Heidelberg, Germany)
Time: April 25, 2022 (17:00 Beijing time, 16:00 Novosibirsk time)
Abstract: A magnetic system describes the motion of a charged particle moving on a Riemannian manifold under the influence of a magnetic field. Trajectories of this dynamics are called magnetic geodesics, and one of the main tasks in the theory is to investigate the existence of magnetic geodesics that are closed. In general, this depends on the magnetic system under consideration and on the topology of the base space. Inspired by the work of Bahri and Taimanov, I will introduce the notion of magnetic curvature, a perturbation of the standard Riemannian curvature due to the magnetic interaction. We will see that closed magnetic geodesics exist when the magnetic curvature is positive, which happens, for instance, when the magnetic field is sufficiently strong.

Global surfaces of section for Reeb flows on closed 3-manifolds.
Speaker: Marco Mazzucchelli (Lyon, France)
Time: March 28, 2022 (17:00 Beijing time, 16:00 Novosibirsk time)
Abstract: In this talk, which is based on joint work with Gonzalo Contreras, I will sketch a proof of the existence of a global surface of section for any Reeb vector field of a closed 3-manifold satisfying the Kupka-Smale condition. This result implies the existence of global surfaces of section for the Reeb vector fields of C^\infty-generic contact forms on any closed 3-manifold, and for the geodesic vector fields of C^\infty-generic Riemannian metrics on any closed surface. I will discuss a few significant applications of this existence result, and in particular a confirmation of the Palis-Smale stability conjecture for geodesic flows of surfaces: any C^2-structurally stable Riemannian geodesic flow of a closed surface must be Anosov.
Bio: Marco Mazzucchelli is a CNRS researcher at the Unité de Mathématiques Pures et Appliquées of the École normale supérieure de Lyon. His main research interests are dynamical systems, Riemannian geometry, and symplectic topology. He is co-coordinator of the ANR CoSyDy (Conformally Symplectic Dynamics beyond symplectic dynamics) and a member of the ANR Cosy (New challenges in symplectic and contact topology). Mazzucchelli is an editorial board member of the Journal of Fixed Point Theory and Applications.

Quantum computing. Axiomatic model. Examples of algorithms.
Speaker: S.B. Tikhomirov (St. Petersburg State University, Russia)
Time: November 15, 2021, 16:00 Novosibirsk time (12:00 Moscow time, 17:00 Beijing time)

Laguerre geometry and its applications in computer numerically controlled machining.
Speaker: M.B. Skopenkov (NRU Higher School of Economics; Institute for Information Transmission Problems RAS, Moscow, Russia)
Abstract: Motivated by applications in engineering, we provide characterizations of surfaces which are enveloped by a one-parametric family of congruent cones. As limit cases, we also address developable surfaces and ruled surfaces. The characterizations are higher-order nonlinear PDEs generalizing the ones by Gauss and Monge. In the process, we reconstruct the positions of the cones for a given envelope. The methodology is based on a model of Laguerre geometry, which transforms an envelope into a surface containing a special conic through each point. Most of the talk is explained in figures and is accessible to undergraduate students. This is joint work with Pengbo Bo, Michael Barton, and Helmut Pottmann.

Old and new on the (non)-existence of complex structures on S^6.
Speaker: I. Agricola (Marburg, Germany)
Time: April 12, 2021, 17:00 Beijing time

Non-Abelian generalizations of integrable PDEs and ODEs.
Speaker: V.V. Sokolov (Moscow)
Abstract: A general procedure for the non-abelianization of a given integrable polynomial differential equation is described. We consider the NLS-type equations as an example. We also find a non-abelianization of the Euler top. Results related to the Painlevé-2 and Painlevé-4 equations are discussed.

Fibrations with torus fibres and the toral rank conjecture.
Speaker: M. Amann (Augsburg, Germany)
Time: January 18, 2021, 17:00

Foliations on closed 3-dimensional Riemannian manifolds with small modulus of mean curvature of the leaves.
Speaker: D.V. Bolotov (Verkin Institute for Low Temperature Physics and Engineering, Kharkiv, Ukraine)
Time: December 21, 2020, 15:00 Beijing time (14:00 Novosibirsk time)
Abstract: We prove that a foliation on a closed 3-dimensional Riemannian manifold must be taut if the modulus of the mean curvature of the leaves is less than a certain constant depending on the volume, the injectivity radius, and the maximum value of the sectional curvature of the manifold.

On the spectral problem of a three-term difference operator.
Speaker: Rinat Kashaev (University of Geneva, Switzerland; St. Petersburg Department of V.A. Steklov Mathematical Institute, St. Petersburg, Russia)
Abstract: We address the spectral problem of the quantum mechanical operator associated to the quantized mirror curve of the toric (almost) del Pezzo Calabi-Yau threefold called local P^2, in the case of complex values of Planck's constant. This is joint work with Sergey Sergeev.

Geometric facets of quantization.
Speaker: Leonid Polterovich (Tel Aviv University, Tel Aviv, Israel)
Abstract: I will explain why compatible almost-complex structures on symplectic manifolds correspond to optimal positive quantizations, and discuss the classification of geometric quantizations up to conjugation and small error. Joint work with Louis Ioos and David Kazhdan.

Braided Descriptions of Homotopy Groups of the 2-Sphere.
Speaker: Wu Jie (Hebei Normal University, Shijiazhuang, China)
Abstract: In this talk, we will discuss combinatorial and braided descriptions of homotopy groups of the 2-sphere. We will also discuss the connections between relative Lie algebras of Brunnian braid groups over the 2-sphere and the unstable Adams spectral sequence.

Infinite excitation limit: horocyclic chaos.
Speaker: M. Dubashinskiy (Chebyshev Laboratory, St. Petersburg State University)
Abstract: What happens if, given a pure stationary state on a compact hyperbolic surface, we apply the creation operator every $\hbar$ "adiabatic" seconds? It turns out that, over an adiabatic time comparable to 1, the wavefunction changes as a wave traveling with finite speed (with respect to the adiabatic time), whereas the semiclassical measure of the system undergoes a controllable transformation admitting a description in the spirit of geometric optics. If the adiabatic time goes to infinity then, by quantization of the Furstenberg theorem, the system becomes quantum uniquely ergodic. Thus, infinite excitation of a closed system leads to quantum chaos.

Action-minimizing methods and invariant Lagrangian graphs.
Speaker: Alfonso Sorrentino (University of Rome Tor Vergata, Rome, Italy)
Abstract: In this talk I would like to describe some properties of Hamiltonian and Lagrangian systems, with particular attention to the relation between their action-minimizing properties and their dynamics. More specifically, I shall illustrate what kind of information the principle of least Lagrangian action conveys into the study of the integrability of these systems and, more generally, how this information relates to the existence or non-existence of invariant Lagrangian graphs.

Nonlinear Dirac equations on compact spin manifolds.
Speaker: Tian Xu (Tianjin University, Tianjin, China)
Abstract: Motivated by recent progress on a spinorial analogue of the Yamabe problem in the geometric literature, we shall consider analytic aspects of nonlinear Dirac equations on compact spin manifolds. Via variational theory and blow-up analysis, our main target is to study the existence issue for a conformally invariant spinor field equation, which has a strong relationship with the spinorial Weierstraß representation.

Invariant geometry on nilmanifolds and narrow Lie algebras.
Speaker: D.V. Millionshchikov (Moscow State University, Steklov Mathematical Institute, Moscow, Russia)
Abstract: The problems of invariant geometry on a nilmanifold G/Γ can be reduced to studying the corresponding algebraic structures on a nilpotent Lie algebra g, which corresponds to the simply connected nilpotent Lie group G. As nilpotent Lie algebras we will consider the so-called narrow positively graded Lie algebras g = ⊕ g_i, dim g_i ≤ 2. We will focus on left-invariant a) symplectic structures, b) affine structures, and c) complex structures on G/Γ, and discuss the various classification lists of narrow positively graded Lie algebras that arose during such research.

Isoperimetric inequalities for Laplace eigenvalues on the sphere and the real projective plane.
Speaker: Alexei V. Penskoi (Moscow State University, National Research University - Higher School of Economics, Independent University of Moscow, Interdisciplinary Scientific Center J.-V. Poncelet, Moscow, Russia)
Abstract: This talk will be a review of results concerning sharp isoperimetric inequalities for Laplace eigenvalues on surfaces, mainly the sphere and the real projective plane.

A universal topological approach to data science.
Abstract: In this talk, we will report our current research on exploring the topology of subgraphs and its applications. We introduce the notion of a super-hypergraph, which can be briefly described as a Delta set with missing faces, as a model for the topological structure of subgraphs. The applications are given by providing a unified topological approach to data science. This is joint work with Professor Jelena Grbic from Southampton.

Matrix resolvent, Volterra lattice hierarchy, and the modified GUE partition function.
Speaker: Di Yang (University of Science and Technology of China)
Time: April 20, 2020, 15:00 Beijing time
Optimal Space Trajectories with Multiple Coast Arcs Using Modified Equinoctial Elements

Mauro Pontani ORCID: orcid.org/0000-0003-3095-73911

Journal of Optimization Theory and Applications volume 191, pages 545–574 (2021)

The detection of optimal trajectories with multiple coast arcs represents a significant and challenging problem of practical relevance in space mission analysis. Two such types of optimal paths are analyzed in this study: (a) minimum-time low-thrust trajectories with eclipse intervals and (b) minimum-fuel finite-thrust paths. Modified equinoctial elements are used to describe the orbit dynamics. Problem (a) is formulated as a multiple-arc optimization problem, and additional, specific multipoint necessary conditions for optimality are derived. These yield the jump conditions for the costate variables at the transitions from light to shadow (and vice versa). A sequential solution methodology capable of enforcing all the multipoint conditions is proposed and successfully applied in an illustrative numerical example. Unlike several preceding studies, no regularization or averaging is required to make the problem tractable and solve it. Moreover, this work revisits problem (b), formulated as a single-arc optimization problem, while emphasizing the substantial analytical differences between minimum-fuel paths and problem (a). This study also proves the existence, and provides the derivation, of closed-form expressions for the costate variables (associated with equinoctial elements) along optimal coast arcs.

Spacecraft trajectory optimization is concerned with the determination of the optimal direction and magnitude of the propulsive thrust that drives a space vehicle toward some specified conditions, while minimizing either propellant consumption or time of flight. Optimal space trajectories have been extensively investigated from both the analytical and the numerical perspective, using a variety of approaches. The impulsive thrust assumption [45] represents an excellent approximation for spacecraft that employ chemical propulsion for short durations. In this context, orbit transfer optimization, aimed at minimizing propellant consumption, reduces to a nonlinear programming problem. However, in the presence of moderate or low thrust levels, the impulsive approximation loses accuracy, and the general properties of optimal finite-thrust trajectories can no longer be inferred from an impulsive solution [34, 49]. In these cases, the optimal path of interest must be found as the solution of a continuous-time optimal control problem.

As a valuable alternative to chemical propulsion, in recent years low-thrust electric propulsion [48] has attracted increasing interest from the scientific community, and has already found application in a variety of mission scenarios, e.g. the NASA Deep Space 1 and the ESA Smart-1 missions [43, 44]. Thanks to high values of the specific impulse, low-thrust propulsion allows substantial propellant savings, at the price of increasing the time of flight, sometimes considerably. Pioneering studies on low-thrust trajectories are due to Edelbaum [13], who apparently was the first scientist to point out the advantages of using low thrust in space missions. More recently, extensive research on the same subject was carried out by Petropoulos [35, 36], Betts [5,6,7], Ross [46], and Kechichian [23,24,25,26,27], to name a few.
Low-thrust trajectory optimization problems are solved through the use of direct, indirect, or heuristic approaches, sometimes combined in hybrid forms [37]. In this regard, Betts [8], Rao [42], and Conway [11] offer excellent overviews of the available methods in spacecraft trajectory optimization. A major drawback of low-thrust electric propulsion resides in its demanding requirement of electrical power to operate. As a result, in operational scenarios low-thrust propulsion is switched off when the satellite is eclipsed. It is quite obvious that this circumstance implies a complete reconsideration of the underlying optimization problem. In general, this is formulated as a minimum-time problem, for which the available control depends on the state.

A limited number of studies are devoted to low-thrust trajectory optimization with eclipsing. The cylindrical shadow model is assumed in several works as a very accurate approximation, due to the distance that separates the Sun from the Earth [15, 17]. Under some simplifying assumptions, Kechichian [26, 27] derived analytical expressions for the variation of the orbit elements due to low thrust, with the inclusion of the eclipse effect on the availability of electrical power. Some researchers employed orbital averaging, in conjunction with direct techniques [16, 28], the sequential gradient-restoration algorithm [2], or a shooting method [15]. Recently, Kluever [29] provided an algorithm, based on Edelbaum's analytic solution, devoted to computing the transfer time for low-thrust maneuvers with eclipsing, while Betts [7] proposed a direct optimization method that includes a multiple-phase formulation of the problem. A similar approach is due to Graham and Rao [19], who employed adaptive Gaussian quadrature orthogonal collocation, without any use of averaging. Their method finds an initial guess by identifying the optimal path while neglecting eclipsing. Then, a sequence of optimal control problems is defined and solved, by adding a single eclipse period at a time. Alternative (indirect) approaches are based on regularization, applied to the transitions from light to shadow (and vice versa). Ferrier and Epenoy [15] used a smoothing function when the spacecraft travels the region between two cylinders that enclose the eclipse cylinder; they proposed an effective methodology for circumventing the theoretical difficulties inherent to the state-dependent control. Most recently, Mazzini [33] utilized a smooth eclipse function to restore the regularity conditions needed to apply the Pontryagin minimum principle, whereas Taheri and Junkins [50] proposed the hyperbolic tangent as a smoothing function. Similarly, Aziz et al. [1] smoothed the eclipse transition with a logistic function and computed the optimal paths through differential dynamic programming, a second-order gradient-based method.

The challenges related to the inclusion of multiple coast arcs, associated with eclipse intervals, are thus apparent from the preceding considerations. However, another class of optimal space trajectories exists that may include multiple coast arcs. In fact, minimum-fuel paths using finite thrust are associated with a set of necessary conditions that admit coast arcs and powered phases [40]. This consolidated property was proven in the 1960s [30], and since then a vast amount of literature has been dedicated to investigating minimum-fuel space trajectories.
A very interesting recent contribution is due to Taheri and Junkins [49], who analyzed the relations between impulsive transfers and finite-thrust paths using optimal control theory. They point out the existence of optimal trajectories with a variety of structures (i.e. different numbers and timings of powered phases and coast arcs). In this context, the switching function, which depends on the state and the costate variables, plays a major role. In fact, the time evolution of this function determines the sequence of powered phases and coast arcs along the optimal path. For a space vehicle that orbits a single celestial body, orbital motion along thrust intervals must be obtained through numerical integration, whereas its trajectory is Keplerian along coast arcs (under the assumption of neglecting orbit perturbations). This means that the dynamical state is integrable. In the 1960s, Lawden [30] derived the first two-dimensional closed-form solution for the costate along optimal coast arcs. Alternative closed-form solutions in the plane of the Keplerian arc are due to Hempel [20], Eckenwiler [14], and Lion and Handelsman [31]. Later, Glandorf [18] derived the general three-dimensional closed-form costate using Cartesian coordinates. More recently, Pan et al. [34] provided the closed-form costate along optimal coast arcs employing spherical coordinates and emphasized its utility for the accurate determination of the switching times from coast to thrust.

The analytical study that follows is concerned with both types of trajectory optimization problems that admit multiple coast arcs: (a) minimum-time low-thrust paths with eclipse intervals and (b) minimum-fuel trajectories that employ finite thrust. Modified equinoctial elements are selected as the variables that describe the orbital motion of the space vehicle of interest, subject to the gravitational attraction of a single celestial body. This choice is related to three remarkable properties. First, virtually all types of trajectories can be described using modified equinoctial elements, unlike what occurs if the classical orbit elements are employed. Second, 5 out of 6 equinoctial elements remain constant (while the sixth is integrable) along coast arcs, in the presence of a single attracting body. In more complex mission scenarios, the space vehicle may be subject to perturbing accelerations, other than the action of the dominating attracting body, and in this case these 5 elements exhibit slow time variations. Third, in the numerical solution of low-thrust path optimization problems (with no eclipse intervals), the use of equinoctial elements was proven to mitigate the hypersensitivity issues encountered with spherical coordinates [38, 47]. In fact, for multi-revolution orbit transfers, they exhibit superior convergence properties compared to spherical coordinates, and are also amenable to homotopy methods [48]. With reference to problem (a), the present study has the primary objectives of (1) formulating the problem as a multiple-arc optimization problem, (2) deriving the complete set of necessary conditions for optimality, (3) proving that all the multipoint conditions referred to intermediate times can be solved sequentially in the numerical integration process, and (4) employing the latter conditions in a numerical example, for illustrative purposes. No averaging or regularization is introduced to make the problem tractable.
Then, this research revisits problem (b) using equinoctial elements, with the objective of deriving the necessary conditions associated with minimum-fuel paths, while emphasizing the substantial analytical differences from problem (a). As a further contribution, the optimal adjoint equations along coast arcs are analyzed, with the intent of ascertaining the existence of closed forms for the costate variables associated with modified equinoctial elements.

Orbit Dynamics

This research considers a space vehicle that orbits a single celestial body, in the dynamical framework of the restricted problem of two bodies. The spacecraft of interest is modeled as a point mass. In general, orbital motion can be described using either Cartesian coordinates, spherical variables, or osculating orbit elements, i.e. semimajor axis a, eccentricity e, inclination i, right ascension of the ascending node (RAAN) \(\Omega\), argument of periapse \(\omega\), and true anomaly f [41]. However, the Gauss equations [41], which govern the time evolution of the orbit elements, become singular in the presence of a circular or equatorial orbit (and also when an elliptic orbit transitions to a hyperbola). For these reasons, the modified equinoctial elements [9] were introduced, and they are selected in this work as the variables that identify the dynamical state of the space vehicle. These elements are defined as [3, 9]

$$ x_1 = a\left(1 - e^2\right), \quad x_2 = e\cos(\Omega + \omega), \quad x_3 = e\sin(\Omega + \omega), $$
$$ x_4 = \tan\frac{i}{2}\cos\Omega, \quad x_5 = \tan\frac{i}{2}\sin\Omega, \quad x_6 = \Omega + \omega + f $$

It is straightforward to recognize that \(x_1\) represents the orbit semilatus rectum. Unlike the classical orbit elements, the modified equinoctial elements are never singular, with the only exception of \(i = \pi\) (a condition unlikely to be encountered, because equatorial retrograde orbits are rather impractical). If \(\vartheta := 1 + x_2\cos x_6 + x_3\sin x_6\), the instantaneous radius is \(r = x_1/\vartheta\). The classical orbit elements can be retrieved by inverting Eq. (1). The spacecraft position can be written in terms of a, e, i, \(\Omega\), \(\omega\), and f [41], or can be computed directly from the equinoctial elements [19].
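For readers who wish to reproduce the computations, the map of Eq. (1) is straightforward to implement. The following minimal sketch applies it directly; the function names and the use of NumPy are illustrative choices, not part of the original work:

```python
import numpy as np

def classical_to_equinoctial(a, e, i, raan, argp, f):
    """Map classical orbit elements to modified equinoctial elements, Eq. (1).

    All angles are in radians; the map is singular only at i = pi.
    """
    x1 = a * (1.0 - e**2)                    # semilatus rectum
    x2 = e * np.cos(raan + argp)
    x3 = e * np.sin(raan + argp)
    x4 = np.tan(0.5 * i) * np.cos(raan)
    x5 = np.tan(0.5 * i) * np.sin(raan)
    x6 = raan + argp + f                     # true longitude
    return np.array([x1, x2, x3, x4, x5, x6])

def radius(x):
    """Instantaneous radius r = x1 / theta."""
    x1, x2, x3, _, _, x6 = x
    return x1 / (1.0 + x2 * np.cos(x6) + x3 * np.sin(x6))
```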
Equations of Motion

The dynamical evolution of the modified equinoctial elements is governed by the respective Gauss equations [3],

$$ \dot{x}_1 = \frac{2}{\vartheta}\sqrt{\frac{x_1^3}{\mu}}\, a_\theta $$

$$ \dot{x}_2 = \sqrt{\frac{x_1}{\mu}}\left[ a_r \sin x_6 + a_\theta\, \frac{(\vartheta + 1)\cos x_6 + x_2}{\vartheta} - a_h\, \frac{x_4 \sin x_6 - x_5 \cos x_6}{\vartheta}\, x_3 \right] $$

$$ \dot{x}_3 = \sqrt{\frac{x_1}{\mu}}\left[ -a_r \cos x_6 + a_\theta\, \frac{(\vartheta + 1)\sin x_6 + x_3}{\vartheta} + a_h\, \frac{x_4 \sin x_6 - x_5 \cos x_6}{\vartheta}\, x_2 \right] $$

$$ \dot{x}_4 = a_h \sqrt{\frac{x_1}{\mu}}\, \frac{1 + x_4^2 + x_5^2}{2\vartheta}\cos x_6 $$

$$ \dot{x}_5 = a_h \sqrt{\frac{x_1}{\mu}}\, \frac{1 + x_4^2 + x_5^2}{2\vartheta}\sin x_6 $$

$$ \dot{x}_6 = \sqrt{\frac{\mu}{x_1^3}}\,\vartheta^2 + a_h \sqrt{\frac{x_1}{\mu}}\, \frac{x_4 \sin x_6 - x_5 \cos x_6}{\vartheta} $$

where \(\mu\) is the gravitational parameter of the attracting body. The terms \(\{a_r, a_\theta, a_h\}\) are the components of the non-Keplerian acceleration a in the local vertical local horizontal (LVLH) frame aligned with \((\hat{r}, \hat{\theta}, \hat{h})\), where the unit vector \(\hat{r}\) is directed along the instantaneous position vector r (taken from the center of the attracting body), whereas \(\hat{h}\) is aligned with the spacecraft angular momentum.

The modified equinoctial elements allow identifying the instantaneous position and velocity of the spacecraft, which is controlled using the thrust supplied by the propulsion system. Let \(T_{max}\) and \(m_0\) represent the maximum available thrust magnitude and the initial mass of the space vehicle. If \(x_7\) denotes the mass ratio and T the thrust magnitude, the following equation can be obtained for \(x_7\):

$$ \dot{x}_7 := \frac{\dot{m}}{m_0} = -\frac{u_T}{c}, \quad \text{with} \quad 0 \le u_T \le u_T^{(max)} \quad \left( u_T := \frac{T}{m_0} \;\; \text{and} \;\; u_T^{(max)} := \frac{T_{max}}{m_0} \right) $$

where c represents the (constant) effective exhaust velocity of the propulsion system, whereas m is the instantaneous mass. The magnitude of the instantaneous thrust acceleration is \(a_T = u_T m_0/m = u_T/x_7\) and is constrained to the interval \(0 \le a_T \le a_T^{(max)}\), where \(a_T^{(max)} = u_T^{(max)}/x_7\). The thrust acceleration \(\mathbf{T}/m\) is assumed to be the only non-Keplerian contribution in Eqs. (2)-(7), thus \(\mathbf{a} = \mathbf{T}/m = \mathbf{T}/(m_0 x_7) = \mathbf{u}_T/x_7\), where \(\mathbf{u}_T\ (= \mathbf{T}/m_0)\) has magnitude constrained to the interval \([0, u_T^{(max)}]\). The thrust direction is identified by means of the two thrust angles \(\alpha\ (-\pi \le \alpha < \pi)\) and \(\beta\ (-\pi/2 \le \beta \le \pi/2)\),

$$ a_r = \frac{u_T}{x_7}\cos\beta\sin\alpha, \quad a_\theta = \frac{u_T}{x_7}\cos\beta\cos\alpha, \quad a_h = \frac{u_T}{x_7}\sin\beta $$

In the end, the spacecraft dynamics is described using the state vector x and the control vector u defined as

$$ \mathbf{x} := \left[ x_1 \;\; x_2 \;\; x_3 \;\; x_4 \;\; x_5 \;\; x_6 \;\; x_7 \right]^T \quad \text{and} \quad \mathbf{u} := \left[ \alpha \;\; \beta \;\; u_T \right]^T $$

In light of Eq. (10), the state Eqs. (2)-(8) can be written in compact form as

$$ \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{u}, t) $$
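To make the propagation concrete, the sketch below collects Eqs. (2)-(9) into a single state-derivative routine suitable for numerical integration. The function name, the use of NumPy, and the Earth value of \(\mu\) in km-s units are illustrative assumptions rather than part of the original formulation:

```python
import numpy as np

MU = 398600.4418  # km^3 s^-2, Earth gravitational parameter (assumed units)

def state_derivative(x, u, c):
    """Right-hand side of Eqs. (2)-(8); x = [x1,...,x7], u = [alpha, beta, uT]."""
    x1, x2, x3, x4, x5, x6, x7 = x
    alpha, beta, uT = u
    # LVLH thrust acceleration components, Eq. (9)
    ar = uT / x7 * np.cos(beta) * np.sin(alpha)
    at = uT / x7 * np.cos(beta) * np.cos(alpha)
    ah = uT / x7 * np.sin(beta)
    th = 1.0 + x2 * np.cos(x6) + x3 * np.sin(x6)          # theta
    sq = np.sqrt(x1 / MU)
    w = x4 * np.sin(x6) - x5 * np.cos(x6)
    s2 = 1.0 + x4**2 + x5**2
    return np.array([
        2.0 * x1 * sq / th * at,                                         # Eq. (2)
        sq * (ar * np.sin(x6) + at * ((th + 1.0) * np.cos(x6) + x2) / th
              - ah * w * x3 / th),                                       # Eq. (3)
        sq * (-ar * np.cos(x6) + at * ((th + 1.0) * np.sin(x6) + x3) / th
              + ah * w * x2 / th),                                       # Eq. (4)
        ah * sq * s2 / (2.0 * th) * np.cos(x6),                          # Eq. (5)
        ah * sq * s2 / (2.0 * th) * np.sin(x6),                          # Eq. (6)
        np.sqrt(MU / x1**3) * th**2 + ah * sq * w / th,                  # Eq. (7)
        -uT / c,                                                         # Eq. (8)
    ])
```

Note that only the true-longitude equation retains a Keplerian term, which is what makes the remaining five elements constant along coast arcs.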
Spacecraft Eclipse Condition

Some types of low-thrust propulsion systems [48] require onboard electrical power in order to operate. In most cases, this can be provided only when the space vehicle is illuminated, which implies that propulsion must be considered unavailable when the spacecraft is eclipsed. This subsection focuses on the eclipse condition for space vehicles that orbit the Earth. As a preliminary step, the spacecraft position in a proper inertial frame is derived. The Earth-centered inertial (ECI) frame has its origin at the Earth center and axes aligned with the right-hand sequence of unit vectors \((\hat{c}_1, \hat{c}_2, \hat{c}_3)\). Vector \(\hat{c}_1\) identifies the vernal axis, which corresponds to the intersection of the Earth equatorial plane with the ecliptic plane, whereas \(\hat{c}_3\) points toward the Earth rotation axis. In the ECI frame, the spacecraft Cartesian coordinates can be written in terms of the classical orbit elements, as shown in Ref. 41. Then, relatively long trigonometric steps (omitted for the sake of brevity), based on the definitions (1), lead to the following expressions for the three coordinates X, Y, Z:

$$ X = \frac{x_1}{\vartheta}\,\frac{(x_4^2 - x_5^2 + 1)\cos x_6 + 2 x_4 x_5 \sin x_6}{x_4^2 + x_5^2 + 1}, \quad Y = \frac{x_1}{\vartheta}\,\frac{(x_5^2 - x_4^2 + 1)\sin x_6 + 2 x_4 x_5 \cos x_6}{x_4^2 + x_5^2 + 1}, \quad Z = \frac{2 x_1}{\vartheta}\,\frac{x_4 \sin x_6 - x_5 \cos x_6}{x_4^2 + x_5^2 + 1} $$

Under the very reasonable (approximating) assumption that the Earth describes a circular orbit about the Sun, the position of the latter in the ECI frame is identified by the three coordinates \(X_S\), \(Y_S\), and \(Z_S\),

$$ \vec{\mathbf{r}}_S := \left[ X_S \;\; Y_S \;\; Z_S \right] \left[ \hat{c}_1 \;\; \hat{c}_2 \;\; \hat{c}_3 \right]^T = r_S \left[ \cos\theta_S \;\; \sin\theta_S\cos\varepsilon \;\; \sin\theta_S\sin\varepsilon \right] \left[ \hat{c}_1 \;\; \hat{c}_2 \;\; \hat{c}_3 \right]^T $$

where \(r_S\) equals 1 AU, \(\varepsilon\ (= 23.4 \text{ deg})\) is the ecliptic obliquity, whereas \(\theta_S\) identifies the instantaneous position of the Sun. If \(\theta_{S0}\) denotes the value of \(\theta_S\) at the initial time \(t_0\) (set to 0) and \(\omega_S\ (= 2\pi/T_S;\ T_S = 1 \text{ year})\) is the (constant) angular rate of the Sun in its motion relative to the Earth, then \(\theta_S = \omega_S t + \theta_{S0}\). Once the spacecraft and Sun positions are specified in the ECI frame, the condition for eclipsing is [12]

$$ \phi_1 + \phi_2 \le \xi, \quad \text{with} \quad \xi = \arccos\left(\frac{\vec{\mathbf{r}} \cdot \vec{\mathbf{r}}_S}{r_S\, r}\right), \quad \phi_1 = \arccos\left(\frac{R_E}{r}\right), \quad \phi_2 = \arccos\left(\frac{R_E}{r_S}\right) $$

where \(R_E\) is the Earth radius. Because \(r_S \gg R_E\), \(\phi_2 \simeq \pi/2\), and the eclipse condition (14) becomes

$$ \cos\xi + \sin\phi_1 \le 0 \;\; \Rightarrow \;\; \frac{\vec{\mathbf{r}} \cdot \vec{\mathbf{r}}_S}{r_S\, r} + \sqrt{1 - \frac{R_E^2}{r^2}} \le 0 $$

Insertion of Eqs. (12) and (13) into (15) yields

$$ \begin{gathered} x_1 \left\{ \left[ (x_4^2 - x_5^2 + 1)\cos x_6 + 2 x_4 x_5 \sin x_6 \right]\cos\theta_S + \left[ (x_5^2 - x_4^2 + 1)\sin x_6 + 2 x_4 x_5 \cos x_6 \right]\sin\theta_S\cos\varepsilon \right. \\ \left. +\, 2\left( x_4 \sin x_6 - x_5 \cos x_6 \right)\sin\theta_S\sin\varepsilon \right\} + \left( x_4^2 + x_5^2 + 1 \right)\sqrt{x_1^2 - R_E^2\left( 1 + x_2\cos x_6 + x_3\sin x_6 \right)^2} \le 0 \end{gathered} $$

The usefulness of the previous relation will be apparent in the next section. It is worth remarking that some trajectory arcs can be traveled in penumbral or antumbral conditions, related to the relative positions of the Earth, the Moon, and the Sun. However, in the great majority of mission scenarios these arcs are so short that propulsion can remain ignited. For this reason, penumbral and antumbral conditions are neglected in this study.
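A possible implementation of the shadow function, i.e. the left-hand side of condition (15) evaluated through Eqs. (12) and (13), is sketched below. The constants and the function signature are assumptions for illustration; a sign change of the returned value marks a light/shadow transition:

```python
import numpy as np

R_E = 6378.137           # km, Earth radius (assumed value)
EPS = np.radians(23.4)   # ecliptic obliquity

def shadow_function(x, t, omega_s, theta_s0):
    """Left-hand side of Eq. (15), via Eqs. (12)-(13): <= 0 when eclipsed."""
    x1, x2, x3, x4, x5, x6 = x[:6]
    th = 1.0 + x2 * np.cos(x6) + x3 * np.sin(x6)
    s2 = x4**2 + x5**2 + 1.0
    # spacecraft ECI position, Eq. (12)
    X = x1 / th * ((x4**2 - x5**2 + 1.0) * np.cos(x6) + 2.0 * x4 * x5 * np.sin(x6)) / s2
    Y = x1 / th * ((x5**2 - x4**2 + 1.0) * np.sin(x6) + 2.0 * x4 * x5 * np.cos(x6)) / s2
    Z = 2.0 * x1 / th * (x4 * np.sin(x6) - x5 * np.cos(x6)) / s2
    r = x1 / th
    # unit vector toward the Sun, Eq. (13) divided by r_S
    theta_s = omega_s * t + theta_s0
    rs_hat = np.array([np.cos(theta_s),
                       np.sin(theta_s) * np.cos(EPS),
                       np.sin(theta_s) * np.sin(EPS)])
    cos_xi = np.dot(np.array([X, Y, Z]), rs_hat) / r
    return cos_xi + np.sqrt(1.0 - (R_E / r)**2)
```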
Minimum-Time Trajectories with Eclipse Intervals

Low-thrust propulsion systems usually require considerable electrical power to operate. For this reason, they are ignited only when the satellite is illuminated. This section is devoted to investigating minimum-time transfer paths from an initial to a final Earth orbit, assuming availability of the thrust only in the intervals when the space vehicle is illuminated. The spacecraft of interest is governed by the state equations (11) and is subject to some (problem-dependent) boundary conditions of the form

$$ \boldsymbol{\zeta}(\mathbf{x}_0, \mathbf{x}_f, t_f) = \mathbf{0} $$

where \(\boldsymbol{\zeta}\) denotes a generic column vector, which depends on the final time \(t_f\) and on the initial and final states \(\mathbf{x}_0\) and \(\mathbf{x}_f\). These conditions usually include the relations that define the initial and final orbits. As an example, the two terminal orbits can be defined by the respective values of the first five equinoctial elements; in this case \(\boldsymbol{\zeta}\) includes 10 components of the form \(\{x_{j,0} - \bar{x}_{j,0}\}_{j=1,\dots,5}\) and \(\{x_{j,f} - \bar{x}_{j,f}\}_{j=1,\dots,5}\), where the barred symbols denote specified values, whereas subscripts 0 and f refer to the initial and final value of the corresponding variable. The initial time is assumed specified and is set to 0. The objective function J to minimize is the time of flight, therefore

$$ J = t_f $$

For the orbit transfer problem at hand, the thrust is available only when the spacecraft is illuminated, i.e. when inequality (16) is violated. It is quite obvious that the entire trajectory is composed of an unspecified number N of eclipse arcs and light arcs. Therefore, this becomes a multiple-arc trajectory optimization problem. In each arc j, the state equations are

$$ \dot{\mathbf{x}}^{(j)} = \mathbf{f}^{(j)}(\mathbf{x}^{(j)}, \mathbf{u}^{(j)}, t) $$

Thrust is switched off along the eclipse arcs (\(u_T^{(j)} \equiv 0\)), where inequality (16) is fulfilled. This means that the optimal control law must be determined only in the light arcs. Unlike the control vector u, the state x is continuous across two adjacent arcs, i.e.

$$ \mathbf{x}_{ini}^{(j+1)} = \mathbf{x}_{fin}^{(j)} \quad (j = 1, \dots, N-1) $$

For each variable, the subscripts ini and fin denote the initial and final value in the arc with index reported in the superscript. Furthermore, the transition between two adjacent arcs (either an eclipse arc followed by a light arc or a light arc followed by an eclipse arc) corresponds to fulfillment of the equality associated with Eq. (16), rewritten in compact form as

$$ \psi^{(j)}(\mathbf{x}_{fin}^{(j)}, t_j) = 0 \quad (j = 1, \dots, N-1) $$

The symbol \(\mathbf{x}_{fin}^{(j)}\) collects all the state components that appear in the left-hand side of Eq. (16), evaluated at the end of arc j, which occurs at time \(t_j\). It is worth noticing that \(\psi^{(j)}\) depends explicitly on \(t_j\) through the variable \(\theta_S\), evaluated at \(t_j\) (i.e. \(\theta_{S,j} = \omega_S t_j + \theta_{S0}\)).
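By way of example, for the orbit-to-orbit transfer just mentioned, where the terminal orbits are specified through their first five equinoctial elements, the residual vector \(\boldsymbol{\zeta}\) of Eq. (17) reduces to ten differences. A minimal sketch (the function name is an illustrative assumption):

```python
import numpy as np

def boundary_residuals(x0, xf, x0_bar, xf_bar):
    """Residual vector zeta of Eq. (17) for an orbit-to-orbit transfer.

    x0_bar, xf_bar hold the specified first five equinoctial elements of the
    initial and final orbits; x6 and the final mass ratio remain free.
    """
    return np.concatenate([x0[:5] - np.asarray(x0_bar),
                           xf[:5] - np.asarray(xf_bar)])
```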
In the end, the multiple-arc problem consists in finding the optimal control u that minimizes the objective function (18), while satisfying the state equations (19) and the multipoint conditions (17), (20), and (21). Multiple-arc optimal control problems were also investigated by the author in Ref. [39].

Necessary Conditions for Optimality

In order to derive the necessary conditions for optimality, the extended objective function \(\bar{J}\) is defined,

$$ \bar{J} = t_f + \boldsymbol{\sigma}^T \boldsymbol{\zeta}(\mathbf{x}_0, \mathbf{x}_f, t_f) + \sum_{j=1}^{N-1}\left[ \boldsymbol{\upsilon}_j^T\left( \mathbf{x}_{ini}^{(j+1)} - \mathbf{x}_{fin}^{(j)} \right) + \xi_j\, \psi^{(j)}(\mathbf{x}_{fin}^{(j)}, t_j) \right] + \sum_{j=1}^{N} \int_{t_{j-1}}^{t_j} \boldsymbol{\lambda}^{(j)T}\left[ \mathbf{f}^{(j)}(\mathbf{x}^{(j)}, \mathbf{u}^{(j)}, t) - \dot{\mathbf{x}}^{(j)} \right] dt $$

where \(\boldsymbol{\sigma}\), \(\boldsymbol{\upsilon}_j\), and \(\xi_j\) are time-independent adjoint variables conjugate to the multipoint conditions (17), (20), and (21), whereas \(\boldsymbol{\lambda}^{(j)}\) is the time-varying costate vector associated with the state equations (19), and \(t_N \equiv t_f\). N Hamiltonian functions \(H^{(j)}\ (j = 1, \dots, N)\) and a function \(\Phi\) are introduced,

$$ H^{(j)} = \boldsymbol{\lambda}^{(j)T} \mathbf{f}^{(j)}(\mathbf{x}^{(j)}, \mathbf{u}^{(j)}, t) $$

$$ \Phi := t_f + \boldsymbol{\sigma}^T \boldsymbol{\zeta}(\mathbf{x}_0, \mathbf{x}_f, t_f) + \sum_{j=1}^{N-1}\left[ \boldsymbol{\upsilon}_j^T\left( \mathbf{x}_{ini}^{(j+1)} - \mathbf{x}_{fin}^{(j)} \right) + \xi_j\, \psi^{(j)}(\mathbf{x}_{fin}^{(j)}, t_j) \right] $$

and the extended objective function \(\bar{J}\) is rewritten as

$$ \bar{J} = \Phi\left( \mathbf{x}_0, \mathbf{x}_f, t_f, \mathbf{x}_{ini}^{(2)}, \dots, \mathbf{x}_{ini}^{(N)}, \mathbf{x}_{fin}^{(1)}, \dots, \mathbf{x}_{fin}^{(N-1)}, t_1, \dots, t_{N-1}, \boldsymbol{\sigma}, \boldsymbol{\upsilon}_1, \dots, \boldsymbol{\upsilon}_{N-1}, \xi_1, \dots, \xi_{N-1} \right) + \sum_{j=1}^{N} \int_{t_{j-1}}^{t_j} \left[ H^{(j)}(\mathbf{x}^{(j)}, \mathbf{u}^{(j)}, \boldsymbol{\lambda}^{(j)}, t) - \boldsymbol{\lambda}^{(j)T} \dot{\mathbf{x}}^{(j)} \right] dt $$

The first differential \(d\bar{J}\) can be obtained by using the steps illustrated in the Appendix, and is

$$ \begin{gathered} d\bar{J} = \sum_{j=1}^{N-1}\left[ \left( \frac{\partial\Phi}{\partial\mathbf{x}_{ini}^{(j+1)}} + \boldsymbol{\lambda}_{ini}^{(j+1)T} \right) d\mathbf{x}_{ini}^{(j+1)} + \left( \frac{\partial\Phi}{\partial\mathbf{x}_{fin}^{(j)}} - \boldsymbol{\lambda}_{fin}^{(j)T} \right) d\mathbf{x}_{fin}^{(j)} + \left( \frac{\partial\Phi}{\partial t_j} + H_{fin}^{(j)} - H_{ini}^{(j+1)} \right) dt_j \right] \\ + \left( \frac{\partial\Phi}{\partial\mathbf{x}_0} + \boldsymbol{\lambda}_0^T \right) d\mathbf{x}_0 + \left( \frac{\partial\Phi}{\partial\mathbf{x}_f} - \boldsymbol{\lambda}_f^T \right) d\mathbf{x}_f + \left( H_f + \frac{\partial\Phi}{\partial t_f} \right) dt_f + \sum_{j=1}^{N} \int_{t_{j-1}}^{t_j} \left[ \left( \dot{\boldsymbol{\lambda}}^{(j)T} + \frac{\partial H^{(j)}}{\partial\mathbf{x}^{(j)}} \right) \delta\mathbf{x}^{(j)} + \frac{\partial H^{(j)}}{\partial\mathbf{u}^{(j)}}\,\delta\mathbf{u}^{(j)} \right] dt \end{gathered} $$

where subscripts 0 and f refer to the initial and final time, respectively, whereas the symbol \(\delta\) denotes the variation, i.e. the time-fixed differential, using the notation of Ref. 22 (cf. also the Appendix). The first differential must vanish at an extremal [22], for arbitrary values of \(\{d\mathbf{x}_{ini}^{(j+1)}, d\mathbf{x}_{fin}^{(j)}, dt_j\}_{j=1,\dots,N-1}\), \(d\mathbf{x}_0\), \(d\mathbf{x}_f\), \(dt_f\), \(\{\delta\mathbf{x}^{(j)}, \delta\mathbf{u}^{(j)}\}_{j=1,\dots,N}\), and this implies that the following necessary conditions for optimality must hold:

$$ \boldsymbol{\lambda}_{ini}^{(j+1)} + \left[ \frac{\partial\Phi}{\partial\mathbf{x}_{ini}^{(j+1)}} \right]^T = \mathbf{0} \;\; \Rightarrow \;\; \boldsymbol{\lambda}_{ini}^{(j+1)} = -\boldsymbol{\upsilon}_j \quad (j = 1, \dots, N-1) $$

$$ \boldsymbol{\lambda}_{fin}^{(j)} - \left[ \frac{\partial\Phi}{\partial\mathbf{x}_{fin}^{(j)}} \right]^T = \mathbf{0} \;\; \Rightarrow \;\; \boldsymbol{\lambda}_{fin}^{(j)} = -\boldsymbol{\upsilon}_j + \xi_j \left[ \frac{\partial\psi^{(j)}}{\partial\mathbf{x}_{fin}^{(j)}} \right]^T \quad (j = 1, \dots, N-1) $$

$$ \frac{\partial\Phi}{\partial t_j} + H_{fin}^{(j)} - H_{ini}^{(j+1)} = 0 \;\; \Rightarrow \;\; H_{ini}^{(j+1)} = H_{fin}^{(j)} + \xi_j\, \frac{\partial\psi^{(j)}}{\partial t_j} \quad (j = 1, \dots, N-1) $$

$$ \boldsymbol{\lambda}_0 + \left[ \frac{\partial\Phi}{\partial\mathbf{x}_0} \right]^T = \mathbf{0} \;\; \Rightarrow \;\; \boldsymbol{\lambda}_0 = -\left[ \frac{\partial\boldsymbol{\zeta}}{\partial\mathbf{x}_0} \right]^T \boldsymbol{\sigma} $$

$$ \boldsymbol{\lambda}_f - \left[ \frac{\partial\Phi}{\partial\mathbf{x}_f} \right]^T = \mathbf{0} \;\; \Rightarrow \;\; \boldsymbol{\lambda}_f = \left[ \frac{\partial\boldsymbol{\zeta}}{\partial\mathbf{x}_f} \right]^T \boldsymbol{\sigma} $$

$$ H_f + \frac{\partial\Phi}{\partial t_f} = 0 \;\; \Rightarrow \;\; H_f = -1 - \left[ \frac{\partial\boldsymbol{\zeta}}{\partial t_f} \right]^T \boldsymbol{\sigma} $$

$$ \dot{\boldsymbol{\lambda}}^{(j)} = -\left[ \frac{\partial H^{(j)}}{\partial\mathbf{x}^{(j)}} \right]^T $$

$$ \left[ \frac{\partial H^{(j)}}{\partial\mathbf{u}^{(j)}} \right]^T = \mathbf{0} $$

The last condition, which implies stationarity of \(H^{(j)}\) with respect to \(\mathbf{u}^{(j)}\), can be replaced by the more general Pontryagin minimum principle [32],

$$ \mathbf{u}_*^{(j)} = \arg\min_{\mathbf{u}^{(j)}} H^{(j)} $$

where \(H^{(j)}\) is defined in Eq. (23), whereas the subscript * denotes the optimal value of the corresponding variable. Equation (33) is the adjoint equation for the costate variables, accompanied by the related boundary conditions (30) and (31) and by the multipoint conditions (27) and (28). Because \(\boldsymbol{\zeta}\) appears in Eqs. (30) and (31), their explicit form is problem-dependent, unlike what occurs for the adjoint equations (33). Moreover, Eqs. (29) and (32) hold for the Hamiltonian functions, evaluated at times \(\{t_1, \dots, t_{N-1}, t_N \equiv t_f\}\). While Eqs. (30)-(35) are formally identical to the necessary conditions that hold for ordinary (single-arc) optimization problems, Eqs. (27)-(29) represent additional relations, termed multipoint necessary conditions henceforth.

Using Eqs. (2)-(9) and (23), H can be rewritten as

$$ H^{(j)} = \frac{u_T^{(j)}}{x_7^{(j)}}\left[ \boldsymbol{\chi}_r^T \boldsymbol{\lambda}^{(j)} \cos\beta^{(j)} \sin\alpha^{(j)} + \boldsymbol{\chi}_\theta^T \boldsymbol{\lambda}^{(j)} \cos\beta^{(j)} \cos\alpha^{(j)} + \boldsymbol{\chi}_h^T \boldsymbol{\lambda}^{(j)} \sin\beta^{(j)} - \lambda_7^{(j)}\, \frac{x_7^{(j)}}{c} \right] + \boldsymbol{\chi}^T \boldsymbol{\lambda}^{(j)} $$

where \(\boldsymbol{\chi}_r\), \(\boldsymbol{\chi}_\theta\), \(\boldsymbol{\chi}_h\), and \(\boldsymbol{\chi}\) (whose expressions are not reported for the sake of brevity) depend on \(\mathbf{z}^{(j)} := \left[ x_1^{(j)} \;\; x_2^{(j)} \;\; x_3^{(j)} \;\; x_4^{(j)} \;\; x_5^{(j)} \;\; x_6^{(j)} \right]^T\). The adjoint variable \(\lambda_7^{(j)}\), which is the seventh component of \(\boldsymbol{\lambda}^{(j)}\), deserves special attention.
Due to Eqs. (33) and (36), the costate equation for \(\lambda_7^{(j)}\) is

$$ \dot{\lambda}_7^{(j)} = -\frac{\partial H^{(j)}}{\partial x_7^{(j)}} = \frac{u_T^{(j)}}{\left(x_7^{(j)}\right)^2}\left[ \boldsymbol{\chi}_r^T \boldsymbol{\lambda}^{(j)} \cos\beta^{(j)} \sin\alpha^{(j)} + \boldsymbol{\chi}_\theta^T \boldsymbol{\lambda}^{(j)} \cos\beta^{(j)} \cos\alpha^{(j)} + \boldsymbol{\chi}_h^T \boldsymbol{\lambda}^{(j)} \sin\beta^{(j)} \right] $$

Moreover, because \(\psi^{(j)}\) is independent of \(x_7^{(j)}\) (cf. Eq. (16)), combination of Eqs. (27) and (28) yields

$$ \lambda_{7,ini}^{(j+1)} = \lambda_{7,fin}^{(j)} \quad (j = 1, \dots, N-1) $$

Because the final mass is unspecified, the boundary conditions (17) are independent of \(x_{7,f}\). As a result, Eq. (31) yields

$$ \lambda_{7,f} = 0 $$

While in the eclipse arcs the spacecraft is subject to no control, due to the unavailability of low thrust, in the light arcs the minimum principle (35) allows expressing the optimal control in terms of the state and costate variables. With reference to Eq. (36), the first three terms in square parentheses can be regarded as a dot product. Thus, since \(u_T/x_7 \ge 0\), the thrust angles that minimize \(H^{(j)}\) are given by

$$ \sin\beta^{(j)} = -\boldsymbol{\chi}_h^T \boldsymbol{\lambda}^{(j)} \left\{ \left[ \boldsymbol{\chi}_r^T \boldsymbol{\lambda}^{(j)} \right]^2 + \left[ \boldsymbol{\chi}_\theta^T \boldsymbol{\lambda}^{(j)} \right]^2 + \left[ \boldsymbol{\chi}_h^T \boldsymbol{\lambda}^{(j)} \right]^2 \right\}^{-1/2} $$

$$ \sin\alpha^{(j)} = -\boldsymbol{\chi}_r^T \boldsymbol{\lambda}^{(j)} \left\{ \left[ \boldsymbol{\chi}_r^T \boldsymbol{\lambda}^{(j)} \right]^2 + \left[ \boldsymbol{\chi}_\theta^T \boldsymbol{\lambda}^{(j)} \right]^2 \right\}^{-1/2} \quad \text{and} \quad \cos\alpha^{(j)} = -\boldsymbol{\chi}_\theta^T \boldsymbol{\lambda}^{(j)} \left\{ \left[ \boldsymbol{\chi}_r^T \boldsymbol{\lambda}^{(j)} \right]^2 + \left[ \boldsymbol{\chi}_\theta^T \boldsymbol{\lambda}^{(j)} \right]^2 \right\}^{-1/2} $$

Using these expressions for \(\alpha^{(j)}\) and \(\beta^{(j)}\), Eqs. (36) and (37) become

$$ H^{(j)} = -\frac{u_T^{(j)}}{x_7^{(j)}}\left[ \sqrt{\left(\boldsymbol{\chi}_r^T \boldsymbol{\lambda}^{(j)}\right)^2 + \left(\boldsymbol{\chi}_\theta^T \boldsymbol{\lambda}^{(j)}\right)^2 + \left(\boldsymbol{\chi}_h^T \boldsymbol{\lambda}^{(j)}\right)^2} + \frac{x_7^{(j)} \lambda_7^{(j)}}{c} \right] + \boldsymbol{\chi}^T \boldsymbol{\lambda}^{(j)} $$

$$ \dot{\lambda}_7^{(j)} = -\frac{\partial H^{(j)}}{\partial x_7^{(j)}} = -\frac{u_T^{(j)}}{\left(x_7^{(j)}\right)^2}\sqrt{\left(\boldsymbol{\chi}_r^T \boldsymbol{\lambda}^{(j)}\right)^2 + \left(\boldsymbol{\chi}_\theta^T \boldsymbol{\lambda}^{(j)}\right)^2 + \left(\boldsymbol{\chi}_h^T \boldsymbol{\lambda}^{(j)}\right)^2} \le 0 $$

The latter relation, in conjunction with the final condition (39) and the continuity condition (38), implies that \(\lambda_7^{(j)}\) is nonnegative at all times. Moreover, because \(\lambda_7^{(j)} \ge 0\), applying the Pontryagin minimum principle to Eq. (42) yields the optimal value of \(u_T\),

$$ u_T = u_T^{(max)} $$

It is worth remarking that Eqs. (40), (41), and (44) are meaningful only for light arcs, where the optimal thrust magnitude corresponds to the maximum value \(u_T^{(max)}\ (= T_{max}/m_0)\), as prescribed by Eq. (44). In the end, the necessary conditions for optimality (27)-(33) and (35), in conjunction with the state equations (19) and the multipoint conditions (20)-(21), allow converting the original optimal control problem into a two-point boundary value problem (TPBVP), where the unknowns are the state \(\mathbf{x}^{(j)}\ (j = 1, \dots, N)\), the control \(\mathbf{u}^{(j)}\ (j = 1, \dots, N)\), the intermediate and final times \(\{t_1, \dots, t_{N-1}, t_N \equiv t_f\}\), and the adjoint variables \(\boldsymbol{\lambda}^{(j)}\ (j = 1, \dots, N)\), \(\boldsymbol{\sigma}\), \(\boldsymbol{\upsilon}_k\), and \(\xi_k\ (k = 1, \dots, N-1)\).

Sequential Solution of the Multipoint Necessary Conditions for Optimality

The formulation of the orbit transfer with eclipse intervals as a multiple-arc optimization problem leads to establishing an extended set of necessary conditions for optimality. In particular, the multipoint necessary conditions for optimality (27)-(29), together with the multipoint conditions (20) and (21), represent a considerable number of additional relations to satisfy in the numerical solution process. However, Eqs. (27)-(29) can be combined, for the purpose of developing an advantageous methodology to enforce them.
(27) into (28) yields $$ {{\varvec{\uplambda}}}_{fin}^{\left( j \right)} = {{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}} { + }\xi_{j} \left[ {\frac{{\partial \psi^{\left( j \right)} }}{{\partial {\mathbf{x}}_{fin}^{\left( j \right)} }}} \right]^{T} \, \left( {j = 1, \ldots ,N - 1} \right) $$ Moreover, Eq. (29) can be solved for \(\xi_{j}\), using also Eq. (42) to express the Hamiltonian functions \(H_{ini}^{{\left( {j + 1} \right)}} {\text{ and }}H_{fin}^{\left( j \right)}\), $$ \begin{aligned} \xi_{j} = - \left( {\frac{{\partial \psi^{\left( j \right)} }}{{\partial t_{j} }}} \right)^{ - 1} \left\{ { - \frac{{u_{T,j}^{{\left( {max} \right)}} }}{{x_{7,fin}^{\left( j \right)} }}\left[ {\sqrt {\left( {{{\varvec{\upchi}}}_{r}^{T} {{\varvec{\uplambda}}}_{fin}^{\left( j \right)} } \right)^{2} + \left( {{{\varvec{\upchi}}}_{\theta }^{T} {{\varvec{\uplambda}}}_{fin}^{\left( j \right)} } \right)^{2} + \left( {{{\varvec{\upchi}}}_{h}^{T} {{\varvec{\uplambda}}}_{fin}^{\left( j \right)} } \right)^{2} } + \frac{{x_{7,fin}^{\left( j \right)} \lambda_{7,fin}^{\left( j \right)} }}{c}} \right] + {{\varvec{\upchi}}}_{{}}^{T} {{\varvec{\uplambda}}}_{fin}^{\left( j \right)} } \right\} \hfill \\ { + }\left( {\frac{{\partial \psi^{\left( j \right)} }}{{\partial t_{j} }}} \right)^{ - 1} \left\{ { - \frac{{u_{T,j + 1}^{{\left( {max} \right)}} }}{{x_{7,ini}^{{\left( {j + 1} \right)}} }}\left[ {\sqrt {\left( {{{\varvec{\upchi}}}_{r}^{T} {{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}} } \right)^{2} + \left( {{{\varvec{\upchi}}}_{\theta }^{T} {{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}} } \right)^{2} + \left( {{{\varvec{\upchi}}}_{h}^{T} {{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}} } \right)^{2} } + \frac{{x_{7,ini}^{{\left( {j + 1} \right)}} \lambda_{7,ini}^{{\left( {j + 1} \right)}} }}{c}} \right] + {{\varvec{\upchi}}}_{{}}^{T} {{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}} } \right\} \hfill \\ \end{aligned} $$ where \({{\varvec{\upchi}}}_{r}^{{}} : = {{\varvec{\upchi}}}_{r,ini}^{{\left( {j + 1} \right)}} \equiv {{\varvec{\upchi}}}_{r,fin}^{\left( j \right)}\), \({{\varvec{\upchi}}}_{\theta }^{{}} : = {{\varvec{\upchi}}}_{\theta ,ini}^{{\left( {j + 1} \right)}} \equiv {{\varvec{\upchi}}}_{\theta ,fin}^{\left( j \right)}\), \({{\varvec{\upchi}}}_{h}^{{}} : = {{\varvec{\upchi}}}_{h,ini}^{{\left( {j + 1} \right)}} \equiv {{\varvec{\upchi}}}_{h,fin}^{\left( j \right)}\), and \({{\varvec{\upchi}}}: = {{\varvec{\upchi}}}_{ini}^{{\left( {j + 1} \right)}} \equiv {{\varvec{\upchi}}}_{fin}^{\left( j \right)}\), because the state is continuous across adjacent arcs. Two cases can occur, with regard to Eq. (46): (a) transition from a light arc to an eclipse arc, which means that \(u_{T,j}^{{\left( {max} \right)}} = u_{T}^{{\left( {max} \right)}}\) and \(u_{T,j + 1}^{{\left( {max} \right)}} = 0\), and (b) transition from an eclipse arc to a light arc, implying \(u_{T,j}^{{\left( {max} \right)}} = 0\) and \(u_{T,j + 1}^{{\left( {max} \right)}} = u_{T}^{{\left( {max} \right)}}\). Case (a). Transition from a light arc to an eclipse arc. Insertion of Eq. (46) (with \(u_{T,j}^{{\left( {max} \right)}} = u_{T}^{{\left( {max} \right)}}\) and \(u_{T,j + 1}^{{\left( {max} \right)}} = 0\)) into Eq. 
(45) leads to obtaining the following relation: $$ \begin{gathered} \left\{ {\user2{I + }\left( {\frac{{\partial \psi^{\left( j \right)} }}{{\partial t_{j} }}} \right)^{ - 1} \left[ {\frac{{\partial \psi^{\left( j \right)} }}{{\partial {\mathbf{x}}_{fin}^{\left( j \right)} }}} \right]^{T} {{\varvec{\upchi}}}_{{}}^{T} } \right\}{{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}} = {{\varvec{\uplambda}}}_{fin}^{\left( j \right)} + \left[ {\frac{{\partial \psi^{\left( j \right)} }}{{\partial {\mathbf{x}}_{fin}^{\left( j \right)} }}} \right]^{T} \left( {\frac{{\partial \psi^{\left( j \right)} }}{{\partial t_{j} }}} \right)^{ - 1} \hfill \\ \cdot \left\{ { - \frac{{u_{T}^{{\left( {max} \right)}} }}{{x_{7,fin}^{\left( j \right)} }}\left[ {\sqrt {\left( {{{\varvec{\upchi}}}_{r}^{T} {{\varvec{\uplambda}}}_{fin}^{\left( j \right)} } \right)^{2} + \left( {{{\varvec{\upchi}}}_{\theta }^{T} {{\varvec{\uplambda}}}_{fin}^{\left( j \right)} } \right)^{2} + \left( {{{\varvec{\upchi}}}_{h}^{T} {{\varvec{\uplambda}}}_{fin}^{\left( j \right)} } \right)^{2} } + \frac{{x_{7,fin}^{\left( j \right)} \lambda_{7,fin}^{\left( j \right)} }}{c}} \right] + {{\varvec{\upchi}}}_{{}}^{T} {{\varvec{\uplambda}}}_{fin}^{\left( j \right)} } \right\} \hfill \\ \end{gathered} $$ where I denotes the identity matrix, with dimension appropriate to the context. Equation (47) can be regarded as a linear system with \({{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}}\) as the unknown. Therefore, \({{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}}\) can be obtained using Eq. (47), if all the other quantities (including \({{\varvec{\uplambda}}}_{fin}^{\left( j \right)}\)) are known. Case (b). Transition from an eclipse arc to a light arc. Insertion of Eq. (46) (with \(u_{T,j}^{{\left( {max} \right)}} = 0\) and \(u_{T,j + 1}^{{\left( {max} \right)}} = u_{T}^{{\left( {max} \right)}}\)) into Eq. (45) leads to obtaining the following vector equation: $$ \begin{gathered} {{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}} - {{\varvec{\uplambda}}}_{fin}^{\left( j \right)} - \left[ {\frac{{\partial \psi^{\left( j \right)} }}{{\partial {\mathbf{x}}_{fin}^{\left( j \right)} }}} \right]^{T} \left( {\frac{{\partial \psi^{\left( j \right)} }}{{\partial t_{j} }}} \right)^{ - 1} \left[ {{{\varvec{\upchi}}}_{{}}^{T} \left( {{{\varvec{\uplambda}}}_{fin}^{\left( j \right)} - {{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}} } \right) + \frac{{u_{T}^{{\left( {max} \right)}} \lambda_{7,ini}^{{\left( {j + 1} \right)}} }}{c}} \right] \hfill \\ = \left[ {\frac{{\partial \psi^{\left( j \right)} }}{{\partial {\mathbf{x}}_{fin}^{\left( j \right)} }}} \right]^{T} \left( {\frac{{\partial \psi^{\left( j \right)} }}{{\partial t_{j} }}} \right)^{ - 1} \frac{{u_{T}^{{\left( {max} \right)}} }}{{x_{7,ini}^{{\left( {j + 1} \right)}} }}\sqrt {\left( {{{\varvec{\upchi}}}_{r}^{T} {{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}} } \right)^{2} + \left( {{{\varvec{\upchi}}}_{\theta }^{T} {{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}} } \right)^{2} + \left( {{{\varvec{\upchi}}}_{h}^{T} {{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}} } \right)^{2} } \hfill \\ \end{gathered} $$ Squaring both sides of Eq. (48) leads to obtaining a system of quadratic equations with the components of \({{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}}\) as the unknowns. The entire set of solutions of this system can be obtained numerically (e.g., in MATLAB using the embedded function vpasolve). 
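For illustration, a minimal sketch of this step is reported below. All variable names are hypothetical: the arrays dpsi_dx (the gradient \(\partial \psi^{(j)}/\partial {\mathbf{x}}_{fin}^{(j)}\), stored as a row vector), the scalar dpsi_dt, the known multiplier lamFin, the unit vectors chi, chiR, chiTh, chiH, and the scalars uTmax, c, x7ini are assumed to be available from the two adjacent arcs.

% Sketch (illustrative names): pose the componentwise-squared form of
% Eq. (48) and collect all of its solutions with vpasolve.
lam  = sym('lam', [7 1], 'real');          % unknown lambda_ini^(j+1)
A    = dpsi_dx.' / dpsi_dt;                % [dpsi/dx_fin]^T (dpsi/dt_j)^(-1), 7x1
lhs  = lam - lamFin - A*(chi.'*(lamFin - lam) + uTmax*lam(7)/c);
rhs  = A*(uTmax/x7ini)*sqrt((chiR.'*lam)^2 + (chiTh.'*lam)^2 + (chiH.'*lam)^2);
sols = vpasolve(lhs.^2 == rhs.^2, lam);    % system of quadratic equations
cand = double(struct2array(sols));         % one candidate solution per row
cand = cand(all(abs(imag(cand)) < 1e-10, 2), :);   % retain the real solutions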
Then, among the real solutions, only those that preserve the sign consistency of Eq. (48) are acceptable. If more than one acceptable solution for \({{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}}\) exists, then the one that minimizes \(\left| {{{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}} - {{\varvec{\uplambda}}}_{fin}^{\left( j \right)} } \right|\) is selected. This choice is consistent with a straightforward property of the pair \(\left\{ {{{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}} ,{{\varvec{\uplambda}}}_{fin}^{\left( j \right)} } \right\}\): as the eclipse arc duration shrinks or as \(u_{T}^{{\left( {max} \right)}}\) tends to zero, \({{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)}}\) must tend to \({{\varvec{\uplambda}}}_{fin}^{\left( j \right)}\), so as to retrieve the continuity of the adjoint variables that holds for single-arc problems with no discontinuity in the available thrust magnitude. It is apparent that Eqs. (47)–(48) represent the matching conditions for the adjoint vector \({{\varvec{\uplambda}}}\) at times \(t_{j} \, \left( {j = 1, \ldots ,N - 1} \right)\). In the scientific literature, Cerf [10] provided the relations that identify the jump conditions for the adjoint variables associated with Cartesian coordinates, by employing the Weierstrass–Erdmann corner conditions. Equations (47)–(48) represent the jump relations for the adjoint variables associated with equinoctial elements. They are derived with reference to a transition condition of the general form reported in Eq. (21), using the general principles of variational calculus (cf. also Sect. 3.2 and Appendix), in conjunction with the Pontryagin minimum principle. It is straightforward to recognize that Eq. (38) is the seventh scalar equation associated with either Eq. (47) or Eq. (48). Based on these relations and the remaining necessary conditions for optimality, the numerical solution process can include the following steps:
Step 1. Identify the known initial values of the state and costate variables using Eqs. (17) and (30).
Step 2. Derive the (possible) relations between the initial values of the state and costate variables (i.e., the components of \({{\varvec{\uplambda}}}\)) by eliminating some components of \({{\varvec{\upsigma}}}\) from Eq. (30); this leads to identifying the minimal set of unknown initial values for the components of \({\mathbf{x}}_{0}\) and \({{\varvec{\uplambda}}}_{0}\).
Step 3. Select the unknown initial values for the state and costate components belonging to the minimal set identified at Step 2; calculate the remaining initial values of the state and costate components using the relations found at Step 2.
Step 4. Select the final time \(t_{f}\).
Step 5. Until the current time \(t \le t_{f}\), iterate the following sub-steps: (a) identify the type of arc (either a light arc or an eclipse arc); (b) integrate numerically the state and costate equations (19) and (33), setting \(u_{T}^{\left( j \right)} \equiv 0\) (in an eclipse arc) or using Eqs. (40), (41), and (44) to express the control in terms of state and costate components (along a light arc), until condition (21) is met or \(t = t_{f}\); (c) if \(t = t_{f}\), then stop the numerical integration, otherwise use the appropriate matching condition (either (47) or (48)) to find the initial value of the adjoint vector for the subsequent arc and repeat sub-steps 5(a) and 5(b).
Step 6. Evaluate the violations of the boundary conditions (17) and the necessary conditions (31)–(32). If these violations do not exceed a prescribed threshold, then convergence is declared, otherwise Steps 3 through 6 are repeated.
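The following minimal sketch outlines how Step 5 could be organized in MATLAB. All function names are hypothetical placeholders for user-supplied routines, and the state and costate are stacked in a single 14-component vector.

% Sketch of Step 5 (illustrative names): sequential propagation over light
% and eclipse arcs. stateCostateRHS, inEclipse, psiEvent, matchLight2Ecl,
% and matchEcl2Light are assumed to be user-supplied routines.
t = t0;  y = [x0; lam0];                        % stacked state and costate
while t < tf
    ecl  = inEclipse(y(1:7));                   % sub-step (a): identify the arc type
    opts = odeset('Events', psiEvent, 'RelTol', 1e-12, 'AbsTol', 1e-12);
    [ts, ys] = ode113(@(tt,yy) stateCostateRHS(tt, yy, ecl), [t tf], y, opts);
    t = ts(end);  y = ys(end,:).';              % sub-step (b): propagate until psi = 0 or t = tf
    if t < tf                                   % sub-step (c): condition (21) was met
        if ecl                                  % eclipse-to-light: jump condition (48)
            y(8:14) = matchEcl2Light(y);
        else                                    % light-to-eclipse: jump condition (47)
            y(8:14) = matchLight2Ecl(y);
        end
    end
end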
It is worth remarking that when an eclipse arc is entered, at Step 5(b) all the state and costate components are available in closed form, and numerical integration can be avoided. Details on the costate along coast arcs are provided in Sect. 5. Moreover, the transition time from an eclipse arc to a light arc can be detected in a straightforward and very accurate way. Omitting superscript (j), along coast arcs Eq. (21) can be rewritten as $$ \psi \left( {x_{6,fin} ,t_{j} } \right) = 0 $$ In fact, only \(x_{6,fin}\) and \(\theta_{S}\) are time-varying along eclipse arcs. The numerical solution of the preceding equation, in conjunction with the definition of \(x_{6}\), Kepler's equation, and the relation between true anomaly and eccentric anomaly, allows identifying the transition time \(t_{j}\) from an eclipse arc to a light arc, without any need of detecting it during the numerical integration process.
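As an illustration (all names and the initial guess are assumptions of this sketch), the exit time from an eclipse arc can be computed by first finding the root of the shadow condition in terms of the true longitude \(x_{6}\), and then mapping the corresponding true-anomaly interval into a time interval through Kepler's equation:

% Sketch (illustrative names): time at which an eclipse arc ends.
% psiFun(x6) is the shadow function; x6in, tin are the values at eclipse
% entry; e, n, RAAN, omega are eccentricity, mean motion, Omega, and omega.
x6exit = fzero(psiFun, x6in + 0.5);                 % root of psi(x6) = 0 (guess assumed)
f2E = @(f) 2*atan(sqrt((1 - e)/(1 + e))*tan(f/2));  % true -> eccentric anomaly
E2M = @(E) E - e*sin(E);                            % Kepler's equation
fIn = x6in - RAAN - omega;                          % true anomaly at entry (x6 = RAAN + omega + f)
fEx = x6exit - RAAN - omega;                        % true anomaly at exit
tj  = tin + (E2M(f2E(fEx)) - E2M(f2E(fIn)))/n;      % transition time
% Note: quadrant handling and 2*pi wrap-around are neglected in this sketch.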
The preceding algorithmic steps point out that the optimization process is reduced to identifying the values of some unknown quantities, which form the parameter set, such that all the necessary conditions are fulfilled. This general approach characterizes most indirect optimization algorithms. Using the two relations (47) and (48) and the multipoint conditions (20) and (21), the multiple-arc trajectory optimization problem of interest can be solved through sequential integration of the state and costate equations, while using Eqs. (40), (41), and (44) for the optimal control (in light arcs). Enforcement of Eqs. (27)–(29), (20), and (21) is guaranteed during the integration process. Although in general the parameter set is problem-dependent, for the multiple-arc problem at hand it includes at most the time of flight \(t_{f}\) and the initial values of some state components and some (or all) components of \({{\varvec{\uplambda}}}_{0}\). No intermediate value of any variable belongs to the parameter set. The iterated selection mentioned at Steps 3 and 4 is a delicate operation, which strictly depends on the specific indirect algorithm and concerns the parameter set. As an example, the indirect heuristic method [37] employs a heuristic technique (e.g., the particle swarm method) to select the unknown parameters, with the final aim of enforcing all the necessary conditions for optimality, while satisfying the boundary conditions of the problem of interest. In conclusion, the sequential solution of the multipoint conditions (20) and (21) and the multipoint necessary conditions for optimality (27)–(29) allows identifying a reduced parameter set, which essentially includes the same quantities needed for the solution of a single-arc trajectory optimization problem.
Numerical Example
The methodology and the algorithmic steps described in the previous section are applied to an illustrative numerical example. A low-thrust Earth orbit transfer problem is considered. The initial and final orbits are equatorial and circular, with radii equal to 6778 km and 20,000 km, respectively. Therefore, the boundary condition vector is (cf. Eq. (17)) $$ {{\varvec{\upzeta}}}\left( {{\mathbf{x}}_{0} ,{\mathbf{x}}_{f} ,t_{f} } \right) = \left[ {\begin{array}{*{20}c} {x_{1,0} - p_{0} } \quad {x_{2,0} } \quad {x_{3,0} } \quad {x_{7,0} - 1} \quad {x_{1,f} - p_{f} } \quad {x_{2,f} } \quad {x_{3,f} } \\ \end{array} } \right]^{T} $$ where \(p_{0} = 7000{\text{ km}}\) and \(p_{f} = 20000{\text{ km}}\). Because the two terminal orbits and the transfer path are coplanar, only the state components \(x_{1}\), \(x_{2}\), \(x_{3}\), \(x_{6}\), and \(x_{7}\) (together with the corresponding adjoint variables \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\), \(\lambda_{6}\), and \(\lambda_{7}\)) are needed in the numerical solution process. This means that the remaining state and costate components are identically zero. The following parameters are assumed for the low-thrust propulsion system: $$ u_{T}^{{\left( {max} \right)}} = 10^{ - 3} {\text{g}}_{0} {\text{ and }}c = 30{\text{ km sec}}^{ - 1} \, \left( {{\text{g}}_{0} = 9.8{\text{ m sec}}^{ - 2} } \right) $$ It is worth remarking that the value assumed for \(u_{T}^{{\left( {max} \right)}}\) is beyond the current technological capabilities (although it might become feasible in one or two decades), and was chosen for illustrative purposes. The boundary condition (30) yields $$ \lambda_{1,0} = - \sigma_{1} \quad \lambda_{2,0} = - \sigma_{2} \quad \lambda_{3,0} = - \sigma_{3} \quad \lambda_{6,0} = 0 \quad \lambda_{7,0} = - \sigma_{4} $$ $$ \lambda_{1,f} = \sigma_{5} \quad \lambda_{2,f} = \sigma_{6} \quad \lambda_{3,f} = \sigma_{7} \quad \lambda_{6,f} = 0 \quad \lambda_{7,f} = 0 $$ where \(\left\{ {\sigma_{j} } \right\}_{j = 1, \ldots ,7}\) represent the components of \({{\varvec{\upsigma}}}\). Therefore, the initial values of the adjoint variables are unknown, with the only exception of \(\lambda_{6}\). No relation can be identified among the initial values because each of them is expressed in terms of a different component of \({{\varvec{\upsigma}}}\). Hence, with reference to Step 2 of the numerical solution process described in Sect. 3.3, the minimal set of unknown initial values of the state and costate components is \(\left\{ {x_{6,0} ,\lambda_{1,0} ,\lambda_{2,0} ,\lambda_{3,0} ,\lambda_{7,0} } \right\}\). In the numerical solution process, canonical units are used: the distance unit (DU) equals 100,000 km, whereas the time unit (TU) is such that \(\mu = 1 \, {{{\text{DU}}^{{3}} } \mathord{\left/ {\vphantom {{{\text{DU}}^{{3}} } {{\text{TU}}^{{2}} }}} \right. \kern-\nulldelimiterspace} {{\text{TU}}^{{2}} }}\). The indirect heuristic method (IHM) [37], used to solve the problem at hand, assumes the preceding unknown initial values and the time of flight as the parameter set. A thorough description of IHM can be found in Ref. [37]. Further details on the numerical solution process are omitted for the sake of brevity. The value of \(\theta_{S0}\) (cf. Sect. 2.2) is set to 10 deg. The minimum time of flight equals 5.1 days, whereas the violations of the boundary conditions do not exceed \(10^{ - 3}\). The magnitude of \({{\varvec{\upzeta}}}\) (cf. Eq. (50)) is \(7.4 \cdot 10^{ - 4}\). Figures 1 and 2 depict the time histories of the semilatus rectum (component \(x_{1}\)) and the orbit eccentricity (retrieved from \(x_{2}\) and \(x_{3}\)). The horizontal segments correspond to the eclipse arcs, where no thrust is employed and the orbit elements remain constant as a result. Figures 3 and 4 portray the time histories of the adjoint variables \(\lambda_{2}\) and \(\lambda_{3}\).
Inspection of these figures reveals that at the transition points \(\lambda_{2}\) and \(\lambda_{3}\) are subject to discontinuities, shown with greater detail in the insets. Similar time behaviors characterize the time histories of \(\lambda_{1}\) and \(\lambda_{6}\) (while \(\lambda_{7}\) is continuous, cf. Eq. (38)). The numerical results found for this illustrative example definitely corroborate the effectiveness of the solution methodology described in Sect. 3.3.
Fig. 1 Time history of the semilatus rectum (with zoom in the inset)
Fig. 2 Time history of the eccentricity (with zoom in the inset)
Fig. 3 Time history of the adjoint variable \(\lambda_{2}\) (with zoom in the inset)
Minimum-Fuel Trajectories
In most mission scenarios of practical interest, spacecraft are equipped with a finite-thrust propulsion system. In these dynamical contexts, the crucial objective consists in minimizing propellant consumption. Previous (and rather extensive) research proved that minimum-fuel trajectories include relatively short finite-thrust arcs and long-duration coast intervals [30, 40]. This section considers the problem of minimizing the propellant consumption for an orbit transfer between two specified (initial and final) orbits, while using the modified equinoctial elements to describe the spacecraft dynamics. The spacecraft of interest is governed by the state equations (11) and is subject to some (problem-dependent) boundary conditions of the form (17). These conditions usually include the relations that define the initial and final orbits. The initial time \(t_{0}\) is assumed specified and is set to 0. The objective function J to minimize is the propellant mass, which is equivalent to maximizing the final mass ratio \(x_{7,f}\). Thus, for the problem at hand the objective is $$ J = - x_{7,f} $$ Therefore, the problem consists in finding the optimal control u that minimizes the objective function (54), while holding the state equations (11) and the boundary conditions (17). In order to derive the necessary conditions for optimality, a Hamiltonian function \(H\) and a function \(\Phi\) are defined as $$ H: = {{\varvec{\uplambda}}}^{T} {\mathbf{f}}\left( {{\mathbf{x}},{\mathbf{u}},t} \right){\text{ and }}\Phi : = - x_{7,f} + {{\varvec{\upsigma}}}^{T} {{\varvec{\upzeta}}}\left( {{\mathbf{x}}_{0} ,{\mathbf{x}}_{f} ,t_{f} } \right) $$ and the extended objective function \(\overline{J}\) is introduced, $$ \begin{aligned} \bar{J} = - x_{{7,f}} + {\mathbf{\sigma }}^{T} {\mathbf{\zeta }}\left( {{\mathbf{x}}_{0} ,{\mathbf{x}}_{f} ,t_{f} } \right) + \int\limits_{{t_{0} }}^{{t_{f} }} {{\mathbf{\lambda }}^{T} \left[ {{\mathbf{f}}\left( {{\mathbf{x}},{\mathbf{u}},t} \right) - {\mathbf{\dot{x}}}} \right]dt} \hfill \\ = \Phi \left( {{\mathbf{x}}_{0} ,{\mathbf{x}}_{f} ,t_{f} ,{\mathbf{\sigma }}} \right) + \int\limits_{{t_{0} }}^{{t_{f} }} {\left[ {H\left( {{\mathbf{x}},{\mathbf{u}},{\mathbf{\lambda }},t} \right) - {\mathbf{\lambda }}^{T} {\mathbf{\dot{x}}}} \right]dt} \hfill \\ \end{aligned} $$ Then, the first differential \(d\overline{J}\) can be obtained by using the analytical steps described in Ref. [22] (similar to those illustrated in the Appendix for the more complicated problem of Sect.
3), and is $$ \begin{aligned} d\bar{J} = \left( {\frac{{\partial \Phi }}{{\partial {\mathbf{x}}_{0} }} + {\mathbf{\lambda }}_{0}^{T} } \right)d{\mathbf{x}}_{0} + \left( {\frac{{\partial \Phi }}{{\partial {\mathbf{x}}_{f} }} - {\mathbf{\lambda }}_{f}^{T} } \right)d{\mathbf{x}}_{f} + \left( {H_{f} + \frac{{\partial \Phi }}{{\partial t_{f} }}} \right)dt_{f} \hfill \\ {\text{ + }}\int\limits_{{t_{0} }}^{{t_{f} }} {\left[ {\left( {{\mathbf{\dot{\lambda }}}^{T} + \frac{{\partial H}}{{\partial {\mathbf{x}}}}} \right)\delta {\mathbf{x}} + \frac{{\partial H}}{{\partial {\mathbf{u}}}}\delta {\mathbf{u}}} \right]dt} \hfill \\ \end{aligned} $$ The first differential must vanish at an extremal [22], for arbitrary values of \(d{\mathbf{x}}_{0}\), \(d{\mathbf{x}}_{f}\), \(dt_{f}\), \(\delta {\mathbf{x}},{\text{ and }}\delta {\mathbf{u}}\), and this implies that the following necessary conditions must hold at an optimal solution: $$ {{\varvec{\uplambda}}}_{0} + \left[ {\frac{\partial \Phi }{{\partial {\mathbf{x}}_{0} }}} \right]^{T} = {\varvec{0}} $$ $$ {{\varvec{\uplambda}}}_{f}^{{}} - \left[ {\frac{\partial \Phi }{{\partial {\mathbf{x}}_{f} }}} \right]^{T} = {\varvec{0}} $$ $$ H_{f} + \frac{\partial \Phi }{{\partial t_{f} }} = 0 \, \Rightarrow \, H_{f} = - \left[ {\frac{{\partial {{\varvec{\upzeta}}}}}{{\partial t_{f} }}} \right]^{T} {{\varvec{\upsigma}}} $$ $$ {\dot{\mathbf{\lambda }}} = - \left[ {\frac{\partial H}{{\partial {\mathbf{x}}}}} \right]^{T} $$ $$ \left[ {\frac{\partial H}{{\partial {\mathbf{u}}}}} \right]^{T} = {\varvec{0}} $$ The last condition, which implies stationarity of \(H\) with respect to \({\mathbf{u}}\), can be replaced again by the more general Pontryagin minimum principle [32], $$ {\mathbf{u}}_{*} = \arg \mathop {\min }\limits_{{\mathbf{u}}} H $$ where the subscript * denotes the optimal value of the corresponding variable. Equation (61) is the adjoint equation for the costate vector, accompanied by the related boundary conditions (58) and (59). Because \({{\varvec{\upzeta}}}\) appears in Eqs. (58) and (59), their explicit form is problem-dependent, unlike what occurs for the adjoint equations (61). Furthermore, Eq. (60) holds for the final value of the Hamiltonian function. In view of the state equations (11), the Hamiltonian function defined in Eq. (55) takes the form $$ H = \frac{{u_{T} }}{{x_{7} }}\left[ {H_{r} \left( {{\mathbf{z}},{{\varvec{\uplambda}}}} \right)\cos \beta \cos \alpha + H_{\theta } \left( {{\mathbf{z}},{{\varvec{\uplambda}}}} \right)\cos \beta \sin \alpha + H_{h} \left( {{\mathbf{z}},{{\varvec{\uplambda}}}} \right)\sin \beta - \lambda_{7} \frac{{x_{7} }}{c}} \right] + H_{0} \left( {{\mathbf{z}},{{\varvec{\uplambda}}}} \right) $$ where z collects components \(x_{1}\) through \(x_{6}\) of the state, i.e. \({\mathbf{z}}: = \left[ {\begin{array}{*{20}c} {x_{1} } & {x_{2} } & {x_{3} } & {x_{4} } & {x_{5} } & {x_{6} } \\ \end{array} } \right]^{T}\); the terms \(H_{r}\), \(H_{\theta }\), \(H_{h}\), and \(H_{0}\), whose expressions are not reported for the sake of brevity, depend on both z and \({{\varvec{\uplambda}}}\). Due to Eq. (64), the adjoint equation for \(\lambda_{7}\) is $$ \dot{\lambda }_{7} = - \frac{\partial H}{{\partial x_{7} }} = u_{T} \frac{{H_{r} \left( {{\mathbf{z}},{{\varvec{\uplambda}}}} \right)\cos \beta \cos \alpha + H_{\theta } \left( {{\mathbf{z}},{{\varvec{\uplambda}}}} \right)\cos \beta \sin \alpha + H_{h} \left( {{\mathbf{z}},{{\varvec{\uplambda}}}} \right)\sin \beta }}{{x_{7}^{2} }} $$ Moreover, because the final mass is unspecified (and in fact is to be maximized), the boundary conditions (17) are independent of \(x_{7,f}\). As a result, Eq.
(59) yields $$ \lambda_{7,f} = - 1 $$ The minimum principle (63) allows expressing the optimal control in terms of the state and costate variables. With reference to Eq. (64), the first three terms in square brackets can be regarded as a dot product. Thus, since \({{u_{T} } \mathord{\left/ {\vphantom {{u_{T} } {x_{7} }}} \right. \kern-\nulldelimiterspace} {x_{7} }} \ge 0\), the thrust angles that minimize H are given by $$ \sin \beta = - H_{h} \left( {H_{r}^{2} + H_{\theta }^{2} + H_{h}^{2} } \right)^{{ - {1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} $$ $$ \sin \alpha = - H_{r} \left( {H_{r}^{2} + H_{\theta }^{2} } \right)^{{ - {1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} {\text{ and }}\cos \alpha = - H_{\theta } \left( {H_{r}^{2} + H_{\theta }^{2} } \right)^{{ - {1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} $$ Using these expressions for \(\alpha\) and \(\beta\), Eqs. (64) and (65) become $$ H = - u_{T} \left[ {\frac{{\sqrt {H_{r}^{2} + H_{\theta }^{2} + H_{h}^{2} } }}{{x_{7} }} + \frac{{\lambda_{7} }}{c}} \right] + H_{0} $$ $$ \dot{\lambda }_{7} = - \frac{\partial H}{{\partial x_{7} }} = - \frac{{u_{T} }}{{x_{7}^{2} }}\sqrt {H_{r}^{2} + H_{\theta }^{2} + H_{h}^{2} } \le 0 $$ The latter relation, in conjunction with the final condition (66), implies that \(\lambda_{7}\) is nonincreasing and therefore cannot remain positive for the entire time of flight. Moreover, using the Pontryagin minimum principle and Eq. (69), the optimal value of \(u_{T}\) is $$ u_{T} = \left\{ \begin{gathered} u_{T}^{{\left( {max} \right)}} {\text{ if }}S > 0 \hfill \\ 0{\text{ if }}S < 0 \hfill \\ \end{gathered} \right.{\text{ with }}S: = \frac{{\sqrt {H_{r}^{2} + H_{\theta }^{2} + H_{h}^{2} } }}{{x_{7} }} + \frac{{\lambda_{7} }}{c} $$ This means that a minimum-fuel path includes powered phases (where the maximum available thrust is used) and coast arcs. In the previous relation, S is referred to as the switching function, because it determines the switching times between the two types of arcs that compose the optimal trajectory. Equation (71) is deduced under the assumption of neglecting singular arcs, which are associated with the condition \(S \equiv 0\) over a time interval of finite duration. The existence of such arcs can be investigated using singular optimal control theory [4]. Unlike the preceding problem, where minimum-time paths including eclipse arcs were sought, minimum-fuel space trajectories do not require formulating a multiple-arc optimization problem. The adjoint vector \({{\varvec{\uplambda}}}\) is continuous for the entire time of flight, and in fact no matching condition for \({{\varvec{\uplambda}}}\) is derived from the necessary conditions for optimality. It is worth stressing that the existence of coast arcs and powered phases along minimum-fuel space trajectories was already proven using different representations of the dynamical state (e.g., Cartesian or spherical coordinates [14, 18, 20, 30, 32, 34, 40]). Therefore, the previous analytical developments represent an alternative derivation leading to an expected result. In the end, the necessary conditions for optimality (58)–(61) and (63), in conjunction with the state equations (11) and the boundary conditions (17), allow converting the original optimal control problem into a TPBVP, where the unknowns are the state \({\mathbf{x}}\), the control \({\mathbf{u}}\), the final time \(t_{f}\), and the adjoint variables \({{\varvec{\uplambda}}}\) and \({{\varvec{\upsigma}}}\).
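For illustration, the pointwise evaluation of the minimum-fuel control prescribed by Eqs. (67), (68), and (71) can be coded as follows (a minimal sketch with illustrative names; \(H_{r}\), \(H_{\theta}\), and \(H_{h}\) are assumed to be computed elsewhere from z and \({{\varvec{\uplambda}}}\)):

% Sketch: optimal thrust magnitude and angles for minimum-fuel arcs.
function [uT, alpha, beta, S] = fuelOptimalControl(Hr, Hth, Hh, x7, lam7, uTmax, c)
Hmag = sqrt(Hr^2 + Hth^2 + Hh^2);
S = Hmag/x7 + lam7/c;                % switching function, Eq. (71)
if S > 0                             % bang-off-bang logic (singular arcs neglected)
    uT = uTmax;
else
    uT = 0;
end
beta  = asin(-Hh/Hmag);              % Eq. (67); cos(beta) >= 0 is implied
alpha = atan2(-Hr, -Hth);            % Eq. (68): sin and cos prescribed jointly
end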
An indirect algorithm applied to the problem at hand can proceed using Steps 1–4 and 6 of Sect. 3.3 (with the necessary conditions of the present problem in place of the respective ones found for problem (a)). Instead, Step 5 simplifies in the following fashion:
Step 5. Until the current time \(t \le t_{f}\), integrate numerically the state and costate equations (11) and (61), while using Eqs. (67), (68), and (71) to express the control in terms of state and costate components.
Closed Form of the Costate Along Optimal Coast Arcs
The preceding two sections address two different trajectory optimization problems that involve multiple coast arcs, where no propulsion is used. Under the assumption of neglecting orbit perturbations, along a coast arc the space vehicle travels a Keplerian trajectory, i.e., an elliptic, parabolic, or hyperbolic arc. This is apparent also from inspection of Eqs. (2)–(6). In fact, if \(a_{r} = a_{\theta } = a_{h} = 0\), then \(\dot{x}_{1} = \dot{x}_{2} = \dot{x}_{3} = \dot{x}_{4} = \dot{x}_{5} = 0\). Only \(x_{6}\) varies along a Keplerian arc (cf. Eq. (7)), due to the true anomaly f (whereas \(\Omega\) and \(\omega\) remain constant). For all three types of Keplerian paths, f can be found in terms of the current time by solving a transcendental equation (e.g., Kepler's equation for ellipses and Barker's equation for parabolas) [12, 41]. The time variation of \(x_{6}\) can be obtained as a result. This section is concerned with the derivation of closed-form expressions for the adjoint variables \(\lambda_{j} \, \left( {j = 1, \ldots ,7} \right)\) along coast arcs. Availability of analytical relations allows precise evaluation of the time histories of all the costate components, and leads to writing the switching function S (cf. Eq. (71)) along coast arcs of minimum-fuel paths as a closed-form expression. As a favorable consequence, the accurate detection of the switching times from coast to thrust arcs, which was recognized as a challenging task in the scientific literature [34], can be facilitated (e.g., using Newton or even higher-order root-finding methods [21], applied to the switching function S). The Hamiltonian H and the adjoint (or costate) equations (33) and (61) are identical for the two problems addressed in Sects. 3 and 4. The only difference resides in the introduction of multiple arcs for minimum-time problems that include eclipse arcs. However, this has no effect on H and the form of the costate equations. Therefore, for both problems, along coast arcs the Hamiltonian function is $$ H = \lambda_{6} \sqrt {\frac{\mu }{{x_{1}^{3} }}} \left( {1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} } \right)^{2} $$ Equations (61) and (72) yield the following scalar adjoint equations: $$ \dot{\lambda }_{1} = \frac{3}{2}\lambda_{6} \sqrt {\frac{\mu }{{x_{1}^{5} }}} \left( {1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} } \right)^{2} $$ $$ \dot{\lambda }_{2} = - 2\lambda_{6} \sqrt {\frac{\mu }{{x_{1}^{3} }}} \cos x_{6} \left( {1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} } \right) $$ $$ \dot{\lambda }_{3} = - 2\lambda_{6} \sqrt {\frac{\mu }{{x_{1}^{3} }}} \sin x_{6} \left( {1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} } \right) $$ $$ \dot{\lambda }_{4} = \dot{\lambda }_{5} = \dot{\lambda }_{7} = 0 $$ $$ \dot{\lambda }_{6} = - 2\lambda_{6} \sqrt {\frac{\mu }{{x_{1}^{3} }}} \left( {x_{3} \cos x_{6} - x_{2} \sin x_{6} } \right)\left( {1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} } \right) $$ As a first result, Eq.
(76) proves that the adjoint variables \(\lambda_{4}\), \(\lambda_{5}\), and \(\lambda_{7}\) are constant along coast arcs. Therefore, only the remaining components must be integrated. The use of equinoctial elements has thus the remarkable advantage of reducing to 5 the number of state and costate variables that are time-varying along coast arcs, i.e. \(x_{6} , \, \lambda_{1} , \, \lambda_{2} , \, \lambda_{3} ,{\text{ and }}\lambda_{6}\). This desirable circumstance is not encountered when alternative representations for the state are employed [18, 34]. For the purpose of obtaining a closed-form solution to the equation system (73)–(75) and (77), these relations are rewritten with the use of \(x_{6}\) as the independent variable, in place of the time t. This is possible because \(x_{6}\) is a strictly monotonic variable for coast arcs of elliptic, parabolic, or hyperbolic type, with positive time derivative \(\dot{x}_{6}\) (cf. Eq. (7) with \(a_{h} = 0\)). Hence, using Eq. (7) (with \(a_{h} = 0\)), one obtains $$ \lambda_{1}^{\prime } = \frac{{\dot{\lambda }_{1} }}{{\dot{x}_{6} }} = \frac{{3\lambda_{6} }}{{2x_{1} }} $$ $$ \lambda_{2}^{\prime } = \frac{{\dot{\lambda }_{2} }}{{\dot{x}_{6} }} = \frac{{ - 2\lambda_{6} \cos x_{6} }}{{1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} }} $$ $$ \lambda_{3}^{\prime } = \frac{{\dot{\lambda }_{3} }}{{\dot{x}_{6} }} = \frac{{ - 2\lambda_{6} \sin x_{6} }}{{1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} }} $$ $$ \lambda_{6}^{\prime } = \frac{{\dot{\lambda }_{6} }}{{\dot{x}_{6} }} = \frac{{ - 2\lambda_{6} \left( {x_{3} \cos x_{6} - x_{2} \sin x_{6} } \right)}}{{1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} }} $$ where the symbol \(^{\prime }\) denotes the derivative with respect to \(x_{6}\). Equation (81) can be solved by separation of variables. In fact, Eq. (81) can be rewritten as $$ \frac{{d\lambda_{6} }}{{\lambda_{6} }} = \frac{{ - 2\left( {x_{3} \cos x_{6} - x_{2} \sin x_{6} } \right)}}{{1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} }}dx_{6} $$ Because \(x_{2}\) and \(x_{3}\) are constant along coast arcs, both sides of Eq. (82) are integrable. After a few straightforward steps, the following closed-form solution is obtained: $$ \lambda_{6} = \lambda_{6,in} \left( {\frac{{1 + x_{2} \cos x_{6,in} + x_{3} \sin x_{6,in} }}{{1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} }}} \right)^{2} $$ where subscript in denotes the value of the respective variable when the coast arc begins. Insertion of Eq. (83) into Eqs. (78) through (80) yields the following relations: $$ \lambda_{1}^{\prime } = \frac{{3\lambda_{6,in} }}{{2x_{1} }}\left( {\frac{{1 + x_{2} \cos x_{6,in} + x_{3} \sin x_{6,in} }}{{1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} }}} \right)^{2} $$ $$ \lambda_{2}^{\prime } = - 2\lambda_{6,in} \cos x_{6} \frac{{\left( {1 + x_{2} \cos x_{6,in} + x_{3} \sin x_{6,in} } \right)^{2} }}{{\left( {1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} } \right)^{3} }} $$ $$ \lambda_{3}^{\prime } = - 2\lambda_{6,in} \sin x_{6} \frac{{\left( {1 + x_{2} \cos x_{6,in} + x_{3} \sin x_{6,in} } \right)^{2} }}{{\left( {1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} } \right)^{3} }} $$ It is worth remarking that Eqs. (83)–(86) hold for Keplerian (coast) arcs of elliptic, parabolic, and hyperbolic type. In the subsections that follow these three types of conics are distinguished, with the intent of obtaining closed-form solutions for \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\). Elliptic Arcs In most orbit transfer problems of practical interest, intermediate coast arcs are of elliptic type. 
This means that \(x_{2}^{2} + x_{3}^{2} < 1\) (cf. the definitions (1)). In this case, letting \(c_{0} : = \lambda_{6,in} \left( {1 + x_{2} \cos x_{6,in} + x_{3} \sin x_{6,in} } \right)^{2}\) and using the definition of \(\vartheta\) (cf. Sect. 2), Eqs. (84)–(86) admit the following closed-form solutions, obtained with the use of the software Mathematica: $$ \lambda_{1} = c_{1} + \frac{{3c_{0} }}{{2x_{1} }}\left\{ {\frac{2}{{\left( {1 - x_{2}^{2} - x_{3}^{2} } \right)^{{{3 \mathord{\left/ {\vphantom {3 2}} \right. \kern-\nulldelimiterspace} 2}}} }}\arctan \left[ {\frac{{x_{3} + \left( {1 - x_{2} } \right)\tan \left( {{{x_{6} } \mathord{\left/ {\vphantom {{x_{6} } 2}} \right. \kern-\nulldelimiterspace} 2}} \right)}}{{\left( {1 - x_{2}^{2} - x_{3}^{2} } \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} }}} \right] - \frac{{x_{3} + \left( {x_{2}^{2} + x_{3}^{2} } \right)\sin x_{6} }}{{x_{2} \left( {1 - x_{2}^{2} - x_{3}^{2} } \right)\vartheta }}} \right\} $$ $$ \begin{aligned} \lambda_{2} = &c_{2} + \frac{{6c_{0} x_{2} }}{{\left( {1 - x_{2}^{2} - x_{3}^{2} } \right)^{{{5 \mathord{\left/ {\vphantom {5 2}} \right. \kern-\nulldelimiterspace} 2}}} }}\arctan \left[ {\frac{{x_{3} + \left( {1 - x_{2} } \right)\tan \left( {{{x_{6} } \mathord{\left/ {\vphantom {{x_{6} } 2}} \right. \kern-\nulldelimiterspace} 2}} \right)}}{{\left( {1 - x_{2}^{2} - x_{3}^{2} } \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} }}} \right] - c_{0} \frac{{x_{3} + \sin x_{6} }}{{\left( {1 - x_{2}^{2} - x_{3}^{2} } \right)\vartheta^{2} }} \hfill \\ \,& - c_{0} \frac{{3x_{3} + \left( {1 + 2x_{2}^{2} + 2x_{3}^{2} } \right)\sin x_{6} }}{{\left( {1 - x_{2}^{2} - x_{3}^{2} } \right)^{2} \vartheta }} \hfill \\ \end{aligned} $$ $$ \begin{aligned} \lambda_{3} = &c_{3} + \frac{{6c_{0} x_{3} }}{{\left( {1 - x_{2}^{2} - x_{3}^{2} } \right)^{{{5 \mathord{\left/ {\vphantom {5 2}} \right. \kern-\nulldelimiterspace} 2}}} }}\arctan \left[ {\frac{{x_{3} + \left( {1 - x_{2} } \right)\tan \left( {{{x_{6} } \mathord{\left/ {\vphantom {{x_{6} } 2}} \right. \kern-\nulldelimiterspace} 2}} \right)}}{{\left( {1 - x_{2}^{2} - x_{3}^{2} } \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} }}} \right] - c_{0} \frac{{1 - x_{2}^{2} + x_{3} \sin x_{6} }}{{x_{2} \left( {1 - x_{2}^{2} - x_{3}^{2} } \right)\vartheta^{2} }} \hfill \\ \,& - c_{0} \frac{{x_{3} \left[ {3x_{3} + \left( {1 + 2x_{2}^{2} + 2x_{3}^{2} } \right)\sin x_{6} } \right]}}{{x_{2} \left( {1 - x_{2}^{2} - x_{3}^{2} } \right)^{2} \vartheta }} \hfill \\ \end{aligned} $$ The symbols \(c_{1}\), \(c_{2}\), and \(c_{3}\) denote the three integration constants. Their values can be obtained by evaluating Eqs. (87)–(89) at the initial time of the coast arc.
Parabolic Arcs
Parabolic arcs are rarely encountered in space trajectory optimization. However, for the sake of completeness, this subsection addresses the derivation of the closed-form solution of Eqs. (84)–(86) along parabolic arcs, for which \(x_{2}^{2} + x_{3}^{2} = 1\). As a preliminary step, because \(x_{2}^{2} + x_{3}^{2} = 1\), the terms \(\left( {1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} } \right)\) in Eqs. (84)–(86) are rewritten as $$ 1 + x_{2} \cos x_{6} + x_{3} \sin x_{6} = 1 + \sin \left( {x_{6} + \varphi } \right){\text{, with }}\sin \varphi = x_{2} {\text{ and }}\cos \varphi = x_{3} $$ The preceding definitions of \(c_{0}\) and \(\vartheta\) are used again.
After insertion of Eq. (90) into Eqs. (84)–(86), the following closed-form solutions are obtained with the use of the software Mathematica: $$ \lambda_{1} = c_{1} + \frac{{c_{0} }}{{2x_{1} }}\frac{{ - 3 + 4\cos \left( {x_{6} + \varphi } \right) + \cos \left[ {2\left( {x_{6} + \varphi } \right)} \right] - 4\sin \left( {x_{6} + \varphi } \right) + \sin \left[ {2\left( {x_{6} + \varphi } \right)} \right]}}{{\cos \left[ {2\left( {x_{6} + \varphi } \right)} \right] - 4\sin \left( {x_{6} + \varphi } \right) - 3}} $$ $$ \begin{aligned} \lambda_{2} =& c_{2} + \frac{{c_{0} }}{10}\left[ {\cos \left( {\frac{{x_{6} + \varphi }}{2}} \right) + \sin \left( {\frac{{x_{6} + \varphi }}{2}} \right)} \right]^{ - 5} \left[ {\cos \left( {\frac{\varphi }{2}} \right) + \sin \left( {\frac{\varphi }{2}} \right)} \right]^{ - 1} \left\{ {10\cos \left( {\frac{{x_{6} }}{2} + \varphi } \right)} \right. \hfill \\ &\, \left. { + \cos \left( {\frac{{5x_{6} }}{2} + \varphi } \right) - \cos \left( {\frac{{5x_{6} }}{2} + 3\varphi } \right) + 10\sin \left( {\frac{{x_{6} }}{2}} \right) - 5\sin \left( {\frac{{3x_{6} }}{2}} \right) + 5\sin \left( {\frac{{3x_{6} }}{2} + 2\varphi } \right)} \right\} \hfill \\ \end{aligned} $$ $$ \begin{aligned} \lambda_{3} = &c_{3} + \frac{{c_{0} }}{10}\left[ {\cos \left( {\frac{{x_{6} + \varphi }}{2}} \right) + \sin \left( {\frac{{x_{6} + \varphi }}{2}} \right)} \right]^{ - 5} \left\{ { - 10\cos \left( {\frac{{x_{6} + \varphi }}{2}} \right) + 5\cos \left[ {\frac{{3\left( {x_{6} + \varphi } \right)}}{2}} \right] + \cos \left[ {\frac{{5\left( {x_{6} + \varphi } \right)}}{2}} \right]} \right. \hfill \\& \, + 5\cos \left( {\frac{{3x_{6} + \varphi }}{2}} \right) - \cos \left( {\frac{{5x_{6} + 3\varphi }}{2}} \right) - 10\sin \left( {\frac{{x_{6} + \varphi }}{2}} \right) - 5\sin \left[ {\frac{{3\left( {x_{6} + \varphi } \right)}}{2}} \right] + \sin \left[ {\frac{{5\left( {x_{6} + \varphi } \right)}}{2}} \right] \hfill \\ \, \left. { + 5\sin \left( {\frac{{3x_{6} + \varphi }}{2}} \right) + \sin \left( {\frac{{5x_{6} + 3\varphi }}{2}} \right)} \right\} \hfill \\ \end{aligned} $$ Again, the symbols \(c_{1}\), \(c_{2}\), and \(c_{3}\) denote the three integration constants. Their values can be obtained by evaluating Eqs. (91)–(93) at the initial time of the coast arc.
Hyperbolic Arcs
Closed-form expressions for the adjoint variables \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) can be obtained also when the space vehicle travels an optimal hyperbolic coast arc, in which \(x_{2}^{2} + x_{3}^{2} > 1\). In fact, using the identity \(\arctan \left( {i\eta } \right) = i{\text{arctanh}} \eta\) (where \(\eta\) is a generic variable and i denotes the imaginary unit), Eqs. (87)–(89) can be rewritten for hyperbolic coast arcs as $$ \lambda_{1} = c_{1} + \frac{{3c_{0} }}{{2x_{1} }}\left\{ {\frac{2}{{\left( {x_{2}^{2} + x_{3}^{2} - 1} \right)^{{{3 \mathord{\left/ {\vphantom {3 2}} \right. \kern-\nulldelimiterspace} 2}}} }}{\text{arctanh}} \left[ {\frac{{x_{3} + \left( {1 - x_{2} } \right)\tan \left( {{{x_{6} } \mathord{\left/ {\vphantom {{x_{6} } 2}} \right. \kern-\nulldelimiterspace} 2}} \right)}}{{\left( {x_{2}^{2} + x_{3}^{2} - 1} \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} }}} \right] + \frac{{x_{3} + \left( {x_{2}^{2} + x_{3}^{2} } \right)\sin x_{6} }}{{x_{2} \left( {x_{2}^{2} + x_{3}^{2} - 1} \right)\vartheta }}} \right\} $$ $$ \begin{gathered} \lambda_{2} = c_{2} - \frac{{6c_{0} x_{2} }}{{\left( {x_{2}^{2} + x_{3}^{2} - 1} \right)^{{{5 \mathord{\left/ {\vphantom {5 2}} \right. \kern-\nulldelimiterspace} 2}}} }}{\text{arctanh}} \left[ {\frac{{x_{3} + \left( {1 - x_{2} } \right)\tan \left( {{{x_{6} } \mathord{\left/ {\vphantom {{x_{6} } 2}} \right. \kern-\nulldelimiterspace} 2}} \right)}}{{\left( {x_{2}^{2} + x_{3}^{2} - 1} \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} }}} \right] + c_{0} \frac{{x_{3} + \sin x_{6} }}{{\left( {x_{2}^{2} + x_{3}^{2} - 1} \right)\vartheta^{2} }} \hfill \\ \qquad - c_{0} \frac{{3x_{3} + \left( {1 + 2x_{2}^{2} + 2x_{3}^{2} } \right)\sin x_{6} }}{{\left( {x_{2}^{2} + x_{3}^{2} - 1} \right)^{2} \vartheta }} \hfill \\ \end{gathered} $$ $$ \begin{aligned} \lambda_{3} =& c_{3} - \frac{{6c_{0} x_{3} }}{{\left( {x_{2}^{2} + x_{3}^{2} - 1} \right)^{{{5 \mathord{\left/ {\vphantom {5 2}} \right. \kern-\nulldelimiterspace} 2}}} }}{\text{arctanh}} \left[ {\frac{{x_{3} + \left( {1 - x_{2} } \right)\tan \left( {{{x_{6} } \mathord{\left/ {\vphantom {{x_{6} } 2}} \right. \kern-\nulldelimiterspace} 2}} \right)}}{{\left( {x_{2}^{2} + x_{3}^{2} - 1} \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} }}} \right] + c_{0} \frac{{1 - x_{2}^{2} + x_{3} \sin x_{6} }}{{x_{2} \left( {x_{2}^{2} + x_{3}^{2} - 1} \right)\vartheta^{2} }} \hfill \\ \, &- c_{0} \frac{{x_{3} \left[ {3x_{3} + \left( {1 + 2x_{2}^{2} + 2x_{3}^{2} } \right)\sin x_{6} } \right]}}{{x_{2} \left( {x_{2}^{2} + x_{3}^{2} - 1} \right)^{2} \vartheta }} \hfill \\ \end{aligned} $$ Once more, the symbols \(c_{1}\), \(c_{2}\), and \(c_{3}\) denote the three integration constants. Their values can be obtained by evaluating Eqs. (94)–(96) at the initial time of the coast arc.
Conclusions
This paper presents the analytical study of two types of optimal space trajectories that include multiple coast arcs: (a) minimum-time low-thrust paths with eclipse intervals and (b) minimum-fuel trajectories that employ finite thrust. Modified equinoctial elements are used to describe the orbit dynamics. Problem (a) is formulated as a multiple-arc optimization problem, and all the related necessary conditions for optimality are derived. These form an extended set of conditions, which includes the Pontryagin minimum principle (in the light arcs), the adjoint equations for the costate variables, the related boundary conditions, the transversality relation on the final value of the Hamiltonian, and the multipoint necessary conditions at the junction times between two arcs. The latter relations are combined to yield the matching (jump) conditions for the costate variables between two consecutive arcs. This research demonstrates that all the multipoint conditions can be solved sequentially in the numerical solution process. As a result, the parameter set for an indirect algorithm retains the size of the typical set associated with a single-arc optimization problem. No regularization or averaging is required to make the problem tractable and solve it. In fact, based on this new multiple-arc formulation and the related analysis, indirect algorithms can easily incorporate the discontinuities of the costate variables, located at the times when the spacecraft transitions from light to shadow (and vice versa). Moreover, the use of equinoctial elements can mitigate the hypersensitivity of the solution to the initial values of the adjoints, which is a common issue when indirect approaches are used.
As a result, an indirect method is capable of yielding very accurate numerical results, while avoiding several difficulties of a theoretical or numerical nature inherent to alternative formulations. An illustrative numerical example demonstrates the effectiveness of the approach proposed in this work. This research also revisits problem (b) with the use of equinoctial elements, and derives the necessary conditions associated with minimum-fuel trajectories. Unlike minimum-time paths with eclipse intervals, problem (b) is formulated as a single-arc optimization problem. The switching function is introduced, and plays the role of determining the optimal sequence of powered phases and coast arcs, whose number and timing are unknown a priori. The well-established properties of minimum-fuel trajectories are retrieved in terms of equinoctial elements and related adjoint variables, and the substantial analytical differences between problems (a) and (b) are emphasized. As a further contribution, this study focuses on the costate along optimal coast arcs. Closed-form expressions, written in terms of elementary functions, are derived for the adjoint variables associated with modified equinoctial elements, along the three major types of Keplerian trajectories. Overall, 9 out of 14 components of the state and the costate turn out to be constant along optimal coast arcs, whereas for the remaining 5 variables closed-form expressions exist. This allows fast and accurate evaluation of the state and costate time histories along coast arcs, and leads to writing the switching function for problem (b) in closed form. This finding can be beneficial for the accurate determination of the switching times from coast to thrust arcs. These circumstances definitely represent further, unequivocal advantages of using modified equinoctial elements in spacecraft trajectory optimization.
References
Aziz, J., Scheeres, D., Parker, J., Englander, J.: A smoothed eclipse model for solar electric propulsion trajectory optimization. Trans. Jpn. Soc. Aeronaut. Space Sci. 17(2), 181–188 (2019)
Bastante, J.C., Letizia, F., De Bruijn, F., Lubke-Ossenbeck, B.: Application of the sequential gradient restoration algorithm to the solution of the low thrust transfer problem. Aerotecnica Missili Spazio 96(4), 228–236 (2017)
Battin, R.H.: An Introduction to the Mathematics and Methods of Astrodynamics, pp. 448–450, 490–494. AIAA Education Series, New York, NY (1987)
Bell, D.J., Jacobson, D.H.: Singular Optimal Control Problems, pp. 1–100. Academic Press, New York, NY (1975)
Betts, J.T.: Very low-thrust trajectory optimization using a direct SQP method. J. Comput. Appl. Math. 120, 27–40 (2000)
Betts, J.T., Erb, S.O.: Optimal low thrust trajectories to the moon. SIAM J. Appl. Dyn. Syst. 2(2), 144–170 (2003)
Betts, J.T.: Optimal low thrust orbit transfers with eclipsing. Optim. Control Appl. Methods 36, 218–240 (2015)
Betts, J.T.: Survey of numerical methods for trajectory optimization. J. Guid. Control. Dyn. 21(2), 193–207 (1998)
Broucke, R.A., Cefola, P.J.: On the equinoctial orbit elements. Celest. Mech. 5, 303–310 (1972)
Cerf, M.: Fast solution of minimum-time low-thrust transfer with eclipses. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 233(7), 2699–2714 (2019)
Conway, B.A.: A survey of methods available for the numerical optimization of continuous dynamical systems. J. Optim. Theory Appl. 152, 271–306 (2012)
Curtis, H.D.: Orbital Mechanics for Engineering Students, pp. 147–169, 701–702. Elsevier, Kidlington, Oxford, U.K. (2014)
Edelbaum, T.N.: The use of high- and low-thrust propulsion in combination for space missions. J. Astronaut. Sci. 9, 58–59 (1962)
Eckenwiler, M.W.: Closed-form Lagrangian multipliers for coast periods of optimum trajectories. AIAA J. 3(6), 1149–1151 (1965)
Ferrier, C., Epenoy, R.: Optimal control for engines with electro-ionic propulsion under constraint of eclipse. Acta Astronaut. 48(4), 181–192 (2001)
Gao, Y.: Near-optimal very low-thrust earth orbit transfers and guidance schemes. J. Guid. Control. Dyn. 30(2), 529–539 (2007)
Geoffrey, S., Epenoy, R.: Optimal low-thrust transfers with constraints – generalization of averaging techniques. Acta Astronaut. 48(4), 181–192 (2001)
Glandorf, D.R.: Lagrange multipliers and the state transition matrix for coasting arcs. AIAA J. 7(2), 363–365 (1968)
Graham, K.F., Rao, A.V.: Minimum-time trajectory optimization of low-thrust earth-orbit transfers with eclipsing. J. Guid. Control. Dyn. 53(2), 289–303 (2016)
Hempel, P.R.: Representation of the Lagrangian multipliers for coast periods of optimum trajectories. AIAA J. 4(4), 729–730 (1966)
Hildebrand, F.B.: Introduction to Numerical Analysis, pp. 578–583. Dover, Mineola, NY (1987)
Hull, D.G.: Optimal Control Theory for Applications, pp. 95–100, 142–143, 221–223. Springer, New York, NY (2003)
Kéchichian, J.A.: Optimal low-thrust rendezvous using equinoctial orbit elements. Acta Astronaut. 38(1), 1–14 (1996)
Kéchichian, J.A.: Optimal low-thrust transfer in general circular orbit using analytic averaging of the system dynamics. J. Astronaut. Sci. 57, 369–392 (2009)
Kéchichian, J.A.: Mathematics of attitude-constrained optimal low-thrust orbit transfer. Aerotecnica Missili Spazio 96(4), 180–194 (2017)
Kéchichian, J.A.: Orbit raising with low-thrust tangential acceleration in presence of earth shadow. J. Spacecr. Rocket. 35(4), 516–525 (1998)
Kéchichian, J.A.: Low-thrust inclination control in presence of earth shadow. J. Spacecr. Rocket. 35(4), 526–532 (1998)
Kluever, C.A., Oleson, S.R.: Direct approach for computing near-optimal low-thrust earth-orbit transfers. J. Spacecr. Rocket. 35(4), 509–515 (1998)
Kluever, C.A.: Using Edelbaum's method to compute low-thrust transfers with earth-shadow eclipses. J. Guid. Control. Dyn. 34(1), 300–303 (2011)
Lawden, D.F.: Optimal Trajectories for Space Navigation. Butterworths, London, U.K. (1963)
Lion, P.M., Handelsman, M.: Primer vector on fixed-time impulsive trajectories. AIAA J. 6(1), 127–132 (1968)
Longuski, J.M., Guzman, J.J., Prussing, J.E.: Optimal Control with Aerospace Applications. Springer, New York (2014)
Mazzini, L.: Finite thrust orbital transfers. Acta Astronaut. 100, 107–128 (2014)
Pan, B., Lu, P., Chen, Z.: Three-dimensional closed-form costate solutions in optimal coast. Acta Astronaut. 77, 156–166 (2012)
Petropoulos, A.E.: Simple control laws for low-thrust orbit transfers. In: AIAA/AAS Astrodynamics Specialist Conference, Big Sky, MT (2003)
Petropoulos, A.E.: Low-thrust orbit transfers using candidate Lyapunov functions with a mechanism for coasting. In: AIAA/AAS Astrodynamics Specialist Conference and Exhibit, Providence, RI (2004)
Pontani, M., Conway, B.A.: Optimal low-thrust orbital maneuvers via indirect swarming method. J. Optim. Theory Appl. 162, 272–292 (2014)
Pontani, M.: Optimal low-thrust trajectories using nonsingular equinoctial orbit elements. Adv. Astronaut. Sci. 170, 443–462 (2020)
Pontani, M.: Metodi di Ottimizzazione Globale delle Traiettorie Aerospaziali, pp. 10–17. Ph.D. dissertation, Sapienza Università di Roma (2008)
Prussing, J.E.: Primer vector theory and applications. In: Conway, B. (ed.) Spacecraft Trajectory Optimization, pp. 16–36. Cambridge University Press, New York (2001)
Prussing, J.E., Conway, B.A.: Orbital Mechanics. Oxford University Press, New York (2013)
Rao, A.V.: A survey of numerical methods for optimal control. Adv. Astronaut. Sci. 135 (2010); paper AAS 09-334
Rathsman, P., Kugelberg, J., Bodin, P., Racca, G.D., Foing, B., Stagnaro, L.: SMART-1: development and lessons learnt. Acta Astronaut. 57(2–8), 455–468 (2005)
Rayman, M.D., Chadbourne, P.A., Culwell, J.S., Williams, S.N.: Mission design for deep space 1: a low-thrust technology validation mission. Acta Astronaut. 45(4–9), 381–388 (1999)
Robbins, H.M.: An analytical study of the impulsive approximation. AIAA J. 4(8), 1417–1423 (1966)
Ross, I.M., Gong, Q., Sekhavat, P.: Low-thrust, high-accuracy trajectory optimization. J. Guid. Control. Dyn. 30(4), 921–933 (2007)
Singh, S.K., Taheri, E., Woollands, R., Junkins, J.: Mission design for close-range lunar mapping by quasi-frozen orbits. In: Proceedings of the 70th International Astronautical Congress, Washington, D.C. (2019); paper IAC-19-C1.1.11
Sutton, G.P., Biblarz, O.: Rocket Propulsion Elements, pp. 660–710. John Wiley and Sons, New York, NY (2001)
Taheri, E., Junkins, J.L.: How many impulses redux. J. Astronaut. Sci. 67, 257–334 (2020)
Taheri, E., Junkins, J.L.: Generic smoothing for optimal bang-off-bang spacecraft maneuvers. J. Guid. Control. Dyn. 41(11), 2467–2472 (2018)
Open access funding provided by Università degli Studi di Roma La Sapienza within the CRUI-CARE Agreement.
Department of Astronautical, Electrical, and Energy Engineering, Sapienza University of Rome, Rome, Italy
Mauro Pontani
Correspondence to Mauro Pontani. Communicated by: Francesco Topputo.
Appendix: First Differential of the Extended Objective Function for Multiple-Arc Problems
This Appendix is concerned with the derivation of the first differential of the extended objective function (25), in the context of the multiple-arc optimization problem addressed in Sect. 3. The dynamical system of interest is governed by state equations of the form (19) and is subject to the boundary conditions (17). As a first step, the first differential \(d\Phi\) of the scalar term \(\Phi\) is obtained, using the chain rule, $$ \begin{aligned} d\Phi =& \frac{\partial \Phi }{{\partial {\mathbf{x}}_{0} }}d{\mathbf{x}}_{0} + \frac{\partial \Phi }{{\partial {\mathbf{x}}_{f} }}d{\mathbf{x}}_{f} + \frac{\partial \Phi }{{\partial t_{f} }}dt_{f} + \sum\limits_{j = 1}^{N - 1} {\left[ {\frac{\partial \Phi }{{\partial {\mathbf{x}}_{ini}^{{\left( {j + 1} \right)}} }}d{\mathbf{x}}_{ini}^{{\left( {j + 1} \right)}} + \frac{\partial \Phi }{{\partial {\mathbf{x}}_{fin}^{\left( j \right)} }}d{\mathbf{x}}_{fin}^{\left( j \right)} } \right]} + \sum\limits_{j = 1}^{N - 1} {\frac{\partial \Phi }{{\partial t_{j} }}dt_{j} } \hfill \\& \, + \frac{\partial \Phi }{{\partial {{\varvec{\upsigma}}}}}d{{\varvec{\upsigma}}} + \sum\limits_{j = 1}^{N - 1} {\left[ {\frac{\partial \Phi }{{\partial {{\varvec{\upupsilon}}}_{j} }}d{{\varvec{\upupsilon}}}_{j} + \frac{\partial \Phi }{{\partial \xi_{j} }}d\xi_{j} } \right]} \hfill \\ \end{aligned} $$ However, all the terms in the second row of Eq. (97) vanish. In fact, due to Eq.
(24), $$ \frac{\partial \Phi }{{\partial {{\varvec{\upsigma}}}}} = {{\varvec{\upzeta}}}^{T} = {\varvec{0}}^{T} {, }\frac{\partial \Phi }{{\partial {{\varvec{\upupsilon}}}_{j} }} = \left( {{\mathbf{x}}_{ini}^{{\left( {j + 1} \right)}} - {\mathbf{x}}_{fin}^{\left( j \right)} } \right)^{T} = {\varvec{0}}^{T} , \, \frac{\partial \Phi }{{\partial \xi_{j} }} = \psi^{\left( j \right)} \left( {{\mathbf{x}}_{fin}^{\left( j \right)} ,t_{j} } \right) = 0 \, \left( {j = 1, \ldots ,N - 1} \right) $$ As a second step, the differential of the remaining terms of (25) is derived, using the general relation for the differential of an integral [22], $$ \begin{aligned} d\left\{ {\sum\limits_{j = 1}^{N} {\int\limits_{{t_{j - 1} }}^{{t_{j} }} {\left[ {H^{\left( j \right)} - {{\varvec{\uplambda}}}^{\left( j \right)T} {\dot{\mathbf{x}}}^{\left( j \right)} } \right]dt} } } \right\} = \sum\limits_{j = 1}^{N - 1} {\left[ {H_{fin}^{\left( j \right)} dt_{j} - H_{ini}^{{\left( {j + 1} \right)}} dt_{j} - {{\varvec{\uplambda}}}_{fin}^{\left( j \right)T} {\dot{\mathbf{x}}}_{fin}^{\left( j \right)} dt_{j} + {{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)T}} {\dot{\mathbf{x}}}_{ini}^{{\left( {j + 1} \right)}} dt_{j} } \right]} + H_{f} dt_{f} \hfill \\ \, - {{\varvec{\uplambda}}}_{f}^{T} {\dot{\mathbf{x}}}_{f}^{{}} dt_{f} + \sum\limits_{j = 1}^{N} {\int\limits_{{t_{j - 1} }}^{{t_{j} }} {\left[ {\frac{{\partial H^{\left( j \right)} }}{{\partial {\mathbf{x}}^{\left( j \right)} }}\delta {\mathbf{x}}^{\left( j \right)} + \frac{{\partial H^{\left( j \right)} }}{{\partial {\mathbf{u}}^{\left( j \right)} }}\delta {\mathbf{u}}^{\left( j \right)} - {{\varvec{\uplambda}}}^{\left( j \right)T} \delta {\dot{\mathbf{x}}}^{\left( j \right)} } \right]dt + } } \sum\limits_{j = 1}^{N} {\int\limits_{{t_{j - 1} }}^{{t_{j} }} {\left[ {\frac{{\partial H^{\left( j \right)} }}{{\partial {{\varvec{\uplambda}}}^{\left( j \right)} }}\delta {{\varvec{\uplambda}}}^{\left( j \right)} - {\dot{\mathbf{x}}}^{\left( j \right)T} \delta {{\varvec{\uplambda}}}^{\left( j \right)} } \right]dt} } \hfill \\ \end{aligned} $$ where the symbol \(\delta\) denotes the variation, i.e. the time-fixed differential [22]. However, the last term in the second row of Eq. (99) vanishes because, due to Eq. (19), $$ \left[ {\frac{{\partial H^{\left( j \right)} }}{{\partial {{\varvec{\uplambda}}}^{\left( j \right)} }} - {\dot{\mathbf{x}}}^{\left( j \right)T} } \right]\delta {{\varvec{\uplambda}}}^{\left( j \right)} = \left[ {{\mathbf{f}}^{\left( j \right)T} - {\dot{\mathbf{x}}}^{\left( j \right)T} } \right]\delta {{\varvec{\uplambda}}}^{\left( j \right)} = 0 $$ In Eq. (99) the following term is integrated by parts: $$ \sum\limits_{j = 1}^{N} {\int\limits_{{t_{j - 1} }}^{{t_{j} }} {\left[ { - {{\varvec{\uplambda}}}^{\left( j \right)T} \delta {\dot{\mathbf{x}}}^{\left( j \right)} } \right]dt} } = \sum\limits_{j = 1}^{N} {\int\limits_{{t_{j - 1} }}^{{t_{j} }} {\left[ {{\dot{\mathbf{\lambda }}}^{\left( j \right)T} \delta {\mathbf{x}}^{\left( j \right)} } \right]dt} } - \sum\limits_{j = 1}^{N - 1} {{{\varvec{\uplambda}}}_{fin}^{\left( j \right)T} \delta {\mathbf{x}}_{fin}^{\left( j \right)} } - {{\varvec{\uplambda}}}_{f}^{T} \delta {\mathbf{x}}_{f}^{{}} + {{\varvec{\uplambda}}}_{0}^{T} \delta {\mathbf{x}}_{0}^{{}} + \sum\limits_{j = 1}^{N - 1} {{{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)T}} \delta {\mathbf{x}}_{ini}^{{\left( {j + 1} \right)}} } $$ The relation \(d\eta = \delta \eta + \dot{\eta }dt\) (where \(\eta\) represents a generic variable) [22] is used in Eq.
(101), then the resulting expression is inserted into Eq. (99), to yield $$ \begin{gathered} d\left\{ {\sum\limits_{j = 1}^{N} {\int\limits_{{t_{j - 1} }}^{{t_{j} }} {\left[ {H^{\left( j \right)} - {{\varvec{\uplambda}}}^{\left( j \right)T} {\dot{\mathbf{x}}}^{\left( j \right)} } \right]dt} } } \right\} = \sum\limits_{j = 1}^{N - 1} {\left[ {H_{fin}^{\left( j \right)} - H_{ini}^{{\left( {j + 1} \right)}} } \right]dt_{j} } + H_{f} dt_{f} - \sum\limits_{j = 1}^{N - 1} {{{\varvec{\uplambda}}}_{fin}^{\left( j \right)T} d{\mathbf{x}}_{fin}^{\left( j \right)} } \hfill \\ \, + \sum\limits_{j = 1}^{N} {\int\limits_{{t_{j - 1} }}^{{t_{j} }} {\left[ {\frac{{\partial H^{\left( j \right)} }}{{\partial {\mathbf{x}}^{\left( j \right)} }}\delta {\mathbf{x}}^{\left( j \right)} + \frac{{\partial H^{\left( j \right)} }}{{\partial {\mathbf{u}}^{\left( j \right)} }}\delta {\mathbf{u}}^{\left( j \right)} + {\dot{\mathbf{\lambda }}}^{\left( j \right)T} \delta {\mathbf{x}}^{\left( j \right)} } \right]dt} } - {{\varvec{\uplambda}}}_{f}^{T} d{\mathbf{x}}_{f}^{{}} + {{\varvec{\uplambda}}}_{0}^{T} d{\mathbf{x}}_{0}^{{}} + \sum\limits_{j = 1}^{N - 1} {{{\varvec{\uplambda}}}_{ini}^{{\left( {j + 1} \right)T}} d{\mathbf{x}}_{ini}^{{\left( {j + 1} \right)}} } \hfill \\ \end{gathered} $$ Finally, summation of the two right-hand sides of Eqs. (97) and (102) (while holding Eqs. (98) and (100)) leads to the final expression of the first differential \(d\overline{J}\), i.e. Eq. (26).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Pontani, M.: Optimal Space Trajectories with Multiple Coast Arcs Using Modified Equinoctial Elements. J. Optim. Theory Appl. 191, 545–574 (2021). https://doi.org/10.1007/s10957-021-01867-2. Issue Date: December 2021.
Keywords: Optimal space trajectories; Minimum-time trajectories; Low-thrust trajectories; Coast arcs; Spacecraft eclipsing; Modified equinoctial elements
Computing the size of the observable Universe - Cosmological Horizon and Angular Diameter Distance

1.Introduction
2.Equations and Solving
3.Results for Cosmological Horizon
4.Results for Angular Diameter Distance
5.Paradox and Interpretation
6.Lookback time
7.Matlab script

1.Introduction :

Within the framework of the expanding Universe models, we are going to answer the following question using a relatively simple calculation: at this moment, what is the distance of the farthest object whose light has had time to reach us since the beginning of the Universe? This imaginary spatial limit is called the cosmological horizon or particle horizon (not to be confused with the event horizon). Any event that is now occurring, or has already occurred, at a point beyond this horizon cannot (or cannot yet) be observed by us. This is illustrated in the following figure:

Figure 1 : Representation of the cosmological horizon in an expanding Universe

This allows us to determine the radius of the observable Universe, which is none other than the comoving distance to the cosmological horizon.

2.Equations and Solving :

To calculate this distance, we must locate the points at which light is emitted and received by their comoving spatial coordinates. We select our own position as the receiving point ($r=0$) and suppose that the angular coordinates of these two points are zero. By using the FLRW metric, one can obtain the equation for the trajectory of the ray of light emitted at $t_{1}$ from point $r_{1}$ and reaching us at $t_{0}$; this is a null geodesic:
\begin{equation} c^{2}\text{d}t^{2}-R(t)^{2}\dfrac{\text{d}r^{2}}{1-kr^{2}}=0 \label{eq1} \end{equation}
One thus obtains the source's $r_{1}$ coordinate:
\begin{equation} {\large\int}_{t_{1}}^{t_{0}}\dfrac{c\text{d}t}{R(t)}={\large\int}_{0}^{r_{1}}\dfrac{\text{d}r}{\sqrt{1-kr^{2}}}=S_{k}^{-1}(r_{1}) \label{eq2} \end{equation}
where:
\begin{eqnarray} S_{k}^{-1}(r_{1})= \left\{ \begin{aligned} &\,\,\,\, \text{arcsin}(r_{1})\,\,\,\, \text{if}\,\, k=+1 \\ \\ &\,\,\,\, r_{1}\,\,\,\, \text{if}\,\, k=0 \\ \\ &\,\,\,\, \text{arcsinh}(r_{1})\,\,\,\, \text{if}\,\,k=-1 \end{aligned} \right. \label{eq3} \end{eqnarray}
Let us now examine the evolution of the Universe. Two primary types of model exist: those which consider the present age of the Universe (or at least the age of its present phase of expansion) to be finite, and those that consider the Universe to be infinitely old. In other words, for the first type (called Big-Bang models), the history of the Universe can be described as beginning at an initial instant chosen as zero on the time scale. In contrast, history according to infinite models precludes any consideration of an initial instant in time. Let us examine the case of Big-Bang models. Since their history of the Universe begins at $t=0$, they impose the following constraint: $t_{1} > t_{min}=0$. Applying this constraint to equation \eqref{eq2}, does it imply that a constraint is imposed on the spatial coordinate? This all depends on whether the following integral converges:
\begin{equation} {\large\int}_{0}^{t_{0}}\dfrac{c\text{d}t}{R(t)} \label{eq4} \end{equation}
If it converges, the cosmological horizon's value can be defined.
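As a concrete illustration of this convergence criterion (a standard textbook check, added here and not part of the original page), consider a flat, matter-dominated (Einstein-de Sitter) universe, for which $R(t)=R_{0}(t/t_{0})^{2/3}$:
\begin{equation*} {\large\int}_{0}^{t_{0}}\dfrac{c\,\text{d}t}{R(t)}=\dfrac{c\,t_{0}^{2/3}}{R_{0}}{\large\int}_{0}^{t_{0}}t^{-2/3}\,\text{d}t=\dfrac{3ct_{0}}{R_{0}} \end{equation*}
The integral converges even though $R(t)\rightarrow 0$ at the origin, so the horizon exists: since $k=0$ gives $S_{k}^{-1}(r_{1})=r_{1}$, it lies at the finite comoving distance $D_{h}=R_{0}r_{1}=3ct_{0}$.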
We are now going to express this integral in another form by replacing the time variable with redshift $z$:
\begin{eqnarray} \left\{ \begin{aligned} &\,\,\,\, \text{d}t=\text{d}z\dfrac{\text{d}t}{\text{d}z}\\ \\ &\,\,\,\, 1+z=\dfrac{R_{0}}{R(t)}\,\,\Rightarrow\,\,\text{d}t=-\dfrac{\text{d}z}{(1+z)H(z)} \\ \\ &\,\,\,\, H(t)=\dfrac{R'(t)}{R(t)} \end{aligned} \right. \label{eq5} \end{eqnarray}
The term $H(z)$, representing the expansion rate as a function of redshift $z$, is determined by Friedmann's equation. One can thus express it in this form:
\begin{equation} H(z)=H_{0}\big[\Omega^{0}_{m}(1+z)^{3}+\Omega^{0}_{r}(1+z)^{4}+\Omega^{0}_{\Lambda}+\Omega^{0}_{k}(1+z)^{2}\big]^{1/2} \label{eq6} \end{equation}
In the following calculations, we use the $\Omega^{0}_{k}$ variable, which is determined by the $\Omega^{0}_{m}$, $\Omega^{0}_{r}$ and $\Omega^{0}_{\Lambda}$ variables according to:
\begin{equation} \Omega^{0}_{k}=1-\Omega^{0}_{m}-\Omega^{0}_{r}-\Omega^{0}_{\Lambda}\,\,\,\text{with}\,\,\,\Omega^{0}_{k}=-\dfrac{kc^{2}}{H_{0}^{2}R_{0}^{2}} \label{eq7} \end{equation}
Although $\Omega^{0}_{r}$ is nearly negligible ($\simeq 10^{-4}$), it is taken into account when computing $\Omega^{0}_{k}$. Starting from the definition of the cosmological horizon:
\begin{equation} D_{h}=R_{0}r_{1} \label{eq8} \end{equation}
one can finally express this distance for the three models of the Universe, with $z_{e}$ the emission redshift:

Hyperbolic Universe : $\Omega^{0}_{k} > 0$
\begin{equation} D_{h}=\dfrac{c}{H_{0}\sqrt{\Omega^{0}_{k}}}\text{sinh}\Bigg[\sqrt{\Omega^{0}_{k}}{\large\int}_{0}^{z_{e}} \dfrac{\text{d}z}{\big[\Omega^{0}_{m}(1+z)^{3}+\Omega^{0}_{r}(1+z)^{4}+\Omega^{0}_{\Lambda}+\Omega^{0}_{k}(1+z)^{2}\big]^{1/2}}\Bigg] \label{eq9} \end{equation}
Euclidean Universe : $\Omega^{0}_{k} = 0$
\begin{equation} D_{h}=\dfrac{c}{H_{0}}{\large\int}_{0}^{z_{e}}\dfrac{\text{d}z}{\big[\Omega^{0}_{m}(1+z)^{3}+\Omega^{0}_{r}(1+z)^{4}+\Omega^{0}_{\Lambda}\big]^{1/2}} \label{eq10} \end{equation}
Spherical Universe : $\Omega^{0}_{k} < 0$
\begin{equation} D_{h}=\dfrac{c}{H_{0}\sqrt{|\Omega^{0}_{k}|}}\text{sin}\Bigg[\sqrt{|\Omega^{0}_{k}|}{\large\int}_{0}^{z_{e}}\dfrac{\text{d}z}{\big[\Omega^{0}_{m}(1+z)^{3}+\Omega^{0}_{r}(1+z)^{4}+\Omega^{0}_{\Lambda}+\Omega^{0}_{k}(1+z)^{2}\big]^{1/2}}\Bigg] \label{eq11} \end{equation}
We are going to calculate these three integrals numerically using Matlab's integral function, together with arrayfun to evaluate them over a 1D array of redshift values. Note that c, H0 and the Omega*_* constants must be set before the anonymous handles are created, since anonymous functions capture workspace values at definition time:

% Redshift array
z_begin=0; z_final=1100; z_step=0.01;
z=z_begin:z_step:z_final;

% Functions for the 3 cases (the dummy integration variable zp avoids
% shadowing the outer argument x)
integralFunc1=@(x) c/(H0*sqrt(Omega1_k))*sinh(sqrt(Omega1_k)*integral(@(zp)myfunc(zp,Omega1_m,Omega1_r,Omega1_l,Omega1_k),z_begin,x));
integralFunc2=@(x) c/H0*integral(@(zp)myfunc(zp,Omega2_m,Omega2_r,Omega2_l,Omega2_k),z_begin,x);
integralFunc3=@(x) c/(H0*sqrt(abs(Omega3_k)))*sin(sqrt(abs(Omega3_k))*integral(@(zp)myfunc(zp,Omega3_m,Omega3_r,Omega3_l,Omega3_k),z_begin,x));

% Compute cosmological horizon for the 3 cases
dc1=arrayfun(integralFunc1,z);
dc2=arrayfun(integralFunc2,z);
dc3=arrayfun(integralFunc3,z);

% Function to integrate: 1/E(z), the inverse dimensionless expansion rate
function y = myfunc(x,Omega_m,Omega_r,Omega_l,Omega_k)
y=(Omega_m*(1+x).^(3)+Omega_r*(1+x).^(4)+Omega_l+Omega_k*(1+x).^(2)).^(-1/2);
end

In the program, we set $z_{e}$ equal to $1100$, since this corresponds to the time when the decoupling of matter and radiation occurred.
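For instance, evaluating the flat-model handle at the decoupling redshift gives the radius quoted in the next section. This is a minimal usage sketch: the parameter values below are the flat ΛCDM numbers used on this page and the page's estimate $\Omega^{0}_{r}\simeq 10^{-4}$, and they must appear before integralFunc2 is defined:

% Parameter block for the flat (Euclidean) model
c = 299792.458;                  % speed of light [km/s]
H0 = 71;                         % Hubble constant [km/s/Mpc]
Omega2_m = 0.3; Omega2_r = 1e-4; % matter and radiation today
Omega2_l = 0.7; Omega2_k = 0;    % Omega_k set to 0 (eq10 ignores it anyway)
% ... define integralFunc2 as above, then:
Dh = integralFunc2(1100)         % ~1.34e4 Mpc, i.e. ~43.8 Gly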
3.Results for Cosmological Horizon :

Here is the graph obtained from the program:

Figure 2 : Cosmological horizon value with $z_{e}=1100$ for the 3 models

The red curve represents the cosmological parameters ($\Omega^{0}_{m}=0.3$, $\Omega^{0}_{\Lambda}\simeq 0.7$ and so $\Omega^{0}_{k} = 0$, i.e. flat space) that correspond to the standard model ($\Lambda\text{CDM}$). By converting into light years ($1\,\text{pc} = 3.26\,\text{ly}$), we get, with $z_{e}=1100$ and $H_{0}=71\,\text{km/s/Mpc}$, a radius of the observable Universe equal to $R = 1.343\times10^4\,\text{Mpc} = 43.8\,\text{Gly}$, or roughly $\mathbf{44}$ billion light years.

4.Results for Angular Diameter Distance :

The angular diameter distance is defined as the distance of a galaxy at the time $t_{1}$ when it emitted the light ray that we receive today at $t_{0}$. In other words, its expression is:
\begin{equation} D_{a}=R_{1}r_{1}=\dfrac{R_{1}}{R_{0}}\,R_{0}r_{1}=\dfrac{D_{h}}{1+z} \label{eq12} \end{equation}
Once the cosmological horizon has been computed, one simply divides $D_{h}$ by the factor $(1+z)$, itself defined as the ratio of $R_{0}$ to $R_{1}$.

Figure 3 : Angular Diameter Distance as a function of Redshift

5.Paradox and Interpretation :

Concerning the angular diameter distance $D_{a}$, one can see a paradox: if we take two galaxies of redshifts $z_{1}$ and $z_{2}$ such that $z_{2} > z_{1}$, then the one that is the most distant today ($\text{galaxy}_{2}$) will appear larger in the sky than the one which is currently the least distant ($\text{galaxy}_{1}$). This is because, beyond the values of $z_{max}$ given in the figure above, the cosmological horizon grows more slowly than the factor $(1+z)$, which implies a decreasing $D_{a}$ for $z > z_{max}$ in each of the 3 models.

6.Lookback time :

Lookback time estimates the moment at which an astronomical object emitted the light currently received by the observer (i.e. our galaxy). The time origin "0" is defined as today's cosmic time (redshift $z=0$). This time can also be described as the difference between the present age of the Universe and its age when the object emitted the light, or as the travel time of the photon. Its general expression is:
\begin{equation} T(z)=\dfrac{1}{H_{0}}{\large\int}_{0}^{z}\dfrac{\text{d}z'}{(1+z')\big[\Omega^{0}_{m}(1+z')^{3}+\Omega^{0}_{r}(1+z')^{4}+\Omega^{0}_{\Lambda}+\Omega^{0}_{k}(1+z')^{2}\big]^{1/2}} \label{eq13} \end{equation}
For example, an object having a lookback time of 10 Gyr means that its light has travelled for 10 Gyr before reaching us, i.e. the emission of its light happened 10 Gyr ago. According to the figure below, a lookback time of 10 Gyr corresponds to a redshift of about 2 in the ΛCDM model ($\Omega^{0}_{m}=0.3$ and $\Omega^{0}_{\Lambda}=0.7$).

Figure 4 : Lookback time as a function of Redshift

In practice, when an observer receives light from an object and computes its redshift, figure 4 above gives the corresponding lookback time; the observer then knows that the object is seen as it was at that moment in the past.

7.Matlab script :

Matlab source of this project: cosmological_distances.m

PS: join, like me, the Cosmology@Home project, whose aim is to refine the model that best describes our Universe.
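As a quick numerical check of the 10 Gyr to $z \simeq 2$ correspondence of Sect. 6 (a minimal sketch in the spirit of the script above, not part of the original source; the unit-conversion constants are our own):

% Lookback time for the flat Lambda-CDM model (Omega_k = 0, Omega_r neglected)
H0 = 71;                                 % Hubble constant [km/s/Mpc]
Omega_m = 0.3; Omega_l = 0.7;
km_per_Mpc = 3.0857e19;                  % kilometres per megaparsec
s_per_Gyr  = 3.1557e16;                  % seconds per gigayear
invH0_Gyr = (km_per_Mpc/H0)/s_per_Gyr;   % 1/H0 in Gyr (~13.8 Gyr)
f = @(zp) 1./((1+zp).*sqrt(Omega_m*(1+zp).^3+Omega_l));   % integrand of eq13
T = @(z) invH0_Gyr*integral(f,0,z);
T(2)                                     % ~10 Gyr, consistent with figure 4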
Autapses promote synchronization in neuronal networks

Huawei Fan1, Yafeng Wang1, Hengtong Wang1, Ying-Cheng Lai1,2 & Xingang Wang1 (ORCID: 0000-0002-6851-0109)

Scientific Reports volume 8, Article number: 580 (2018). Published: 12 January 2018

Subjects: Complex networks; Nonlinear phenomena

Neurological disorders such as epileptic seizures are believed to be caused by neuronal synchrony. However, to ascertain the causal role of neuronal synchronization in such diseases through the traditional approach of electrophysiological data analysis remains a controversial, challenging, and outstanding problem. We offer an alternative principle to assess the physiological role of neuronal synchrony based on identifying structural anomalies in the underlying network and studying their impacts on the collective dynamics. In particular, we focus on autapses - time-delayed self-feedback links that exist on a small fraction of neurons in the network - and investigate their impacts on network synchronization through a detailed stability analysis. Our main finding is that the proper placement of a small number of autapses in the network can promote synchronization significantly, providing the computational and theoretical bases for hypothesizing a high degree of synchrony in real neuronal networks with autapses. Our result that autapses, the shortest possible links in any network, can effectively modulate the collective dynamics also provides a viable strategy for optimal control of complex network dynamics at minimal cost.

Introduction

An autapse is a special synapse that connects the axon and dendrites of the same neuron. Autapses were first discovered in 1972 in the pyramidal cell of neocortex cerebri (Golgi preparations of rabbit occipital cortex)1. Because of their aberrant and incestuous structure, autapses were originally regarded as "a wiring error" in the development of the brain and therefore were conceived as playing no actual role in the biological functions of the underlying neural network. Twenty-five years later, autapses were found commonly in the neural circuits of the cat's visual cortex2, suggesting that autapses may actually have certain biological usage3,4. Signs of the ubiquity of autapses emerged subsequently, when they were discovered in multiple areas of the brain such as the neocortex, hippocampus, cerebellum, substantia nigra, and striatum5,6,7,8. There was experimental evidence that autapses are sparse and are present only on a small fraction of the neurons in the underlying brain neuronal network5,6. With respect to the functional role of autapses, significant improvement in the spike-time precision of the neocortical interneurons was found in the presence of inhibitory autapses6, and excitatory autapses were found to be essential to persistent activities of certain neurons of Aplysia8. In theoretical and computational studies, issues that have been addressed concerning the roles of autapses include: persistence in recurrent neural networks9, generation of oscillatory behavior of a single neuron10, switching among distinct dynamical states (e.g., quiescent, periodic or chaotic)11, induction of wave patterns in a regular network of neurons12, signal detection in stochastic neurons13, emergence of coherence resonance in single neurons and in neural networks14, and promotion of rhythmic propagation in neuronal networks15.
A fundamental issue in complex networks with significant implications for biology and physiology is synchronization16, a ubiquitous phenomenon in natural systems17. A widely known example is epilepsy, a disorder characterized by seizures, which affects over 50 million people worldwide. The classical understanding is that seizures are associated with hypersynchrony18. This principle, however, has been challenged19,20,21,22, and the dynamical origin of epilepsy in relation to synchrony has been continuously debated23,24. As a matter of fact, the lingering and unanswered question in this field of medicine is whether epileptic seizures are associated with an increased or a decreased level of neuronal synchrony. To a certain extent, this question may be addressed through the approach of data analysis: by analyzing EEG (electroencephalogram) or ECoG (electrocorticogram) data recorded from epileptic patients and calculating measures of synchrony, one hopes to be able to assess the possible causal role of synchronization in triggering a seizure. Indeed, there were previous efforts in developing seizure detection and characterization frameworks based on partial synchrony among multichannel brain data, with the finding that, depending on the type of seizures, there can be either an enhancement or a reduction in the degree of synchrony25,26,27. The data-based approach has one deficiency: it reveals no information about the interplay between seizure and synchrony at the localized neuronal network level, as the EEG or ECoG data reflect only the collective neuronal activities at much larger scales. This leads to a related question: suppose epileptic seizures are indeed associated with synchronization (either enhanced or reduced) in small-scale neuronal networks, what characteristic structural features do the networks possess to promote or suppress synchrony? Identification of unusual and unconventional features in the seizure network structure which predominantly affect synchronization can potentially lead to a deep understanding of the interplay between seizure and synchrony. In this paper, we develop the computational and theoretical foundation for hypothesizing that autapses associated with single neurons are capable of significantly modulating synchrony at the network level. In particular, implementing a widely studied nonlinear neuron model on complex networks of different topologies, we assume the existence of autapses on a small fraction of the neurons and investigate quantitatively how global synchronization of the network is affected by the locations, strength, and time delays of the autapses. We find that, in general, the presence of a sparse set of autapses can increase dramatically the odds for the whole network to achieve global synchronization. We develop a criterion that allows us to determine, for a given set of autapses, their optimal locations in the network to maximize synchrony. The implication is that autapses can serve as an effective structural indicator at the single neuron level for anticipating synchronization in networks that contain such neurons. In epilepsy, for example, there are distinct types of seizures that are associated with neuronal networks in different regions of the brain. If it is possible to examine the structural details of the representative neurons in such a network, the presence of autapses would imply a higher probability for synchronization and, consequently, a more appreciable likelihood that synchrony is the culprit of the corresponding seizure.
This may provide insights into a possible resolution of the interplay and causal relation between synchrony and certain types of seizures. With the advance of biotechnologies, we hope that it would be possible to test our hypothesis that autapses are correlated with synchrony through structural examination at the single neuron level and dynamics characterization at the network level. In a broader perspective, our finding that autapses, the shortest possible links in any network, are able to modulate the collective dynamics might provide an alternative approach to controlling the dynamics of complex networks, especially for situations where long-distance links are costly and infeasible.

Understanding the role of autapse in promoting neuronal synchronization through a toy network model

To demonstrate that autapses can promote global synchronization in neuronal networks, we start with a toy model consisting of four coupled neurons (Fig. 1). Mathematically, an autapse can be modeled as a self-loop in the network with a time delay τ (see Methods). As shown in Fig. 1(a1), in the absence of any autapse, the network has four nodes and four mutual links (edges). The coupling strength associated with each edge is determined by the parameter ε normalized by the nodal degree (including the autapse) (see Methods). We assume that each node corresponds to an idealized neuron, whose dynamics are identical and described by the Hindmarsh-Rose (HR) oscillator28,29,30,31,32:
$$ \begin{aligned} \dot{x} &= y + \varphi(x) - z + I, \\ \dot{y} &= \psi(x) - y, \\ \dot{z} &= r[s(x - x_R) - z], \end{aligned} $$
$$ \begin{aligned} \varphi(x) &= -ax^3 + bx^2, \\ \psi(x) &= c - dx^2, \end{aligned} $$
where x is the membrane potential, y and z represent the transport rates of the fast and slow ion channels, corresponding to the spiking and bursting variables, respectively, and I is the external stimulating current that determines the nature of the isolated neuronal dynamics, e.g., periodic or chaotic. To appreciate the ability of autapses to promote synchronization for complicated dynamics, we choose the parameters of the HR neuron so that it exhibits a chaotic attractor (see Methods).

The impact of a single autapse on synchronization in a toy neuronal network. (a1) Without any autapse, the network has four nodes and four edges, where each node is a Hindmarsh-Rose neuron. (b1–d1) Network structure when a single autapse (represented by the red curve with an arrow) is present at node 1, 2, and 4, respectively. (a2–d2) For the network structures in (a1–d1), respectively, the dynamical behaviors of the network in terms of synchronization. Shown in each panel is a plot of the x variable from each node versus the averaged value of this variable over all the nodes during the time evolution. When there is global synchronization, all the variables are equal to their average value at any instant of time, tracing out a straight line segment along the diagonal. Any deviation from the diagonal signifies lack of synchronization. The uniform coupling parameter is ε = 1 and the time delay associated with the autapse is τ = 4.

A single autapse can be present at any node, and there are three distinct cases, as shown in Fig. 1(b1–d1), respectively (due to the symmetry in the network structure, an autapse on node 3 is equivalent to one on node 2). To determine if there is global synchronization in the network, we plot one dynamical variable, e.g.
\(x_i(t)\), from each node (for i = 1, …, 4), versus the network-averaged variable defined as \(x_{ave}(t) = \sum_i x_i(t)/4\) for a relatively large time interval. When the network is completely synchronized, we have \(x_i(t) = x_{ave}(t)\) for i = 1, …, 4, so the dynamical trajectory will trace out a straight line segment along the diagonal in the \((x_i, x_{ave})\) plane. Any deviation from the diagonal indicates lack of synchronization in the system. From Fig. 1(a2–d2), we see that, without any autapse (a2), the network is not synchronized. When an autapse is present at node 1 or 4, which has the largest and the smallest degree, respectively, there is global synchrony in the network, as shown in Fig. 1(b2) and (d2), respectively. Adding an autapse at node 2 (or 3) will not result in global synchronization (c2). These results indicate that the presence of a single autapse at a proper location in the network can promote synchronization. Intuitively, from the network synchronization theory33,34,35,36,37,38,39,40,41,42,43,44,45,46,47, the introduction of an autapse is likely to discourage global synchronization because the time delay associated with the autapse effectively makes the dynamics of the neuron hosting the self-loop non-identical to other neurons in the network at which no autapses are present. A previous simulation of an isolated neuron showed11 that, with the addition of an autapse, the neuronal dynamics could be changed drastically, e.g., from chaotic to periodic. If the dynamics of the individual nodes are characteristically different, the synchronization manifold existing for identical nodal dynamics may be destroyed48. The phenomenon of autapse-promoted synchronization, as exemplified in Fig. 1(b2) and (d2), thus seems to be counterintuitive. However, an examination of the dynamical behavior of neuron 1 (or neuron 4) reveals that, even in the presence of an autapse, its trajectory is identical to that without the autapse. This observation suggests that, in spite of the autapse, the network still possesses a global synchronization manifold. In Methods, we provide a heuristic argument for the existence of a synchronization manifold in the presence of an autapse. (It is worth noting that, depending on the time-delay parameter, the dynamics within the synchronization manifold may differ from those of the individual, isolated neurons. For the time-delay parameter used in Fig. 1(b), there are periodic dynamics in the synchronization manifold.) To demonstrate that autapses can promote synchronization in a more quantitative manner, we exploit the concept of master stability function (MSF)49. Provided a networked system possesses the synchronization manifold, the MSF method proposed in 1998 by Pecora and Carroll49 has become the gold standard in the study of synchronization in complex dynamical systems. To appreciate the physical significance of the MSF method, we note that, as the network approaches a global synchronization state, the variables of the nodes approach each other asymptotically due to the mutual entrainment caused by the coupling. The subspace in which the synchronous solution lies is the synchronization manifold, whose dimension is typically much smaller than that of the full phase space. Specifically, let \(\mathbf{x}_s(t)\) be the synchronous state of the network and \(\{\delta\mathbf{x}_i(t)\}\) be the infinitesimal perturbations added to node i at time t. Whether the perturbed system can be restored to the synchronous state can be inferred from the MSF49.
$$ \delta\dot{\mathbf{y}} = [D\mathbf{F}(\mathbf{x}_s) + \sigma D\mathbf{H}(\mathbf{x}_s)]\cdot \delta\mathbf{y}, $$
with \(\delta\mathbf{y}\) and \(\sigma = \varepsilon\lambda\) being the perturbation in the eigenmode space and the generic coupling strength, respectively. The quantity ε represents the uniform coupling strength, and \(\{\lambda\}\) is the eigenvalue spectrum of the Laplacian matrix defining the network structure (see Methods). Denoting Λ as the largest Lyapunov exponent calculated from the above equation, we have that the variation of Λ with σ gives the shape of the MSF curve. When the coupling configuration and the synchronous dynamics (i.e., the dynamics of an isolated node) are given, the MSF curve is independent of the particular network structure. If the dynamics in the synchronization manifold are stable with respect to perturbations in the transverse subspace, synchronization can be physically realized. In this case, the value of Λ is negative for all the transverse perturbation modes \(\sigma_i = \varepsilon\lambda_i\) for i = 2, …, N (N is the number of coupled oscillators, and the mode associated with \(\lambda_1 = 1\) describes the motion along the synchronization manifold; see Methods for details). For general nodal dynamics and coupling configurations, the value of Λ is negative only in certain intervals36,49,50 of the generic parameter. As such, for a network of coupled nonlinear oscillators to be synchronizable, the necessary condition is that all the generalized coupling strengths of the transverse modes \(\{\sigma_i\}\) fall entirely within the interval of negative Λ. (A mathematical formulation of the MSF is given in Methods.) We calculate the MSF curve for the small neuronal network in Fig. 1, with or without an autapse at each node, to assess the impact of autapses on synchronization. For a fixed value of ε, the generic coupling parameter σ is directly proportional to the eigenvalue λ of the network coupling matrix. For convenience, we choose ε = 1 and plot Λ versus λ (see Methods). Figure 2(a) shows such a plot for τ = 4 (the same parameter values as in Fig. 1). We have Λ < 0 for \(\lambda \in (\sigma_1, \sigma_2)\) with \(\sigma_1 \approx -0.7\) and \(\sigma_2 \approx 1\). The structure of the small neuronal network gives \(\lambda_1 = 1 = \sigma_2\) and \(\lambda_1 > \lambda_2 \geq \lambda_3 \geq \lambda_4\). Stable synchronization of the network is thus determined solely by the smallest eigenvalue \(\lambda_N\): synchronization is (is not) achievable for \(\lambda_N > \sigma_1\) (\(\lambda_N < \sigma_1\)). Without any autapse [Fig. 1(a1)], we have \(\lambda_4 = -0.73\). Since \(\lambda_4 < \sigma_1\), the network is not synchronizable - in agreement with the numerical result in Fig. 1(a2). When an autapse is present at neuron 1 [Fig. 1(b1)], we have \(\lambda_4 = -0.5 > \sigma_1\), so the network is synchronizable, as indicated by Fig. 1(b2). Similarly, for an autapse at neuron 4 [Fig. 1(d1)], we have \(\lambda_4 = -0.5 > \sigma_1\), so there is synchronization [Fig. 1(d2)]. However, when the single autapse is attached to neuron 2 [Fig. 1(c1)], we have \(\lambda_4 = -0.72 < \sigma_1\), so the network is non-synchronizable, as indicated by Fig. 1(c2).

Master stability functions (MSFs) for the network of four neurons. For the toy neuronal network model in Fig. 1, the stability of the MSF, as characterized by Λ, for different combinations of the system parameters. The synchronization state is stable when Λ < 0 for all the transverse modes, and unstable when Λ > 0 for any of the transverse modes. All results are obtained by solving Eq. (9) numerically (see Methods). (a) For ε = 1 and τ = 4, Λ versus λ.
We see that Λ is negative in the region \(\lambda \in (-0.7, 1)\), indicating that stable synchronization will be achieved when all the transverse modes are located in this parameter interval. (b) For ε = 0.8 and τ = 4, Λ versus λ, where now Λ is negative in three subregions: \(\lambda \in (-0.82, -0.75)\), \(\lambda \in (-0.72, 0.36)\), and \(\lambda \in (0.61, 0.67)\). (c) For fixed ε = 1, contour plots of Λ in the parameter plane (λ, τ), where the blue color indicates parameter regions of stable synchronization (Λ < 0) and the red color specifies the regions where synchronization is unstable or cannot be realized physically (Λ > 0). (d–f) Contour plots of Λ in the parameter planes (λ, ε), (λ, τ), and (λ, ε) for fixed τ = 4, ε = 0.8, and τ = 6, respectively. The general phenomenon is that, through variations of the parameters (λ, τ, and ε), the region of stable synchronization can be readily modified.

In general, when the network structure is given so that all eigenvalues \(\{\lambda_i\}\) (i = 1, …, N) are known, whether stable synchronization can arise is determined by the MSF curve, where the range of the negative values of the MSF, namely the stable region, determines whether all the transverse eigenmodes are stable. For network systems without time delay (the general model studied in the literature), the existence and the size of the stable region depend only on the local nodal dynamics and the coupling function, but not on the coupling strength49,50. For our neuronal network with autapses, however, the stable region depends also on the coupling strength parameter and the time delay. To uncover the effect of these two parameters on the stable region of the MSF curve, we decrease the coupling strength to ε = 0.8 and calculate the corresponding MSF, as shown in Fig. 2(b). Comparing Fig. 2(a) and (b), we see that not only the range but also the shape of the stable synchronization region are modified: the stable region now consists of three separated subregions: (−0.82, −0.75), (−0.72, 0.36), and (0.61, 0.67). To obtain a full picture of the distribution of the stable regions in the three-dimensional parameter space (ε, λ, τ), we show in Fig. 2(c–f) the value of Λ in four different parameter planes. In general, network synchrony is sensitive to the network structure (characterized by the eigenvalue spectrum \(\{\lambda_i\}\)), the coupling parameter (ε), and the time delay parameter τ associated with the autapse. The sensitive dependence of the stable synchronization region on variations in ε and τ is a typical feature of ragged synchronization51. The remarkable phenomenon is that, regardless of the changes in the parameters, insofar as there is an autapse, stable synchronization regions [blue regions in Fig. 2(c–f)] persist. That is, for an unsynchronized neuronal networked system, the presence of a single autapse is able to induce stable synchronization.

Autapse centralities and synchronization optimization

Suppose a small set of autapses is distributed into a neuronal network: to which set of neurons should the autapses be attached so that the network achieves the maximum synchronizability? To address this question for the general case of multiple autapses is difficult. We thus first consider the relatively simple case of a single autapse and develop a theoretical criterion to determine the optimal location for the autapse. Equivalently, it is necessary to find the autapse centrality, a quantity that measures the impact of a specific autapse on the network synchronizability.
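Before constructing these measures formally, the eigenvalue shifts that they quantify can be seen concretely for the four-neuron network of Fig. 1 with a few lines of Matlab. This is our own minimal sketch, not the authors' code, and the adjacency pattern (edges 1-2, 1-3, 1-4 and 2-3) is an assumption inferred from the nodal degrees described in the text:

% Eigenvalues of the normalized coupling matrix G for the toy network,
% without an autapse and with one autapse on node 1, 2, or 4.
A = [0 1 1 1; 1 0 1 0; 1 1 0 0; 1 0 0 0];   % assumed adjacency of Fig. 1(a1)
for site = [0 1 2 4]                        % 0 means no autapse
    C = A;
    if site > 0, C(site, site) = 1; end     % self-loop c_ll = 1
    G = C ./ sum(C, 2);                     % g_ij = c_ij / (k_i + delta_il)
    lam = sort(real(eig(G)), 'descend');
    fprintf('autapse at node %d: lambda_4 = %.2f\n', site, lam(4));
end
% Reproduces the values quoted above: -0.73 (none), -0.50 (node 1),
% -0.72 (node 2), -0.50 (node 4); only nodes 1 and 4 lift lambda_4
% above sigma_1 = -0.7, the left edge of the stable region.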
From the MSF formalism, we have that the autapse induces a shift in the eigenvalues of the network coupling matrix. Specifically, let \(\mathbf{G}^{(0)}\) and \(\mathbf{G}\) be the network coupling matrices without and with the autapse, respectively. Letting \(\lambda_j^{(0)}\) and \(\lambda_j\) be the jth eigenvalue of the respective matrices, we write the eigenvalue shift as \(\Delta\lambda_j \equiv \lambda_j - \lambda_j^{(0)}\). In the typical case where the MSF curve possesses only one bounded stable region, the network can reach a global synchronous state only if the eigenmodes in the transverse subspace associated with the largest and the smallest nontrivial eigenvalues (\(\lambda_2\) and \(\lambda_N\), respectively) are both stable. The impact of the autapse on the network synchronizability can then be characterized by the shifts in the two nontrivial eigenvalues: \(\Delta\lambda_2^i = \lambda_2 - \lambda_2^{(0)}\) and \(\Delta\lambda_N^i = \lambda_N - \lambda_N^{(0)}\), with i denoting the neuron that receives the autapse. A large value of \(\Delta\lambda_2^i\) or \(\Delta\lambda_N^i\) signifies a more significant impact. We thus propose the quantities \(\Delta\lambda_2^i\) and \(\Delta\lambda_N^i\) as two autapse centrality measures. To characterize the autapse centralities quantitatively, we develop a perturbation-based approach by treating the autapse as a small alteration to the network structure. With the perturbation, the normalized network matrix can be written as \(\mathbf{G} = \mathbf{G}^{(0)} + \mathbf{G}^{(1)}\), where \(\mathbf{G}^{(1)}\) is the perturbation matrix induced by the autapse. Assuming that the autapse is attached to neuron i, which has degree \(k_i\), we have the elements of \(\mathbf{G}^{(1)}\) as \(g_{ii}^{(1)} = 1/(k_i+1)\), \(g_{ij}^{(1)} = -1/[k_i(k_i+1)]\) if there is a link between neurons i and j, and \(g_{ij}^{(1)} = 0\) for other elements. Letting \(\mathbf{e}_2 = [e_{2,1}, e_{2,2}, \ldots, e_{2,N}]^T\) be the normalized eigenvector associated with the eigenvalue \(\lambda_2^{(0)}\) of \(\mathbf{G}^{(0)}\) [i.e., \(\mathbf{G}^{(0)}\mathbf{e}_2 = \lambda_2^{(0)}\mathbf{e}_2\)], and \(\mathbf{e}_2^{-1} = [e'_{2,1}, e'_{2,2}, \ldots, e'_{2,N}]\) be the left eigenvector of \(\mathbf{G}^{(0)}\) (i.e., \(\mathbf{e}_2^{-1}\cdot\mathbf{G}^{(0)} = \lambda_2^{(0)}\mathbf{e}_2^{-1}\) and \(\mathbf{e}_2^{-1}\cdot\mathbf{e}_2 = 1\))52, we have
$$ \Delta\lambda_2 \approx \mathbf{e}_2^{-1}\mathbf{G}^{(1)}\mathbf{e}_2 = \sum_{i,j=1}^{N} e'_{2,i}\, e_{2,j}\, g_{ij}^{(1)}. $$
For a densely connected network, we have \(k_i \gg 1\) and so \(g_{ij}^{(1)} \ll g_{ii}^{(1)}\). Under the approximation \(g_{ij}^{(1)} = 0\), we have that the matrix \(\mathbf{G}^{(1)}\) has only one non-zero element, \(g_{ii}^{(1)}\), leading to the following simplified version of Eq. (1):
$$ \Delta\lambda_2 = e'_{2,i}\, e_{2,i}\, g_{ii}^{(1)} = e'_{2,i}\, e_{2,i}/(k_i+1). $$
The smallest eigenvalue \(\lambda_N\) can be treated in a similar way.
We have
$$ \Delta\lambda_N = e'_{N,i}\, e_{N,i}\, g_{ii}^{(1)} = e'_{N,i}\, e_{N,i}/(k_i+1), $$
where \(\mathbf{e}_N = [e_{N,1}, e_{N,2}, \ldots, e_{N,N}]^T\) is the normalized eigenvector associated with the eigenvalue \(\lambda_N^{(0)}\) of \(\mathbf{G}^{(0)}\) (i.e., \(\mathbf{G}^{(0)}\mathbf{e}_N = \lambda_N^{(0)}\mathbf{e}_N\)), and \(\mathbf{e}_N^{-1} = [e'_{N,1}, e'_{N,2}, \ldots, e'_{N,N}]\) is the left eigenvector associated with \(\mathbf{e}_N\). Equations (2) and (3) characterize the impact of a single autapse on the synchronizability of the underlying neuronal network, as they show that the autapse centrality of a neuron is determined jointly by the neuron degree k and the two eigenvectors of the original network coupling matrix, \(\mathbf{e}_2\) and \(\mathbf{e}_N\). To verify Eqs (2) and (3) numerically, we present in Fig. 3 the two autapse centrality measures for complex networks of distinct topologies. In particular, Fig. 3(a–f) are for a scale-free, random, and small-world network, respectively. The three networks have the same size N = 100 and average degree \(\langle k\rangle = 6\). Introducing a single autapse onto each neuron alternatively, we calculate the two centrality measures: the variations of the two extreme eigenvalues, \(\Delta\lambda_2\) and \(\Delta\lambda_N\). Figure 3(a,b) show the autapse centralities versus the nodal index and degree, respectively. From Fig. 3(a), we see that the centralities exhibit large variations among the neurons, while Fig. 3(b) shows that the hub neurons are less sensitive to the autapse than the non-hub neurons, in spite of the large fluctuations in the values of \(\Delta\lambda_2\) and especially of \(\Delta\lambda_N\). Figure 3(c) and (d) show the numerical values of \(\Delta\lambda_2\) and \(\Delta\lambda_N\) versus the theoretical predictions of Eqs (2) and (3), respectively. For \(\Delta\lambda_2\), the numerical and theoretical results are not exactly equal to each other but exhibit a high degree of linear correlation. For \(\Delta\lambda_N\), the numerical and theoretical values are approximately equal. For the random network, the numerical results of both centralities are approximately equal to the theoretical values, as shown in Fig. 3(e). The results for the small-world network are similar to those for the scale-free network [Fig. 3(f)]. Overall, there is a reasonable agreement between the two centrality measures calculated numerically and predicted theoretically.

Distribution of autapse centrality for complex neuronal networks of distinct topologies. For a given network, the centrality measures can be calculated when a single autapse is attached to a node in the network. Varying the node across the network leads to a distribution of each centrality. (a) For a scale-free network, the distribution of \(\Delta\lambda_2\) and \(\Delta\lambda_N\), (b) the variations of \(\Delta\lambda_2\) and \(\Delta\lambda_N\) with respect to the neuron degree k, (c) \(\Delta\lambda_2\) with respect to the theoretical prediction of Eq. (2), denoted as \(\Delta\lambda_2^{th}\), and (d) \(\Delta\lambda_N\) versus \(\Delta\lambda_N^{th}\), the prediction given by Eq. (3). (e) For an ER random neuronal network, \(\Delta\lambda_2\) versus \(\Delta\lambda_2^{th}\). (f) For a small-world network generated by rewiring 5% of the links of a regular lattice, \(\Delta\lambda_2\) versus \(\Delta\lambda_2^{th}\). The insets in (e) and (f) show \(\Delta\lambda_N\) versus \(\Delta\lambda_N^{th}\) for the respective networks. The networks have the same size (N = 100) and average degree (\(\langle k\rangle = 6\)). The parameters of the neuronal dynamics are the same as those in Fig. 2.
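The verification above can be sketched compactly. The following Matlab fragment is our own illustration (not the authors' code; the Erdős–Rényi construction and all variable names are assumptions): it compares the exact shift of \(\lambda_2\) under a single autapse with the estimate of Eq. (2), and then ranks all nodes by the Eq. (3) centrality, which is the ordering used below for autapse placement:

% Exact vs. perturbative (Eq. (2)) eigenvalue shift for a single autapse
rng(1);
N = 100; p = 0.06;                            % ER network, <k> close to 6
A = zeros(N);
while any(sum(A, 2) == 0)                     % redraw until no isolated node
    A = triu(rand(N) < p, 1); A = A + A';
end
k = sum(A, 2);
G0 = A ./ k;                                  % normalized coupling, c_ii = 0
[V, D, W] = eig(G0);                          % right (V) and left (W) eigenvectors
[lam0, idx] = sort(real(diag(D)), 'descend');
V = V(:, idx); W = W(:, idx);

i = 10;                                       % neuron receiving the autapse
C = A; C(i, i) = 1;
G = C ./ sum(C, 2);                           % row i now divided by k_i + 1
lam = sort(real(eig(G)), 'descend');

e2 = V(:, 2); e2L = W(:, 2) / (W(:, 2)' * V(:, 2));   % so that e2L' * e2 = 1
fprintf('Delta lambda_2: exact %.2e, Eq. (2): %.2e\n', ...
        lam(2) - lam0(2), real(e2L(i) * e2(i)) / (k(i) + 1));

% Ranking by the Eq. (3) centrality: the top-m nodes receive the autapses
eN = V(:, N); eNL = W(:, N) / (W(:, N)' * V(:, N));
dlamN = real(eNL .* eN) ./ (k + 1);
[~, order] = sort(dlamN, 'descend');
m = 20; sites = order(1:m);

Consistent with the observations above, the estimate for \(\Delta\lambda_2\) is expected to be only correlated with the exact shift, while the agreement for \(\Delta\lambda_N\) is typically closer.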
We now address the issue of multiple autapses. By introducing autapses to a judiciously chosen set of neurons (one autapse to each neuron), we seek to optimize network synchronization. We again exploit Eqs (2) and (3), which suggest that the optimal set of neurons should be selected according to the largest possible values of the autapse centralities calculated based on a single autapse. Suppose the number of available autapses is m. We arrange the centrality values for individual nodes in a descending order and choose the m neurons from the top of the list. To test this optimization strategy, we compare its performance with those of two alternative strategies: degree-based and random, where for the former, the set of neurons (each receiving one autapse) is selected according to the nodal degree in descending order, while for the latter, the set is chosen randomly. In simulations, we use the scale-free network in Fig. 3(a), with the same nodal dynamics and coupling function as in Fig. 2(a). As the stable region of the MSF curve is bounded [Fig. 2(a)], the relevant autapse centrality is \(\Delta\lambda_N\): the set of neurons to receive the autapses can be chosen according to the descending order of the values of \(\Delta\lambda_N\). We calculate the network-averaged synchronization error of the variable x: \(\langle\delta x\rangle \equiv \langle (1/N)\sum_{i=1}^{N} |x_i(t) - x_{ave}(t)| \rangle_T\), with \(x_{ave}(t)\) being the nodal average value and \(\langle\ldots\rangle_T\) being the time average. Our computations reveal that, as the number of autapses m is increased through a critical point (denoted as \(m_c\)), the synchronization error decreases to zero, as shown in Fig. 4. For our autapse-centrality based optimization strategy, we have \(m_c \approx 45\), while the degree-based and random strategies give \(m_c \approx 52\) and \(m_c \approx 60\), respectively. Among the three strategies, ours yields the minimum number of autapses required for the network to be fully synchronized.

Performance of autapse-centrality based strategy to optimize network synchronization. For the scale-free network in Fig. 3(a), the variation of the network-averaged synchronization error \(\langle\delta x\rangle\) versus the number m of autapses for different synchronization strategies. For the autapse centrality strategy (open triangles), autapses are added to neurons in the descending order of neuron autapse centrality. For the degree (open circles) and random (open squares) strategies, the set of neurons receiving autapses is selected, respectively, according to the descending order of nodal degree and randomly. The minimum numbers of autapses for the autapse-centrality-based, degree-based, and random strategies are \(m_c = 45\), 52, and 60, respectively, where the autapse centrality strategy requires the smallest number of autapses. The neuronal dynamics and coupling parameters are the same as those in Fig. 2.

Generality and robustness of autapse-centrality based synchronization strategy

We address a number of issues to further substantiate the main result that autapses can promote global synchronization in neuronal networks, including: (1) the performance of the autapse-centrality based synchronization strategy for other types of complex networks; (2) the impacts of parameter mismatches among neurons on global synchronization; and (3) the influences of the system parameters on the minimum number of autapses required for global synchronization.
So far we have demonstrated that the autapse-centrality based strategy outperforms the conventional degree-based and random strategies in promoting global synchronization in scale-free networks, as shown in Fig. 4. Our theoretical analysis implies that this result should hold for the general network model, regardless of the network topology. An example is given in Fig. 5, which displays the synchronization performance of the three strategies for the random and small-world networks. We see that, as for the case of scale-free networks in Fig. 4, the autapse-centrality based strategy always requires the smallest number of autapses to achieve global network synchronization.

Performance of the autapse-centrality based synchronization strategy for the random and small-world networks. The size, average degree, local dynamics and coupling function of the networks are identical to those for the scale-free networks [Fig. 4]. (a) Results for random networks. The parameters are (ε, τ) = (0.95, 4). The minimum numbers of autapses for the autapse-centrality based, degree-based and random strategies are \(m_c = 23\), 34 and 36, respectively. (b) Results for a small-world network, which is constructed with the rewiring probability p = 0.1, for (ε, τ) = (0.95, 1.8). The minimum numbers of autapses for the autapse-centrality based, degree-based and random strategies are \(m_c = 13\), 17 and 20, respectively. For both the random and small-world networks, the autapse-centrality based strategy requires the smallest number of autapses to achieve global synchronization.

In real systems, parameter mismatches among the neuronal dynamics and the network links are inevitable. To gain insights into whether autapses could promote synchronization in networks consisting of non-identical neurons, we consider the toy network model in Fig. 1 and introduce small mismatches in the parameters (I, ε, and τ). To be concrete, we vary the coupling strength \(w_{aut}\) and the time delay \(\tau_{aut}\) of the autapse and investigate how network synchronization deteriorates as the values of \(w_{aut}\) and \(\tau_{aut}\) deviate from those of the synaptic connections. Setting w = 1 and τ = 4 for the synaptic connections, we calculate the synchronization error \(\langle\delta x\rangle\) as a function of \(w_{aut}\). The numerical results are shown in Fig. 6(a). We observe \(\langle\delta x\rangle = 0\) for \(w_{aut} > 0.3\). To see how mismatched coupling strength affects synchronization, we plot in Fig. 6(b) and (c) the neuron trajectories for \(w_{aut} = 0.1\) and \(w_{aut} = 0.5\), respectively, where there is still global synchronization for the latter but, for the former, synchronization is lost. This indicates that, even when the mismatch of \(w_{aut}\) is about 50% [Fig. 6(c)], synchronization can still be achieved. In fact, given \(w_{aut} > 0.3\), global synchronization will be maintained regardless of the mismatch amplitude [Fig. 6(a)]. The underlying reason for the robustness of global synchronization to the mismatch of \(w_{aut}\) is that, due to the normalized coupling scheme (see Methods), the dynamics of the synchronous manifold are independent of the autapse strength.

Impacts of parameter mismatch on synchronization. Two types of parameter mismatch are considered for the toy network in Fig. 1: in the coupling strength and in the time delay. The values of these two parameters associated with the synaptic connections are fixed at (w, τ) = (1, 4), whereas those of the autaptic connection, denoted as \(w_{aut}\) and \(\tau_{aut}\), respectively, are varied. (a) For fixed \(\tau_{aut} = 4\), the network synchronization error \(\langle\delta x\rangle\) versus \(w_{aut}\), where \(\langle\delta x\rangle = 0\) for \(w_{aut} > 0.3\).
(b) For \(w_{aut} = 0.1\), the neurons are desynchronized. (c) For \(w_{aut} = 0.5\), the neurons are completely synchronized. (d) For fixed \(w_{aut} = 1\), \(\langle\delta x\rangle\) versus \(\Delta\tau = \tau_{aut} - \tau\). The neurons are desynchronized when \(\Delta\tau\) is large (e) but they are well synchronized when \(\Delta\tau\) is small (f).

Figure 6(d) demonstrates the robustness of network synchronization in the presence of mismatch in time delay, where the behavior of \(\langle\delta x\rangle\) versus \(\Delta\tau = \tau_{aut} - \tau\) is shown for fixed \(w_{aut} = 1\). We see that the network is well synchronized for a small amount of mismatch in the time delay. For example, we have \(\langle\delta x\rangle < 0.1\) for \(\Delta\tau \in (-0.08, 0.1)\). For a relatively large amount of mismatch, synchronization is lost, as shown in Fig. 6(e) for \(\Delta\tau = 0.2\). In general, for \(\Delta\tau \neq 0\), the local dynamics of the autapsed neuron are different from those of the other neurons, making global synchronization difficult. However, as shown in Fig. 6(f), if the mismatch amount \(\Delta\tau\) is not too large, all neurons in the network can still be well synchronized. In general, the number of autapses required to synchronize a complex network depends on the network parameters such as the connecting density, the degree distribution, the delay parameter, and the coupling strength among the neurons. For the scale-free network in Fig. 4, with 100 nodes, average degree \(\langle k\rangle = 6\), and parameters ε = 1 and τ = 4, \(m_c = 45\) autapses are required for global synchronization. This critical number of autapses, however, could be changed significantly by varying the network parameters. To show an example, we fix the network structure and the parameter τ used in Fig. 4 but change the value of ε to 0.95, and recalculate the number of required autapses for the different synchronization strategies. The numerical results are shown in Fig. 7. We see that, for the autapse-centrality based, degree-based, and random strategies, the critical numbers of autapses required for global synchronization are reduced, respectively, to \(m_c = 19\), 31, and 35.

Example of how a change of network parameter could reduce the critical number of autapses required for global synchronization. For the same network structure and the value of the parameter τ used in Fig. 4 but with a change in the coupling strength to ε = 0.95, the network-averaged synchronization error \(\langle\delta x\rangle\) versus the number of autapses for the three synchronization strategies. For the autapse-centrality based, degree-based, and random strategies, the critical numbers of autapses are \(m_c = 19\), 31, and 35, respectively, signifying a marked reduction in comparison with the results in Fig. 4.

Discussion

In this paper, we study neuronal autapses, a type of structural anomaly that gives certain neurons a time-delayed, self-feedback loop. Autapses, when they were first discovered1, were thought to play no specific biological or physiological role. Only later were their potential impacts in biological functions found or suggested3,4,5,6,7,8. In view of the fundamental importance of synchrony in biology, physiology, and biomedical sciences, we focus our study on the possible role of autapses in modulating neuronal synchrony through computation and theory. Our main finding is that, for a complex neuronal network, even the existence of autapses on a small fraction of the neurons can promote synchrony significantly. In particular, using both a small toy network and larger networks of distinct complex topologies, we introduce autapses to a fraction of neurons and analyze the change in the network synchronizability.
Based on a systematic analysis of the impact of a single autapse at different nodes in the network, we develop centrality measures to analyze the situation of multiple autapses in terms of their optimal placement in the network to realize robust synchronization of the entire network. Our work provides the computational and theoretical foundation for hypothesizing the positive role of autapses in promoting synchronization in neuronal networks. It is an accepted notion that synchrony in neuronal networks is strongly correlated with certain neurological diseases such as epileptic seizures18, but whether such a disease can be attributed to an increasing level of synchrony has been a controversial issue19,20,21,22. At present, it remains difficult to ascertain, through the traditional approach of EEG or ECoG data analysis, whether a general, well-defined causal relation between an elevation of synchrony and the occurrence of seizure exists. The main idea underlying our work is to focus on "unusual" features or structural anomalies in neurons and to study their role in modulating neuronal synchrony. Suppose that, for a certain type of seizure, the responsible neuronal network in a brain region can be identified. Whether structural anomalies exist in some neurons in this region can then lead to insights into the interplay between synchrony and the particular type of seizure. Generally, the issue of synchronization goes well beyond the scope of epileptic seizures, with broad implications for biology and physiology16. We discuss a few technical issues. Firstly, to make the model theoretically tractable, we assume that the isolated neuronal dynamics are identical and the links (including the autapses) have the same coupling strength and time delay. However, in real systems, parameter mismatches among the neuronal dynamics and network links can be expected. We have gained preliminary insights into whether autapses can promote synchronization in networks of non-identical neurons, by introducing small mismatches in parameters (I, ε, and τ) in the toy network model in Fig. 1. Our computations reveal, for example, that when the time delay parameter has a 2% mismatch among the network links, global synchronization can still be achieved. Secondly, in our study we adopt the scheme of normalized couplings to enable computation and analysis of the MSF and the proposal of autapse centralities. If the couplings are not normalized, such a theoretical analysis would not be possible, and it appears at present that one must rely on numerical simulations to uncover and quantify the role of autapses in synchronization. Thirdly, we assume linear feedback couplings in the network. While this type of coupling scheme does have physiological support (e.g., in terms of electrical synapses underlying the neuronal connections), it remains unknown whether other types of couplings, e.g., those with chemical synapses, can lead to effective modulation of network synchronization by autapses. Finally, we point out the significance of our work in general network science and engineering with respect to the problems of synchronization and control. A focus in this area of research has been on the role of long-range connections, such as those in small-world networks53, in promoting network synchronization36. From a practical standpoint, long-range connections may be costly54.
For example, for neural networks in the human brain55, while there are white fiber tracts among distant cortical areas which are essential for brain functions, the network connectivity is still dominated by short-distance, local connections, due to the minimization principle of axonal length and energy consumption. A similar issue arises in other realistic systems54 such as traffic networks, power grids, communication networks, and the Internet. Our finding that autapses, the shortest possible links in any network, can play a positive role in promoting synchronization is encouraging from the perspective of optimal control of complex network dynamics at a minimal cost.

Methods

Our model of neuronal network with autapses reads
$$ \dot{\mathbf{x}}_i = \mathbf{F}_i(\mathbf{x}_i) + \frac{\varepsilon}{k_i + \delta_{il}} \sum_{j=1}^{N} c_{ij}\, [\mathbf{H}(\mathbf{x}_j(t-\tau_{ij})) - \mathbf{H}(\mathbf{x}_i)] + \eta_i(t), $$
where i, j = 1, 2, …, N are nodal indices, N is the network size, \(\mathbf{x}_i\) the state vector of the ith neuron, ε is the coupling parameter, \(\mathbf{F}_i(\mathbf{x})\) is the velocity field governing the dynamics of the ith neuron when it is isolated, \(\mathbf{H}(\mathbf{x})\) is the coupling function, \(\tau_{ij}\) denotes the time delay of the signal propagating from neuron j to i, and \(\eta_i(t)\) represents the independent, identically distributed Gaussian white noise added to the membrane potential of each neuron, with \(\langle\eta_i(t)\rangle = 0\) and \(\langle\eta_i(t)\eta_j(t)\rangle = (2\times10^{-10})\,\delta_{ij}\). The network structure in the presence of autaptic connections is characterized by the coupling matrix \(\mathbf{C} = \{c_{ij}\}\). For non-diagonal elements, we have \(c_{ij} = c_{ji} = 1\) if there is a link between neurons i and j, otherwise \(c_{ij} = 0\). For the diagonal elements, we set \(c_{ii} = \delta_{il}\), where \(\delta_{il}\) is the Kronecker delta function and V = {l} (with l = 1, …, m) denotes the set of neurons with autaptic connections. Specifically, we have \(c_{ii} = 1\) if neuron i possesses an autapse, and \(c_{ii} = 0\) otherwise. The quantity \(k_i = \sum_{j\neq i} c_{ij}\) is the number of links connected to neuron i (i.e., the degree). To capture the essential network dynamics subject to autapses while making the model theoretically tractable, we assume identical time delays: \(\tau_{ij} = \tau\). Biologically, this approximation is reasonable for synapses within the same cortical minicolumn55. The scheme of diffusive (linear feedback) coupling assumed in Eq. (4) models the realistic electrical synaptic interactions (e.g., of the gap-junction type) among neurons16,56. In simulations, we set the parameters of the HR oscillator as \((a, b, c, d, r, s, x_R, I) = (1, 3, 1, 5, 6\times10^{-3}, 4, -1.6, 3.2)\), for which the isolated HR neuron exhibits chaotic bursting dynamics28. The coupling function is chosen to be \(\mathbf{H}(\mathbf{x}) = [x, 0, 0]^T\), i.e., the neurons in the network are coupled through their membrane potentials. We use the Bogacki-Shampine algorithm57 to simulate Eq. (4), which is a time-delayed, stochastic system of coupled differential equations. The integration time step is \(\delta t = 1\times10^{-3}\).

Persistence of a global synchronization manifold in the presence of an autapse

We argue that, given that the autapses have an identical time delay with respect to the synaptic connections, the neuronal network possesses a synchronization manifold. Consider the network in Fig.
1(b1), in which the dynamics of neuron 1 are governed by the equation
$$ \dot{\mathbf{x}}_1 = \left\{ \mathbf{F}_1(\mathbf{x}_1) + \frac{\varepsilon}{4}[x_1(t-\tau) - x_1] \right\} + \frac{\varepsilon}{4} \sum_{j\neq 1} [x_j(t-\tau) - x_1], $$
where the 1st and 2nd terms on the right side represent, respectively, the local dynamics of the autapsed neuron and the coupling signals it receives from the neighboring neurons. Assuming that the network is globally synchronized, and denoting \(x_s\) as the dynamical state within the synchronous manifold, we have \(x_i(t-\tau) = x_s(t-\tau)\) (for i = 1, …, 4). Equation (5) can then be rewritten as
$$ \dot{\mathbf{x}}_1 = \mathbf{F}_1(\mathbf{x}_1) + \frac{\varepsilon}{4}[x_s(t-\tau) - x_1] + \frac{\varepsilon}{4} \sum_{j\neq 1} [x_s(t-\tau) - x_1] $$
$$ = \mathbf{F}_1(\mathbf{x}_1) + \frac{\varepsilon}{4} \sum_{j=1}^{4} [x_s(t-\tau) - x_1] = \mathbf{F}_1(\mathbf{x}_1) + \varepsilon [x_s(t-\tau) - x_1]. $$
For neurons without an autapse (j = 2, 3, 4), their dynamics are governed by
$$ \dot{\mathbf{x}}_j = \mathbf{F}_j(\mathbf{x}_j) + \frac{\varepsilon}{k_j} \sum_{l=1}^{4} c_{jl} [x_s(t-\tau) - x_j] = \mathbf{F}_j(\mathbf{x}_j) + \varepsilon [x_s(t-\tau) - x_j], $$
which is identical to Eq. (5). That is, in the synchronous state all neurons in the network, regardless of whether they are autapsed or not, follow the same dynamical equation. Further, because of the normalized coupling scheme, the equations are independent of the network structure. A global synchronization manifold thus persists in the presence of autapses, regardless of their locations in the network.

Master stability function based analysis of network synchronization in the presence of autapses

To uncover the impact of autapses on network synchronization quantitatively, we analyze the stability of the synchronous manifold. Within the manifold, the dynamics are governed by
$$ \dot{\mathbf{x}}_s = \mathbf{F}(\mathbf{x}_s) + \varepsilon [\mathbf{H}(\mathbf{x}_s(t-\tau)) - \mathbf{H}(\mathbf{x}_s)]. $$
Let \(\delta\mathbf{x}_i = \mathbf{x}_i - \mathbf{x}_s\) be an infinitesimal perturbation to \(\mathbf{x}_s\), whose evolution is governed by the variational equations obtained by linearizing Eq. (4) about \(\mathbf{x}_s\):
$$ \delta\dot{\mathbf{x}}_i = [D\mathbf{F}(\mathbf{x}_s) - \varepsilon D\mathbf{H}(\mathbf{x}_s)]\,\delta\mathbf{x}_i + \varepsilon \sum_{j=1}^{N} g_{ij}\, D\mathbf{H}(\mathbf{x}_s(t-\tau))\, \delta\mathbf{x}_j(t-\tau), $$
for i, j = 1, …, N, where \(\mathbf{G} = \{g_{ij}\} = \{c_{ij}/(k_i + \delta_{il})\}\) is the normalized coupling matrix, and \(D\mathbf{F}\) and \(D\mathbf{H}\) are the Jacobians of the isolated neuron dynamics and of the coupling function, respectively. Projecting \(\{\delta\mathbf{x}_i\}\) into the eigenspace spanned by the eigenvectors of \(\mathbf{G}\), we obtain N independent variational equations
$$ \delta\dot{\mathbf{y}}_i = [D\mathbf{F}(\mathbf{x}_s) - \varepsilon D\mathbf{H}(\mathbf{x}_s)]\,\delta\mathbf{y}_i + \varepsilon \lambda_i\, D\mathbf{H}(\mathbf{x}_s(t-\tau))\, \delta\mathbf{y}_i(t-\tau), $$
where \(\delta\mathbf{y}_i\) is the ith mode of the perturbation, and \(1 = \lambda_1 > \lambda_2 \geq \ldots \geq \lambda_N\) are the eigenvalues of \(\mathbf{G}\).
The mode associated with $\lambda_1$ describes the chaotic motion within the synchronous manifold, corresponding to the dynamics of each isolated neuron, while the transverse modes $\{\lambda_j\}$ ($j = 2, \ldots, N$) determine the stability of the synchronous chaotic motion within the synchronization manifold. Let $\Lambda_j$ denote the largest Lyapunov exponent calculated from the variational equation for the $j$th mode: the mode is stable (unstable) if $\Lambda_j < 0$ ($\Lambda_j > 0$). The necessary condition for the synchronous state to be stable is that all the transverse modes decay exponentially with time: $\Lambda_j < 0$ for $j = 2, \ldots, N$. Note that $\Lambda$ is determined by three parameters ($\varepsilon$, $\tau$, and $\lambda$), in contrast to systems without any autapse [49,50,58], in which $\Lambda$ depends only on the generic coupling strength $\sigma = \varepsilon\lambda$. For the autapsed network, we determine its synchronizability in two steps: (1) finding the stable region of the MSF curve in the parameter space ($\varepsilon$, $\lambda$, $\tau$), and (2) calculating the eigenvalues $\{\lambda_i\}$ of the normalized coupling matrix $G$; the former depends on both the local dynamics and the coupling function, whereas the latter is determined solely by the network structure. References: Van Der Loos, H. & Glaser, E. M. Autapses in neocortex cerebri: synapses between a pyramidal cell's axon and its own dendrites. Brain Res. 48, 355-360 (1972). Tamás, G., Buhl, E. H. & Somogyi, P. Massive autaptic self-innervation of GABAergic neurons in cat visual cortex. J. Neurosci. 17, 6352-6364 (1997). Bekkers, J. M. Neurophysiology: are autapses prodigal synapses? Curr. Biol. 8, R52-R55 (1998). Bekkers, J. M. Synaptic transmission: functional autapses in the cortex. Curr. Biol. 13, R433-R435 (2003). Bacci, A., Huguenard, J. R. & Prince, D. A. Functional autaptic neurotransmission in fast-spiking interneurons: a novel form of feedback inhibition in the neocortex. J. Neurosci. 23, 859-866 (2003). Bacci, A. & Huguenard, J. R. Enhancement of spike-timing precision by autaptic transmission in neocortical inhibitory interneurons. Neuron 49, 119-130 (2006). Ikeda, K. & Bekkers, J. M. Autapses. Curr. Biol. 16, R308 (2006). Saada, R., Miller, N., Hurwitz, I. & Susswein, A. J. Autaptic muscarinic excitation underlies a plateau potential and persistent activity in a neuron of known behavioral function. Curr. Biol. 19, 479-484 (2009). Seung, H. S., Lee, D. D., Reis, B. Y. & Tank, D. W. The autapse: a simple illustration of short-term analog memory storage by tuned synaptic feedback. J. Comp. Neurosci. 9, 171-185 (2000). Herrmann, C. S. Autapse turns neuron into oscillator. Int. J. Bif. Chaos 14, 623-633 (2004). Wang, H.-T., Ma, J., Chen, Y.-L. & Chen, Y. Effect of an autapse on the firing pattern transition in a bursting neuron. Comm. Nonlinear Sci. Num. Simu. 19, 3242-3254 (2014). Qin, H., Ma, J., Wang, C. & Chu, R. Autapse-induced target wave, spiral wave in regular network of neuron. Sci. China Phys. Mech. Astro. 57, 1918-1926 (2014). Yilmaz, E. & Ozer, M. Delayed feedback and detection of weak periodic signals in a stochastic Hodgkin-Huxley neuron. Physica A 421, 455-462 (2015). Yilmaz, E., Ozer, M., Baysal, V. & Perc, M. Autapse-induced multiple coherence resonance in single neurons and neuronal networks. Sci. Rep. 6, 30914 (2016). Yilmaz, E., Baysal, V., Ozer, M. & Perc, M. Autaptic pacemaker mediated propagation of weak rhythmic activity across small-world neuronal networks.
Physica A 444, 538–546 (2016). Glass, L. Synchronization and rhythmic processes in physiology. Nature 410, 277–284 (2001). Pikovsky, A., Rosenblum, M. & Kurths, J. Synchronization - A Universal Concept in Nonlinear Sciences (Cambridge University Press, Cambridge UK, 2001), first edn. Kandel, E. R., Schwartz, J. H. & Jessell, T. M. Principle of Neural Science (Appleton and Lange, Norwalk CT, 1991), 3rd edn. Netoff, T. I. & Schiff, S. J. Decreased neuronal synchronization during experimental seizures. J. Neurosci. 22, 7297–7307 (2002). Schiff, S. J., Sauer, T., Kumarc, R. & Weinstein, S. L. Neuronal spatiotemporal pattern discrimination: The dynamical evolution of seizures. NeuroImage 28, 1043–1055 (2005). Jerger, K. K., Weinstein, S. L., Sauer, T. & Schiff, S. J. Multivariate linear discrimination of seizures. Clin. Neurophysio. 116, 545–551 (2005). Jiruska, P. et al. Synchronization and desynchronization in epilepsy: controversies and hypotheses. J. Physio. 591, 787–797 (2013). Schiff, S. J. Forecasting brain storms. Nat. Med. 4, 1117–1118 (1998). Frei, M. G. et al. Controversies in epilepsy: Debates held during the fourth international workshop on seizure prediction. Epilep. Behav. 19, 4–16 (2010). Lai, Y.-C., Frei, M. G. & Osorio, I. Detecting and characterizing phase synchronization in nonstationary dynamical systems. Phys. Rev. E 73, 026214 (2006). Lai, Y.-C., Frei, M. G., Osorio, I. & Huang, L. Characterization of synchrony with applications to epileptic brain signals. Phys. Rev. Lett. 98, 108102 (2007). Osorio, I. & Lai, Y.-C. A phase-synchronization and random-matrix based approach to multichannel time-series analysis with application to epilepsy. Chaos 21, 033108 (2011). Hindmarsh, J. L. & Rose, R. M. A model of neuronal bursting using three coupled first order differential equations. Proc. Roy. Soc. London. B Biol. Sci. 221, 87 (1984). Storace, M., Linaro, D. & de Lange, E. The Hindmarsh-Rose neuron model: Bifurcation analysis and piecewise-linear approximations. Chaos 18, 033128 (2008). Baptista, M. S., Kakmeni, F. M. M. & Grebogi, C. Combined effect of chemical and electrical synapses in Hindmarsh-Rose neural networks on synchronization and the rate of information. Phys. Rev. E 82, 036203 (2010). Barrio, R. & Shilnikov, A. Parameter-sweeping techniques for temporal dynamics of neuronal systems: case study of Hindmarsh-Rose model. J. Math. Neurosci. 1, 6 (2011). Barrio, R., Angeles Martinez, M., Serrano, S. & Shilnikov, A. Macro- and micro-chaotic structures in the Hindmarsh-Rose model of bursting neurons. Chaos 24, 023128 (2014). Lago-Fernandez, L. F., Huerta, R., Corbacho, F. & Siguenza, J. A. Fast response and temporal coherent oscillations in small-world networks. Phys. Rev. Lett. 84, 2758–2761 (2000). Gade, P. M. & Hu, C.-K. Synchronous chaos in coupled map lattices with small-world interactions. Phys. Rev. E 62, 6409–6413 (2000). Jost, J. & Joy, M. P. Spectral properties and synchronization in coupled map lattices. Phys. Rev. E 65, 016201 (2001). Barahona, M. & Pecora, L. M. Synchronization in small-world systems. Phys. Rev. Lett. 89, 054101 (2002). Nishikawa, T., Motter, A. E., Lai, Y.-C. & Hoppensteadt, F. C. Heterogeneity in oscillator networks: Are smaller worlds easier to synchronize? Phys. Rev. Lett. 91, 014101 (2003). Belykh, V., Belykh, I. & Hasler, M. Connection graph stability method for synchronized coupled chaotic systems. Physica D 195, 159–187 (2004). Belykh, I., Hasler, M., Lauret, M. & Nijmeijer, H. Synchronization and graph topology. Int. J. Bif. 
Chaos 15, 3423-3433 (2005). Chavez, M., Hwang, D.-U., Amann, A., Hentschel, H. G. E. & Boccaletti, S. Synchronization is enhanced in weighted complex networks. Phys. Rev. Lett. 94, 218701 (2005). Donetti, L., Hurtado, P. I. & Munoz, M. A. Entangled networks, synchronization, and optimal network topology. Phys. Rev. Lett. 95, 188701 (2005). Zhou, C. & Kurths, J. Dynamical weights and enhanced synchronization in adaptive complex networks. Phys. Rev. Lett. 96, 164102 (2006). Zhou, C. & Kurths, J. Hierarchical synchronization in complex networks with heterogeneous degrees. Chaos 16, 015104 (2006). Park, K., Lai, Y.-C., Gupte, S. & Kim, J.-W. Synchronization in complex networks with a modular structure. Chaos 16, 015105 (2006). Huang, L., Park, K., Lai, Y.-C., Yang, L. & Yang, K. Abnormal synchronization in complex clustered networks. Phys. Rev. Lett. 97, 164101 (2006). Wang, X. G., Huang, L., Lai, Y.-C. & Lai, C.-H. Optimization of synchronization in gradient clustered networks. Phys. Rev. E 76, 056113 (2007). Guan, S.-G., Wang, X.-G., Lai, Y.-C. & Lai, C. H. Transition to global synchronization in clustered networks. Phys. Rev. E 77, 046211 (2008). Sun, J., Bollt, E. M. & Nishikawa, T. Master stability functions for coupled nearly identical dynamical systems. EPL 85 (2009). Pecora, L. M. & Carroll, T. L. Master stability functions for synchronized coupled systems. Phys. Rev. Lett. 80, 2109-2112 (1998). Huang, L., Chen, Q., Lai, Y.-C. & Pecora, L. M. Generic behavior of master-stability functions in coupled nonlinear dynamical systems. Phys. Rev. E 80, 036204 (2009). Stefański, A., Perlikowski, P. & Kapitaniak, T. Ragged synchronizability of coupled oscillators. Phys. Rev. E 75, 016210 (2007). Stoer, J. & Bulirsch, R. Introduction to Numerical Analysis (Springer-Verlag, New York, 1980). Watts, D. J. & Strogatz, S. H. Collective dynamics of small-world networks. Nature 393, 440 (1998). Barthelemy, M. Spatial networks. Phys. Rep. 499, 1 (2011). Basar, E. Brain Function and Oscillation (Springer, New York, 1998). Varela, F., Lachaux, J.-P., Rodriguez, E. & Martinerie, J. The brainweb: phase synchronization and large-scale integration. Nat. Rev. Neurosci. 2, 229 (2001). Flunkert, V. Delay-coupled Complex Systems and Applications to Lasers (Springer-Verlag, Berlin, 2011). Lin, W., Fan, H., Wang, Y., Ying, H. & Wang, X. Controlling synchronous patterns in complex networks. Phys. Rev. E 93, 042209 (2016). Acknowledgements: This work was supported by the National Natural Science Foundation of China under Grant No. 11375109, and by the Fundamental Research Funds for the Central Universities under Grant Nos. GK201601001 and 2017TS003. Y.C.L. would like to acknowledge support from the Vannevar Bush Faculty Fellowship program sponsored by the Basic Research Office of the Assistant Secretary of Defense for Research and Engineering and funded by the Office of Naval Research through Grant No. N00014-16-1-2828. Author information: School of Physics and Information Technology, Shaanxi Normal University, Xi'an, 710062, China: Huawei Fan, Yafeng Wang, Hengtong Wang, Ying-Cheng Lai & Xingang Wang. School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, Arizona, 85287, USA: Ying-Cheng Lai. Author contributions: Devised the research project: X.G.W. Performed numerical simulations: H.W.F. and Y.F.W. Analyzed the results: X.G.W., H.W.F., Y.F.W., H.T.W., and Y.C.L. Wrote the paper: X.G.W. and Y.C.L.
Correspondence to Xingang Wang. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Measure solutions to a system of continuity equations driven by Newtonian nonlocal interactions. José Antonio Carrillo¹, Marco Di Francesco², Antonio Esposito², Simone Fagioli² and Markus Schmidtchen¹. ¹ Department of Mathematics, Imperial College London, London SW7 2AZ, United Kingdom. ² DISIM - Department of Information Engineering, Computer Science and Mathematics, University of L'Aquila, Via Vetoio 1 (Coppito), 67100 L'Aquila (AQ), Italy. * Corresponding author: José A. Carrillo. Revised: August 31, 2019. We prove global-in-time existence and uniqueness of measure solutions of a nonlocal interaction system of two species in one spatial dimension. For initial data including atomic parts we provide a notion of gradient-flow solutions in terms of the pseudo-inverses of the corresponding cumulative distribution functions, for which the system can be stated as a gradient flow on the Hilbert space $ L^2(0,1)^2 $ according to the classical theory by Brézis. For absolutely continuous initial data we construct solutions using a minimising movement scheme in the set of probability measures. In addition, we show that the scheme preserves finiteness of the $ L^m $-norms for all $ m\in [1,+\infty] $ and of the second moments. We then provide a characterisation of equilibria and prove that they are achieved (up to time subsequences) in the large-time asymptotics. We conclude the paper by constructing two examples of non-uniqueness of measure solutions emanating from the same (atomic) initial datum, showing that the notion of gradient-flow solution is necessary to single out a unique measure solution. Keywords: systems of aggregation equations, Newtonian potentials, uniqueness of solutions, gradient flows, long-time asymptotics. Mathematics Subject Classification: Primary: 45K05, 35A15; Secondary: 92D25, 35L45, 35L80. Figure 1. This example has two separated indicator functions as initial data. In the left graph we see the evolution of system (6) to the stationary state (right graph). In the middle we see the energy (black) of the solution and its dissipation (red). The dotted line is the numerical time derivative of the energy; it matches well with the analytically obtained dissipation. Figure 2. We choose partially overlapping initial data and observe, as before, that mixing occurs. The graph on the left displays the evolution of both densities at different time instances, while the rightmost graph displays the stationary state with identical densities. The graph in the middle shows the energy decay along the solution; the numerical and analytical dissipations agree well. Figure 3. Initial (left) and exact solution (right) at time $ t = 0.5 $ for the case of two distinct Dirac deltas at the level of distribution functions. Figure 4. Initial (left) and exact solution (right) at time $ t = 1/(4(1-m)) $ with $ m = 0.4 $ for the case of two partially overlapping deltas at the level of distribution functions. [1] L. Ambrosio, N. Gigli and G. Savaré, Gradient Flows in Metric Spaces and in the Space of Probability Measures, Lectures in Mathematics ETH Zürich, Birkhäuser Verlag, Basel, second edition, 2008. [2] L. Ambrosio and G. Savaré, Gradient flows of probability measures, Handbook of Differential Equations: Evolutionary Equations, 3 (2007), 1-136. doi: 10.1016/S1874-5717(07)80004-1.
[3] A. Arnold, P. Markowich and G. Toscani, On large time asymptotics for drift-diffusion-Poisson systems, Proceedings of the Fifth International Workshop on Mathematical Aspects of Fluid and Plasma Dynamics (Maui, HI, 1998), 29 (2000), 571-581. doi: 10.1080/00411450008205893. [4] D. Balagué, J. A. Carrillo, T. Laurent and G. Raoul, Dimensionality of local minimizers of the interaction energy, Arch. Ration. Mech. Anal., 209 (2013), 1055-1088. doi: 10.1007/s00205-013-0644-6. [5] D. Benedetto, E. Caglioti and M. Pulvirenti, A kinetic equation for granular media, RAIRO-Modélisation Mathématique et Analyse Numérique, 31 (1997), 615–641. doi: 10.1051/m2an/1997310506151. [6] A. L. Bertozzi, J. A. Carrillo and T. Laurent, Blow-up in multidimensional aggregation equations with mildly singular interaction kernels, Nonlinearity, 22 (2009), 683-710. doi: 10.1088/0951-7715/22/3/009. [7] A. L. Bertozzi, T. Kolokolnikov, H. Sun, D. Uminsky and J. von Brecht, Ring patterns and their bifurcations in a nonlocal model of biological swarms, Commun. Math. Sci., 13 (2015), 955-985. doi: 10.4310/CMS.2015.v13.n4.a6. [8] A. L. Bertozzi, T. Laurent and F. Léger, Aggregation and spreading via the newtonian potential: The dynamics of patch solutions, Mathematical Models and Methods in Applied Sciences, 22 (2012), 1140005, 39pp. doi: 10.1142/S0218202511400057. [9] A. L. Bertozzi, T. Laurent and J. Rosado, Lp theory for the multidimensional aggregation equation, Communications on Pure and Applied Mathematics, 64 (2011), 45-83. doi: 10.1002/cpa.20334. [10] F. Bolley, Y. Brenier and G. Loeper, Contractive metrics for scalar conservation laws, J. Hyperbolic Differ. Equ., 2 (2005), 91-107. doi: 10.1142/S0219891605000397. [11] G. A. Bonaschi, Gradient flows driven by a non-smooth repulsive interaction potential, Master's thesis, University of Pavia, Italy, arXiv: 1310.3677, 2011. [12] G. A. Bonaschi, J. A. Carrillo, M. Di Francesco and M. A. Peletier, Equivalence of gradient flows and entropy solutions for singular nonlocal interaction equations in 1D, ESAIM Control, Optimisation and Calculus of Variations, 21 (2015), 414-441. doi: 10.1051/cocv/2014032. [13] Y. Brenier, L$^2$ formulation of multidimensional scalar conservation laws, Arch. Ration. Mech. Anal., 193 (2009), 1-19. doi: 10.1007/s00205-009-0214-0. [14] A. Bressan, Hyperbolic Systems of Conservation Laws. The one-dimensional Cauchy Problem, Oxford Lecture Series in Mathematics and its Applications. Oxford University Press, Oxford, 20 edition, 2000. [15] H. Brézis, Opérateurs Maximaux Monotones et Semi-groupes de Contractions Dans Les Espaces de Hilbert, North-Holland Publishing Co., Amsterdam, 1973. [16] M. Burger, M. Di Francesco, S. Fagioli and A. Stevens, Sorting phenomena in a mathematical model for two mutually attracting/repelling species, SIAM Journal on Mathematical Analysis, 50 (2018), 3210-3250. doi: 10.1137/17M1125716. [17] M. Burger, R. Fetecau and Y. Huang, Stationary states and asymptotic behavior of aggregation models with nonlinear local repulsion, SIAM J. Appl. Dyn. Syst., 13 (2014), 397-424. doi: 10.1137/130923786. [18] V. Calvez, J. A. Carrillo and F. Hoffmann, Equilibria of homogeneous functionals in the fair-competition regime, Nonlinear Anal., 159 (2017), 85-128. doi: 10.1016/j.na.2017.03.008. [19] V. Calvez, J. A. Carrillo and F. Hoffmann, The geometry of diffusing and self-attracting particles in a one-dimensional fair-competition regime, Nonlocal and Nonlinear Diffusions and Interactions: New Methods and Directions, 2186 (2017), 1-71. 
[20] J. Carrillo, Y. Huang and M. Schmidtchen, Zoology of a nonlocal cross-diffusion model for two species, SIAM Journal on Applied Mathematics, 78 (2018), 1078-1104. doi: 10.1137/17M1128782. [21] J. A. Carrillo, A. Chertock and Y. Huang, A finite-volume method for nonlinear nonlocal equations with a gradient flow structure, Communications in Computational Physics, 17 (2015), 233-258. doi: 10.4208/cicp.160214.010814a. [22] J. A. Carrillo, K. Craig and Y. Yao, Aggregation-diffusion equations: Dynamics, asymptotics, and singular limits, Active Particles, 2 (2019), 65–108, arXiv: 1810.03634. doi: 10.1007/978-3-030-20297-2_3. [23] J. A. Carrillo, M. G. Delgadino and A. Mellet, Regularity of local minimizers of the interaction energy via obstacle problems, Comm. Math. Phys., 343 (2016), 747-781. doi: 10.1007/s00220-016-2598-7. [24] J. A. Carrillo, M. Di Francesco, A. Figalli, T. Laurent and D. Slepcev, Global-in-time weak measure solutions and finite-time aggregation for nonlocal interaction equations, Duke Math. J., 156 (2011), 229-271. doi: 10.1215/00127094-2010-211. [25] J. A. Carrillo, L. C. Ferreira and J. C. Precioso, A mass-transportation approach to a one dimensional fluid mechanics model with nonlocal velocity, Advances in Mathematics, 231 (2012), 306-327. doi: 10.1016/j.aim.2012.03.036. [26] J. A. Carrillo, F. Filbet and M. Schmidtchen, Convergence of a Finite Volume Scheme for a System of Interacting Species with Cross-Diffusion, arXiv e-prints, Apr. 2018. [27] J. A. Carrillo, Y. Huang and S. Martin, Explicit flock solutions for Quasi-Morse potentials, European J. Appl. Math., 25 (2014), 553-578. doi: 10.1017/S0956792514000126. [28] J. A. Carrillo, S. Martin and V. Panferov, A new interaction potential for swarming models, Phys. D, 260 (2013), 112-126. doi: 10.1016/j.physd.2013.02.004. [29] J. A. Carrillo and G. Toscani, Wasserstein metric and large–time asymptotics of nonlinear diffusion equations, New Trends in Mathematical Physics, (In Honour of the Salvatore Rionero 70th Birthday), 2005, 234–244. [30] T. Cazenave and A. Haraux, An Introduction to Semilinear Evolution Equations, Oxford Lecture Series in Mathematics and its Applications, 13. Oxford University Press, New York, 1998. [31] C. M. Dafermos, Hyperbolic Conservation Laws in Continuum Physics. Fourth Edition, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 325 edition, 2016. doi: 10.1007/978-3-662-49451-6. [32] S. Daneri and G. Savaré, Eulerian calculus for the displacement convexity in the wasserstein distance, SIAM J. Math. Anal., 40 (2008), 1104-1122. doi: 10.1137/08071346X. [33] E. De Giorgi, New problems on minimizing movements, Boundary Value Problems for PDE and Applications, C. Baiocchi and J. L. Lions eds., Masson, 29 (1993), 81-98. [34] M. Di Francesco, A. Esposito and S. Fagioli, Nonlinear degenerate cross-diffusion systems with nonlocal interaction, Nonlinear Analysis, 169 (2018), 94-117. doi: 10.1016/j.na.2017.12.003. [35] M. Di Francesco and S. Fagioli, Measure solutions for nonlocal interaction PDEs with two species, Nonlinearity, 26 (2013), 2777-2808. doi: 10.1088/0951-7715/26/10/2777. [36] M. Di Francesco and D. Matthes, Curves of steepest descent are entropy solutions for a class of degenerate convection-diffusion equations, Calc. Var. Partial Differential Equations, 50 (2014), 199-230. doi: 10.1007/s00526-013-0633-5. [37] M. R. D'Orsogna, Y.-L. Chuang, A. L. Bertozzi and L. S. 
Chayes, Self-propelled particles with soft-core interactions: Patterns, stability, and collapse, Physical Review Letters, 96 (2006), 104302. [38] L. C. Evans, Partial Differential Equations, volume 19 of Graduate Studies in Mathematics, American Mathematical Society, Providence, Rhode Island, 1998. [39] J. H. M. Evers and T. Kolokolnikov, Metastable states for an aggregation model with noise, SIAM J. Appl. Dyn. Syst., 15 (2016), 2213-2226. doi: 10.1137/16M1069006. [40] R. C. Fetecau and Y. Huang, Equilibria of biological aggregations with nonlocal repulsive-attractive interactions, Phys. D, 260 (2013), 49-64. doi: 10.1016/j.physd.2012.11.004. [41] R. C. Fetecau, Y. Huang and T. Kolokolnikov, Swarm dynamics and equilibria for a nonlocal aggregation model, Nonlinearity, 24 (2011), 2681-2716. doi: 10.1088/0951-7715/24/10/002. [42] R. Jordan, D. Kinderlehrer and F. Otto, The variational formulation of the fokker–planck equation, SIAM Journal on Mathematical Analysis, 29 (1998), 1-17. doi: 10.1137/S0036141096303359. [43] T. Kolokolnikov, H. Sun, D. Uminsky and A. L. Bertozzi, Stability of ring patterns arising from two-dimensional particle interactions, Phys. Rev. E, 84 (2011), 015203. doi: 10.1103/PhysRevE.84.015203. [44] S. Kružhkov, First order quasilinear equations with several independent variables, Mat. Sb. (N.S.), 81 (1970), 228-255. [45] O. A. Ladyženskaja, V. A. Solonnikov and N. N. Ural'ceva, Linear and Quasilinear Equations of Parabolic Type, Translated from the Russian by S. Smith. Translations of Mathematical Monographs, Vol. 23. American Mathematical Society, Providence, R.I., 1968. [46] H. Li and G. Toscani, Long-time asymptotics of kinetic models of granular flows, Arch. Ration. Mech. Anal., 172 (2004), 407-428. doi: 10.1007/s00205-004-0307-8. [47] T.-P. Liu and M. Pierre, Source solutions and asymptotic behaviour in conservations laws, J. Differential Equations, 51 (1984), 419-441. doi: 10.1016/0022-0396(84)90096-2. [48] D. Matthes, R. McCann and G. Savaré, A family of fourth order equations of gradient flow type, Comm. P.D.E., 34 (2009), 1352-1397. doi: 10.1080/03605300903296256. [49] R. J. McCann, A convexity principle for interacting gases, Advances in Mathematics, 128 (1997), 153-179. doi: 10.1006/aima.1997.1634. [50] A. Mogilner and L. Edelstein-Keshet, A non-local model for a swarm, Journal of Mathematical Biology, 38 (1999), 534-570. doi: 10.1007/s002850050158. [51] F. Otto, The geometry of dissipative evolution equations: The porous medium equation, Comm. Partial Differential Equations, 26 (2001), 101-174. doi: 10.1081/PDE-100002243. [52] S. T. Rachev and L. Rüschendorf, Mass Transportation Problems, volume I of Probability and Its Applications, Springer, New York, 1998. [53] F. Santambrogio, Optimal Transport for Applied Mathematicians, volume 86 of Progress in Nonlinear Differential Equations and Their Applications., Birkhäuser Verlag, Basel, 2015. doi: 10.1007/978-3-319-20828-2. [54] C. M. Topaz, A. L. Bertozzi and M. A. Lewis, A nonlocal continuum model for biological aggregation, Bull. Math. Biol., 68 (2006), 1601-1623. doi: 10.1007/s11538-006-9088-6. [55] G. Toscani, One-dimensional kinetic models of granular flows, ESAIM: Modélisation Mathématique et Analyse Numérique, 34 (2000), 1277–1291. doi: 10.1051/m2an:2000127. [56] C. Villani, Topics in Optimal Transportation, volume 58 of Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, 2003. doi: 10.1007/b12016. [57] C. 
Villani, Optimal Transport: Old and New, Grundlehren der mathematischen Wissenschaften, Springer, Berlin, 2009. doi: 10.1007/978-3-540-71050-9.
Describe the meaning of the Mean Value Theorem for Integrals. State the meaning of the Fundamental Theorem of Calculus, Part 1. Use the Fundamental Theorem of Calculus, Part 1, to evaluate derivatives of integrals. Use the Fundamental Theorem of Calculus, Part 2, to evaluate definite integrals. Explain the relationship between differentiation and integration. In the previous two sections, we looked at the definite integral and its relationship to the area under the curve of a function. Unfortunately, so far, the only tools we have available to calculate the value of a definite integral are geometric area formulas and limits of Riemann sums, and both approaches are extremely cumbersome. In this section we look at some more powerful and useful techniques for evaluating definite integrals. These new techniques rely on the relationship between differentiation and integration. This relationship was discovered and explored by both Sir Isaac Newton and Gottfried Wilhelm Leibniz (among others) during the late 1600s and early 1700s, and it is codified in what we now call the Fundamental Theorem of Calculus, which has two parts that we examine in this section. Its very name indicates how central this theorem is to the entire development of calculus. Isaac Newton's contributions to mathematics and physics changed the way we look at the world. The relationships he discovered, codified as Newton's laws and the law of universal gravitation, are still taught as foundational material in physics today, and his calculus has spawned entire fields of mathematics. To learn more, read a brief biography of Newton with multimedia clips. Before we get to this crucial theorem, however, let's examine another important theorem, the Mean Value Theorem for Integrals, which is needed to prove the Fundamental Theorem of Calculus. The Mean Value Theorem for Integrals. The Mean Value Theorem for Integrals states that a continuous function on a closed interval takes on its average value at some point in that interval. The theorem guarantees that if $f(x)$ is continuous, a point $c$ exists in an interval $[a,b]$ such that the value of the function at $c$ is equal to the average value of $f(x)$ over $[a,b]$. We state this theorem mathematically with the help of the formula for the average value of a function that we presented at the end of the preceding section. If $f(x)$ is continuous over an interval $[a,b]$, then there is at least one point $c\in[a,b]$ such that $$f(c) = \frac{1}{b-a}\int_a^b f(x)\,dx.$$ This formula can also be stated as $$\int_a^b f(x)\,dx = f(c)(b-a).$$ Proof: Since $f(x)$ is continuous on $[a,b]$, by the extreme value theorem (see Maxima and Minima) it assumes a minimum value $m$ and a maximum value $M$ on $[a,b]$. Then, for all $x$ in $[a,b]$, we have $m \le f(x) \le M$. Therefore, by the comparison theorem (see The Definite Integral), we have $$m(b-a) \le \int_a^b f(x)\,dx \le M(b-a).$$ Dividing by $b-a$ gives us $$m \le \frac{1}{b-a}\int_a^b f(x)\,dx \le M.$$ Since $\frac{1}{b-a}\int_a^b f(x)\,dx$ is a number between $m$ and $M$, and since $f(x)$ is continuous and assumes the values $m$ and $M$ over $[a,b]$, by the Intermediate Value Theorem (see Continuity) there is a number $c$ over $[a,b]$ such that $$f(c) = \frac{1}{b-a}\int_a^b f(x)\,dx,$$ and the proof is complete. Finding the Average Value of a Function: Find the average value of the function $f(x) = 8 - 2x$ over the interval $[0,4]$ and find $c$ such that $f(c)$ equals the average value of the function over $[0,4]$. The formula states the mean value of $f(x)$ is given by $$\frac{1}{4-0}\int_0^4 (8-2x)\,dx.$$ We can see in (Figure) that the function represents a straight line and forms a right triangle bounded by the $x$- and $y$-axes. The area of the triangle is $A = \frac{1}{2}(\text{base})(\text{height}) = \frac{1}{2}(4)(8) = 16$. The average value is found by multiplying the area by $1/(4-0)$. Thus, the average value of the function is $\frac{1}{4}(16) = 4$. Set the average value equal to $f(c)$ and solve for $c$: $8 - 2c = 4$, so $c = 2$. Figure 1. By the Mean Value Theorem, the continuous function $f(x)$ takes on its average value at $c$ at least once over a closed interval.
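As a quick cross-check of the example above (using the function $f(x) = 8-2x$ as reconstructed there, so the specific numbers depend on that assumption), the average value can also be computed directly from the integral rather than from the triangle's area:

$$\frac{1}{4-0}\int_0^4 (8-2x)\,dx = \frac{1}{4}\Big[8x - x^2\Big]_0^4 = \frac{1}{4}(32-16) = 4, \qquad 8-2c = 4 \;\Rightarrow\; c = 2.$$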
Use the procedures from (Figure) to solve the problem. Finding the Point Where a Function Takes On Its Average Value: Given $\int_0^3 x^2\,dx = 9$, find $c$ such that $f(c)$ equals the average value of $f(x) = x^2$ over $[0,3]$. We are looking for the value of $c$ such that $$f(c) = \frac{1}{3-0}\int_0^3 x^2\,dx = \frac{1}{3}(9) = 3.$$ Replacing $f(c)$ with $c^2$, we have $c^2 = 3$, so $c = \pm\sqrt{3}$. Since $-\sqrt{3}$ is outside the interval, take only the positive value. Thus, $c = \sqrt{3}$ ((Figure)). Figure 2. Over the interval $[0,3]$, the function $f(x) = x^2$ takes on its average value at $c = \sqrt{3}$. Use the procedures from (Figure) to solve the problem. Fundamental Theorem of Calculus Part 1: Integrals and Antiderivatives. As mentioned earlier, the Fundamental Theorem of Calculus is an extremely powerful theorem that establishes the relationship between differentiation and integration, and gives us a way to evaluate definite integrals without using Riemann sums or calculating areas. The theorem is comprised of two parts, the first of which, the Fundamental Theorem of Calculus, Part 1, is stated here. Part 1 establishes the relationship between differentiation and integration. Fundamental Theorem of Calculus, Part 1: If $f(x)$ is continuous over an interval $[a,b]$ and the function $F(x)$ is defined by $$F(x) = \int_a^x f(t)\,dt,$$ then $F'(x) = f(x)$ over $[a,b]$. Before we delve into the proof, a couple of subtleties are worth mentioning here. First, a comment on the notation. Note that we have defined a function, $F(x)$, as the definite integral of another function, $f(t)$, from the point $a$ to the point $x$. At first glance, this is confusing, because we have said several times that a definite integral is a number, and here it looks like it's a function. The key here is to notice that for any particular value of $x$, the definite integral is a number. So the function $F(x)$ returns a number (the value of the definite integral) for each value of $x$. Second, it is worth commenting on some of the key implications of this theorem. There is a reason it is called the Fundamental Theorem of Calculus. Not only does it establish a relationship between integration and differentiation, but also it guarantees that any integrable function has an antiderivative. Specifically, it guarantees that any continuous function has an antiderivative. Proof: Applying the definition of the derivative, we have $$F'(x) = \lim_{h\to 0}\frac{F(x+h)-F(x)}{h} = \lim_{h\to 0}\frac{1}{h}\left[\int_a^{x+h} f(t)\,dt - \int_a^x f(t)\,dt\right] = \lim_{h\to 0}\frac{1}{h}\left[\int_a^{x+h} f(t)\,dt + \int_x^a f(t)\,dt\right] = \lim_{h\to 0}\frac{1}{h}\int_x^{x+h} f(t)\,dt.$$ Looking carefully at this last expression, we see $\frac{1}{h}\int_x^{x+h} f(t)\,dt$ is just the average value of the function $f(t)$ over the interval $[x, x+h]$. Therefore, by (Figure), there is some number $c$ in $[x, x+h]$ such that $$\frac{1}{h}\int_x^{x+h} f(t)\,dt = f(c).$$ In addition, since $c$ is between $x$ and $x+h$, $c$ approaches $x$ as $h$ approaches zero. Also, since $f(x)$ is continuous, we have $\lim_{h\to 0} f(c) = \lim_{c\to x} f(c) = f(x)$. Putting all these pieces together, we have $$F'(x) = \lim_{h\to 0}\frac{1}{h}\int_x^{x+h} f(t)\,dt = \lim_{h\to 0} f(c) = f(x),$$ and the proof is complete.
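As an added illustration of Part 1 (an example not present in the original text), the theorem lets us differentiate an accumulation function even when the integrand has no elementary antiderivative:

$$\frac{d}{dx}\int_1^x e^{t^2}\,dt = e^{x^2},$$

since $e^{t^2}$ is continuous everywhere, even though $\int e^{t^2}\,dt$ cannot be written in closed form.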
Finding a Derivative with the Fundamental Theorem of Calculus: Use (Figure) to find the derivative of $$g(x) = \int_1^x \frac{1}{t^3+1}\,dt.$$ According to the Fundamental Theorem of Calculus, the derivative is given by $$g'(x) = \frac{1}{x^3+1}.$$ Use the Fundamental Theorem of Calculus, Part 1, to find the derivative in the checkpoint exercise; follow the procedures from (Figure) to solve the problem. Using the Fundamental Theorem and the Chain Rule to Calculate Derivatives: Let $$F(x) = \int_1^{\sqrt{x}} \sin t\,dt.$$ Find $F'(x)$. Letting $u(x) = \sqrt{x}$, we have $F(x) = \int_1^{u(x)} \sin t\,dt$. Thus, by the Fundamental Theorem of Calculus and the chain rule, $$F'(x) = \sin(u(x))\,\frac{du}{dx} = \sin(u(x))\cdot\left(\frac{1}{2}x^{-1/2}\right) = \frac{\sin\sqrt{x}}{2\sqrt{x}}.$$ Use the chain rule to solve the problem. Using the Fundamental Theorem of Calculus with Two Variable Limits of Integration: Let $F(x) = \int_x^{2x} t^3\,dt$. Find $F'(x)$. Both limits of integration are variable, so we need to split this into two integrals. We get $$F(x) = \int_x^{2x} t^3\,dt = \int_x^0 t^3\,dt + \int_0^{2x} t^3\,dt = -\int_0^x t^3\,dt + \int_0^{2x} t^3\,dt.$$ Differentiating the first term, we obtain $$\frac{d}{dx}\left[-\int_0^x t^3\,dt\right] = -x^3.$$ Differentiating the second term, we first let $u(x) = 2x$. Then, $$\frac{d}{dx}\left[\int_0^{2x} t^3\,dt\right] = \frac{d}{dx}\left[\int_0^{u(x)} t^3\,dt\right] = (u(x))^3\,\frac{du}{dx} = (2x)^3\cdot 2 = 16x^3.$$ Thus, $$F'(x) = \frac{d}{dx}\left[-\int_0^x t^3\,dt\right] + \frac{d}{dx}\left[\int_0^{2x} t^3\,dt\right] = -x^3 + 16x^3 = 15x^3.$$
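The two computations above are instances of a single general pattern, stated here as an added summary for an integrand $f$ continuous on the relevant intervals and differentiable limits $u(x)$ and $v(x)$:

$$\frac{d}{dx}\int_{u(x)}^{v(x)} f(t)\,dt = f(v(x))\,v'(x) - f(u(x))\,u'(x).$$

With $u(x) = x$, $v(x) = 2x$, and $f(t) = t^3$, this reproduces $F'(x) = (2x)^3\cdot 2 - x^3\cdot 1 = 15x^3$.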
Fundamental Theorem of Calculus, Part 2: The Evaluation Theorem. The Fundamental Theorem of Calculus, Part 2, is perhaps the most important theorem in calculus. After tireless efforts by mathematicians for approximately 500 years, new techniques emerged that provided scientists with the necessary tools to explain many phenomena. Using calculus, astronomers could finally determine distances in space and map planetary orbits. Everyday financial problems such as calculating marginal costs or predicting total profit could now be handled with simplicity and accuracy. Engineers could calculate the bending strength of materials or the three-dimensional motion of objects. Our view of the world was forever changed with calculus. After finding approximate areas by adding the areas of rectangles, the application of this theorem is straightforward by comparison. It almost seems too simple that the area of an entire curved region can be calculated by just evaluating an antiderivative at the first and last endpoints of an interval. The Fundamental Theorem of Calculus, Part 2: If $f$ is continuous over the interval $[a,b]$ and $F(x)$ is any antiderivative of $f(x)$, then $$\int_a^b f(x)\,dx = F(b) - F(a).$$ We often see the notation $F(x)\big|_a^b$ to denote the expression $F(b) - F(a)$. We use this vertical bar and associated limits $a$ and $b$ to indicate that we should evaluate the function $F(x)$ at the upper limit (in this case, $b$), and subtract the value of the function $F(x)$ evaluated at the lower limit (in this case, $a$). The Fundamental Theorem of Calculus, Part 2 (also known as the evaluation theorem) states that if we can find an antiderivative for the integrand, then we can evaluate the definite integral by evaluating the antiderivative at the endpoints of the interval and subtracting. Proof: Let $P = \{x_i\}$, $i = 0, 1, \ldots, n$, be a regular partition of $[a,b]$. Then, we can write $$F(b) - F(a) = \sum_{i=1}^n \left[F(x_i) - F(x_{i-1})\right].$$ Now, we know $F$ is an antiderivative of $f$ over $[a,b]$, so by the Mean Value Theorem (see The Mean Value Theorem), for each $i$ we can find $c_i$ in $[x_{i-1}, x_i]$ such that $$F(x_i) - F(x_{i-1}) = F'(c_i)(x_i - x_{i-1}) = f(c_i)\,\Delta x.$$ Then, substituting into the previous equation, we have $$F(b) - F(a) = \sum_{i=1}^n f(c_i)\,\Delta x.$$ Taking the limit of both sides as $n\to\infty$, we obtain $$F(b) - F(a) = \lim_{n\to\infty}\sum_{i=1}^n f(c_i)\,\Delta x = \int_a^b f(x)\,dx.$$ Evaluating an Integral with the Fundamental Theorem of Calculus: Use (Figure) to evaluate $$\int_{-2}^2 (t^2 - 4)\,dt.$$ Recall the power rule for antiderivatives: $\int x^n\,dx = \frac{x^{n+1}}{n+1} + C$ for $n \neq -1$. Use this rule to find the antiderivative of the function and then apply the theorem. We have $$\int_{-2}^2 (t^2-4)\,dt = \left(\frac{t^3}{3} - 4t\right)\Bigg|_{-2}^{2} = \left(\frac{8}{3} - 8\right) - \left(-\frac{8}{3} + 8\right) = -\frac{32}{3}.$$ Notice that we did not include the "+ C" term when we wrote the antiderivative. The reason is that, according to the Fundamental Theorem of Calculus, Part 2, any antiderivative works. So, for convenience, we chose the antiderivative with $C = 0$. If we had chosen another antiderivative, the constant term would have canceled out. This always happens when evaluating a definite integral. The region of the area we just calculated is depicted in (Figure). Note that the region between the curve and the $x$-axis is all below the $x$-axis. Area is always positive, but a definite integral can still produce a negative number (a net signed area). For example, if this were a profit function, a negative number indicates the company is operating at a loss over the given interval. Figure 3. The evaluation of a definite integral can produce a negative value, even though area is always positive.
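By contrast with the negative net signed area above, here is a small added example (not from the original text) in which the region lies entirely above the axis; it is the same integral that appeared in the earlier average-value checkpoint:

$$\int_0^3 x^2\,dx = \frac{x^3}{3}\Bigg|_0^3 = \frac{27}{3} - 0 = 9.$$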
Evaluating a Definite Integral Using the Fundamental Theorem of Calculus, Part 2: Evaluate the following integral using the Fundamental Theorem of Calculus, Part 2: $$\int_1^9 \frac{x-1}{\sqrt{x}}\,dx.$$ First, eliminate the radical by rewriting the integral using rational exponents. Then, separate the numerator terms by writing each one over the denominator: $$\int_1^9 \frac{x-1}{x^{1/2}}\,dx = \int_1^9\left(\frac{x}{x^{1/2}} - \frac{1}{x^{1/2}}\right)dx.$$ Use the properties of exponents to simplify: $$\int_1^9\left(x^{1/2} - x^{-1/2}\right)dx.$$ Now, integrate using the power rule: $$\int_1^9 (x^{1/2} - x^{-1/2})\,dx = \left(\frac{x^{3/2}}{3/2} - \frac{x^{1/2}}{1/2}\right)\Bigg|_1^9 = \left[\frac{(9)^{3/2}}{3/2} - \frac{(9)^{1/2}}{1/2}\right] - \left[\frac{(1)^{3/2}}{3/2} - \frac{(1)^{1/2}}{1/2}\right] = \left[\frac{2}{3}(27) - 2(3)\right] - \left[\frac{2}{3}(1) - 2(1)\right] = 18 - 6 - \frac{2}{3} + 2 = \frac{40}{3}.$$ See (Figure). Figure 4. The area under the curve from $x = 1$ to $x = 9$ can be calculated by evaluating a definite integral. Use the power rule. A Roller-Skating Race: James and Kathy are racing on roller skates. They race along a long, straight track, and whoever has gone the farthest after 5 sec wins a prize. If James can skate at a velocity of $f(t) = 5 + 2t$ ft/sec and Kathy can skate at a velocity of $g(t) = 10 + \cos\left(\frac{\pi}{2}t\right)$ ft/sec, who is going to win the race? We need to integrate both functions over the interval $[0,5]$ and see which value is bigger. For James, we want to calculate $$\int_0^5 (5 + 2t)\,dt.$$ Using the power rule, we have $$\int_0^5 (5+2t)\,dt = \left(5t + t^2\right)\Big|_0^5 = 25 + 25 = 50.$$ Thus, James has skated 50 ft after 5 sec. Turning now to Kathy, we want to calculate $$\int_0^5 \left(10 + \cos\left(\frac{\pi}{2}t\right)\right)dt.$$ We know $\sin t$ is an antiderivative of $\cos t$, so it is reasonable to expect that an antiderivative of $\cos\left(\frac{\pi}{2}t\right)$ would involve $\sin\left(\frac{\pi}{2}t\right)$. However, when we differentiate $\sin\left(\frac{\pi}{2}t\right)$, we get $\frac{\pi}{2}\cos\left(\frac{\pi}{2}t\right)$ as a result of the chain rule, so we have to account for this additional coefficient when we integrate. We obtain $$\int_0^5 \left(10 + \cos\left(\frac{\pi}{2}t\right)\right)dt = \left(10t + \frac{2}{\pi}\sin\left(\frac{\pi}{2}t\right)\right)\Bigg|_0^5 = 50 + \frac{2}{\pi} \approx 50.6.$$ Kathy has skated approximately 50.6 ft after 5 sec. Kathy wins, but not by much! Suppose James and Kathy have a rematch, but this time the official stops the contest after only 3 sec. Does this change the outcome? Kathy still wins, but by a much larger margin: James skates 24 ft in 3 sec, but Kathy skates 29.3634 ft in 3 sec. Change the limits of integration from those in (Figure).
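A quick numerical cross-check of the race (an added sketch; it assumes the velocity functions as reconstructed above, which are consistent with the stated distances 50, 50.6, 24, and 29.3634 ft):

```python
# Verify the roller-skating race distances with numerical quadrature.
import numpy as np
from scipy.integrate import quad

james = lambda t: 5 + 2 * t                    # James's velocity, ft/sec
kathy = lambda t: 10 + np.cos(np.pi / 2 * t)   # Kathy's velocity, ft/sec

for T in (5, 3):                               # full race and the 3-second rematch
    dj, _ = quad(james, 0, T)
    dk, _ = quad(kathy, 0, T)
    print(f"T={T}s: James {dj:.4f} ft, Kathy {dk:.4f} ft")
# Expected: 50 vs ~50.6366 at T=5, and 24 vs ~29.3634 at T=3.
```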
A Parachutist in Free Fall. Figure 5. Skydivers can adjust the velocity of their dive by changing the position of their body during the free fall. (credit: Jeremy T. Lock) Julie is an avid skydiver. She has more than 300 jumps under her belt and has mastered the art of making adjustments to her body position in the air to control how fast she falls. If she arches her back and points her belly toward the ground, she reaches a terminal velocity of approximately 120 mph (176 ft/sec). If, instead, she orients her body with her head straight down, she falls faster, reaching a terminal velocity of 150 mph (220 ft/sec). Since Julie will be moving (falling) in a downward direction, we assume the downward direction is positive to simplify our calculations. Julie executes her jumps from an altitude of 12,500 ft. After she exits the aircraft, she immediately starts falling at a velocity given by $v(t) = 32t$. She continues to accelerate according to this velocity function until she reaches terminal velocity. After she reaches terminal velocity, her speed remains constant until she pulls her ripcord and slows down to land. On her first jump of the day, Julie orients herself in the slower "belly down" position (terminal velocity is 176 ft/sec). Using this information, answer the following questions. How long after she exits the aircraft does Julie reach terminal velocity? Based on your answer to question 1, set up an expression involving one or more integrals that represents the distance Julie falls after 30 sec (a worked setup appears after these questions). If Julie pulls her ripcord at an altitude of 3000 ft, how long does she spend in a free fall? Julie pulls her ripcord at 3000 ft. It takes 5 sec for her parachute to open completely and for her to slow down, during which time she falls another 400 ft. After her canopy is fully open, her speed is reduced to 16 ft/sec. Find the total time Julie spends in the air, from the time she leaves the airplane until the time her feet touch the ground. On Julie's second jump of the day, she decides she wants to fall a little faster and orients herself in the "head down" position. Her terminal velocity in this position is 220 ft/sec. Answer these questions based on this velocity: How long does it take Julie to reach terminal velocity in this case? Before pulling her ripcord, Julie reorients her body in the "belly down" position so she is not moving quite as fast when her parachute opens. If she begins this maneuver at an altitude of 4000 ft, how long does she spend in a free fall before beginning the reorientation? Some jumpers wear "wingsuits" (see (Figure)). These suits have fabric panels between the arms and legs and allow the wearer to glide around in a free fall, much like a flying squirrel. (Indeed, the suits are sometimes called "flying squirrel suits.") When wearing these suits, terminal velocity can be reduced to about 30 mph (44 ft/sec), allowing the wearers a much longer time in the air. Wingsuit flyers still use parachutes to land; although the vertical velocities are within the margin of safety, horizontal velocities can exceed 70 mph, much too fast to land safely. Figure 6. The fabric panels on the arms and legs of a wingsuit work to reduce the vertical velocity of a skydiver's fall. (credit: Richard Schneider) Answer the following question based on the velocity in a wingsuit. If Julie dons a wingsuit before her third jump of the day, and she pulls her ripcord at an altitude of 3000 ft, how long does she get to spend gliding around in the air?
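For question 2, here is a worked setup (added for illustration; it assumes the reconstructed velocity $v(t) = 32t$, so treat the numbers as depending on that assumption). Julie reaches 176 ft/sec at $t = 176/32 = 5.5$ sec, so the distance fallen after 30 sec is

$$\int_0^{5.5} 32t\,dt + \int_{5.5}^{30} 176\,dt = 16t^2\Big|_0^{5.5} + 176(30 - 5.5) = 484 + 4312 = 4796 \text{ ft}.$$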
The Mean Value Theorem for Integrals states that for a continuous function over a closed interval, there is a value $c$ such that $f(c)$ equals the average value of the function. See (Figure). The Fundamental Theorem of Calculus, Part 1 shows the relationship between the derivative and the integral. See (Figure). The Fundamental Theorem of Calculus, Part 2 is a formula for evaluating a definite integral in terms of an antiderivative of its integrand. The total area under a curve can be found using this formula. See (Figure). Key Equations: Mean Value Theorem for Integrals: if $f(x)$ is continuous over $[a,b]$, then there is at least one point $c\in[a,b]$ such that $f(c) = \frac{1}{b-a}\int_a^b f(x)\,dx$. Fundamental Theorem of Calculus, Part 1: if $f(x)$ is continuous over $[a,b]$ and $F(x) = \int_a^x f(t)\,dt$, then $F'(x) = f(x)$. Fundamental Theorem of Calculus, Part 2: if $f$ is continuous over $[a,b]$ and $F$ is any antiderivative of $f$, then $\int_a^b f(x)\,dx = F(b) - F(a)$. 1. Consider two athletes running at variable speeds $v_1(t)$ and $v_2(t)$. The runners start and finish a race at exactly the same time. Explain why the two runners must be going the same speed at some point. 2. Two mountain climbers start their climb at base camp, taking two different routes, one steeper than the other, and arrive at the peak at exactly the same time. Is it necessarily true that, at some point, both climbers increased in altitude at the same rate? Yes. It is implied by the Mean Value Theorem for Integrals. 3. To get on a certain toll road a driver has to take a card that lists the mile entrance point. The card also has a timestamp. When going to pay the toll at the exit, the driver is surprised to receive a speeding ticket along with the toll. Explain how this can happen. 4. Set $F(x) = \int_1^x (1-t)\,dt$. Find $F(2)$ and the average value of $F'$ over $[1,2]$. $F(2) = -\frac{1}{2}$; the average value of $F'$ over $[1,2]$ is $-\frac{1}{2}$. In the following exercises, use the Fundamental Theorem of Calculus, Part 1, to find each derivative. 17. The graph of $y = \int_0^x f(t)\,dt$, where $f$ is a piecewise constant function, is shown here. Over which intervals is $f$ positive? Over which intervals is it negative? Over which intervals, if any, is it equal to zero? What are the maximum and minimum values of $f$? What is the average value of $f$? a. $f$ is positive over and , negative over and , and zero over and . b. The maximum value is 2 and the minimum is −3. c. The average value is 0. 19. The graph of $y = \int_0^x \ell(t)\,dt$, where $\ell$ is a piecewise linear function, is shown here. Over which intervals is $\ell$ positive? Over which intervals is it negative? Over which, if any, is it zero? Over which intervals is $\ell$ increasing? Over which is it decreasing? Over which, if any, is it constant? What is the average value of $\ell$? a. $\ell$ is positive over and , and negative over . b. It is increasing over and , and it is constant over and . c. Its average value is . In the following exercises, use a calculator to estimate the area under the curve by computing T10, the average of the left- and right-endpoint Riemann sums using rectangles. Then, using the Fundamental Theorem of Calculus, Part 2, determine the exact area. 21. [T] over . In the following exercises, evaluate each definite integral using the Fundamental Theorem of Calculus, Part 2. In the following exercises, use the evaluation theorem to express the integral as a function $F(x)$. In the following exercises, identify the roots of the integrand to remove absolute values, then evaluate using the Fundamental Theorem of Calculus, Part 2. 55. Suppose that the number of hours of daylight on a given day in Seattle is modeled by the function , with $t$ given in months and $t = 0$ corresponding to the winter solstice. What is the average number of daylight hours in a year? At which times $t_1$ and $t_2$, where $0 \le t_1 < t_2 < 12$, do the number of daylight hours equal the average number? Write an integral that expresses the total number of daylight hours in Seattle between $t_1$ and $t_2$. Compute the mean hours of daylight in Seattle between $t_1$ and $t_2$, and then between $t_2$ and $t_1 + 12$, and show that the average of the two is equal to the average day length. 56. Suppose the rate of gasoline consumption in the United States can be modeled by a sinusoidal function of the form gal/mo. What is the average monthly consumption, and for which values of $t$ is the rate at time $t$ equal to the average rate? What is the number of gallons of gasoline consumed in the United States in a year? Write an integral that expresses the average monthly U.S. gas consumption during the part of the year between the beginning of April and the end of September. a. The average is , since the cosine term has period 12 and integral 0 over any period. Consumption is equal to the average when the cosine term vanishes. b. Total consumption is the average rate times duration. c. 57. Explain why, if $f$ is continuous over $[a,b]$, there is at least one point $c\in[a,b]$ such that $f(c) = \frac{1}{b-a}\int_a^b f(t)\,dt$. 58. Explain why, if $f$ is continuous over $[a,b]$ and is not equal to a constant, there is at least one point $M\in[a,b]$ such that $f(M) > \frac{1}{b-a}\int_a^b f(t)\,dt$ and at least one point $m\in[a,b]$ such that $f(m) < \frac{1}{b-a}\int_a^b f(t)\,dt$. If $f$ is not constant, then its average is strictly smaller than the maximum and larger than the minimum, which are attained over $[a,b]$ by the extreme value theorem. 59. Kepler's first law states that the planets move in elliptical orbits with the Sun at one focus.
The closest point of a planetary orbit to the Sun is called the perihelion (for Earth, it currently occurs around January 3) and the farthest point is called the aphelion (for Earth, it currently occurs around July 4). Kepler's second law states that planets sweep out equal areas of their elliptical orbits in equal times. Thus, the two arcs indicated in the following figure are swept out in equal times. At what time of year is Earth moving fastest in its orbit? When is it moving slowest? 60. A point on an ellipse with major axis length $2a$ and minor axis length $2b$ has the coordinates $(a\cos\theta, b\sin\theta)$. Show that the distance from this point to the focus at $(-c, 0)$ is $d(\theta) = a + c\cos\theta$, where $c = \sqrt{a^2 - b^2}$. Use these coordinates to show that the average distance from a point on the ellipse to the focus at $(-c, 0)$, with respect to the angle θ, is $a$. 61. As implied earlier, according to Kepler's laws, Earth's orbit is an ellipse with the Sun at one focus. The perihelion for Earth's orbit around the Sun is 147,098,290 km and the aphelion is 152,098,232 km. By placing the major axis along the $x$-axis, find the average distance from Earth to the Sun. The classic definition of an astronomical unit (AU) is the distance from Earth to the Sun, and its value was computed as the average of the perihelion and aphelion distances. Is this definition justified? 62. The force of gravitational attraction between the Sun and a planet is $F(\theta) = \frac{GmM}{r^2(\theta)}$, where $m$ is the mass of the planet, $M$ is the mass of the Sun, $G$ is a universal constant, and $r(\theta)$ is the distance between the Sun and the planet when the planet is at an angle θ with the major axis of its orbit. Assuming that $M$, $m$, and the ellipse parameters $a$ and $b$ (half-lengths of the major and minor axes) are given, set up, but do not evaluate, an integral that expresses the average gravitational force between the Sun and the planet in terms of these quantities. Mean gravitational force $= \frac{1}{2\pi}\int_0^{2\pi}\frac{GmM}{\left(a + \sqrt{a^2-b^2}\,\cos\theta\right)^2}\,d\theta$. 63. The displacement from rest of a mass attached to a spring satisfies the simple harmonic motion equation $x(t) = A\cos(\omega t - \phi)$, where $\phi$ is a phase constant, ω is the angular frequency, and $A$ is the amplitude. Find the average velocity, the average speed (magnitude of velocity), the average displacement, and the average distance from rest (magnitude of displacement) of the mass. Glossary: fundamental theorem of calculus: the theorem, central to the entire development of calculus, that establishes the relationship between differentiation and integration. fundamental theorem of calculus, part 1: uses a definite integral to define an antiderivative of a function. fundamental theorem of calculus, part 2: (also, evaluation theorem) we can evaluate a definite integral by evaluating the antiderivative of the integrand at the endpoints of the interval and subtracting. mean value theorem for integrals: guarantees that a point $c$ exists such that $f(c)$ is equal to the average value of the function.
nLab General Discussions: Derived Category of Sheaves

#1 hilbertthm90, Jul 31st 2011:
Recently I've been looking a lot at the derived category of coherent sheaves on a scheme (specifically of Calabi-Yau threefolds). I've been told that it is "better" to consider this as a dg-category, for whatever reason. I'm quite interested in exploring this, since I have no idea why this would be. For instance, the great book Fourier-Mukai Transforms in Algebraic Geometry by Huybrechts develops tons of theory and has lots of beautiful theorems, but the phrase "dg-category" never once appears in the whole book. Do you really get much more from this point of view? Anyway, I'm interested in writing notes here as I try to figure out what is going on. I'm trying to take inventory of what pages already exist on this topic. I've found [[derived algebraic geometry]], [[dg-category]], and [[derived category]]. I can't find anything more on topic than those, so am I right in assuming that there is very little here about the derived category of sheaves, or am I missing it somewhere?

#2:
This is an example of a general phenomenon: a [[derived category]] is always the [[homotopy category]] of a [[stable (infinity,1)-category]]. Generally, a homotopy category is, by definition, good for remembering equivalence classes of objects (as in [[equivalence in an (infinity,1)-category]]). But for nothing else! Meaning: no actual category theory works in the homotopy category as expected. Meaning: no universal constructions, no limits, no colimits, etc. The reason is that the correct universal constructions must be universal with respect to the higher degrees of freedom of the higher category (must be [[homotopy limits]] etc.), which is precisely the information that is discarded when passing to the homotopy category. Now there are plenty of models and presentations for $(\infty,1)$-categories in general and [[stable (infinity,1)-categories]] in particular (see that entry for links; scroll down a bit), the main one being linear [[A-infinity categories]]. These can always be strictified to [[dg-categories]]. Therefore many texts will tell you to use these.
But for nothing else! Meaning: no actual category theory works in the homotopy category as expected. Meaning: no universal constructions, no limits, no colimits, etc. The reason is that the correct universal constructions must be universal with respect to the higher degrees of freedom of the higher category (must be homotopy limits etc.) which is precisely the information that is discarded when passing to the homotopy category. Now there are plenty of models and presentations for (∞,1)(\infty,1)-categories in general and stable (infinity,1)-categories in particular (see that entry for links, scroll down a bit). The main one being by linear A-infinity categories. These can always be strictified to dg-categories. Therefore many texts will tell you to use these. Format: MarkdownItexThanks. This is great to get me started. I guess one thing I'm really curious about right from the start is the following: If $X$ and $Y$ are two varieties (maybe say CY 3-folds), then there seems to be two notions going around that may or may not be equivalent. If $D^b(X)$ is equivalent as a triangulated category to $D^b(Y)$, is it still possible that they are *not* equivalent as A-infinity categories ... or dg-categories? The essence of this question being, the way I think about these things right now is in some sense the "decategorified" version, so if I care about these up to equivalence, do I actually lose something? Another example is that there is a construction due to Caldararu to show that it is possible for X, Y to be non-birational CY 3-folds and still have equivalent derived categories as triangulated categories. It would be interesting to know if using equivalence as A-infinity categories rules this possibility out. Thanks. This is great to get me started. I guess one thing I'm really curious about right from the start is the following: If XX and YY are two varieties (maybe say CY 3-folds), then there seems to be two notions going around that may or may not be equivalent. If D b(X)D^b(X) is equivalent as a triangulated category to D b(Y)D^b(Y), is it still possible that they are not equivalent as A-infinity categories … or dg-categories? The essence of this question being, the way I think about these things right now is in some sense the "decategorified" version, so if I care about these up to equivalence, do I actually lose something? Another example is that there is a construction due to Caldararu to show that it is possible for X, Y to be non-birational CY 3-folds and still have equivalent derived categories as triangulated categories. It would be interesting to know if using equivalence as A-infinity categories rules this possibility out. Format: MarkdownItex> If $D^b(X)$ is equivalent as a triangulated category to $D^b(Y)$, is it still possible that they are *not* equivalent as A-infinity categories ... or dg-categories? It is certainly in general so that two homotopy categories $Ho(\mathcal{C})$ ad $Ho(\mathcal{D})$ being equivalent says very little about whether the $\infty$-categories $\mathcal{C}$ and $\mathcal{D}$ are equivalent. In your case you are looking at homotopy categories of the special kind $D^b(X) = Ho(QC(X))$ etc. So for general $\mathcal{D}$ an equivalence $D^b(X) \simeq Ho(QC(X)) \simeq Ho(\mathcal{D})$ says nothing about an equivalence $QC(X) \simeq \mathcal{D}$. Now of course if $\mathcal{D}$ itself is constrained to be of the form $QC(Y)$ maybe one can say more. I am not expert enough on that situation to know. But Zoran knows examples and counter-examples for such things. 
He can tell once he comes online. > The essence of this question being, the way I think about these things right now is in some sense the "decategorified" version, That's exactly what it is. $D^b(X)$ is the decategorification from $(\infty,1)$ to $(1,1)$ of $QC(X)$. > so if I care about these up to equivalence, do I actually lose something? As I said, $D^b(X)$ knows everything about the equivalence classes in $QC(X)$ (the isomorphism classes in $D^b(X)$ are the equivalence class in $QC(X)$). So for that purpose $D^b(X)$ gives you everything you might want. A problem arises as soon as you want to look at universal construction of quasicoherent sheaves. $D^b(X)$ has forgotten essentially everything about these. > Another example is that there is a construction due to Caldararu to show that it is possible for X, Y to be non-birational CY 3-folds and still have equivalent derived categories as triangulated categories. It would be interesting to know if using equivalence as A-infinity categories rules this possibility out Right. Again, I think, Zoran might know. I guess this is in the literature. But I don't know off the top of my head. If D b(X)D^b(X) is equivalent as a triangulated category to D b(Y)D^b(Y), is it still possible that they are not equivalent as A-infinity categories … or dg-categories? It is certainly in general so that two homotopy categories Ho(𝒞)Ho(\mathcal{C}) ad Ho(𝒟)Ho(\mathcal{D}) being equivalent says very little about whether the ∞\infty-categories 𝒞\mathcal{C} and 𝒟\mathcal{D} are equivalent. In your case you are looking at homotopy categories of the special kind D b(X)=Ho(QC(X))D^b(X) = Ho(QC(X)) etc. So for general 𝒟\mathcal{D} an equivalence D b(X)≃Ho(QC(X))≃Ho(𝒟)D^b(X) \simeq Ho(QC(X)) \simeq Ho(\mathcal{D}) says nothing about an equivalence QC(X)≃𝒟QC(X) \simeq \mathcal{D}. Now of course if 𝒟\mathcal{D} itself is constrained to be of the form QC(Y)QC(Y) maybe one can say more. I am not expert enough on that situation to know. But Zoran knows examples and counter-examples for such things. He can tell once he comes online. The essence of this question being, the way I think about these things right now is in some sense the "decategorified" version, That's exactly what it is. D b(X)D^b(X) is the decategorification from (∞,1)(\infty,1) to (1,1)(1,1) of QC(X)QC(X). so if I care about these up to equivalence, do I actually lose something? As I said, D b(X)D^b(X) knows everything about the equivalence classes in QC(X)QC(X) (the isomorphism classes in D b(X)D^b(X) are the equivalence class in QC(X)QC(X)). So for that purpose D b(X)D^b(X) gives you everything you might want. A problem arises as soon as you want to look at universal construction of quasicoherent sheaves. D b(X)D^b(X) has forgotten essentially everything about these. Another example is that there is a construction due to Caldararu to show that it is possible for X, Y to be non-birational CY 3-folds and still have equivalent derived categories as triangulated categories. It would be interesting to know if using equivalence as A-infinity categories rules this possibility out Right. Again, I think, Zoran might know. I guess this is in the literature. But I don't know off the top of my head. CommentAuthorjim_stasheff Author: jim_stasheff Format: TextI thought derived cats did NOT involve passing to Ho? Is not the derived cat of use because of greater freedom without losing essential info? 
Latest Notices AMS has a very enlightening (for me) article What is a...Derived Stack I thought derived cats did NOT involve passing to Ho? Is not the derived cat of use because of greater freedom without losing essential info? Latest Notices AMS has a very enlightening (for me) article What is a...Derived Stack Format: MarkdownItexDerived categories are sometimes not explicitly advertized as being homotopy categories (namely of the model category or $\infty$-category of complexes) but that's precisely what they are. However, more recently some people have started to use the word "derived" as a kind of synonym for "in higher category theory". Which is a) very unfortunate while b) nevertheless almost aready fully standard and hence c) probably the source of your question. Derived categories are sometimes not explicitly advertized as being homotopy categories (namely of the model category or ∞\infty-category of complexes) but that's precisely what they are. However, more recently some people have started to use the word "derived" as a kind of synonym for "in higher category theory". Which is a) very unfortunate while b) nevertheless almost aready fully standard and hence c) probably the source of your question. CommentAuthorMike Shulman Author: Mike Shulman Format: MarkdownItex> Generally, a homotopy category ... is good for remembering equivalence classes of objects.... But for nothing else! > Meaning: no actual category theory works in the homotopy category as expected. Meaning: no universal constructions, no limits, no colimits, etc. I think that's a bit of an exaggeration. For instance, products and coproducts work fine in a homotopy category, at least in the sense that if the $(\infty,1)$-category has them, then they are preserved by passage to the homotopy category (though the converse might not be true). And especially if you remember a bit more structure on the homotopy category, like making it a [[triangulated category]], then you can do a fair bit more with it than just remembering equivalence classes of objects. A [[derivator]] is a way of adding sufficiently much structure to a homotopy category that you *can* do basically all the category theory you want to with it (though that "sufficiently much" is a lot of structure, but well-organized structure). I agree with the general point, of course, that homotopy categories are not well-behaved categorically and usually (more usually than is done in practice, at least historically) one should use $(\infty,1)$-categories or at least derivators. I'm just saying they're not as worthless as all that. Generally, a homotopy category … is good for remembering equivalence classes of objects…. But for nothing else! I think that's a bit of an exaggeration. For instance, products and coproducts work fine in a homotopy category, at least in the sense that if the (∞,1)(\infty,1)-category has them, then they are preserved by passage to the homotopy category (though the converse might not be true). And especially if you remember a bit more structure on the homotopy category, like making it a triangulated category, then you can do a fair bit more with it than just remembering equivalence classes of objects. A derivator is a way of adding sufficiently much structure to a homotopy category that you can do basically all the category theory you want to with it (though that "sufficiently much" is a lot of structure, but well-organized structure). 
I agree with the general point, of course, that homotopy categories are not well-behaved categorically and usually (more usually than is done in practice, at least historically) one should use (∞,1)(\infty,1)-categories or at least derivators. I'm just saying they're not as worthless as all that. Format: MarkdownItexSorry, I just realized a major source of confusion. The chain complexes of quasi-coherent sheaves, $QC(X)$, is a stable (infinity, 1)-category and when you take $Ho(QC(X))$ you get $D^b(X)$. I was confused because I thought somehow $D^b(X)$ itself was a stable (infinity, 1)-category and we were trying to take $Ho(D^b(X))$. Now this makes some more sense. I'm not entirely sure that what I just wrote is correct. For instance, at [[Calabi-Yau category]] under examples, it says that $D^b(X)$ is an A-infinity category. Thanks. Sorry, I just realized a major source of confusion. The chain complexes of quasi-coherent sheaves, QC(X)QC(X), is a stable (infinity, 1)-category and when you take Ho(QC(X))Ho(QC(X)) you get D b(X)D^b(X). I was confused because I thought somehow D b(X)D^b(X) itself was a stable (infinity, 1)-category and we were trying to take Ho(D b(X))Ho(D^b(X)). Now this makes some more sense. I'm not entirely sure that what I just wrote is correct. For instance, at Calabi-Yau category under examples, it says that D b(X)D^b(X) is an A-infinity category. Thanks. CommentTimeAug 1st 2011 Format: MarkdownItex> For instance, products and coproducts work fine Okay, sure. > A derivator is a way of adding sufficiently much structure to a homotopy category that you can do basically all the category theory you want to with it Sure that adds in all the homotopy categories of all diagram categories. Which is then of course sufficient to compute homotopy limits. > I'm not entirely sure that what I just wrote is correct. For instance, at [[Calabi-Yau category]] under examples, it says that $D^b(X)$ is an A-infinity category. Hm, I think we should edit that and at least clarify the terminology. I don't think that's particularly good notation. What is happening here is that people started to "fix" the deficiencies of the derived category for instance by passing from just a triangulated category to what is called an "[[enhanced triangulated category]]". But that really amounts to making the full $(\infty,1)$-category behind it manifest. If I would rewrite history I would ban the unspecific "derived" throughout. One needs to be careful in these discussions what kind of object exactly one is looking it. For instance, products and coproducts work fine Okay, sure. A derivator is a way of adding sufficiently much structure to a homotopy category that you can do basically all the category theory you want to with it Sure that adds in all the homotopy categories of all diagram categories. Which is then of course sufficient to compute homotopy limits. I'm not entirely sure that what I just wrote is correct. For instance, at Calabi-Yau category under examples, it says that D b(X)D^b(X) is an A-infinity category. Hm, I think we should edit that and at least clarify the terminology. I don't think that's particularly good notation. What is happening here is that people started to "fix" the deficiencies of the derived category for instance by passing from just a triangulated category to what is called an "enhanced triangulated category". But that really amounts to making the full (∞,1)(\infty,1)-category behind it manifest. If I would rewrite history I would ban the unspecific "derived" throughout. 
One needs to be careful in these discussions what kind of object exactly one is looking it. Format: MarkdownItex> Sure that adds in all the homotopy categories of all diagram categories. Which is then of course sufficient to compute homotopy limits. And to do some other things, which on the surface might not look exactly like computing homotopy limits. For instance, I believe you can compute the mapping space between two objects (as an object of the homotopy category of spaces). And to do some other things, which on the surface might not look exactly like computing homotopy limits. For instance, I believe you can compute the mapping space between two objects (as an object of the homotopy category of spaces). CommentAuthorzskoda CommentTimeAug 2nd 2011 (edited Aug 2nd 2011) Author: zskoda Format: MarkdownItexThe entry is about the derived category of coherent sheaves on a smooth projective variety. In that c as, THERE IS NO LOSS OF INFORMATION from the [[enhanced triangulated category]] to the triangulated category, i.e. $(infty,1)$ gives no new information, as proved by Lunts and Orlov in the reference states in the entry: each derived category of that kind has a unique dg-enhancement. It is easier to work with triangualted one and one can get intofurther structure with hard work. So $D^b$ notation in this case may mean that one sometimes wants to look with it its canonical dg-enhancement; in mirror symmetry one often does the A-infinity enhancement without changing the notation. This uniqueness of extension is of large "philosophical" importance in physics where mainly such triangulated categories appear in practice. I should also point out that the words "derived category" can abbreaviate "derived dg-category" and there are still he usual cases like bounded, unbounded, positive complexes...so the notation can be, depending on context, used in any of the cases. There is a difference in the case when the enhacement is not unique, and no important difference in geometric case. The entry is about the derived category of coherent sheaves on a smooth projective variety. In that c as, THERE IS NO LOSS OF INFORMATION from the enhanced triangulated category to the triangulated category, i.e. (infty,1)(infty,1) gives no new information, as proved by Lunts and Orlov in the reference states in the entry: each derived category of that kind has a unique dg-enhancement. It is easier to work with triangualted one and one can get intofurther structure with hard work. So D bD^b notation in this case may mean that one sometimes wants to look with it its canonical dg-enhancement; in mirror symmetry one often does the A-infinity enhancement without changing the notation. This uniqueness of extension is of large "philosophical" importance in physics where mainly such triangulated categories appear in practice. I should also point out that the words "derived category" can abbreaviate "derived dg-category" and there are still he usual cases like bounded, unbounded, positive complexes…so the notation can be, depending on context, used in any of the cases. There is a difference in the case when the enhacement is not unique, and no important difference in geometric case. Format: MarkdownItex> Derived categories are sometimes not explicitly advertized as being homotopy categories (namely of the model category or ∞-category of complexes) but that's precisely what they are. > However, more recently some people have started to use the word "derived" as a kind of synonym for "in higher category theory". 
Which is a) very unfortunate while b) nevertheless almost aready fully standard and hence c) probably the source of your question. Urs, derived category of an abelian category is, as you know and point out in model language, not its homotopy category, but the homotopy category of the category of the chain complexes in it. The idea is that in homological algebra one works with chain complexes modulo some relation of equivalence, which is obtained via a localization at acyclic complexes. A better version if to quotient a dg category of complexes by dg subcategory of acyclic complexes. This is called the derived dg-category or a dg-quotient. It is the idea of QUOTIENTing which is essential here more than the word homotopy which is 1-categorical quotienting. From the very start Grothendieck was looking for best possible way to quotient. Localization worked better than the quotient by a relation of equivalence which would be 0-categorical. Lyubashenko went into defining and studying A-infinity quotient version. The derived idea is idea of taking complexes and then quotient by acyclic. So the stable versions like dg and A-infinity and quasicategory case are just cases of the original derived as a quotient philosophy. I am not sure that all cases of derived categories as quotients come indeed from model categories. Lurie treats in such a way only bounded derived category of complexes over a ring, more can be done, but I am not sure of various nonabelian nonbounded extensions of derived categories like in Rosenberg etc. The idea of derived category is the idea that the quotient by acyclic complexes has some additional structure (like distinguished triangles, octahedra and so on). Quotient is the eternal idea, the categorical level is just a technicality. I would not say that geometers use the word derived as a synonym for higher category theory. Instead they use it to say that the quotients etc. have to be made in the sense of total derived functors hence leading to infinity-stacks. I see no shift in intuition or in usage from the point of view of the algebraic geometry/homological algebra community. Jim mentioned Vezzosi's _What is a derived stack_, so here is the [link](http://www.ams.org/notices/201107/rtx110700955p.pdf) Derived categories are sometimes not explicitly advertized as being homotopy categories (namely of the model category or ∞-category of complexes) but that's precisely what they are. Urs, derived category of an abelian category is, as you know and point out in model language, not its homotopy category, but the homotopy category of the category of the chain complexes in it. The idea is that in homological algebra one works with chain complexes modulo some relation of equivalence, which is obtained via a localization at acyclic complexes. A better version if to quotient a dg category of complexes by dg subcategory of acyclic complexes. This is called the derived dg-category or a dg-quotient. It is the idea of QUOTIENTing which is essential here more than the word homotopy which is 1-categorical quotienting. From the very start Grothendieck was looking for best possible way to quotient. Localization worked better than the quotient by a relation of equivalence which would be 0-categorical. Lyubashenko went into defining and studying A-infinity quotient version. The derived idea is idea of taking complexes and then quotient by acyclic. So the stable versions like dg and A-infinity and quasicategory case are just cases of the original derived as a quotient philosophy. 
I am not sure that all cases of derived categories as quotients come indeed from model categories. Lurie treats in such a way only bounded derived category of complexes over a ring, more can be done, but I am not sure of various nonabelian nonbounded extensions of derived categories like in Rosenberg etc. The idea of derived category is the idea that the quotient by acyclic complexes has some additional structure (like distinguished triangles, octahedra and so on). Quotient is the eternal idea, the categorical level is just a technicality. I would not say that geometers use the word derived as a synonym for higher category theory. Instead they use it to say that the quotients etc. have to be made in the sense of total derived functors hence leading to infinity-stacks. I see no shift in intuition or in usage from the point of view of the algebraic geometry/homological algebra community. Jim mentioned Vezzosi's What is a derived stack, so here is the link CommentTimeAug 4th 2011 Format: MarkdownItex> Urs, derived category of an abelian category is, as you know and point out in model language, not its homotopy category Well, that's not the original meaning. The original literature says "derived category" for the localization at quasi-isomorphisms. Urs, derived category of an abelian category is, as you know and point out in model language, not its homotopy category Well, that's not the original meaning. The original literature says "derived category" for the localization at quasi-isomorphisms. (edited Aug 4th 2011) Format: MarkdownItexDid you read what I wrote above ? I did not say that it was not a localization, but emphasised that it was a localization of the category of complexes, not of the original abelian category and the localization of the category of complexes, was FROM THE BEGINNING just a technicality to how to quotient by the subcategory of acyclic complexes in a sensible way for the homological algebra. Such localizations in teh setup of abelian categories were called [[quotient category]] by Serre and others. From the very beginning there was lots of ideas in practice about how to introduce the objects of homological algebra as complexes modulo complicated "equivalence" which realizes the idea of zeroing the acyclic complexes in the most sensible way. from the early days of Cartan-Eilenberg it was clear that we can take various resolutions equivalently, so the idea of taking some equivalence among complexes in systematic way is one of the first ideas of homological algebra motivating Grothendieck-Verdier from the start. They looked through adjustements and possibilities (like getting the homotopy equivalence first to get around the nonexistence of localization) and getting the additional structure which make derived category a derived category (like triangles, octahedra etc.). The 1980s idea of enhancing via dg-categories had a central subidea that the quotienting with dg-enrichement is better than with categorical localization; though formal quotienting and not enriching by hand has been fully de veloped only several years later by Keller and Drinfeld. Read the introduction of Gelfand-Manin book from 1988 for the historical account on how these ideas of how to quotient in the best way was a problem among experts from early 1960s. Did you read what I wrote above ? 
I did not say that it was not a localization, but emphasised that it was a localization of the category of complexes, not of the original abelian category and the localization of the category of complexes, was FROM THE BEGINNING just a technicality to how to quotient by the subcategory of acyclic complexes in a sensible way for the homological algebra. Such localizations in teh setup of abelian categories were called quotient category by Serre and others. From the very beginning there was lots of ideas in practice about how to introduce the objects of homological algebra as complexes modulo complicated "equivalence" which realizes the idea of zeroing the acyclic complexes in the most sensible way. from the early days of Cartan-Eilenberg it was clear that we can take various resolutions equivalently, so the idea of taking some equivalence among complexes in systematic way is one of the first ideas of homological algebra motivating Grothendieck-Verdier from the start. They looked through adjustements and possibilities (like getting the homotopy equivalence first to get around the nonexistence of localization) and getting the additional structure which make derived category a derived category (like triangles, octahedra etc.). The 1980s idea of enhancing via dg-categories had a central subidea that the quotienting with dg-enrichement is better than with categorical localization; though formal quotienting and not enriching by hand has been fully de veloped only several years later by Keller and Drinfeld. Read the introduction of Gelfand-Manin book from 1988 for the historical account on how these ideas of how to quotient in the best way was a problem among experts from early 1960s. Format: MarkdownItex> Did you read what I wrote above ? It's true that I stopped after this: > Urs, derived category of an abelian category is, as you know and point out in model language, not its homotopy category, That sounded to me like you are saying that the derived category means a model category structure and not just its homotopy category. But I guess that's not what you mean. I guess we agree on what's going on. Did you read what I wrote above ? It's true that I stopped after this: Urs, derived category of an abelian category is, as you know and point out in model language, not its homotopy category, That sounded to me like you are saying that the derived category means a model category structure and not just its homotopy category. But I guess that's not what you mean. I guess we agree on what's going on.
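As an editorial summary of the point under discussion (a sketch, not a quotation from the thread): passing to the homotopy category keeps only the path components of each mapping space. For a stable $(\infty,1)$-category $\mathcal{C}$ and objects $x$, $y$,

$$\mathrm{Hom}_{Ho(\mathcal{C})}(x,y) \;\cong\; \pi_0\,\mathrm{Map}_{\mathcal{C}}(x,y),$$

so $D^b(X) = Ho(QC(X))$ retains isomorphism classes and homotopy classes of maps, while the higher homotopies of the mapping spaces, which are exactly what homotopy (co)limits are universal against, are discarded.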
6.1: Multiple Comparisons

When you perform a large number of statistical tests, some will have \(P\) values less than \(0.05\) purely by chance, even if all your null hypotheses are really true. The Bonferroni correction is one simple way to take this into account; adjusting the false discovery rate using the Benjamini-Hochberg procedure is a more powerful method.

The problem with multiple comparisons

Any time you reject a null hypothesis because a \(P\) value is less than your critical value, it's possible that you're wrong; the null hypothesis might really be true, and your significant result might be due to chance. A \(P\) value of \(0.05\) means that there's a \(5\%\) chance of getting your observed result, if the null hypothesis were true. It does not mean that there's a \(5\%\) chance that the null hypothesis is true. For example, if you do \(100\) statistical tests, and for all of them the null hypothesis is actually true, you'd expect about \(5\) of the tests to be significant at the \(P<0.05\) level, just due to chance. In that case, you'd have about \(5\) statistically significant results, all of which were false positives. The cost, in time, effort and perhaps money, could be quite high if you based important conclusions on these false positives, and it would at least be embarrassing for you once other people did further research and found that you'd been mistaken.

This problem, that when you do multiple statistical tests, some fraction will be false positives, has received increasing attention in the last few years. This is important for such techniques as the use of microarrays, which make it possible to measure RNA quantities for tens of thousands of genes at once; brain scanning, in which blood flow can be estimated in \(100,000\) or more three-dimensional bits of brain; and evolutionary genomics, where the sequences of every gene in the genome of two or more species can be compared. There is no universally accepted approach for dealing with the problem of multiple comparisons; it is an area of active research, both in the mathematical details and in broader epistemological questions.

Controlling the familywise error rate - Bonferroni Correction

The classic approach to the multiple comparison problem is to control the familywise error rate. Instead of setting the critical \(P\) level for significance, or alpha, to \(0.05\), you use a lower critical value. If the null hypothesis is true for all of the tests, the probability of getting one result that is significant at this new, lower critical value is \(0.05\). In other words, if all the null hypotheses are true, the probability that the family of tests includes one or more false positives due to chance is \(0.05\).

The most common way to control the familywise error rate is with the Bonferroni correction. You find the critical value (alpha) for an individual test by dividing the familywise error rate (usually \(0.05\)) by the number of tests. Thus if you are doing \(100\) statistical tests, the critical value for an individual test would be \(0.05/100=0.0005\), and you would only consider individual tests with \(P<0.0005\) to be significant.
As an example, García-Arenzana et al. (2014) tested associations of \(25\) dietary variables with mammographic density, an important risk factor for breast cancer, in Spanish women. They found the following results:

Dietary variable        \(P\) value
Total calories          <0.001
Olive oil               0.008
Whole milk              0.039
White meat              0.041
Proteins                0.042
Nuts                    0.06
Cereals and pasta       0.074
White fish              0.205
Butter                  0.212
Vegetables              0.216
Skimmed milk            0.222
Red meat                0.251
Fruit                   0.269
Eggs                    0.275
Blue fish               0.34
Legumes                 0.341
Carbohydrates           0.384
Potatoes                0.569
Bread                   0.594
Fats                    0.696
Sweets                  0.762
Dairy products          0.94
Semi-skimmed milk       0.942
Total meat              0.975
Processed meat          0.986

As you can see, five of the variables show a significant (\(P<0.05\)) \(P\) value. However, because García-Arenzana et al. (2014) tested \(25\) dietary variables, you'd expect one or two variables to show a significant result purely by chance, even if diet had no real effect on mammographic density. Applying the Bonferroni correction, you'd divide \(P=0.05\) by the number of tests (\(25\)) to get the Bonferroni critical value, so a test would have to have \(P<0.002\) to be significant. Under that criterion, only the test for total calories is significant.

The Bonferroni correction is appropriate when a single false positive in a set of tests would be a problem. It is mainly useful when there are a fairly small number of multiple comparisons and you're looking for one or two that might be significant. However, if you have a large number of multiple comparisons and you're looking for many that might be significant, the Bonferroni correction may lead to a very high rate of false negatives. For example, let's say you're comparing the expression level of \(20,000\) genes between liver cancer tissue and normal liver tissue. Based on previous studies, you are hoping to find dozens or hundreds of genes with different expression levels. If you use the Bonferroni correction, a \(P\) value would have to be less than \(0.05/20000=0.0000025\) to be significant. Only genes with huge differences in expression will have a \(P\) value that low, and you could miss out on a lot of important differences just because you wanted to be sure that your results did not include a single false positive.

An important issue with the Bonferroni correction is deciding what a "family" of statistical tests is. García-Arenzana et al. (2014) tested \(25\) dietary variables, so are these tests one "family," making the critical \(P\) value \(0.05/25\)? But they also measured \(13\) non-dietary variables such as age, education, and socioeconomic status; should they be included in the family of tests, making the critical \(P\) value \(0.05/38\)? And what if in 2015, García-Arenzana et al. write another paper in which they compare \(30\) dietary variables between breast cancer and non-breast cancer patients; should they include those in their family of tests, and go back and reanalyze the data in their 2014 paper using a critical \(P\) value of \(0.05/55\)? There is no firm rule on this; you'll have to use your judgment, based on just how bad a false positive would be. Obviously, you should make this decision before you look at the results, otherwise it would be too easy to subconsciously rationalize a family size that gives you the results you want.
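The Bonferroni decision rule is easy to script. Here is a minimal Python sketch (an editorial illustration, not code from this page); the dictionary is truncated for space, so the family size \(m\) is supplied explicitly:

# Bonferroni correction: an individual test is significant only if its raw
# P value is below alpha / m, where m is the number of tests in the family.
p_values = {
    "Total calories": 0.001,   # "<0.001" encoded as 0.001 for the comparison
    "Olive oil": 0.008, "Whole milk": 0.039, "White meat": 0.041,
    "Proteins": 0.042, "Nuts": 0.06, "Cereals and pasta": 0.074,
    # ... the remaining 18 dietary variables from the table above ...
}

alpha = 0.05          # familywise error rate
m = 25                # family size (all 25 dietary variables)
critical = alpha / m  # 0.002 for this example

significant = [name for name, p in p_values.items() if p < critical]
print(f"critical value = {critical}")
print(significant)    # ['Total calories']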
Controlling the false discovery rate: Benjamini–Hochberg procedure

An alternative approach is to control the false discovery rate. This is the proportion of "discoveries" (significant results) that are actually false positives. For example, let's say you're using microarrays to compare expression levels for \(20,000\) genes between liver tumors and normal liver cells. You're going to do additional experiments on any genes that show a significant difference between the normal and tumor cells, and you're willing to accept up to \(10\%\) of the genes with significant results being false positives; you'll find out they're false positives when you do the followup experiments. In this case, you would set your false discovery rate to \(10\%\).

One good technique for controlling the false discovery rate was briefly mentioned by Simes (1986) and developed in detail by Benjamini and Hochberg (1995). Put the individual \(P\) values in order, from smallest to largest. The smallest \(P\) value has a rank of \(i=1\), the next smallest has \(i=2\), etc. Compare each individual \(P\) value to its Benjamini-Hochberg critical value, \((i/m)Q\), where \(i\) is the rank, \(m\) is the total number of tests, and \(Q\) is the false discovery rate you choose. The largest \(P\) value that has \(P<(i/m)Q\) is significant, and all of the \(P\) values smaller than it are also significant, even the ones that aren't less than their Benjamini-Hochberg critical value.

To illustrate this, here are the data from García-Arenzana et al. (2014) again, with the Benjamini-Hochberg critical value for a false discovery rate of \(0.25\).

Dietary variable        \(P\) value   Rank \(i\)   \((i/m)Q\)
Total calories          <0.001        1            0.010
Olive oil               0.008         2            0.020
Whole milk              0.039         3            0.030
White meat              0.041         4            0.040
Proteins                0.042         5            0.050
Nuts                    0.060         6            0.060
Cereals and pasta       0.074         7            0.070
White fish              0.205         8            0.080
Butter                  0.212         9            0.090
Vegetables              0.216         10           0.100
Skimmed milk            0.222         11           0.110
Red meat                0.251         12           0.120
Fruit                   0.269         13           0.130
Eggs                    0.275         14           0.140
Blue fish               0.34          15           0.150
Legumes                 0.341         16           0.160
Carbohydrates           0.384         17           0.170
Potatoes                0.569         18           0.180
Bread                   0.594         19           0.190
Fats                    0.696         20           0.200
Sweets                  0.762         21           0.210
Dairy products          0.94          22           0.220
Semi-skimmed milk       0.942         23           0.230
Total meat              0.975         24           0.240
Processed meat          0.986         25           0.250

Reading down the column of \(P\) values, the largest one with \(P<(i/m)Q\) is proteins, where the individual \(P\) value (\(0.042\)) is less than the \((i/m)Q\) value of \(0.050\). Thus the first five tests would be significant. Note that whole milk and white meat are significant, even though their \(P\) values are not less than their Benjamini-Hochberg critical values; they are significant because they have \(P\) values less than that of proteins.

When you use the Benjamini-Hochberg procedure with a false discovery rate greater than \(0.05\), it is quite possible for individual tests to be significant even though their \(P\) value is greater than \(0.05\). Imagine that all of the \(P\) values in the García-Arenzana et al. (2014) study were between \(0.10\) and \(0.24\). Then with a false discovery rate of \(0.25\), all of the tests would be significant, even the one with \(P=0.24\). This may seem wrong, but if all \(25\) null hypotheses were true, you'd expect the largest \(P\) value to be well over \(0.90\); it would be extremely unlikely that the largest \(P\) value would be less than \(0.25\). You would only expect the largest \(P\) value to be less than \(0.25\) if most of the null hypotheses were false, and since a false discovery rate of \(0.25\) means you're willing to reject a few true null hypotheses, you would reject them all. You should carefully choose your false discovery rate before collecting your data.
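The step-up rule just described can be sketched in a few lines of Python (again an editorial illustration, not code from this page); it reproduces the decision for the example data:

# Benjamini-Hochberg step-up rule. sorted_pvals must be in ascending order;
# q is the chosen false discovery rate.
def bh_number_significant(sorted_pvals, q):
    m = len(sorted_pvals)
    largest_rank = 0
    for i, p in enumerate(sorted_pvals, start=1):
        if p < (i / m) * q:       # compare to the critical value (i/m)Q
            largest_rank = i      # keep the LARGEST rank that passes
    return largest_rank           # that many smallest P values are significant

# The 25 P values from the Garcia-Arenzana et al. (2014) table,
# with "<0.001" encoded as 0.001.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212,
         0.216, 0.222, 0.251, 0.269, 0.275, 0.340, 0.341, 0.384, 0.569,
         0.594, 0.696, 0.762, 0.940, 0.942, 0.975, 0.986]

print(bh_number_significant(pvals, q=0.25))  # -> 5, i.e. through "Proteins"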
Usually, when you're doing a large number of statistical tests, your experiment is just the first, exploratory step, and you're going to follow up with more experiments on the interesting individual results. If the cost of additional experiments is low and the cost of a false negative (missing a potentially important discovery) is high, you should probably use a fairly high false discovery rate, like \(0.10\) or \(0.20\), so that you don't miss anything important. Sometimes people use a false discovery rate of \(0.05\), probably because of confusion about the difference between the false discovery rate and the probability of a false positive when the null is true; a false discovery rate of \(0.05\) is probably too low for many experiments.

The Benjamini-Hochberg procedure is less sensitive than the Bonferroni procedure to your decision about what is a "family" of tests. If you increase the number of tests, and the distribution of \(P\) values is the same in the newly added tests as in the original tests, the Benjamini-Hochberg procedure will yield the same proportion of significant results. For example, if García-Arenzana et al. (2014) had looked at \(50\) variables instead of \(25\) and the new \(25\) tests had the same set of \(P\) values as the original \(25\), they would have \(10\) significant results under Benjamini-Hochberg with a false discovery rate of \(0.25\). This doesn't mean you can completely ignore the question of what constitutes a family; if you mix two sets of tests, one with some low \(P\) values and a second set without low \(P\) values, you will reduce the number of significant results compared to just analyzing the first set by itself.

Sometimes you will see a "Benjamini-Hochberg adjusted \(P\) value." The adjusted \(P\) value for a test is either the raw \(P\) value times \(m/i\) or the adjusted \(P\) value for the next higher raw \(P\) value, whichever is smaller (remember that \(m\) is the number of tests and \(i\) is the rank of each test, with \(1\) the rank of the smallest \(P\) value). If the adjusted \(P\) value is smaller than the false discovery rate, the test is significant. For example, the adjusted \(P\) value for proteins in the example data set is \(0.042\times (25/5)=0.210\); the adjusted \(P\) value for white meat is the smaller of \(0.041\times (25/4)=0.256\) or \(0.210\), so it is \(0.210\). In my opinion "adjusted \(P\) values" are a little confusing, since they're not really estimates of the probability (\(P\)) of anything. I think it's better to give the raw \(P\) values and say which are significant using the Benjamini-Hochberg procedure with your false discovery rate, but if Benjamini-Hochberg adjusted \(P\) values are common in the literature of your field, you might have to use them.

The Bonferroni correction and Benjamini-Hochberg procedure assume that the individual tests are independent of each other, as when you are comparing sample A vs. sample B, C vs. D, E vs. F, etc. If you are comparing sample A vs. sample B, A vs. C, A vs. D, etc., the comparisons are not independent; if A is higher than B, there's a good chance that A will be higher than C as well. One place this occurs is when you're doing unplanned comparisons of means in anova, for which a variety of other techniques have been developed, such as the Tukey-Kramer test. Another experimental design with multiple, non-independent comparisons is when you compare multiple variables between groups, and the variables are correlated with each other within groups.
An example would be knocking out your favorite gene in mice and comparing everything you can think of on knockout vs. control mice: length, weight, strength, running speed, food consumption, feces production, etc. All of these variables are likely to be correlated within groups; mice that are longer will probably also weigh more, be stronger, run faster, eat more food, and poop more. To analyze this kind of experiment, you can use multivariate analysis of variance, or manova, which I'm not covering in this textbook. Other, more complicated techniques, such as Reiner et al. (2003), have been developed for controlling the false discovery rate that may be more appropriate when there is lack of independence in the data. If you're using microarrays, in particular, you need to become familiar with this topic.

When not to correct for multiple comparisons

The goal of multiple comparisons corrections is to reduce the number of false positives, because false positives can be embarrassing, confusing, and cause you and other people to waste your time. An unfortunate byproduct of correcting for multiple comparisons is that you may increase the number of false negatives, where there really is an effect but you don't detect it as statistically significant. If false negatives are very costly, you may not want to correct for multiple comparisons at all. For example, let's say you've gone to a lot of trouble and expense to knock out your favorite gene, mannose-6-phosphate isomerase (Mpi), in a strain of mice that spontaneously develop lots of tumors. Hands trembling with excitement, you get the first Mpi-/- mice and start measuring things: blood pressure, growth rate, maze-learning speed, bone density, coat glossiness, everything you can think of to measure on a mouse. You measure \(50\) things on Mpi-/- mice and normal mice, run the appropriate statistical tests, and the smallest \(P\) value is \(0.013\) for a difference in tumor size. If you use a Bonferroni correction, that \(P=0.013\) won't be close to significant; it might not be significant with the Benjamini-Hochberg procedure, either. Should you conclude that there's no significant difference between the Mpi-/- and Mpi+/+ mice, write a boring little paper titled "Lack of anything interesting in Mpi-/- mice," and look for another project? No, your paper should be "Possible effect of Mpi on cancer." You should be suitably cautious, of course, and emphasize in the paper that there's a good chance that your result is a false positive; but the cost of a false positive (if further experiments show that Mpi really has no effect on tumors) is just a few more experiments. The cost of a false negative, on the other hand, could be that you've missed out on a hugely important discovery.

How to do the tests

I have written a spreadsheet to do the Benjamini-Hochberg procedure (benjaminihochberg.xls) on up to \(1000\) \(P\) values. It will tell you which \(P\) values are significant after controlling for the false discovery rate you choose. It will also give the Benjamini-Hochberg adjusted \(P\) values, even though I think they're kind of stupid. I have also written a spreadsheet to do the Bonferroni correction (bonferroni.xls) on up to \(1000\) \(P\) values. I'm not aware of any web pages that will perform the Benjamini-Hochberg procedure. Salvatore Mangiafico's \(R\) Companion has sample R programs for the Bonferroni, Benjamini-Hochberg, and several other methods for correcting for multiple comparisons.
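For completeness, here is a minimal Python sketch of the Benjamini-Hochberg adjusted \(P\) values described earlier (an editorial illustration, separate from the spreadsheets and R programs mentioned above):

# BH adjusted P values: raw P times m/i, then each value is capped by the
# adjusted value of the next higher rank (and by 1.0).
def bh_adjusted(sorted_pvals):
    m = len(sorted_pvals)
    adj = [p * m / i for i, p in enumerate(sorted_pvals, start=1)]
    for i in range(m - 2, -1, -1):   # enforce monotonicity from the top down
        adj[i] = min(adj[i], adj[i + 1])
    return [min(a, 1.0) for a in adj]

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212,
         0.216, 0.222, 0.251, 0.269, 0.275, 0.340, 0.341, 0.384, 0.569,
         0.594, 0.696, 0.762, 0.940, 0.942, 0.975, 0.986]

adj = bh_adjusted(pvals)
print(round(adj[4], 3))   # Proteins (rank 5): 0.042 * 25/5 = 0.21
print(round(adj[3], 3))   # White meat (rank 4): min(0.256, 0.21) = 0.21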
In SAS, there is a PROC MULTTEST that will perform the Benjamini-Hochberg procedure, as well as many other multiple-comparison corrections. Here's an example using the diet and mammographic density data from García-Arenzana et al. (2014).

DATA mammodiet;
   INPUT food $ Raw_P;
   cards;
Blue_fish .34
Bread .594
Butter .212
Carbohydrates .384
Cereals_and_pasta .074
Dairy_products .94
Eggs .275
Fats .696
Fruit .269
Legumes .341
Nuts .06
Olive_oil .008
Potatoes .569
Processed_meat .986
Proteins .042
Red_meat .251
Semi-skimmed_milk .942
Skimmed_milk .222
Sweets .762
Total_calories .001
Total_meat .975
Vegetables .216
White_fish .205
White_meat .041
Whole_milk .039
;
PROC SORT DATA=mammodiet OUT=sorted_p;
   BY Raw_P;
RUN;
PROC MULTTEST INPVALUES=sorted_p FDR;
RUN;

Note that the \(P\) value variable must be named "Raw_P". I sorted the data by "Raw_P" before doing the multiple comparisons test, to make the final output easier to read. In the PROC MULTTEST statement, INPVALUES tells you what file contains the Raw_P variable, and FDR tells SAS to run the Benjamini-Hochberg procedure. The output is the original list of \(P\) values and a column labeled "False Discovery Rate." If the number in this column is less than the false discovery rate you chose before doing the experiment, the original ("raw") \(P\) value is significant. For example:

Test   Raw      False Discovery Rate
1      0.0010   0.0250
...
10     0.2160   0.4911

So if you had chosen a false discovery rate of \(0.25\), the first \(6\) would be significant; if you'd chosen a false discovery rate of \(0.15\), only the first two would be significant.

References

García-Arenzana, N., E.M. Navarrete-Muñoz, V. Lope, P. Moreo, S. Laso-Pablos, N. Ascunce, F. Casanova-Gómez, C. Sánchez-Contador, C. Santamariña, N. Aragonés, B.P. Gómez, J. Vioque, and M. Pollán. 2014. Calorie intake, olive oil consumption and mammographic density among Spanish women. International Journal of Cancer 134: 1916-1925.

Benjamini, Y., and Y. Hochberg. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society B 57: 289-300.

Reiner, A., D. Yekutieli, and Y. Benjamini. 2003. Identifying differentially expressed genes using false discovery rate controlling procedures. Bioinformatics 19: 368-375.

Simes, R.J. 1986. An improved Bonferroni procedure for multiple tests of significance. Biometrika 73: 751-754.
Step-by-step Solution

Find the derivative $\frac{dz}{dt}$ of $z = x\ln\left(x^2+y^2\right)$, where $x = t^2$ and $y = t^{-2}$.

No steps currently available for this problem.
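Since the page lists no steps, here is a worked sketch using the multivariable chain rule; this derivation is an editorial addition, not the site's solution.

$$\frac{dz}{dt} = \frac{\partial z}{\partial x}\frac{dx}{dt} + \frac{\partial z}{\partial y}\frac{dy}{dt},
\qquad
\frac{\partial z}{\partial x} = \ln\left(x^2+y^2\right) + \frac{2x^2}{x^2+y^2},
\quad
\frac{\partial z}{\partial y} = \frac{2xy}{x^2+y^2},
\quad
\frac{dx}{dt} = 2t,
\quad
\frac{dy}{dt} = -2t^{-3}.$$

Substituting $x=t^2$ and $y=t^{-2}$ (so $x^2+y^2 = t^4+t^{-4}$ and $xy = 1$):

$$\frac{dz}{dt} = 2t\,\ln\left(t^4+t^{-4}\right) + \frac{4t^5}{t^4+t^{-4}} - \frac{4t^{-3}}{t^4+t^{-4}}
= 2t\,\ln\left(t^4+t^{-4}\right) + \frac{4t\left(t^8-1\right)}{t^8+1}.$$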
Identity and prevalence of wheat damping-off fungal pathogens in different fields of Basrah and Maysan provinces

Mohammed Hamza Abass (ORCID: orcid.org/0000-0002-4697-4076), Qusai Hattab Madhi & Abdulnabi Abdul Ameer Matrood

Wheat is among the most consumed cereal crops in the world and is infected by several pathogens and pests that cause significant losses. The most threatening pathogens are fungi, which cause serious diseases on roots, leaves and heads in specific wheat-growing countries. This study aimed to identify and evaluate the prevalence of damping-off fungal pathogens in different wheat fields at Basra and Maysan provinces. Disease incidence determination and fungal isolation were carried out from two sites at Basra province (Al-Qurna and Al-Madinah) and four sites at Maysan province (Al-Amarah, Kumit, Ali Al Sharqi and Ali Al Gharbi). Al-Qurna fields had the highest disease incidence (32%), while Ali Al Sharqi fields had the lowest one (11%). Fourteen fungal genera were identified. Rhizoctonia solani had the highest appearance (21.6%) and frequency (20.20%) percentages, followed by Fusarium solani (16.11 and 14.01%) and Macrophomina phaseolina (12.2 and 11.1%). Seed treatment with R. solani (Rs1 isolate) showed a significant decrease in germination (56.6%) compared to the F. solani and M. phaseolina treatments. These results revealed the prevalence of wheat damping-off disease in all examined fields at both Basra and Maysan provinces; the highest disease incidence was seen in Basra wheat fields (Al-Qurna fields). The identification of fungal pathogens showed that the most frequently isolated fungus was R. solani, followed by F. solani and M. phaseolina. Laboratory experiments showed the pathogenicity of the isolated fungi, which varied according to the isolate type.

Wheat (Triticum aestivum L.) plays a major role in the traditional healthcare system of humans and animals, as wheat seeds contain large amounts of phytochemicals such as alkaloids, saponins, glycosides, terpenoids, steroids, flavonoids and tannins. The presence of these compounds in plants may be responsible for their therapeutic effect (Pathak and Shrivastav 2015). Soft wheat flour contains high levels of gluten used in soft bread and cakes, while hard wheat flour is used in pasta, spaghetti and other pasta products (Marconi and Carcea 2001). Wheat belongs to the tribe Triticeae (= Hordeae) in the grass family Poaceae (Gramineae); common wheat (Triticum aestivum L.), also known as bread wheat, is the most important staple food for nearly two billion people (36% of the world's population). Worldwide, wheat provides approximately 55% of the carbohydrates and 20% of the calories consumed. It exceeds in area and production every other cereal crop (including rice, corn, etc.), and therefore it is the most important cereal crop in the world (Minati and Ameen 2019). Wheat is the main cultivated crop and the grain most used by humans in Iraq. In 2017 the wheat area harvested in Iraq was 1,047,531 ha and the production was 2,974,136 MT (FAO 2018). Wheat plants are susceptible to different pathogens infecting the root, leaf and head, causing serious diseases that greatly affect productivity all over the world.
Root diseases can cause losses of 3 to 4%, depending on the variety, the severity of the disease and the climate favoring disease development (Kaur 2016). Additionally, many studies showed that root diseases cause global yield losses of 10 to 15%, and the fungus Rhizoctonia solani causes yield losses of 25–100% in wheat production every year; in 2012, losses amounted to about 140 million tons, equivalent to 35 billion dollars (Pooja et al. 2015). Mesterhazy et al. (2005) mentioned that many fungal pathogens cause root rot by invading and colonizing the roots of the wheat plant, entering the seedling tissue and damaging the crown and root tissues. Fusarium graminearum, which causes crown rot, Bipolaris sorokiniana, the common root rot pathogen, and Gaeumannomyces graminis, the cause of take-all disease, are among the main causes of wheat root diseases. These fungi can affect seed germination and cause seed blight (Tunali et al. 2008). Wheat is also infected with Fusarium spp., which damage young grains, seedlings, roots, crowns and stem bases, causing rot and, in some cases, infecting the head, affecting grain quality and causing losses of up to 50% (Nicol et al. 2004). Paulitz et al. (2002) mentioned that wheat is infected with Rhizoctonia spp., which cause crown rot, root rot and bare patch disease, the latter caused by Rhizoctonia solani Kühn AG-8, and with Pythium spp., which cause losses under moist soil conditions, with spotting and dwarfing of the plant. These fungi grow inside the root, crown and stem base, infect germinating seeds and root tips, destroy the fine roots and root hairs, and cause root rot. The present research focuses on the identification and prevalence evaluation of wheat damping-off fungal pathogens in different fields at Basrah and Maysan provinces.

The study was conducted at the laboratory of the Plant Protection Department, College of Agriculture, University of Basrah.

Evaluation of disease incidence

The incidence of root rot and damping-off disease was estimated for the 2019/2020 agricultural season for wheat crops in fields at Basrah province (Al-Qurna and Al-Madinah) and Maysan province (Al-Amarah, Kumit, Ali Al Sharqi and Ali Al Gharbi) (sampling map; Fig. 1). The incidence of root rot and damping-off was assessed based on the discoloration of the primary and secondary roots, noting the symptoms of damping-off above the soil surface two months after planting. Disease incidence was evaluated by collecting wheat plants (roots and seedlings) from a number of randomly chosen square-meter areas in each field. Representative sampling followed the collection method based on observing symptoms on diseased plants taken at random from each field (Yonghao 2013). The following formula was used to calculate the disease incidence percentage:

$${\text{Disease}}\,{\text{incidence}}\,{\text{percentage}} = \frac{{{\text{Total}}\,{\text{number}}\,{\text{of}}\,{\text{symptomatic}}\,{\text{plants}}}}{{{\text{Total}}\,{\text{number}}\,{\text{of}}\,{\text{inspected}}\,{\text{plants}} }} \times 100$$

Fig. 1 Map of sites for sampling the roots of wheat and soil in Basra and Maysan provinces
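As a concrete illustration of this formula, here is a minimal Python sketch; the plant counts are hypothetical placeholders, chosen only so that the outputs match the reported 32% and 11% incidences:

# Disease incidence % = symptomatic plants / inspected plants * 100.
# All counts below are hypothetical, not the study's data.
field_counts = {
    "Al-Qurna":      (64, 200),   # (symptomatic, inspected)
    "Ali Al Sharqi": (22, 200),
}

def disease_incidence(symptomatic, inspected):
    return 100.0 * symptomatic / inspected

for field, (s, n) in field_counts.items():
    print(f"{field}: {disease_incidence(s, n):.0f}%")  # 32% and 11%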
Isolation and identification of fungi from wheat roots

Samples were collected from roots that showed symptoms of rot, and the root zone was separated from the rest of the plant. The infected areas were washed with running water for 30 min and left to dry on Whatman No. 1 filter papers. The washed parts were cut into small pieces 0.5 cm in length, surface-sterilized with a 2% solution of sodium hypochlorite (of the commercial preparation) for 3 min, washed with sterile distilled water for a minute to remove the remaining sterilant, and dried on filter papers. Subsequently, four pieces were transferred to each Petri dish containing potato dextrose agar (PDA) medium amended with chloramphenicol at a concentration of 250 mg/L. The dishes were incubated at 25 ± 2 °C for 3 days, and the developing fungi were identified to the species level based on the taxonomic characteristics adopted in Watanabe (2002), Dugan (2006), Leslie and Summerell (2006), Domsch et al. (2007) and Nyongesa et al. (2015). The percentages of appearance and frequency of the fungal isolates were calculated according to the following equations:

$${\text{Fungal}}\,{\text{appearance}}\,\% = \frac{{{\text{Number}}\,{\text{of}}\,{\text{appearances}}\,{\text{of}}\,{\text{each}}\,{\text{fungus}}}}{{{\text{Total}}\,{\text{number}}\,{\text{of}}\,{\text{samples}} }} \times 100$$

$${\text{Fungal}}\,{\text{frequency}}\,\% = \frac{{{\text{Number}}\,{\text{of}}\,{\text{colonies}}\,{\text{of}}\,{\text{each}}\,{\text{fungus}}}}{{{\text{Total}}\,{\text{number}}\,{\text{of}}\,{\text{all}}\,{\text{fungal}}\,{\text{colonies}} }} \times 100$$

Collection of soil samples from wheat fields

Composite soil samples were taken at a depth of 30 cm from each plant sampling site. The samples from each site were mixed, and the sample size was reduced by homogeneous distribution of the soil.

Isolation and identification of fungi from soil

After the composite soil samples were collected and brought to the laboratory, the dilution method was applied to each sample using five serial dilutions (Farazimah et al. 2019). One milliliter of the fifth dilution was added to the surface of a Petri dish containing sterile PDA medium, and the dish was swirled to disperse the suspension evenly. The dishes were incubated at 25 ± 2 °C until fungal growth appeared; the fungi were then purified and incubated at 25 ± 2 °C for 3–4 days. Fungal identification was carried out as described previously.

Pathogenicity of fungal isolates

The pathogenicity test was carried out by placing a 0.5-cm disc of a 7-day-old pure culture of each fungus at the center of water agar plates, in three replicates, and incubating at 25 ± 2 °C for 48 h. Ten sterile wheat seeds were then planted 1 cm from the edge of each plate, and the plates were incubated again at 25 ± 2 °C. After 3 days, the germination percentage was calculated according to the following formula:

$${\text{Germination}}\,\% = \frac{{{\text{Number}}\,{\text{of}}\,{\text{germinated}}\,{\text{seeds}}}}{{{\text{Total}}\,{\text{number}}\,{\text{of}}\,{\text{seeds}} }} \times 100$$
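All three percentage formulas in this section are the same part/whole computation; here is a minimal Python sketch with entirely hypothetical counts:

# Appearance %, frequency % and germination % following the formulas above.
# Every count here is hypothetical, not the study's data.
def percent(part, whole):
    return 100.0 * part / whole

total_samples  = 90     # total number of samples examined (hypothetical)
total_colonies = 500    # total number of fungal colonies (hypothetical)

appearances = {"R. solani": 19, "F. solani": 15}    # samples containing each fungus
colonies    = {"R. solani": 102, "F. solani": 70}   # colonies of each fungus

for fungus in appearances:
    print(fungus,
          f"appearance = {percent(appearances[fungus], total_samples):.1f}%",
          f"frequency = {percent(colonies[fungus], total_colonies):.1f}%")

# Germination % in the plate assay, e.g. 17 germinated seeds out of 30:
print(f"germination = {percent(17, 30):.1f}%")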
Pathogenicity experiment of fungal isolates in pots
The experiment was conducted in the greenhouse of the Plant Protection Department, College of Agriculture. The pathogenicity of all examined fungal isolates was tested in a soil mixture sterilized by autoclaving. The sterile soil was distributed into plastic pots of 1-kg capacity in equal quantities. Fungal inoculum was loaded on local millet seeds and added according to Dewan's method at a rate of 2% (w/w) (Jones et al. 1984; Dewan 1989) and mixed well. The soil was then moistened, and the pots were covered with perforated polyethylene bags for 3 days. Wheat seeds were surface-sterilized with sodium hypochlorite solution for three minutes. Ten seeds per pot were planted, with three replicates per treatment. After the seedlings emerged, the following parameter was calculated:
$${\text{Seedling}}\,{\text{damping - off}}\,\% = \frac{{{\text{Number}}\,{\text{of}}\,{\text{damped - off}}\,{\text{seedlings}}}}{{{\text{Total}}\,{\text{number}}\,{\text{of}}\,{\text{seedlings}}}} \times 100$$
Radicle and hypocotyl length (cm): Five normal seedlings were taken at random from each treatment and measured using a ruler (AOSA 1983). Fresh and dry weight of radicle and hypocotyl (mg): The same seedlings used to measure the length of the radicle and hypocotyl were weighed; the radicle and hypocotyl were weighed separately and then placed in perforated bags in an electric oven at 80 °C for 24 h for the dry weight (ISTA 2005). The experiment was laid out in a completely randomized design (CRD), and the least significant difference (LSD) test was used to compare the means at the 0.05 probability level. The Statistical Package for the Social Sciences (SPSS), version 23, was used for data analysis. Results represent the average of three replicates per treatment.
Disease incidence determination and fungal isolation were carried out at two sites in Basrah province (Al-Qurna and Al-Madinah) and four sites in Maysan province (Al-Amarah, Kumit, Ali Al Sharqi and Ali Al Gharbi). Al-Qurna fields had the highest disease incidence (32%), while Ali Al Sharqi fields had the lowest (11%), as shown in Table 1. Based on the obtained results, there were clear differences in disease incidence among the wheat fields of the six selected sites (Fig. 2).
Table 1 Percentage of damping-off disease incidence of wheat in Basrah and Maysan provinces
Fig. 2 Wheat damping-off symptoms on samples from: a Al-Qurna, b Al-Madinah, c Al-Amarah, d Kumit, e Ali Al Sharqi, f Ali Al Gharbi fields
Isolation and identification of fungi
The results of isolation and identification revealed different species of fungi from wheat roots and from the soil of wheat fields in Basrah and Maysan provinces: 16 fungal species were isolated from wheat roots and 13 from the soil. The fungal isolates from the Al-Qurna fields were Rhizoctonia solani (Rs1 and Rs2), Fusarium solani (Fs1), Macrophomina phaseolina (Mph1), F. oxysporum, F. roseum, F. verticillioides, F. chlamydosporum, Bipolaris sp., Alternaria alternata, Phoma sp., Aspergillus fumigatus and A. flavus from the roots, and R. solani (Rs3), F. solani (Fs2), F. moniliforme, F. avenaceum, Monilia sp., A. niger, A. fumigatus and Mucor sp. from the soil. The fungal isolates from the Al-Madinah fields were R. solani (Rs5), F. solani (Fs3 and Fs4), F. oxysporum, F. roseum, F. verticillioides, Cladosporium oxysporum and A. fumigatus from the roots, and R. solani (Rs6), F. oxysporum, A. flavus, A. terreus, A. niger, Rhizopus stolonifer and Penicillium sp. from the soil. The Al-Amarah fields yielded R. solani (Rs8 and Rs9), M. phaseolina (Mph2), F. roseum, F. avenaceum, A. fumigatus, A. sojae, Mucor sp. and Paecilomyces sp. from the roots, and R. solani (Rs7), F. moniliforme, R. stolonifer, A. flavus, A. terreus, A. oryzae and Penicillium sp. from the soil.
From the Kumit fields, the fungi obtained from the roots were R. solani (Rs10), F. oxysporum, A. alternata and Bipolaris sp., and from the soil R. solani (Rs11), A. terreus, A. flavus and Penicillium sp. From the Ali Al Sharqi fields, R. solani (Rs12), F. oxysporum and Mucor sp. were obtained from the roots, and R. solani (Rs13), A. niger and R. stolonifer from the soil. From the Ali Al Gharbi fields, R. solani (Rs14), F. solani (Fs5), M. phaseolina (Mph3), A. alternata, Mucor sp., Penicillium sp. and A. flavus were obtained from the roots, and R. solani (Rs15), A. flavus, A. niger, R. stolonifer and Penicillium sp. from the soil, as shown in Table 2 and Figs. 3, 4, 5, 6, 7, 8 and 9.
Table 2 Fungal isolates from different wheat fields in Basrah and Maysan provinces
Fig. 3 Fungal isolates from roots and soils of different wheat fields: a F. verticillioides, b C. oxysporum, c Phoma sp., d A. alternata, e Bipolaris sp., f F. oxysporum, g A. flavus, h A. terreus, i Mucor sp., j A. niger, k A. fumigatus, l Monilia sp., m R. stolonifer, n A. oryzae, o A. sojae, p A. parasiticus, q Penicillium sp., r F. moniliforme, s F. avenaceum, t Paecilomyces sp., u F. roseum, v F. chlamydosporum
Cultures of R. solani isolated from infected soil and wheat root samples: a R. solani on PDA (top), b R. solani on PDA (bottom), c hyphae of R. solani branching at almost right angles, with a narrow constriction near the branch point and a transverse septum, d monilioid cells
Cultures of F. solani isolated from infected soil and wheat root samples: a F. solani on PDA (top), b F. solani on PDA (bottom), c macroconidia and microconidia, d chlamydospores
Cultures of M. phaseolina isolated from infected soil and wheat root samples: a M. phaseolina on PDA after one week (top), b M. phaseolina on PDA after three weeks (top), c M. phaseolina on PDA (bottom), d pycnidia, e sclerotia, f conidia
The percentage of appearance and frequency of fungi isolated from wheat roots and soil
Table 3 shows that the fungus Rhizoctonia solani had the highest appearance and frequency percentages, 21.6 and 20.4%, respectively, followed by Fusarium solani with an appearance and frequency of 16.11 and 14.01%, respectively, and by Macrophomina phaseolina with an appearance and frequency of 12.5 and 11.1%, respectively. As for the appearance and frequency rates of the fungi isolated from the soil, shown in Table 4, the fungus A. flavus recorded an appearance rate of 18%, followed by Rhizopus stolonifer and A. niger with appearance rates of 16 and 11.6%, respectively. The appearance of the other fungi ranged from 2.3 to 10.2%; A. flavus recorded a frequency of 11.9%, while the frequencies of the other fungi ranged from 2 to 11.1%.
Table 3 Percentage of frequency and appearance of fungi isolated from roots of infected wheat plants
Table 4 Percentage of frequency and appearance of fungi isolated from infected soil
Description of fungi
Fifteen isolates of R. solani were isolated and identified from the roots of wheat and the surrounding soil of the study fields (Fig. 4). Variation among the isolates was observed in the phenotypic characteristics of the fungal cultures on PDA medium after incubation for 3 weeks at 25 ± 2 °C: the isolates varied between light brown and dark brown, while some isolates were white in the early stages of growth and then turned light brown.
A variation was also observed in the color of the reverse of the fungal colonies, with colors ranging from dark yellow to light yellow to light brown, and different growth patterns and densities of aerial mycelium were examined. Common characteristics were also noticed among the fungal isolates, namely hyphal branching at right angles, with a constriction at the point of origin of the branch and a septum in the branch near the point of emergence, without any sexual structures, and with differently shaped hyphal branches. All isolates produced sclerotia of brown to dark brown color, differing in their density as well as in their spread on the PDA medium. The isolates Rs1, Rs5 and Rs9 showed the ability to form barrel-shaped swollen cells in long or short chains, called monilioid cells (Fig. 5). Three isolates of F. solani were obtained from the study fields (Fig. 6); the fungus was characterized by producing white to creamy mycelium on PDA medium after incubation for 3 weeks at 25 ± 2 °C. The colony texture is cottony and its edges are regular. All fungal isolates produced macro- and microconidia and chlamydospores, all of which are hyaline. Macroconidia were relatively broad and sickle-shaped with a blunt end, with dimensions of 4–5 × 26–28 μm and 2–3 septa. Microconidia were oval or reniform, 2–3 × 8–10 μm, and were either aseptate or divided by one or two septa. Chlamydospores were intercalary or terminal and globose to oval in all the isolates, as shown in Fig. 7. Three isolates of the fungus M. phaseolina were obtained (Table 2, Fig. 8); the isolates were identified on PDA medium after three weeks of incubation at 25 ± 2 °C according to microscopic examination. The mycelium is septate and branched; it is hyaline at the beginning, then turns green, olive brown and finally black with the age of the colony. The conidia are single-celled and oval, ranging in size from 7–10 × 18–23 μm. Pycnidia are spherical and black, with dimensions ranging from 140 to 190 μm (Fig. 9).
Effect of R. solani, F. solani and M. phaseolina treatment on germination of wheat seeds
The results showed a significant variation in the effect of the fungal isolates on the germination of wheat seeds in Petri dishes (Table 5, Fig. 10). Isolate Rs1 was the most effective, reducing the germination of wheat seeds to 56.6%, compared with 100% in the control treatment, with significant differences from the other isolates, followed by isolates Fs1 and Mph1 with germination percentages of 66.6 and 63.3%, respectively. The isolates Rs6, Rs15 and Mph1 recorded a germination percentage of 70%, with significant differences from the 100% control treatment. The results also revealed a significant effect of the fungal isolates on the germination of wheat seeds and on damping-off of the wheat plants in the pots; isolate Rs1 was more effective than the other isolates, with seed germination and damping-off percentages of 46.3 and 71.6%, respectively, followed by the Fs1 and Mph1 isolates, with seed germination and damping-off percentages of (56.6, 41.6%) and (53.3, 47.5%), respectively (Table 5).
Table 5 Seed germination and damping-off percentage of wheat in plate and pot experiments
Effect of R. solani, F. solani and M. phaseolina on germination of wheat seeds on water agar plates
The results in Table 6 show that all fungal isolates had a significant effect in reducing the radicle and hypocotyl length, as well as the fresh and dry weight of the radicle and hypocotyl; isolate Rs1 was more effective than the other isolates, followed by Rs9, Fs1 and Mph1.
Table 6 Length of the radicle and hypocotyl (cm) and fresh and dry weight (mg) of the radicle and hypocotyl of wheat in the pot experiment
Based on the field survey, the results elucidated high severity indices of damping-off disease in wheat fields at Basrah province; most wheat seedlings showed typical symptoms of the examined disease. The disease severity indices varied among the wheat fields, which could be attributed to differences in varieties, planting date, seeding rate, fertilizer usage and crop rotation, as well as to the agricultural practices used by farmers in each field (Ernesto et al. 2015). Soilborne pathogens are commonly known to cause serious diseases in wheat and other crops, resulting in yield losses, stand reductions, white heads and rotting of root, crown and subcrown tissues, as well as the lower parts of stem tissues (Andrade et al. 2011). Among the important soilborne pathogens are members of the Fusarium complex, responsible for Fusarium crown and root rot (FCR) of wheat (Cook 2010). Other important pathogens that can affect seedlings, crowns and roots are Bipolaris sorokiniana and Rhizoctonia spp., which have a high ability to cause diseases such as common root rot and Rhizoctonia root rot (Acuña 2008). The fungal pathogens of wheat are able to cause several diseases in different patterns, occurring singly or coexisting in the same wheat field, or even within the same plants (Paulitz et al. 2002). The results of the identification and prevalence survey showed that the genera Rhizoctonia, Fusarium and Macrophomina were the most dominant in the examined wheat fields, which could be explained by their high level of adaptation to changes in temperature, seasonal moisture distribution, amount of moisture and edaphic factors (Moya-Elizondo et al. 2011). The high frequency of the fungus R. solani is due to its ability to form sclerotia, which are very resistant to unfavorable conditions in the soil, as well as to its saprophytic activity on plant debris between seasons (Abbas et al. 2017). R. solani causes damping-off with brown or black spots, 1 mm in size, on the roots and delays the growth of the wheat plant compared with the control treatment (Gyula et al. 2013). Nirupama et al. (2017) mentioned that R. solani causes a number of common diseases, such as pre- and post-emergence damping-off, root rot, rot of stem bases, and seed and fruit rot. AL-Musawi et al. (2017) reported that R. solani and Fusarium sp. caused seed rot and damping-off diseases. Ishtiaq et al. (2019) indicated that R. solani is a soilborne fungus that reduces yield and causes a reduction in the fresh and dry weight of wheat roots. The effect of the pathogens on the seedlings may be due to their ability to produce a group of enzymes that cause the collapse of these seedlings. For example, R. solani produces polygalacturonase (PG), polymethylgalacturonase (PMG), β-glucosidase and cellulase (Xue et al. 2018), and F. solani produces extracellular enzymes such as catalase, cellulase, laccase, amylase, protease, lipase and pectinase (Mezzomo et al. 2019).
M. phaseolina produces a number of hydrolytic enzymes that break down the components of the plant cell wall, such as cellulose, hemicellulose, pectin and lignin (Islam et al. 2012). The present work was conducted to evaluate wheat damping-off disease in different fields of Basrah and Maysan provinces, as well as to determine the prevalence of the fungal pathogens of the disease in these fields. Basrah wheat fields, more specifically the Al-Qurna wheat fields, were found to have higher disease severity than the Maysan wheat fields. R. solani, F. solani and M. phaseolina had the highest prevalence rates in all examined fields. Laboratory experiments revealed the pathogenicity of the fungi isolated from wheat roots and soils on wheat seeds and seedlings. Future studies are required to evaluate different management practices to control these potential pathogens.
Abbas A, Jiang D, Fu Y (2017) Trichoderma spp. as antagonist of Rhizoctonia solani. J Plant Pathol Microbiol 8:402. https://doi.org/10.4172/2157-7471.1000402
Acuña R (2008) Compendio de fitopatógenos de cultivos agrícolas en Chile. Servicio Agrícola y Ganadero (SAG), División Protección Agrícola, Santiago, Chile, 123 pp
AL-Musawi MA, Lahov AA, Jaafar OH (2017) Isolation and diagnosis of the pathogens causing seed decay and damping-off disease on wheat and control them using some biological and chemical factors. Karbala J Agric Sci 4(1) (in Arabic)
Andrade O, Campillo R, Peyrelongue A, Barrientos L (2011) Soils suppressive against Gaeumannomyces graminis var. tritici identified under wheat crop monoculture in southern Chile. Ciencia e Investigación Agraria 38:345–356. https://doi.org/10.4067/S0718-16202011000300004
AOSA (1983) Seed vigour testing handbook. Contribution 32 to Handbook on Seed Testing. Association of Official Seed Analysts, Lincoln, NE, USA, p 88
Cook RJ (2010) Fusarium root, crown, and foot rots and associated seedling diseases. In: Bockus WW, Bowden R, Hunger R, Morrill W, Murray T, Smiley R (eds) Compendium of wheat diseases and pests, 3rd edn. The Pennsylvania State University Press, University Park, pp 37–39
Dewan MM (1989) Identity and frequency of occurrence of fungi in roots of wheat and ryegrass and their effect on take-all and host growth. PhD thesis, University of Western Australia
Domsch KH, Gams W, Anderson T (2007) Compendium of soil fungi, 2nd edn. IHW-Verlag, Eching, p 672. https://doi.org/10.1111/j.1365-2389.2008.01052_1.x
Dugan FM (2006) The identification of fungi: an illustrated introduction with keys, glossary, and guide to literature. U.S. Department of Agriculture, Agricultural Research Service, Washington State University, Pullman, p 176
Ernesto M, Nolberto A, María PC, Herman D (2015) Distribution and prevalence of crown rot pathogens affecting wheat crops in southern Chile. Chilean J Agric Res. https://doi.org/10.4067/S0718-58392015000100011
FAO (2018) Food and Agriculture Organization of the United Nations database results, Geneva. http://www.fao.org/faostat/en/#data/QC
Farazimah Y, Hussein T, Pooja Sh (2019) Isolation of fungi from various habitats and their possible bioremediation. Curr Sci 116(5):733–740
Gyula O, Zoltán N, Donát M (2013) Susceptibility of wheat varieties to soil-borne Rhizoctonia infection. Am J Plant Sci 4:2240–2258
Ishtiaq M, Tanveer H, Khizarhayat B, Tony A, Mehwish M, Shehzad A, Abul Ghani A (2019) Management of root rot diseases of eight wheat varieties using resistance and biological control agents techniques. Pak J Bot.
https://doi.org/10.30848/PJB2019-1(16)
Islam Md, Shahidul Md, Samiul H, Mohammad MI, Emdadul ME, Abdul Halim QM, Mosaddeque H, Zakir H, Borhan A, Sifatur R, Sharifur R, Monjurul A, Shaobin H, Xuehua W, Jennifer AS, Maqsudul A (2012) Tools to kill: genome of one of the most destructive plant pathogenic fungi, Macrophomina phaseolina. BMC Genomics 13:493
ISTA (2005) International rules for seed testing. International Seed Testing Association, Budapest, Hungary (adopted at the Ordinary Meeting 2004)
Jones RW, Pettit RE, Taber RA (1984) Lignite and stillage: carrier and substrate for application of fungal biocontrol agents to soil. Phytopathology 74:1167–1170. https://doi.org/10.1094/Phyto-74-1167
Kaur N (2016) Root rot pathogens of wheat in South Dakota and their effect on seed germination and seedling blight in spring wheat cultivars. Theses and Dissertations 1117. https://doi.org/10.5586/aa.1657
Leslie JF, Summerell BA (2006) The Fusarium laboratory manual. Blackwell Publishing, Ames
Marconi E, Carcea M (2001) Pasta from nontraditional raw materials. Cereal Foods World 46:522–529
Mesterhazy A, Bartok T, Kaszonyi G, Varga M, Toth B, Varga J (2005) Common resistance to different Fusarium spp. causing Fusarium head blight in wheat. Eur J Plant Pathol. https://doi.org/10.1590/1678-992x-2016-0407
Mezzomo R, Rolim JM, Santos AF, Poletto T, Walker C, Maciel CG, Muniz MFB (2019) Aggressiveness of Fusarium oxysporum and F. solani isolates to yerba-mate and production of extracellular enzymes. Summa Phytopathologica 45(2):141–145
Minati MH, Mohammed-Ameen MK (2019) Interaction between Fusarium head blight and crown rot disease incidence and cultural practices on wheat in the south of Iraq, Basra province. Bull Natl Res Cent 43:200. https://doi.org/10.1186/s42269-019-0257-9
Moya-Elizondo E, Rew RL, Jacobsen B, Hogg AC, Dyer AT (2011) Distribution and prevalence of Fusarium crown rot and common root rot pathogens of wheat in Montana. Plant Dis. https://doi.org/10.1094/PDIS-11-10-0795
Nicol JM, Bagci A, Hekimhan H, Bolat N, Braun HJ, Trethowan R (2004) Strategy for the identification and breeding of resistance to dryland root rot complex for international spring and winter wheat breeding programs. In: Proceedings of the 4th International Crop Science Congress, Brisbane, Australia, p 283
Nirupama RK, Devi BS, Devi S (2017) Native Trichoderma for the management of wire stem of mustard (Brassica spp.) caused by Rhizoctonia solani. Int J Curr Microbiol Appl Sci. https://doi.org/10.20546/ijcmas.2017.609.284
Nyongesa BW, Okoth S, Ayugi V (2015) Identification key for Aspergillus species isolated from maize and soil of Nandi County, Kenya. Adv Microbiol 5:205–229. https://doi.org/10.4236/aim.2015.54020
Pathak V, Shrivastav S (2015) Biochemical studies on wheat (Triticum aestivum L.). J Pharmacognosy Phytochem 4(3):171
Paulitz TC, Smiley RW, Cook RJ (2002) Insight into the prevalence and management of soilborne cereal pathogens under direct seeding in the Pacific Northwest, USA. Can J Plant Path 24:416–428
Pooja S, Prashant S, Singh MP (2015) Assessment of antifungal activity of PGPR (plant growth promoting rhizobacterial) isolates against Rhizoctonia solani in wheat. Int J Adv Res 3(10):803–812
Tunali B, Nicol JM, Hodson D, Uçkun Z, Büyük O, Erdurmuş D, Hekimhan H, Aktaş H, Akbudak MA, Bağcı SA (2008) Root and crown rot fungi associated with spring, facultative, and winter wheat in Turkey.
Plant Dis 92:1299–1306
Watanabe T (2002) Pictorial atlas of soil and seed fungi: morphologies of cultured fungi and the key to species, 2nd edn. CRC Press, Boca Raton. https://doi.org/10.1201/9781420040821
Xue CY, Zhou RJ, Li YJ, Xiao D, Fu JF (2018) Cell-wall-degrading enzymes produced in vitro and in vivo by Rhizoctonia solani, the causative fungus of peanut sheath blight. PeerJ 6:e5580. https://doi.org/10.7717/peerj.5580
Yonghao L (2013) Department of Plant Pathology and Ecology, The Connecticut Agricultural Experiment Station. https://www.ct.gov/caes
The authors are thankful to Dr. Yahya Salih (Department of Plant Protection, College of Agriculture, University of Basrah) for helping with fungal identification. The authors declare that they had no funding support for this study.
Plant Protection Department, College of Agriculture, University of Basrah, Basrah, Iraq
Mohammed Hamza Abass, Qusai Hattab Madhi & Abdulnabi Abdul Ameer Matrood
Q.H. cooperated in the field surveys, sampling and laboratory experiments; A.M. designed the work and performed the data analysis; M.H.A. cooperated in the laboratory experiments and wrote the manuscript. All authors have read and approved the manuscript.
Correspondence to Mohammed Hamza Abass.
Not applicable (this study does not involve human participants, human data or human tissue).
Abass, M.H., Madhi, Q.H. & Matrood, A.A.A. Identity and prevalence of wheat damping-off fungal pathogens in different fields of Basrah and Maysan provinces. Bull Natl Res Cent 45, 51 (2021). https://doi.org/10.1186/s42269-021-00506-0
Performance analysis on joint channel decoding and state estimation in cyber-physical systems
Liang Li (ORCID: orcid.org/0000-0001-7027-681X), Shuping Gong, Ju Bin Song & Husheng Li
EURASIP Journal on Wireless Communications and Networking volume 2017, Article number: 158 (2017)
We propose to use a mean square error (MSE) transfer chart to evaluate the performance of the proposed belief propagation (BP)-based channel decoding and state estimation scheme. We focus on two models to evaluate the performance of BP-based channel decoding and state estimation: the sequential model and the iterative model. The numerical results show that the MSE transfer chart can provide much insight into the performance of the proposed channel decoding and state estimation scheme.
Communication is of great importance in cyber-physical systems (CPSs), as it conveys observations of the physical dynamics from the sensor to the controller, as illustrated in Fig. 1.
Fig. 1 An illustration of the components in CPSs
One promising way to improve the performance of physical dynamics (or system state) estimation is the BP-based joint channel decoding and system state estimation algorithm, which we developed in [1] to exploit the time-domain redundancy of the system state to assist channel decoding. For example, the quantized codeword before source/channel encoding at discrete time t, denoted as b(t), is generated from the observation of the physical dynamics, denoted as y(t), where t can be viewed as the beginning of the tth time slot. Due to the time correlation of the system states, the observation y(t) is correlated with y(t−1); thus, b(t−1) can provide some information for decoding the quantized codeword b(t) at discrete time t. Even though the effectiveness of the proposed joint channel decoding and system state estimation algorithm has been verified by the numerical results in [1], the procedure of the algorithm was left unspecified. Contributing toward the previous work, this paper addresses the procedure of the message passing between the channel decoder, which processes the information of quantized bits, and the state estimator, which handles the information of continuous state values. We analyze the proposed algorithm from the following perspectives: Does the proposed algorithm converge and help to improve channel decoding and system state estimation? How much gain can be obtained by using the redundancy of the observations in the time domain to assist channel decoding? As pointed out before, the CPS is a hybrid system [2], which consists of the system state x(t) and observation y(t), with continuous values, and the information bits b(t) transmitted in wireless communication, with discrete values. The challenges in the channel decoding and state estimation framework are that the priori information transmitted from the state estimator to the channel decoder is the prediction of y(t), while the channel decoder actually requires the priori information of each quantized bit of y(t), and that the output of the channel decoder is the extrinsic information of each quantized bit of y(t), while the state estimator actually requires the estimation of y(t) from the channel decoder. To handle these challenges, two models, the BP-based sequential model and the BP-based iterative model, are given to evaluate the performance of the BP-based channel decoding and system state estimation framework.
The former can be used to evaluate the performance of system state estimation over multiple time slots, e.g., the gain from utilizing priori information from the previous time slot to assist channel decoding and state estimation at the current time slot. The latter can be used to check the following points: Does the iterative channel decoding and estimation converge, and how many iterations are sufficient? How much gain can be obtained by utilizing priori information from the previous time slots to assist state estimation at the current time slot?
In the area of wireless communication, the purpose of performance analysis for a decoding scheme is to find out whether, for a given encoder, decoder, and channel noise power, a message-passing iterative decoder can correct the errors or not. To analyze the performance, in [3–5], ten Brink proposed using an extrinsic information transfer (EXIT) chart to track the iterative decoding performance. Based on the assumption that the distribution of the extrinsic log-likelihood ratios (LLRs) is Gaussian, the EXIT chart tracks the mutual information of the extrinsic LLRs through the iterative decoding process. Compared with the previously used method of density evolution, the EXIT chart is computationally simpler, and it also allows one to visualize the evolution of mutual information through the iterative decoding process in a graph. The details of the EXIT chart can be found in [6]. The EXIT chart has two useful properties, as shown in [7]. One is the necessary condition for the convergence of iterative decoding, namely that the flipped EXIT curve of the outer decoder lies below the EXIT curve of the inner decoder. The other is that the area under the EXIT curve of the outer code relates to its code rate. In [8], the authors demonstrated that if the priori channel is an erasure channel, then for any outer code of rate R, the area under the EXIT curve is 1−R. To our best knowledge, the area property of the EXIT chart has been proved only for cases where the priori channels are erasure channels. The MSE transfer chart improves on the EXIT chart, as shown in [7], because the area property of the MSE transfer chart corresponding to that of the EXIT chart has been proven for both erasure channels and AWGN channels. Instead of tracking mutual information, the MSE transfer chart, as an alternative way to evaluate decoding performance, was proposed in [9] to track the iterative decoding performance based on the relationship between mutual information and the minimum mean square error (MMSE) for the additive white Gaussian noise (AWGN) channel. In this paper, we use the MSE transfer chart to analyze the message passing procedure of channel decoding and system state estimation by assuming that the priori information passes through an AWGN channel. Compared with [9], our hybrid model prioritizes practicality, because the system states and observations considered are continuous values, while the information transmitted in the wireless system consists of quantized information bits. Unlike previous research, our algorithm addresses the message passing between continuous values from the state estimator and quantized information bits from the channel decoder, under the condition that the system state is correlated over different time slots. In addition, in order to view the evolution of the estimation error, we analyze the performance of state estimation in two cases: within two time slots and over more than two time slots.
Our work is also informed by other areas in wireless communication that have faced similar issues, namely source coding (quantization) [10–13] and joint source and channel decoding [14–19]. The idea of source coding (quantization) in the context of this work is to combine the side information available at the controller to assist system state estimation, and the route of joint source and channel decoding in [20–25] is to utilize redundancy in the source to assist channel decoding. Our work can also be considered a special case of joint source and channel decoding. However, there are two major differences. One difference is that most works focus on sources with binary values and use the EXIT chart [26, 27] or the protograph EXIT (PEXIT) chart [16] for performance analysis; [28, 29] considered the case of a source with non-binary values, but a performance analysis of the decoding was not provided. The other difference is that most works, such as [29], considered joint source and channel decoding within two time slots: for instance, only the estimation from the previous time slot is used to calculate the estimation of the current time slot. In our work, the dynamic state changes over more than two time slots, and the performance of channel decoding and system estimation at the current time slot also impacts its performance in all future time slots. Therefore, we study the performance of iterative estimation and decoding across multiple time slots.
This paper is organized as follows. Following the review of the literature on iterative decoding performance analysis methods in Section 1, Section 2 gives a brief introduction to the EXIT chart and the MSE transfer chart for the performance evaluation of iterative channel decoding. Section 3 describes the system models for the performance analysis. Section 4 presents the message passing framework between system observation and channel decoding. Section 5 describes the MSE transfer chart, and Section 6 presents how to use the MSE transfer chart to evaluate BP-based sequential and iterative channel decoding and state estimation. Finally, a brief conclusion is given in Section 7.
Preliminaries on the EXIT Chart
In this section, we review the concepts of the EXIT chart and the MSE transfer chart through the iterative decoding of the output of a serially concatenated encoder. In Section 2.1, we use an example to illustrate the serially concatenated coding scheme and its iterative decoding process. Then, in Section 2.2, we review how to use the EXIT chart and the MSE transfer chart to analyze the performance of iterative decoding.
A serially concatenated encoding scheme and the corresponding iterative decoding algorithm
Figure 2 shows a simple serially concatenated encoding scheme and its corresponding iterative decoding scheme.
Fig. 2 A demo of concatenated encoding and iterative decoding
At the transmitter, the source S with binary values is a vector of length L_s, i.e., \(\mathbf {S}=[S_{1}, \cdot \cdot \cdot, S_{L_{s}}]\). S is encoded by the outer channel encoder, a systematic convolutional encoder with generator g_out, whose output B_out is a vector of length L_out. Next, B_out is encoded by the inner channel encoder, also a systematic convolutional encoder, with generator g_in, whose output B_in is a vector of length L_in.
Finally, B_in is modulated to produce B_m, i.e., B_{m,i} = 2B_{in,i} − 1, i = 1,···,L_in, and then sent over an AWGN channel, whose output is given by
$$ Y_{\text{in}, i}=B_{m,i} +\frac{1}{\sqrt{\text{SNR}}}v_{i}\quad i=1,\cdot\cdot\cdot, L_{\text{in}} $$
where \(\text {SNR}=\frac {E_{b}}{N_{0}}\) is the signal power to noise power ratio and v_i is a zero-mean, unit-variance Gaussian noise. At the receiver, decoding is done iteratively between the inner decoder and the outer decoder. The inputs for the inner channel decoder are the received signal Y_in and the priori information from the outer decoder, i.e., \(\mathbf {L}_{A}^{\text {in}, k}=\mathbf {L}_{E}^{\text {out}, k-1}\), where \(\mathbf {L}_{E}^{\text {out}, k-1}\) is the extrinsic information of the outer decoder from the (k−1)th decoding round, and its output is \(\mathbf {L}_{E}^{\text {in}, k}\), i.e.,
$$ \mathbf{L}_{E, i}^{\text{in}, k}= \text{LLR}\left(S_{i} |\mathbf{Y}_{\text{in}}, \mathbf{L}_{A,i}^{\text{in}, k}, g_{\text{in}}\right) \quad i=1,\cdot\cdot\cdot, L_{s} $$
where \( \mathbf {L}_{A,i}^{\text {in}, k}\) denotes the priori information of \(\mathbf {L}_{A}^{\text {in}, k}\) for all S except S_i. The input for the outer channel decoder is the priori information from the inner decoder, i.e., \(\mathbf {L}_{A}^{\text {out}, k}=\mathbf {L}_{E}^{\text {in}, k}\), and its output is \(\mathbf {L}_{E}^{\text {out}, k}\), i.e.,
$$ \mathbf{L}_{E,i}^{\text{out}, k}= \text{LLR}\left(S_{i} |\mathbf{L}_{A,i}^{\text{out}, k}, g_{\text{out}}\right) \quad i=1,\cdot\cdot\cdot, L_{s} $$
where \(\mathbf {L}_{A,i}^{\text {out}, k}\) denotes the priori information of \(\mathbf {L}_{A}^{\text {out}, k}\) for all S except S_i.
The EXIT chart and the MSE transfer chart
The iterative decoding scheme in [30] can be analyzed by tracking the density evolution over the iterations. However, density evolution is complex, as it requires the probability density function (PDF) of the extrinsic LLRs for each iteration; in addition, it does not provide much insight into the operation of iterative decoding. In order to overcome these drawbacks of density evolution, many transfer chart-based analysis frameworks have been proposed, such as the EXIT chart by [3–5] and the MSE transfer chart by [9]. The idea of these transfer chart-based analysis frameworks is to summarize the PDF of the extrinsic LLRs exchanged between the inner decoder and the outer decoder by a single parameter, i.e.,
$$ F(\mathbf{S, L})=\frac{1}{L_{s}} {\sum\nolimits}_{i=1}^{L_{s}}F(S_{i}, \mathbf{L}_{i}),\quad i=1, \cdot\cdot\cdot, L_{s} $$
where L_i is the extrinsic information, i.e., \(\mathbf {L}_{E,i}^{\text {out}, k}, \mathbf {L}_{E,i}^{\text {in}, k}, \mathbf {L}_{A,i}^{\text {out}, k}\), or \(\mathbf {L}_{A,i}^{\text {in}, k}\). The measure used by the EXIT chart is mutual information, i.e., F(S, L) = I(S, L), which is based on the observation that the PDF of the extrinsic LLRs can be approximated by a Gaussian distribution [3–5]. The measure used by the MSE transfer chart is F(S, L) = E[tanh²(L/2)], which is related to the MMSE estimation of S based on the observation Y_in [9], i.e.,
$$ \left\{\begin{array}{rl} \text{MMSE}\left(\mathbf{S}| \mathbf{Y}_{\text{in}}\right)&=E\left[\left(\mathbf{S}-\hat{\mathbf{S}}\right)^{2}|\mathbf{Y}_{\text{in}}\right]=1-E\left[\tanh^{2}\left(\mathbf{L}/2\right)\right]\\ \hat{\mathbf{S}}&=E\left[{\mathbf{S}}|\mathbf{Y}_{\text{in}} \right]= \tanh(\mathbf{L}/2)\\ \mathbf{L}&=\text{LLR}\left({\mathbf{S}}|\mathbf{Y}_{\text{in}}\right)=\log \left(\frac{P\left(\mathbf{S}=1, \mathbf{Y}_{\text{in}} \right)}{P\left(\mathbf{S}=-1, \mathbf{Y}_{\text{in}} \right)}\right) \end{array}\right. $$
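The first identity above is easy to check numerically. The following Python sketch transmits equiprobable BPSK symbols over an AWGN channel, for which the LLR is L = 2Y/σ², and compares the empirical MSE of the estimator tanh(L/2) with 1 − E[tanh²(L/2)]; the sample count and noise variance are arbitrary choices:

import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 200_000, 0.8                      # samples and noise variance (assumed)
s = rng.choice([-1.0, 1.0], size=n)           # equiprobable BPSK symbols
y = s + rng.normal(0.0, np.sqrt(sigma2), n)   # AWGN observation
llr = 2.0 * y / sigma2                        # LLR of S given Y for this channel
s_hat = np.tanh(llr / 2.0)                    # posterior mean E[S | Y]
print(np.mean((s - s_hat) ** 2))              # empirical MSE of the estimator
print(1.0 - np.mean(np.tanh(llr / 2.0) ** 2)) # 1 - E[tanh^2(L/2)]; agrees up to Monte Carlo error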
The EXIT chart and the MSE transfer chart-based decoding frameworks for a serially concatenated encoding scheme are shown in Fig. 3 (a) and (b), respectively. The transfer chart includes two transfer curves. One curve plots the measure for the priori information of the inner decoder, i.e., \(F_{A}^{\text {in}}(\mathbf {S, L})\), versus the measure for the extrinsic information of the inner decoder, i.e., \(F_{E}^{\text {in}}(\mathbf {S, L})\); the other curve plots the measure for the priori information of the outer decoder, i.e., \(F_{A}^{\text {out}}(\mathbf {S, L})\), versus the measure for the extrinsic information of the outer decoder, i.e., \(F_{E}^{\text {out}}(\mathbf {S, L})\). The EXIT chart and the MSE transfer chart for the serially concatenated coding scheme are shown in Figs. 4 and 5, respectively. The predicted decoding path is also shown in these two figures. Since a decoding path is found between the two curves, the iterative decoding converges.
Fig. 3 The EXIT chart and the MSE transfer chart models for concatenated encoding and iterative decoding. a EXIT chart. b MSE-based transfer chart
Fig. 4 An example of the EXIT chart for concatenated encoding and iterative decoding, g_out = g_in = [1,1,0,1; 1,0,0,1], SNR = −2 dB
Fig. 5 An example of the MSE transfer chart for concatenated encoding and iterative decoding, g_out = g_in = [1,1,0,1; 1,0,0,1], SNR = −2 dB
In this section, we describe the system model for the analysis.
Linear dynamic system and communication system
We consider a discrete time linear dynamic system, whose state evolution is given by
$$ \left\{\begin{array}{lll} \mathbf{x}(t+1)&=&\mathbf{A}\mathbf{x}(t)+\mathbf{B} \mathbf{u}(t)+ \mathbf{n}(t)\\ \mathbf{y}(t)&=&\mathbf{C}\mathbf{x}(t)+ \mathbf{w}(t) \end{array}\right. $$
where x(t) is the N-dimensional system state vector at time slot t, u(t) is the M-dimensional control vector, y(t) is the K-dimensional observation vector, and n(t) and w(t) are noise vectors, which are assumed to be Gaussian with zero mean and covariance matrices Σ_n and Σ_w, respectively. For simplicity, we do not consider u(t). Additionally, we assume that the observation vector y(t) is obtained by a sensor, and the sensor quantizes each dimension of the observation vector y(t) using B bits, thus forming a KB-dimensional binary vector, which is given by
$$ \mathbf{b}(t)=\left(b_{1}(t), b_{2}(t), \ldots, b_{KB}(t)\right) $$
The information bits b(t) are then put into an encoder to generate a codeword c(t). Suppose that binary phase-shift keying (BPSK) is used for the transmission between the sensor and the controller; c(t) is converted to the alphabet {−1,+1} with s(t)=2c(t)−1. Next, the sequence s(t) is passed through a modulator and transmitted over an AWGN channel. The received signal at the controller is then given by
$$ \mathbf{r}(t)=\mathbf{s}(t)+\mathbf{e}(t) $$
where the additive white Gaussian noise e(t) has zero mean and covariance Σ_c. Note that we consider the AWGN channel, ignore fading, and normalize the transmit power to 1. The algorithm and conclusions of this work can easily be extended to cases with different channels and different types of fading.
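A minimal simulation sketch of this model is given below. The matrices A and C, the noise levels, and the omission of channel coding are our illustrative assumptions; the quantization range and B = 14 bits are the values used later in Section 6, and the least-significant-bit-first bit ordering matches the weights 2^{i−1} used in (13):

import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.95, 0.10], [0.00, 0.90]])    # assumed 2-D state dynamics
C = np.eye(2)                                  # assumed observation matrix
Q_min, Q_max, B = -432.0, 432.0, 14            # quantization range and bits (Section 6)
Q_I = (Q_max - Q_min) / (2 ** B - 1)           # quantization interval

def quantize(y):
    # Map each dimension of y to B bits, least significant bit first.
    idx = np.clip(np.round((y - Q_min) / Q_I), 0, 2 ** B - 1).astype(int)
    return ((idx[:, None] >> np.arange(B)) & 1).ravel()

x = np.zeros(2)
for _ in range(3):
    x = A @ x + rng.normal(0.0, 1.0, 2)        # state evolution, n(t) ~ N(0, I)
    y = C @ x + rng.normal(0.0, 0.1, 2)        # observation, w(t) ~ N(0, 0.01 I)
    b = quantize(y)                             # KB-dimensional bit vector b(t)
    s = 2.0 * b - 1.0                           # BPSK mapping s(t) = 2c(t) - 1 (coding omitted)
    r = s + rng.normal(0.0, 0.5, s.size)        # received signal r(t) over AWGN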
Models for belief propagation based channel decoding and state estimation
In this section, we first introduce the Bayesian network structure and then use the following two models to evaluate BP-based channel decoding and state estimation: BP-based sequential processing and BP-based iterative processing.
Bayesian network structure and the message passing
The Bayesian network structure of the dynamic system and communication system is shown in Fig. 6, where the message passing in the system is illustrated by dotted arrows and dashed arrows for three time slots: x(t−2), x(t−1), and x(t). The dashed arrows carry π-messages from a parent to its children in Pearl's BP; the details and a formal description can be found in [31]. For instance, the message passed from x(t−2) to x(t−1) is π_{x(t−2),x(t−1)}(x(t−2)), which is the priori information of x(t−2) given all the information that x(t−2) has received. The dotted arrows carry λ-messages from a child to its parent. For instance, the message passed from x(t−1) to x(t−2) is λ_{x(t−1),x(t−2)}(x(t−2)), which is the likelihood of x(t−2) given the information that x(t−1) has received.
Fig. 6 Bayesian network structure and the message passing for CPSs
Note that the π-messages and λ-messages are passed in the form of probability distribution functions (PDFs). Based on the Bayesian network structure in Fig. 6 and Pearl's BP, the updating order and the message passing in one iteration are as follows: step 1: x(t−1)→y(t−1); step 2: y(t−1)→b(t−1); step 3: b(t−1)→y(t−1); step 4: y(t−1)→x(t−1); step 5: x(t−1)→x(t); step 6: x(t)→y(t); step 7: y(t)→b(t); step 8: b(t)→y(t); step 9: y(t)→x(t); step 10: x(t)→x(t−1); and step 11: x(t−1) updates its information. The key steps of Pearl's BP have been derived as shown in Table 1, where the PDF of λ_{b(t),y(t)}(y(t)) is computed by the iterative decoding described in Section 4.
Table 1 Message passing in the BP-based channel decoding and state estimation system
Model for BP-based sequential processing
Based on the Bayesian network structure in Fig. 6, we use the framework shown in Fig. 7 (a) to evaluate sequential channel decoding and state estimation over more than two time slots. The priori information from time slot t−1, i.e., π_{x(t−1),x(t)}(x(t−1)), which is the estimated distribution of x(t−1), is used to assist channel decoding and state estimation at time slot t; we assume that it is also Gaussian with mean \( \mathbf {x}_{\pi _{x},t-1}\) and covariance matrix \(\mathbf {P}_{\pi _{x},t-1} \), i.e., \(\mathcal {N}(\mathbf {x}_{t-1}, \mathbf {x}_{\pi _{x},t-1}, \mathbf {P}_{\pi _{x},t-1}) \).
Fig. 7 a Sequential channel decoding and state estimation. b The measure of sequential channel decoding and state estimation
As noted in Section 1, the two objectives here are to evaluate how much gain can be obtained by utilizing π_{x(t−1),x(t)}(x(t−1)) to assist channel decoding and state estimation at time slot t, and to evaluate the performance of state estimation over multiple time slots, i.e., the evolution of \(\mathbf {P}_{\pi _{x},t-1}\), as shown in Fig. 7 (b).
Model for BP-based iterative processing between two time slots
Figure 8 (a) illustrates the model used to evaluate BP-based iterative channel decoding and state estimation between two time slots. The inputs for this model include the received signals at the controller within the two time slots, i.e., r(t) and r(t+1) (see Fig. 6), and the priori information from the previous time slot t−1, i.e., π_{x(t−1),x(t)}(x(t−1)).
Fig. 8 Model for the iterative message passing with priori information from x(t−1). a Iterative channel decoding and state estimation. b Iterative channel decoding and state estimation with the previous observation. c Iterative channel decoding and state estimation without the previous observation
The goal is to evaluate the performance of iterative channel decoding and state estimation for different realizations of the distribution π_{x(t−1),x(t)}(x(t−1)). For instance, when its covariance \(\mathbf {P}_{\pi _{x},t-1}\) equals 0×I, x(t−1) is a deterministic state estimate, and the reference model reduces to the model shown in Fig. 8 (b). If instead \(\mathbf {P}_{\pi _{x},t-1}\) is set to ∞×I, x(t−1) is unknown, and the reference model is transformed into the model shown in Fig. 8 (c).
Message passing between state estimator and channel decoder
In this section, we describe the message passing between the state estimator and the channel decoder, which is the most challenging and vital part of the evaluation of BP-based channel decoding and state estimation. As stated in Table 1, π_{y(t),b(t)}(y(t)) is assumed to be Gaussian with mean y_{π,t} and covariance matrix S_{π,t}, and λ_{b(t),y(t)}(y(t)) Gaussian with mean y_{λ,t} and covariance matrix S_{λ,t}. We can simplify the procedure of the message exchange by substituting π_{y(t),b(t)}(y(t)) and λ_{b(t),y(t)}(y(t)) with S_{π,t} and S_{λ,t}, respectively, as shown in Fig. 9. Below, we explain how the average S_{λ,t}, i.e., \( \bar {\mathbf {S}}_{\lambda, t}\), corresponding to S_{π,t} can be computed by iterative decoding.
Fig. 9 Information exchange between channel decoder and state estimator
Quantizing the message between channel decoder and state estimator
The relationships among y_{π,t}, S_{π,t}, y′(t), and the channel noise are shown in Fig. 10, where y′(t) is one realization of π_{y(t),b(t)}(y(t)). Then, y′(t) is quantized, modulated, and transmitted over the wireless channel. We denote the corresponding quantized vector, modulated vector, channel noise vector, and received vector as b′(t), c′(t), e′(t) and r′(t), respectively.
Fig. 10 Model for S_{λ,t} based on the samples y_{π,t}, S_{π,t}, y′(t), and e′(t)
The physical meaning of π_{y(t),b(t)}(y(t)) is the PDF of y(t). Hence, we can use π_{y(t),b(t)}(y(t)) as the priori information to estimate each bit of b(t) based on the quantization scheme (the scheme used for converting y(t) to b(t)), yielding L_A(y_{π,t}, S_{π,t}). Note that L_A(y_{π,t}, S_{π,t}) is determined by the PDF π_{y(t),b(t)}(y(t)) and by the quantization scheme. Thus, L_A(y_{π,t}, S_{π,t}) is a KB-dimensional vector, and its ith element can be calculated as
$$ \mathbf{L}_{A,i}\left(\mathbf{y}_{\pi, t}, \mathbf{S}_{\pi, t}\right) =\log \frac{P\left(b_{i}(t)=1|\mathbf{y}(t) \in \mathcal{N}\left(\mathbf{y}_{t}, \mathbf{y}_{\pi, t}, \mathbf{S}_{\pi, t}\right)\right)}{P\left(b_{i}(t)=0|\mathbf{y}(t) \in \mathcal{N}\left(\mathbf{y}_{t}, \mathbf{y}_{\pi, t}, \mathbf{S}_{\pi, t}\right)\right)} $$
The inputs for the channel decoder are L_A(y_{π,t}, S_{π,t}) and r′(t), as shown in Fig. 10.
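Since the equation above has no simple closed form for a general quantizer, one way to evaluate it is by Monte Carlo sampling of the prior. The sketch below does this for one scalar dimension with the least-significant-bit-first uniform quantizer assumed earlier; the function name and the mean/variance values are illustrative:

import numpy as np

rng = np.random.default_rng(2)
Q_min, Q_max, B = -432.0, 432.0, 14
Q_I = (Q_max - Q_min) / (2 ** B - 1)

def prior_bit_llrs(mu, var, n_mc=100_000):
    # Monte Carlo version of Eq. (9) for one scalar dimension y ~ N(mu, var):
    # sample the prior, quantize, and take the per-bit log-odds.
    y = rng.normal(mu, np.sqrt(var), n_mc)
    idx = np.clip(np.round((y - Q_min) / Q_I), 0, 2 ** B - 1).astype(int)
    bits = (idx[:, None] >> np.arange(B)) & 1   # n_mc x B matrix, LSB first
    p1 = bits.mean(axis=0)
    eps = 1.0 / n_mc                             # guard against log(0)
    return np.log((p1 + eps) / (1.0 - p1 + eps))

# Low-order bits come out near 0 (uninformative); high-order bits carry the prior.
print(prior_bit_llrs(mu=10.0, var=44.0))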
The feedback from the channel decoder to the state estimator is the extrinsic LLR, denoted as L_E(L_A(y_{π,t}, S_{π,t}), y′(t), e′(t)). It is a KB-dimensional vector, and its ith element equals
$$ \begin{aligned} &\mathbf{L}_{E, i}\left(\mathbf{L}_{A}\left(\mathbf{y}_{\pi, t}, \mathbf{S}_{\pi, t}\right), \mathbf{y}^{\prime}(t), \mathbf{e}^{\prime}(t)\right)\\ &\qquad\qquad=\log \frac{P\left(b_{i}(t)=1|\mathbf{L}_{A}\left(\mathbf{y}_{\pi, t}, \mathbf{S}_{\pi, t}\right), \mathbf{r}^{\prime}(t)\right)}{P\left(b_{i}(t)=0|\mathbf{L}_{A}\left(\mathbf{y}_{\pi, t}, \mathbf{S}_{\pi, t}\right), \mathbf{r}^{\prime}(t)\right)} \end{aligned} $$
Note that L_E(L_A(y_{π,t}, S_{π,t}), y′(t), e′(t)) depends on L_A(y_{π,t}, S_{π,t}), y′(t), and e′(t). Based on L_E(L_A(y_{π,t}, S_{π,t}), y′(t), e′(t)), if we assume c(t)=b(t), then, following [7], b_i(t) can be estimated as
$$ \begin{aligned} \tilde{\mathbf{b}}_{i}(t)&=\frac{1}{2}\left(\tilde{\mathbf{s}}_{i}(t)+1\right)\\ &=\frac{1}{2}\tanh\left\{\frac{L_{E,i}\left[\mathbf{L}_{A}\left(\mathbf{y}_{\pi, t},\mathbf{S}_{\pi, t} \right), \mathbf{y}^{\prime}(t), \mathbf{e}^{\prime}(t)\right]}{2}\right\} +\frac{1}{2} \end{aligned} $$
where \(\tilde {\mathbf {s}}_{i}(t)\) is the estimated modulated bit, and the MMSE in estimating b_i(t) from L_E(L_A(y_{π,t}, S_{π,t}), y′(t), e′(t)) is given as
$$ \begin{aligned} &\text{MMSE}_{\mathbf{b}_{i}(t)}\\ &\quad=\text{MMSE}\left({\mathbf{b}}_{i}(t)|\mathbf{L}_{E}\left(\mathbf{L}_{A}\left(\mathbf{y}_{\pi, t}, \mathbf{S}_{\pi, t}\right), \mathbf{y}^{\prime}(t), \mathbf{e}^{\prime}(t)\right)\right)\\ &\quad=\frac{1}{4} -\frac{1}{4}\tanh^{2}\left(\frac{L_{E,i}\left[\mathbf{L}_{A}\left(\mathbf{y}_{\pi, t},\mathbf{S}_{\pi, t} \right), \mathbf{y}^{\prime}(t), \mathbf{e}^{\prime}(t)\right] }{2}\right) \end{aligned} $$
Then, y(t) can be estimated as
$$\begin{array}{*{20}l} \tilde{\mathbf{y}}_{k}(t)=Q_{I} \sum_{i=1}^{B} \tilde{\mathbf{b}}_{(k-1)B+i}(t) 2^{i-1}+Q_{\text{min}} \end{array} $$
and the MMSE in estimating y_k(t) is given by
$$ \begin{aligned} &\text{MMSE}_{\mathbf{y}_{k}(t)}\\ &\quad=\text{MMSE}\left({\mathbf{y}}_{k}(t)|\mathbf{L}_{E}\left(\mathbf{L}_{A}\left(\mathbf{y}_{\pi, t}, \mathbf{S}_{\pi, t}\right), \mathbf{y}^{\prime}(t), \mathbf{e}^{\prime}(t)\right)\right)\\ &\quad= (Q_{I})^{2} E\left\{\sum_{i=1}^{B}[ \tilde{\mathbf{b}}_{(k-1)B+i}(t) - {\mathbf{b}}_{(k-1)B+i}(t)]2^{i-1}\right\}^{2}\\ &\quad= (Q_{I})^{2} \sum_{i=1}^{B}\text{MMSE}_{\mathbf{b}_{i}(t)} 2^{2(i-1)} \end{aligned} $$
where [Q_min, Q_max] is the quantization range and Q_I is the quantization interval, given as \(Q_{I}=\frac {Q_{\text {max}}-Q_{\text {min}}}{2^{B}-1}\). Note that when i≠j, \(E\{[ \tilde {\mathbf {b}}_{(k-1)B+i}(t) - {\mathbf {b}}_{(k-1)B+i}(t)][ \tilde {\mathbf {b}}_{(k-1)B+j}(t) - {\mathbf {b}}_{(k-1)B+j}(t)]\}=0\), which follows from the independence of the channel noise and is required for the derivation of (14).
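The chain (11)–(14) for one observation dimension can be transcribed almost literally. The sketch below assumes the least-significant-bit-first ordering implied by the weights 2^{i−1} in (13); the function name is ours:

import numpy as np

Q_min, Q_max, B = -432.0, 432.0, 14
Q_I = (Q_max - Q_min) / (2 ** B - 1)

def soft_dequantize(llr_ext):
    # llr_ext: length-B extrinsic LLRs for the bits of one observation dimension.
    b_hat = 0.5 * np.tanh(llr_ext / 2.0) + 0.5             # soft bit estimate, Eq. (11)
    mmse_b = 0.25 - 0.25 * np.tanh(llr_ext / 2.0) ** 2     # per-bit MMSE, Eq. (12)
    w = 2.0 ** np.arange(B)                                 # bit weights 2^{i-1}, i = 1..B
    y_hat = Q_I * np.sum(b_hat * w) + Q_min                 # soft observation, Eq. (13)
    mmse_y = Q_I ** 2 * np.sum(mmse_b * w ** 2)             # observation MMSE, Eq. (14)
    return y_hat, mmse_y

print(soft_dequantize(np.full(B, 4.0)))   # identical LLRs on all B bits, for illustration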
Then, for a given π_{y(t),b(t)}(y(t)) with PDF \(\mathcal {N}(\mathbf {y}_{t}, \mathbf {y}_{\pi, t}, \mathbf {S}_{\pi, t})\), a realization y′(t), and a channel noise e′(t), the feedback from the channel decoder to the node y(t) is λ_{b(t),y(t)}(y(t)), corresponding to \(\mathcal {N}(\mathbf {y}_{t}, \mathbf {y}_{\lambda, t}, \mathbf {S}_{\lambda, t})\), whose mean and covariance matrix are given as
$$ \begin{aligned} \mathbf{y}_{\lambda, t}&=\left[\tilde{\mathbf{y}}_{1}(t), \cdot\cdot\cdot,\tilde{\mathbf{y}}_{K}(t)\right]\\ \mathbf{S}_{\lambda, t}(k,k)&=\text{MMSE}_{\mathbf{y}_{k}(t)} \end{aligned} $$
Thus, we have finalized the computation of \(\mathcal {N}(\mathbf {y}_{t}, \mathbf {y}_{\lambda, t}, \mathbf {S}_{\lambda, t})\) based on one realization y′(t) obtained from the PDF of π_{y(t),b(t)}(y(t)).
Approximation for the message passing between state estimator and channel decoder
As shown in Fig. 11 (a), for a given π_{y(t),b(t)}(y(t)), the extrinsic information transferred from the channel decoder to the node y(t) can be computed as
$$ \begin{aligned} &\mathbf{S}_{\lambda, t}\left(\mathbf{y}_{\pi, t}, \mathbf{S}_{\pi, t}\right) \\ &= {E}_{\mathbf{y}^{\prime}(t)\in\pi_{\mathbf{y}(t), \mathbf{b}(t)}(\mathbf{y}(t)), \mathbf{e}^{\prime}(t) }\left[\text{MMSE}_{\mathbf{y}_{k}(t)}\right] \end{aligned} $$
Fig. 11 Framework for S_{λ,t} averaging over y′(t) and e′(t)
And, as shown in Fig. 11 (b), if we denote the PDF of y_{π,t} as \(f_{\mathbf {y}_{\pi, t}}\), we can obtain the average S_{λ,t} corresponding to S_{π,t} by integrating S_{λ,t}(y_{π,t}, S_{π,t}) over y_{π,t}:
$$ \begin{aligned} \mathbf{S}_{\lambda, t}(\mathbf{S}_{\pi, t}) &={E}_{\mathbf{y}_{\pi, t}}\left[\mathbf{S}_{\lambda, t}\left(\mathbf{y}_{\pi, t}, \mathbf{S}_{\pi, t}\right)\right]\\ &= {E}_{\mathbf{y}_{\pi, t}}\left\{{E}_{\mathbf{y}^{\prime}(t)\in\pi_{\mathbf{y}(t), \mathbf{b}(t)}(\mathbf{y}(t)), \mathbf{e}^{\prime}(t)}\left[\text{MMSE}_{\mathbf{y}_{k}(t)}\right]\right\} \end{aligned} $$
Note that the computation of S_{λ,t}(S_{π,t}) in (17) requires the averaging over a sufficient number of realizations of y′(t) (i.e., enough to represent the distribution for a fixed y_{π,t}) and the averaging over all possible y_{π,t} based on the PDF \(f_{\mathbf {y}_{\pi, t}}\) to be computed sequentially, i.e., the Forward Process and the Backward Process must be calculated sequentially. The Forward Process generally refers to the message passing from a node to its children, and the Backward Process to the message passing from a node to its parents. This arises because y_{π,t} not only impacts the priori information L_A(y_{π,t}, S_{π,t}) but also defines the set of codewords generated by quantizing y′(t). However, this computation prohibits the independent evaluation of the message passing for the Forward Process and the Backward Process.
Thus, to bypass this difficulty, we make the following approximations for the computation of S_{λ,t}(S_{π,t}), so that the Forward Process and the Backward Process can be considered separately:
$$ \begin{aligned} \mathbf{S}_{\lambda, t}(\mathbf{S}_{\pi, t}) &={E}_{\mathbf{y}_{\pi, t}}\left[\mathbf{S}_{\lambda, t}\left(\mathbf{y}_{\pi, t}, \mathbf{S}_{\pi, t}\right)\right]\\ &= {E}_{\mathbf{y}_{\pi, t}}\left\{{E}_{\mathbf{y}^{\prime}(t)\in\boldsymbol{U}(\mathbf{R}(\mathbf{y})), \mathbf{e}^{\prime}(t) }\left[\text{MMSE}_{\mathbf{y}_{k}(t)}\right]\right\}\\ &= {E}_{\mathbf{y}_{\pi, t}, \mathbf{y}^{\prime}(t)\in\boldsymbol{U}(\mathbf{R}(\mathbf{y})), \mathbf{e}^{\prime}(t) }\left[\text{MMSE}_{\mathbf{y}_{k}(t)}\right]\\ &= {E}_{\mathbf{L}_{A}\left(\mathbf{y}_{\pi, t},\mathbf{S}_{\pi, t}\right), \mathbf{y}^{\prime}(t)\in\boldsymbol{U}(\mathbf{R}(\mathbf{y})), \mathbf{e}^{\prime}(t) }\left[\text{MMSE}_{\mathbf{y}_{k}(t)}\right] \end{aligned} $$
First, we approximate the PDF of the realizations y′(t) by a uniform distribution instead of \(\mathcal {N}(\mathbf {y}_{t}, \mathbf {y}_{\pi, t}, \mathbf {S}_{\pi, t})\); we denote the uniform distribution as U(R(y)), where R(y) is the range of y(t). Under this approximation, the integration of y′(t) over U(R(y)) is equivalent to the integration over all codewords with equal probability, and hence the realization y′(t) can be viewed as being independent of π_{y(t),b(t)}(y(t)). Next, as shown in the final step of (18), the averaging over y_{π,t} can be changed to the averaging over L_A(y_{π,t}, S_{π,t}). This holds because y_{π,t} impacts only the priori information L_A(y_{π,t}, S_{π,t}), and the set L_A(y_{π,t}, S_{π,t}) includes all the information from y_{π,t}. The framework equivalent to (18) is shown in Fig. 12 (a), denoted as approximate framework 1, and the computation of S_{λ,t}(S_{π,t}) in (18) can be divided into the following three steps: (1) compute the PDF of L_A(y_{π,t}, S_{π,t}) and \(f_{\mathbf {y}_{\pi,t}}\); (2) compute the extrinsic information from the channel decoder, i.e., the PDF of L_E(L_A(y_{π,t}, S_{π,t}), y′(t), e′(t)); (3) compute S_{λ,t}(S_{π,t}) from the extrinsic information L_E(L_A(y_{π,t}, S_{π,t}), y′(t), e′(t)).
Fig. 12 Approximate frameworks for S_{λ,t} averaging over y′(t), e′(t) and y_{π,t}. a Approximate framework 1. b Approximate framework 2
Second, we can approximate the PDFs of both L_A(y_{π,t}, S_{π,t}) and L_E(L_A(y_{π,t}, S_{π,t}), y′(t), e′(t)) by Gaussian distributions with zero means. Then, only the covariance matrices need to be passed through the decoding process. Note that L_E(L_A(y_{π,t}, S_{π,t}), y′(t), e′(t)) for different bits might differ greatly; this Gaussian approximation therefore requires a good interleaver that can evenly distribute L_E(L_A(y_{π,t}, S_{π,t}), y′(t), e′(t)). Furthermore, if we represent the LLRs by mutual information or MMSE [7], as shown in Fig. 12 (b), S_{λ,t}(S_{π,t}) can be computed in the following three steps: (1) compute the mutual information I_A based on the PDF of L_A(y_{π,t}, S_{π,t}) corresponding to S_{π,t} and \(f_{\mathbf {y}_{\pi,t}}\); (2) compute the extrinsic information from the channel decoder, i.e., the mutual information I_E or \(\text {MMSE}_{\text {ext}}^{B}\), based on the PDF of L_E(L_A(y_{π,t}, S_{π,t}), y′(t), e′(t)); (3) compute S_{λ,t}(S_{π,t}) from the extrinsic information I_E or \(\text {MMSE}_{\text {ext}}^{B}\) of the channel decoder.
The MSE transfer chart for channel decoding and state estimation
In this section, we show how to obtain the MSE transfer chart for evaluating BP-based sequential channel decoding and state estimation, as shown in Fig. 7 (b), and BP-based iterative channel decoding and state estimation, as shown in Fig. 8 (a).
The MSE transfer chart for sequential channel decoding and state estimation
The model corresponding to Fig. 7 (b) for the MSE transfer chart of the sequential message passing is illustrated in Fig. 13 (a). The MSE transfer chart for BP-based sequential channel decoding and state estimation shows the curve of \(\text {MMSE}_{\text {ap}}^{S}\) versus \(\text {MMSE}_{\text {ext}}^{S}\). As shown in Fig. 13 (a), \(\text {MMSE}_{\text {ap}}^{S}\) is used to approximate S_{π,t} (starting from node y(t)), i.e.,
$$ \mathbf{S}_{\pi, t}=\text{MMSE}_{\text{ap}}^{S}\mathbf{I}_{K} $$
Fig. 13 The MSE transfer chart for the sequential message passing and the iterative message passing. a Sequential message passing. b Iterative message passing
and the PDF of y_{π,t}, i.e., \(f_{\mathbf {y}_{\pi, t}}\), is assumed to remain the same in all time slots and all iterations. Similarly, \(\text {MMSE}_{\text {ext}}^{S}\) represents the average S_{π,t+1} corresponding to S_{π,t} and y_{π,t} with the PDF \(f_{\mathbf {y}_{\pi, t}}\). Compared with the model shown in Fig. 7 (b), the MSE transfer chart has two differences. First, the starting node for the message passing is changed from x(t−1) to y(t). The reason for this change is to keep alignment with the structure of the MSE transfer chart for BP-based iterative channel decoding and state estimation. Note that since no extra information is added from node x(t−1) to y(t), x(t−1) and y(t) provide the same amount of information for channel decoding in the information-theoretic sense; from this point of view, the two models are equivalent. The second modification is that the scalar measures \(\text {MMSE}_{\text {ap}}^{S}\) and \(\text {MMSE}_{\text {ext}}^{S}\), rather than matrix measures, are used to draw the MSE transfer chart and evaluate the performance of the sequential message passing. The value of \(\text {MMSE}_{\text {ext}}^{S}\) can be obtained by applying the results in Table 1; the message passing flow is as follows. (1) y(t)→b(t): we have \(\mathbf {S}_{\pi, t}=\text {MMSE}_{\text {ap}}^{S}\mathbf {I}_{K}\), and the PDF of y_{π,t} is \(f_{\mathbf {y}_{\pi, t}}\). (2) b(t)→y(t): S_{λ,t}; the detailed derivation is provided in Section 4. (3) y(t)→x(t): we have \( \mathbf {P}_{\lambda _{y},t} = \mathbf {C}^{-1} (\mathbf {S}_{\lambda, t}+\mathbf {\Sigma }_{w})(\mathbf {C}^{-1})^{T} \). (4) x(t)→x(t+1): we have \( \mathbf {P}_{\pi _{x},t}=(\mathbf {P}_{l,t}^{-1}+\mathbf {P}_{\lambda _{y},t}^{-1})^{-1} \), where \( \mathbf {P}_{l, t} = \mathbf {A} \mathbf {P}_{\pi _{x},t-1} \mathbf {A}^{T} +\mathbf {\Sigma }_{n}\). (5) x(t+1)→y(t+1): we have \( \mathbf {P}_{\pi _{y},t+1}=\mathbf {A} \mathbf {P}_{\pi _{x},t} \mathbf {A}^{T} +\mathbf {\Sigma }_{n} \). (6) y(t+1)→b(t+1): we have \( \mathbf {S}_{\pi,t+1}=\mathbf {C} \mathbf {P}_{\pi _{y},t+1} \mathbf {C}^{T} +\mathbf {\Sigma }_{w} \). Since S_{π,t+1} is a matrix, \(\text {MMSE}_{\text {ext}}^{S}\) is calculated by solving the following equation:
$$ I_{A}(\text{MMSE}_{\text{ext}}^{S}\mathbf{I}_{K})=I_{A}(\mathbf{S}_{\pi,t+1}) $$
where we first obtain the priori information I_A for each ith diagonal variance of S_{π,t+1}, i∈{1,..,K}. Next, to satisfy (20), we compute the average of all the variances as \(\text {MMSE}_{\text {ext}}^{S}\). An example of this calculation will be illustrated in Section 6.1 in Fig. 15. The physical meaning of (20) is that \(\text {MMSE}_{\text {ext}}^{S}\) is the value such that \(\text {MMSE}_{\text {ext}}^{S}\mathbf {I}_{K}\) provides the same amount of priori information for the channel decoder as S_{π,t+1}.
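Steps 2–6 above amount to a Kalman-like covariance recursion. The sketch below implements one time slot of it, assuming a square, invertible C as step 3 requires; the function name is ours:

import numpy as np

def sequential_covariances(A, C, Sigma_n, Sigma_w, P_prev, S_lambda):
    # One slot of the covariance recursion in steps 2-6 above.
    C_inv = np.linalg.inv(C)
    P_lambda_y = C_inv @ (S_lambda + Sigma_w) @ C_inv.T           # step 3: y(t) -> x(t)
    P_l = A @ P_prev @ A.T + Sigma_n                               # prediction from x(t-1)
    P_pi_x = np.linalg.inv(np.linalg.inv(P_l) + np.linalg.inv(P_lambda_y))  # step 4: fuse
    P_pi_y = A @ P_pi_x @ A.T + Sigma_n                            # step 5: x(t+1) -> y(t+1)
    S_pi_next = C @ P_pi_y @ C.T + Sigma_w                         # step 6: prior for b(t+1)
    return P_pi_x, S_pi_next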
Note that since \(\mathbf{S}_{\pi,t+1}\) is a matrix, \(\text{MMSE}_{\text{ext}}^{S}\) is calculated by solving the following equation:

$$ I_{A}(\text{MMSE}_{\text{ext}}^{S}\mathbf{I}_{K})=I_{A}(\mathbf{S}_{\pi,t+1}) \tag{20} $$

where we first obtain the a priori information \(I_{A}\) for each \(i\)th diagonal variance of \(\mathbf{S}_{\pi,t+1}\), \(i\in\{1,\ldots,K\}\). Next, to achieve (20), we compute the average over all the variances and take the result as \(\text{MMSE}_{\text{ext}}^{S}\). An example of the calculation is illustrated in Section 6.1 (Fig. 15). The physical meaning of (20) is that \(\text{MMSE}_{\text{ext}}^{S}\) is the value such that \(\text{MMSE}_{\text{ext}}^{S}\mathbf{I}_{K}\) provides the same amount of a priori information for the channel decoder as \(\mathbf{S}_{\pi,t+1}\).

The MSE transfer chart for BP-based iterative channel decoding and state estimation

In this section, we illustrate how the MSE transfer chart is modeled in order to evaluate BP-based iterative channel decoding and state estimation, as shown in Fig. 8(a). The corresponding model for the MSE transfer chart is shown in Fig. 13(b), and it includes two flows of message passing: one from \(\mathbf{y}(t)\) to \(\mathbf{y}(t+1)\) in Fig. 14(a) and the other from \(\mathbf{y}(t+1)\) to \(\mathbf{y}(t)\) in Fig. 14(b).

The flow from \(\mathbf{y}(t)\) to \(\mathbf{y}(t+1)\): the starting node is \(\mathbf{y}(t)\), and \(\text{MMSE}_{\text{ap}}^{t}\) represents the approximated \(\mathbf{S}_{\pi,t}\), i.e.,

$$ \mathbf{S}_{\pi, t} = \text{MMSE}_{\text{ap}}^{t} \mathbf{I}_{K} $$

and the PDF of \(\mathbf{y}_{\pi,t}\), i.e., \(f_{\mathbf{y}_{\pi,t}}\), is assumed to remain the same in all time slots and all iterations and is denoted by \(f_{\mathbf{y}_{\pi}}\). \(\text{MMSE}_{\text{ext}}^{t}\) represents the average \(\mathbf{S}_{\pi,t+1}\) corresponding to \(\mathbf{S}_{\pi, t}=\text{MMSE}_{\text{ap}}^{t}\mathbf{I}_{K}\) and \(\mathbf{y}_{\pi,t}\) with the PDF \(f_{\mathbf{y}_{\pi}}\).

Fig. 14 Message passing over two time slots. a Time slot t to t+1. b Time slot t+1 to t

The flow from \(\mathbf{y}(t+1)\) to \(\mathbf{y}(t)\): the starting node is \(\mathbf{y}(t+1)\), and \(\text{MMSE}_{\text{ap}}^{t+1}\) represents the approximated \(\mathbf{S}_{\pi,t+1}\), i.e.,

$$ \mathbf{S}_{\pi, t+1} = \text{MMSE}_{\text{ap}}^{t+1} \mathbf{I}_{K} $$

and the PDF of \(\mathbf{y}_{\pi,t+1}\) is \(f_{\mathbf{y}_{\pi}}\). \(\text{MMSE}_{\text{ext}}^{t+1}\) represents the average \(\mathbf{S}_{\pi,t}\) corresponding to \(\mathbf{S}_{\pi, t+1}=\text{MMSE}_{\text{ap}}^{t+1} \mathbf{I}_{K}\) and \(\mathbf{y}_{\pi,t+1}\) with the PDF \(f_{\mathbf{y}_{\pi}}\).

Then, we obtain two curves: one curve with \(\text{MMSE}_{\text{ap}}^{t}\) versus \(\text{MMSE}_{\text{ext}}^{t}\) for the flow from \(\mathbf{y}(t)\) to \(\mathbf{y}(t+1)\), and the other, flipped curve with \(\text{MMSE}_{\text{ext}}^{t+1}\) versus \(\text{MMSE}_{\text{ap}}^{t+1}\) for the flow from \(\mathbf{y}(t+1)\) to \(\mathbf{y}(t)\). Finally, by following the steps in Table 1, namely (1) \(\mathbf{y}(t)\rightarrow\mathbf{b}(t)\), (2) \(\mathbf{b}(t)\rightarrow\mathbf{y}(t)\), (3) \(\mathbf{y}(t)\rightarrow\mathbf{x}(t)\), (4) \(\mathbf{x}(t)\rightarrow\mathbf{x}(t+1)\), (5) \(\mathbf{x}(t+1)\rightarrow\mathbf{y}(t+1)\), (6) \(\mathbf{y}(t+1)\rightarrow\mathbf{b}(t+1)\), (7) \(\mathbf{y}(t+1)\rightarrow\mathbf{b}(t+1)\), (8) \(\mathbf{b}(t+1)\rightarrow\mathbf{y}(t+1)\), (9) \(\mathbf{y}(t+1)\rightarrow\mathbf{x}(t+1)\), (10) \(\mathbf{x}(t+1)\rightarrow\mathbf{x}(t)\), (11) \(\mathbf{x}(t)\rightarrow\mathbf{y}(t)\), and (12) \(\mathbf{y}(t)\rightarrow\mathbf{b}(t)\), we can calculate the values of \(\text{MMSE}_{\text{ext}}^{t}\) and \(\text{MMSE}_{\text{ext}}^{t+1}\) from \(I_{A}(\text{MMSE}_{\text{ext}}^{t}\mathbf{I}_{K})=I_{A}(\mathbf{S}_{\pi,t+1})\) in step 5 and \(I_{A}(\text{MMSE}_{\text{ext}}^{t+1}\mathbf{I}_{K})=I_{A}(\mathbf{S}_{\pi,t})\) in step 11, respectively.

We consider an electric generator dynamic system for verification. Each dimension of the observation \(\mathbf{y}(t)\) is quantized with 14 bits, and the dynamic range for quantization is \([-432, 432]\). A \(\frac{1}{2}\)-rate recursive systematic convolutional (RSC) code is used as the channel coding scheme, with code generator \(\mathbf{g}=[1,1,1;\,1,0,1]\).
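The generator \(\mathbf{g}=[1,1,1;\,1,0,1]\) corresponds to the memory-2, octal (7,5) RSC code. As a sketch, assuming the conventional reading in which the first row is the feedback polynomial and the second the feedforward polynomial, a rate-1/2 encoder is:

```python
def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional encoder for g = [1,1,1; 1,0,1]
    (feedback 1 + D + D^2, feedforward 1 + D^2). Returns the systematic and
    parity bit streams."""
    s1 = s2 = 0                  # shift-register state
    systematic, parity = [], []
    for u in bits:
        a = u ^ s1 ^ s2          # feedback taps [1, 1, 1]
        p = a ^ s2               # feedforward taps [1, 0, 1]
        systematic.append(u)
        parity.append(p)
        s2, s1 = s1, a           # shift the register
    return systematic, parity

print(rsc_encode([1, 0, 1, 1, 0, 0, 1]))
```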
The PDF of \(\mathbf{y}_{\pi,t}\), i.e., \(f_{\mathbf{y}_{\pi}}\), is Gaussian with zero mean, and the covariance matrix of \(\mathbf{y}_{\pi,t}\) is obtained from the stationary distribution of \(\mathbf{y}(t)\), namely

$$ f_{\mathbf{y}_{\pi}} = \mathcal{N} (\mathbf{y}, \mathbf{0}, \mathbf{\Sigma}_{\mathbf{y}_{\pi}}) $$

where \(\mathbf{\Sigma}_{\mathbf{y}_{\pi}}\) from the dynamic system is

$$ \mathbf{\Sigma}_{\mathbf{y}_{\pi}}= \left[\begin{array}{ccccccc} 2857 & 0 &0 & 0 & 0& 0 & 0\\ 0& 6056 & 0 & 0 & 0 &0 &0 \\ 0 & 0 &747 & 0 & 0 & 0 &0 \\ 0& 0 & 0 & 1624 & 0 & 0 &0 \\ 0& 0 & 0 & 0& 44 & 0 &0 \\ 0 & 0 & 0 & 0 & 0 & 10 &0 \\ 0 & 0 &0 & 0& 0 & 0 & 0.79 \end{array}\right] $$
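For a linear Gaussian model \(\mathbf{x}(t+1)=\mathbf{A}\mathbf{x}(t)+\mathbf{n}(t)\), \(\mathbf{y}(t)=\mathbf{C}\mathbf{x}(t)+\mathbf{w}(t)\), such a stationary covariance can be computed from a discrete Lyapunov equation. A minimal sketch; the matrices below are small placeholders, not the generator-system values above:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def stationary_output_covariance(A, C, Sigma_n, Sigma_w):
    """Solve P = A P A^T + Sigma_n for the stationary state covariance
    (A must be stable), then propagate it through the observation model:
    Sigma_y = C P C^T + Sigma_w."""
    P = solve_discrete_lyapunov(A, Sigma_n)
    return C @ P @ C.T + Sigma_w

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])      # placeholder stable dynamics
C = np.eye(2)
Sigma_n = 0.10 * np.eye(2)
Sigma_w = 0.01 * np.eye(2)
print(stationary_output_covariance(A, C, Sigma_n, Sigma_w))
```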
In this section, we show the performance results of the proposed channel decoding and state estimation algorithm, especially the message passing between the state estimator and the channel decoder. In addition, we illustrate how much gain can be obtained by using the redundancy of system dynamics to assist channel decoding. To this end, the approximate framework 2 shown in Fig. 12(b) is considered, except that we set \(\mathbf{S}_{\pi,t}\) and \(\mathbf{S}_{\lambda,t}\) to \(\text{MMSE}_{\text{ap}}^{Y}\mathbf{I}_{K}\) and \(\text{MMSE}_{\text{ext}}^{Y}\mathbf{I}_{K}\), respectively.

Figure 15 demonstrates the a priori mutual information \(I_{A}\) provided by different values of \(\text{MMSE}_{\text{ap}}^{Y}\). The curve labeled \(D_i\), corresponding to \(f_{\mathbf{y}_{\pi},i}\), follows a zero-mean Gaussian distribution whose variance is the \(i\)th diagonal element of \(\mathbf{\Sigma}_{\mathbf{y}_{\pi}}\). The curve labeled mean, corresponding to \(f_{\mathbf{y}_{\pi}}\), is a zero-mean Gaussian distribution with covariance matrix \(\mathbf{\Sigma}_{\mathbf{y}_{\pi}}\); its \(I_{A}\) is computed as the average of the \(I_{A}\) curves labeled \(D_1,\ldots,D_7\). When \(\text{MMSE}_{\text{ap}}^{Y}\) equals 1.0e−1, 1.1, 1.0e+1, and 1.0e+2, \(I_{A}\) equals 0.55, 0.4, 0.25, and 0.15, respectively, and when \(\text{MMSE}_{\text{ap}}^{Y}\) equals 1.0e+5, the prediction of \(\mathbf{y}(t)\) cannot provide any a priori information for the channel decoder, since \(I_{A}\) equals 0. Note that although \(I_{A}\) can be as high as 0.8 when \(\text{MMSE}_{\text{ap}}^{Y}\) equals 1.0e−3, this is not achievable, as the minimum value of \(\text{MMSE}_{\text{ap}}^{Y}\) is limited by the covariance matrices of the state dynamics and the system observation noise, i.e., \(\mathbf{\Sigma}_{n}\) and \(\mathbf{\Sigma}_{w}\).

Fig. 15 A priori mutual information for b(t) from y(t) with different \(\text{MMSE}_{\text{ap}}^{Y}\)

Figure 16 illustrates the relationship between \(\text{MMSE}_{\text{ap}}^{Y}\) and \(\text{MMSE}_{\text{ext}}^{Y}\). As shown in Fig. 15, the prediction of \(\mathbf{y}(t)\) with \(\text{MMSE}_{\text{ap}}^{Y}=\)1.0e+5 does not provide any a priori information for \(\mathbf{b}(t)\). Therefore, when \(\text{MMSE}_{\text{ap}}^{Y}=\)1.0e+5, the corresponding \(\text{MMSE}_{\text{ext}}^{Y}\) receives no contribution from either the a priori information of \(\mathbf{x}(t-1)\) or the extrinsic information from the channel decoder at time slot \(t\). The gain of \(\text{MMSE}_{\text{ext}}^{Y}\) from \(\text{MMSE}_{\text{ap}}^{Y}\) can therefore be obtained by comparing it with the value of \(\text{MMSE}_{\text{ext}}^{Y}\) corresponding to \(\text{MMSE}_{\text{ap}}^{Y}=\)1.0e+5. Following this approach, the gains of \(\text{MMSE}_{\text{ext}}^{Y}\) for the different values of \(\text{MMSE}_{\text{ap}}^{Y}\) and \(\frac{E_{b}}{N_{0}}\) in Fig. 16 are shown in Fig. 17.

Fig. 16 \(\text{MMSE}_{\text{ext}}^{Y}\) with different \(\text{MMSE}_{\text{ap}}^{Y}\) and \(\frac{E_{b}}{N_{0}}\)

Fig. 17 Gain of \(\text{MMSE}_{\text{ext}}^{Y}\) with different \(\text{MMSE}_{\text{ap}}^{Y}\) and \(\frac{E_{b}}{N_{0}}\)

Performance analysis for sequential channel decoding and state estimation

In this section, we show how to use the proposed MSE transfer chart to evaluate the performance of BP-based sequential channel decoding and state estimation over multiple time slots under different channel conditions, where the different channel conditions are modeled by different \(\frac{E_{b}}{N_{0}}\). Figure 18 shows the relationship between \(\text{MMSE}_{\text{ap}}^{S}\) and \(\text{MMSE}_{\text{ext}}^{S}\) for different \(\frac{E_{b}}{N_{0}}\).

Fig. 18 Relationship between \(\text{MMSE}_{\text{ext}}^{S}\) and \(\text{MMSE}_{\text{ap}}^{S}\) for the sequential message passing with different \(\frac{E_{b}}{N_{0}}\)

From this figure, we have the following observations:

1. When \(\text{MMSE}_{\text{ap}}^{S}\) is less than 1, the corresponding values of \(\text{MMSE}_{\text{ext}}^{S}\) for all \(\frac{E_{b}}{N_{0}}\) are equal. Note that \(\text{MMSE}_{\text{ap}}^{S}\) models the amount of a priori information from \(\mathbf{x}(t-1)\): the smaller \(\text{MMSE}_{\text{ap}}^{S}\) is, the more a priori information is attained from \(\mathbf{x}(t-1)\). Although the extrinsic information from the channel decoder at time slot \(t\) can also contribute to the prediction of \(\mathbf{y}(t+1)\), with a small \(\text{MMSE}_{\text{ap}}^{S}\) the a priori information from \(\mathbf{x}(t-1)\) dominates the prediction of \(\mathbf{y}(t+1)\). Therefore, the difference in gains from the channel decoder at different \(\frac{E_{b}}{N_{0}}\) is not visible in \(\text{MMSE}_{\text{ext}}^{S}\).

2. With a large \(\text{MMSE}_{\text{ap}}^{S}\), the higher \(\frac{E_{b}}{N_{0}}\) is, the lower \(\text{MMSE}_{\text{ext}}^{S}\) is. With increasing \(\text{MMSE}_{\text{ap}}^{S}\), \(\mathbf{x}(t-1)\) provides less a priori information for the prediction of \(\mathbf{y}(t)\), and the extrinsic information from the channel decoder becomes dominant in the prediction of \(\mathbf{y}(t)\). The channel gains at different \(\frac{E_{b}}{N_{0}}\) then become visible.

3. When \(\text{MMSE}_{\text{ap}}^{S}\) is around 1.0e−2, the values of \(\text{MMSE}_{\text{ext}}^{S}\) are flat. This arises because, based on the information from the time slots prior to \(t+1\), the prediction of \(\mathbf{y}(t)\) is not reliable, since the state dynamics \(\mathbf{n}(t)\) and the system observation noise \(\mathbf{w}(t)\) are not predictable. As a result, the minimum value of \(\text{MMSE}_{\text{ext}}^{S}\) is limited by the covariance matrices of \(\mathbf{n}(t)\) and \(\mathbf{w}(t)\), i.e., \(\mathbf{\Sigma}_{n}\) and \(\mathbf{\Sigma}_{w}\), respectively.

Following the same idea as the EXIT chart and the MSE transfer chart for iterative channel decoding, we use the MSE transfer chart to evaluate the performance of BP-based sequential channel decoding and state estimation over multiple time slots.
In the MSE transfer chart, we have two curves: one is \(\text{MMSE}_{\text{ap}}^{S,t}\) versus \(\text{MMSE}_{\text{ext}}^{S,t}\), i.e., the curve of \(\text{MMSE}_{\text{ap}}^{S}\) versus \(\text{MMSE}_{\text{ext}}^{S}\) at time slot \(t\); the other is \(\text{MMSE}_{\text{ext}}^{S,t+1}\) versus \(\text{MMSE}_{\text{ap}}^{S,t+1}\), i.e., the flipped curve of \(\text{MMSE}_{\text{ap}}^{S}\) versus \(\text{MMSE}_{\text{ext}}^{S}\) at time slot \(t+1\). The MSE transfer chart for \(\frac{E_{b}}{N_{0}}=5\) dB is shown in Fig. 19. There is one crossing/convergence point between these two curves, which means that decoding cannot proceed successfully from \(I_{A}=0\) to \(I_{A}=1\). The black arrows show the trace of the sequential message passing starting from \(\text{MMSE}_{\text{ap}}^{S}=\)1.0e−2 at time slot \(t\). We can see that the trace moves toward the convergence point at time slots \(t+1\) and \(t+2\). The trace shows that even if we have perfect knowledge of \(\mathbf{y}(t)\) at time slot \(t\), the performance of state estimation degrades toward the crossing point after a few time slots. The red arrows show the other trace, starting from \(\text{MMSE}_{\text{ap}}^{S}=\)1.0e+5, at which there is no a priori information from the prediction of \(\mathbf{y}(t)\). Similarly, the trace moves toward the convergence point at time slots \(t+1, t+2, \ldots\). In contrast to the above, this trace shows that even though there is no a priori information about \(\mathbf{y}(t)\) at time slot \(t\), the performance of state estimation improves over a few time slots and finally reaches the convergence point. A toy sketch of this fixed-point behavior is given below.

Fig. 19 The MSE transfer chart for sequential channel decoding and state estimation: \(\frac{E_{b}}{N_{0}}=5\) dB

Figure 20 illustrates the MSE transfer chart for \(\frac{E_{b}}{N_{0}}=3\) dB, with one crossing point at \(\text{MMSE}_{\text{ap}}^{S,t}=\)1.0e−0.15. In the range [1.0e−0.15, 1.0e+2], the gap between the curve (\(\text{MMSE}_{\text{ap}}^{S,t}\) versus \(\text{MMSE}_{\text{ext}}^{S,t}\)) and the flipped curve (\(\text{MMSE}_{\text{ext}}^{S,t+1}\) versus \(\text{MMSE}_{\text{ap}}^{S,t+1}\)) is very small. This means that convergence is slow: it takes many time slots to reach the convergence point when the state estimator has no a priori information about \(\mathbf{y}(t)\) at time slot \(t\).

Fig. 20 The MSE transfer chart for sequential channel decoding and state estimation: \(\frac{E_{b}}{N_{0}}=3\) dB

Figure 21 shows the MSE transfer chart for \(\frac{E_{b}}{N_{0}}=1\) dB, with one crossing point at \(\text{MMSE}_{\text{ap}}^{S,t}=\)1.0e−0.15. Note that after the crossing point the curve (\(\text{MMSE}_{\text{ap}}^{S,t}\) versus \(\text{MMSE}_{\text{ext}}^{S,t}\)) almost overlaps with the flipped curve (\(\text{MMSE}_{\text{ext}}^{S,t+1}\) versus \(\text{MMSE}_{\text{ap}}^{S,t+1}\)). This indicates two possibilities when the state estimator has no a priori information about \(\mathbf{y}(t)\) at time slot \(t\): either it retains a high MSE and cannot converge to the crossing point, or it takes many time slots to converge to the crossing point. Compared with the previous two results, for \(\frac{E_{b}}{N_{0}}=1\) dB the MSEs of estimating \(\mathbf{x}(t)\) and \(\mathbf{y}(t)\) are much higher than those for \(\frac{E_{b}}{N_{0}}\) equal to 3 and 5 dB. Similar results hold for the case \(\frac{E_{b}}{N_{0}}=-1\) dB.
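The traces in Figs. 19 through 21 are fixed-point iterations: each time slot feeds \(\text{MMSE}_{\text{ext}}^{S,t}\) back in as \(\text{MMSE}_{\text{ap}}^{S,t+1}\). With a toy monotone transfer function standing in for the measured curve (an assumption for illustration only), the march toward the crossing point looks like this:

```python
def toy_transfer(mmse_ap):
    """Placeholder monotone transfer curve MMSE_ext = f(MMSE_ap), saturating
    at both ends like the measured curves (floor set by the process/observation
    noise, ceiling by the channel). Stands in for the empirical chart only."""
    lo, hi = 1e-1, 1e2
    return lo + (hi - lo) * (mmse_ap / (mmse_ap + 10.0))

def trace(mmse_ap0, n_slots=10):
    """Sequential trace: MMSE_ap at slot t+1 is MMSE_ext at slot t."""
    m, path = mmse_ap0, [mmse_ap0]
    for _ in range(n_slots):
        m = toy_transfer(m)
        path.append(m)
    return path

# Starting with perfect knowledge (small MMSE) the estimate degrades; starting
# with none (huge MMSE) it improves. Both drift to the same convergence point.
print(trace(1e-2))
print(trace(1e5))
```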
Performance analysis for iterative channel decoding and state estimation

In this section, we show how to use the proposed MSE transfer chart to evaluate the performance of BP-based iterative channel decoding and state estimation within two time slots. As stated earlier, the node \(\mathbf{x}(t-1)\) and the node \(\mathbf{y}(t)\) provide the same amount of information in the information-theoretic sense. From this point of view, the a priori information at the node \(\mathbf{y}(t)\), the same as at the node \(\mathbf{x}(t-1)\), is \( \pi_{\mathbf{x}(t-1), \mathbf{x}(t)}(\mathbf{x}(t-1))= \mathcal{N}(\mathbf{x}_{t-1}, \mathbf{x}_{\pi_{x}, t-1}, P_{\pi_{x}, t-1}) \) with \(\mathbf{x}_{\pi_{x}, t-1}=\mathbf{0}\) and \(P_{\pi_{x}, t-1}=\text{MMSE}_{\text{ap}}^{t-1,x}\mathbf{I}_{K}\). With this a priori information as the initial input for iterative decoding, the gains of \(\text{MMSE}_{\text{ext}}^{t}\) (based on \(\text{MMSE}_{\text{ap}}^{t}\)) with different \(\text{MMSE}_{\text{ap}}^{t-1,x}\) and \(\frac{E_{b}}{N_{0}}=5\) dB are illustrated in Fig. 22. From this figure, we have the following observations:

1. With the same \(\text{MMSE}_{\text{ap}}^{t-1,x}\), the higher \(\text{MMSE}_{\text{ap}}^{t}\) is, the lower the gain of \(\text{MMSE}_{\text{ext}}^{t}\) that can be obtained from \(\text{MMSE}_{\text{ap}}^{t}\).

2. The higher \(\text{MMSE}_{\text{ap}}^{t-1,x}\) is, the higher the gain of \(\text{MMSE}_{\text{ext}}^{t}\) that can be obtained from \(\text{MMSE}_{\text{ap}}^{t}\). This holds because the prediction of \(\mathbf{y}(t+1)\) is contributed by both the a priori information from \(\mathbf{x}(t-1)\) and the extrinsic information from the channel decoder at time slot \(t\). When \(\text{MMSE}_{\text{ap}}^{t-1,x}\) is small, the information from \(\mathbf{x}(t-1)\) dominates the prediction of \(\mathbf{y}(t+1)\); although the prediction of \(\mathbf{y}(t)\) can increase the amount of extrinsic information from the channel decoder at time slot \(t\), it cannot contribute much to the gain of \(\text{MMSE}_{\text{ext}}^{t}\), since the a priori information from \(\mathbf{x}(t-1)\) is dominant.

Fig. 22 Gain of \(\text{MMSE}_{\text{ext}}^{t}\) from \(\text{MMSE}_{\text{ap}}^{t}\) with different \(\text{MMSE}_{\text{ap}}^{t-1,x}\) and \(\frac{E_{b}}{N_{0}}=5\) dB

Following the same idea, the gain of \(\text{MMSE}_{\text{ext}}^{t+1}\) (based on \(\text{MMSE}_{\text{ap}}^{t+1}\)) is obtained and shown in Fig. 23. Similar observations are made:

1. With the same \(\text{MMSE}_{\text{ap}}^{t-1,x}\), the higher \(\text{MMSE}_{\text{ap}}^{t+1}\) is, the lower the gain of \(\text{MMSE}_{\text{ext}}^{t+1}\) that can be obtained from \(\text{MMSE}_{\text{ap}}^{t+1}\).

2. The higher \(\text{MMSE}_{\text{ap}}^{t-1,x}\) is, the higher the gain of \(\text{MMSE}_{\text{ext}}^{t+1}\) that can be obtained from \(\text{MMSE}_{\text{ap}}^{t+1}\).

Fig. 23 Gain of \(\text{MMSE}_{\text{ext}}^{t+1}\) from \(\text{MMSE}_{\text{ap}}^{t+1}\) with different \(\text{MMSE}_{\text{ap}}^{t-1,x}\) and \(\frac{E_{b}}{N_{0}}=5\) dB

Then, the MSE transfer chart for BP-based iterative channel decoding and state estimation is formed by the curve (\(\text{MMSE}_{\text{ap}}^{t}\) versus \(\text{MMSE}_{\text{ext}}^{t}\)) and the flipped curve (\(\text{MMSE}_{\text{ap}}^{t+1}\) versus \(\text{MMSE}_{\text{ext}}^{t+1}\)); the results with different \(\text{MMSE}_{\text{ap}}^{t-1,x}\) and \(\frac{E_{b}}{N_{0}}=5\) dB are shown in Fig. 24. In the following, we explain our observations based on the case where no a priori information is considered.
1. BP-based iterative channel decoding can decrease \(\text{MMSE}_{\text{ext}}^{t}\) and help improve the estimation of \(\mathbf{x}(t+1)\). At the crossing point between the curve (\(\text{MMSE}_{\text{ap}}^{t}\) versus \(\text{MMSE}_{\text{ext}}^{t}\)) and the flipped curve (\(\text{MMSE}_{\text{ap}}^{t+1}\) versus \(\text{MMSE}_{\text{ext}}^{t+1}\)), the values of \(\text{MMSE}_{\text{ap}}^{t}\) and \(\text{MMSE}_{\text{ext}}^{t}\) are 1.0e+1.33 and 1.0e+1.25, respectively, while with no a priori information the values of \(\text{MMSE}_{\text{ap}}^{t}\) and \(\text{MMSE}_{\text{ext}}^{t}\) are 1.0e+5 and 1.0e+1.43, respectively. Therefore, the total gain of \(\text{MMSE}_{\text{ext}}^{t}\) for BP-based iterative channel decoding and state estimation is \(10\times(1.43-1.25)=1.8\) dB.

Fig. 24 MSE-based transfer chart for iterative channel decoding and system state estimation with different \(\text{MMSE}_{\text{ap}}^{t-1,x}\) and \(\frac{E_{b}}{N_{0}}=5\) dB

2. The gain of \(\text{MMSE}_{\text{ext}}^{t}\) after only three steps is close to the gain of \(\text{MMSE}_{\text{ext}}^{t}\) at the convergence point. The trace of BP-based iterative channel decoding and state estimation over these three steps is shown by the blue arrows in the figure; the details are listed below (the dB arithmetic is reproduced in the snippet that follows):

(1) From \(\mathbf{y}(t)\) to \(\mathbf{y}(t+1)\): as there is no a priori information, the starting point is \(\text{MMSE}_{\text{ap}}^{t}=\)1.0e+5, and the corresponding value of \(\text{MMSE}_{\text{ext}}^{t}\) is 1.0e+1.43.
(2) From \(\mathbf{y}(t+1)\) to \(\mathbf{y}(t)\): we have \(\text{MMSE}_{\text{ap}}^{t+1}=\)1.0e+1.43, and the corresponding value of \(\text{MMSE}_{\text{ext}}^{t+1}\) is 1.0e+1.34.
(3) From \(\mathbf{y}(t)\) to \(\mathbf{y}(t+1)\): we have \(\text{MMSE}_{\text{ap}}^{t}=\)1.0e+1.34, and the corresponding value of \(\text{MMSE}_{\text{ext}}^{t}\) is 1.0e+1.255.

Compared with step (1), a gain of \(10\times(1.43-1.255)=1.75\) dB is obtained for \(\text{MMSE}_{\text{ext}}^{t}\), versus 1.8 dB at the convergence point. Thus, the gain loss for BP-based iterative channel decoding and state estimation with these three steps is only \(1.8-1.75=0.05\) dB. In summary, we can implement BP-based iterative channel decoding and state estimation with the above three steps to obtain a gain of \(\text{MMSE}_{\text{ext}}^{t}\) close to that at the convergence point.

Note that when \(\text{MMSE}_{\text{ap}}^{t-1,x}\) equals 1 or 10, the BP-based iterative channel decoding and state estimation scheme cannot improve the performance of state estimation. This is because, with a small \(\text{MMSE}_{\text{ap}}^{t-1,x}\), the a priori information from \(\mathbf{x}(t-1)\) dominates the estimation of \(\mathbf{y}(t+1)\), so the contribution of the prediction of \(\mathbf{y}(t)\) in the channel decoder at time slot \(t\) to predicting \(\mathbf{y}(t+1)\) is negligible. The MSE transfer charts for \(\frac{E_{b}}{N_{0}}=3\) and 1 dB show similar behavior to that for \(\frac{E_{b}}{N_{0}}=5\) dB, so we do not list those results here.
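The dB figures above are differences of base-10 exponents scaled by 10, since the MMSE values are quoted as powers of ten. Reproducing the arithmetic:

```python
import math

def gain_db(mmse_ref, mmse_new):
    """Gain in dB between two MMSE values (10*log10 of their ratio)."""
    return 10.0 * math.log10(mmse_ref / mmse_new)

no_prior   = 10 ** 1.43    # MMSE_ext^t with no a priori information
three_step = 10 ** 1.255   # after the three-step iteration
converged  = 10 ** 1.25    # at the crossing point

print(f"three-step gain : {gain_db(no_prior, three_step):.2f} dB")   # 1.75 dB
print(f"converged gain  : {gain_db(no_prior, converged):.2f} dB")    # 1.80 dB
print(f"loss            : {gain_db(three_step, converged):.2f} dB")  # 0.05 dB
```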
Performance analysis for the Kalman filtering-based heuristic approach

Similarly, by utilizing the redundancy of system dynamics, a Kalman filtering-based heuristic approach is evaluated in this section. In the Kalman filtering-based heuristic approach, the prediction of \(\mathbf{y}(t)\) based on Kalman filtering is used as the a priori information for \(\mathbf{b}(t)\). Instead of using only the extrinsic information of the channel decoder to obtain a soft estimate of \(\mathbf{y}(t)\), the total information, including both the a priori information and the extrinsic information generated by the channel decoder, is used to obtain a hard estimate of \(\mathbf{y}(t)\). The corresponding framework for the Kalman filtering-based heuristic approach is similar to that of BP-based channel decoding and state estimation, as shown in Fig. 12(b). The a priori information from \(\mathbf{y}(t)\) is modeled with \(\mathbf{S}_{\pi, t}=\text{MMSE}_{\text{ap}}^{Y}\mathbf{I}_{K}\), and the a priori information for \(\mathbf{b}(t)\) is represented by the mutual information \(I_{A}\). Finally, the total information from the channel decoder for estimating \(\mathbf{b}(t)\) is modeled by \(\text{MMSE}^{B}_{\text{tot}}\), i.e., the MMSE of estimating \(\mathbf{b}(t)\) based on the total information, including both the a priori information and the extrinsic information from the channel decoder. Figure 25 illustrates the relationship between \(\text{MMSE}_{\text{ap}}^{Y}\) and \(\text{MMSE}_{\text{tot}}^{Y}\). The gains of \(\text{MMSE}_{\text{tot}}^{Y}\) with different \(\text{MMSE}_{\text{ap}}^{Y}\) and \(\frac{E_{b}}{N_{0}}\) are shown in Fig. 26; they are further improved compared with those in Fig. 17.

Fig. 25 \(\text{MMSE}_{\text{tot}}^{Y}\) with different \(\text{MMSE}_{\text{ap}}^{Y}\) and \(\frac{E_{b}}{N_{0}}\)

Fig. 26 Gain of \(\text{MMSE}_{\text{tot}}^{Y}\) with different \(\text{MMSE}_{\text{ap}}^{Y}\) and \(\frac{E_{b}}{N_{0}}\)

Conclusions

We propose using the MSE transfer chart to evaluate the performance of BP-based channel decoding and state estimation. We focus on two models for channel decoding and state estimation: the BP-based sequential processing model and the BP-based iterative processing model. The former can be used to evaluate the performance of sequential processing over multiple time slots, and the latter can be used to evaluate the performance of iterative processing within two time slots. The numerical results obtained with the MSE transfer chart show that the proposed channel decoding and state estimation algorithm can decrease the MSE and improve the performance of channel decoding and state estimation. Specifically, a total gain of 1.75 dB can be obtained through the three-step BP-based iterative channel decoding and state estimation process when no prior information is given.

References

S Gong, H Li, L Lai, RC Qiu, Decoding the 'nature encoded' messages for distributed energy generation control in microgrid, in 2011 IEEE International Conference on Communications (ICC) (IEEE, Kyoto, 2011), pp. 1–5.
H Li, Communications for Control in Cyber Physical Systems: Theory, Design and Applications in Smart Grids (Morgan Kaufmann, Cambridge, 2016).
S ten Brink, Convergence of iterative decoding. Electron. Lett. 35(10), 806–808 (1999).
S ten Brink, Iterative decoding trajectories of parallel concatenated codes, in Proceedings of the 3rd IEEE/ITG Conference on Source and Channel Coding, Munich, Germany (2000), pp. 75–80.
S ten Brink, Convergence behavior of iteratively decoded parallel concatenated codes. IEEE Trans. Commun. 49(10), 1727–1737 (2001).
M El-Hajjar, L Hanzo, EXIT charts for system design and analysis. IEEE Commun. Surv. Tutorials 16(1), 127–153 (2013).
K Bhattad, KR Narayanan, An MSE-based transfer chart for analyzing iterative decoding schemes using a Gaussian approximation. IEEE Trans. Inf. Theory 53(1), 22–38 (2007).
A Ashikhmin, G Kramer, S ten Brink, Extrinsic information transfer functions: model and erasure channel properties. IEEE Trans. Inf. Theory 50(11), 2657–2673 (2004).
D Guo, S Shamai, S Verdú, Mutual information and minimum mean-square error in Gaussian channels. IEEE Trans. Inf. Theory 51(4), 1261–1282 (2005).
M Fu, CE de Souza, State estimation for linear discrete-time systems using quantized measurements. Automatica 45(12), 2937–2945 (2009).
S Yüksel, T Başar, Stochastic Networked Control Systems (Birkhäuser, 2013).
L Li, H Li, Dynamic state aware source coding for networked control in cyber-physical systems, in 2016 IEEE Global Communications Conference (GLOBECOM) (IEEE, Washington, DC, 2016), pp. 1–6.
S Yüksel, On stochastic stability of a class of non-Markovian processes and applications in quantization. SIAM J. Control Optim. 55(2), 1241–1260 (2017).
N Ramzan, S Wan, E Izquierdo, Joint source-channel coding for wavelet-based scalable video transmission using an adaptive turbo code. EURASIP J. Image Video Process. 2007(1), 1–12 (2007).
V Kostina, S Verdú, Lossy joint source-channel coding in the finite blocklength regime. IEEE Trans. Inf. Theory 59(5), 2545–2575 (2013).
H Wu, L Wang, S Hong, J He, Performance of joint source-channel coding based on protograph LDPC codes over Rayleigh fading channels. IEEE Commun. Lett. 18(4), 652–655 (2014).
X He, X Zhou, P Komulainen, M Juntti, T Matsumoto, A lower bound analysis of Hamming distortion for a binary CEO problem with joint source-channel coding. IEEE Trans. Commun. 64(1), 343–353 (2016).
Y Wang, M Qin, KR Narayanan, A Jiang, Z Bandic, Joint source-channel decoding of polar codes for language-based sources, in 2016 IEEE Global Communications Conference (GLOBECOM) (IEEE, Washington, DC, 2016), pp. 1–6.
V Kostina, Y Polyanskiy, S Verdú, Joint source-channel coding with feedback. IEEE Trans. Inf. Theory 63(6), 3502–3515 (2017).
L Yin, J Lu, Y Wu, LDPC-based joint source-channel coding scheme for multimedia communications, in The 8th International Conference on Communication Systems (ICCS 2002), vol. 1 (IEEE, Singapore, 2002), pp. 337–341.
J Garcia-Frias, W Zhong, LDPC codes for compression of multi-terminal sources with hidden Markov correlation. IEEE Commun. Lett. 7(3), 115–117 (2003).
W Zhong, Y Zhao, J Garcia-Frias, Turbo-like codes for distributed joint source-channel coding of correlated senders in multiple access channels, in Conference Record of the Thirty-Seventh Asilomar Conference on Signals, Systems and Computers (IEEE, Pacific Grove, 2003), pp. 840–844.
Z Mei, L Wu, Joint source-channel decoding of Huffman codes with LDPC codes. J. Electron. (China) 23(6), 806–809 (2006).
X Pan, A Cuhadar, AH Banihashemi, Combined source and channel coding with JPEG2000 and rate-compatible low-density parity-check codes. IEEE Trans. Signal Process. 54(3), 1160–1164 (2006).
L Pu, Z Wu, A Bilgin, MW Marcellin, B Vasic, LDPC-based iterative joint source-channel decoding for JPEG2000. IEEE Trans. Image Process. 16(2), 577–581 (2007).
M Fresia, F Perez-Cruz, HV Poor, S Verdú, Joint source and channel coding. IEEE Signal Process. Mag. 27(6), 104–113 (2010).
E Koken, E Tuncel, Joint source-channel coding for broadcasting correlated sources. IEEE Trans. Commun. (2017).
B Girod, AM Aaron, S Rane, D Rebollo-Monedero, Distributed video coding. Proc. IEEE 93(1), 71–83 (2005).
Y Zhao, J Garcia-Frias, Joint estimation and compression of correlated nonbinary sources using punctured turbo codes. IEEE Trans. Commun. 53(3), 385–390 (2005).
R Gallager, Low-density parity-check codes. IRE Trans. Inf. Theory 8(1), 21–28 (1962).
J Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (Morgan Kaufmann Publishers Inc., San Francisco, 1988).

Acknowledgements

The authors would like to thank the National Science Foundation for its support under grants ECCS-1407679, CNS-1525226, CNS-1525418, and CNS-1543830.

Author information

Department of Electrical Engineering and Computer Science, the University of Tennessee, Knoxville, Knoxville, USA: Liang Li & Husheng Li
Sequans Communications, 90 Washington Valley Road, Bedminster, NJ 07921, USA: Shuping Gong
Department of Electronic Engineering, Kyung Hee University, 1732 Deogyoung Road, Giheung, Yongin 446-701, South Korea: Ju Bin Song

Contributions

HL conceived the strategy for this work. JBS and HL supervised the project. SG built the initial constructs and LL validated them, analyzed the data, and wrote the paper. All authors read and approved the final manuscript.

Correspondence to Liang Li.

Cite this article

Li, L., Gong, S., Song, J.B. et al. Performance analysis on joint channel decoding and state estimation in cyber-physical systems. J Wireless Com Network 2017, 158 (2017). https://doi.org/10.1186/s13638-017-0943-y

Keywords: Channel coding, State estimation, EXIT chart, MSE transfer chart, CPSs
Volume of a tetrahedron from vectors

Hint: Here, we will use the fact that the volume of a tetrahedron is one sixth of the modulus of the scalar triple product of the vectors from which it is formed. So, first we will find the vectors and then calculate the scalar triple product, which equals the determinant of the coefficients of the vectors.

Complete step-by-step answer: We know that a tetrahedron is one kind of pyramid: a polyhedron with a flat polygon base and triangular faces connecting the base to a common point. In the case of a tetrahedron the base is a triangle, and any of the four faces can be considered as the base, so a tetrahedron is also known as a triangular pyramid. A tetrahedron has four faces, six edges and four vertices; three edges meet at each vertex.

The four vertices that we have been given in the question are A(1,1,0), B(−4,3,6), C(−1,0,3) and D(2,4,−5).

So, the vector AB will be = (−4−1)i + (3−1)j + (6−0)k = −5i + 2j + 6k
Similarly, vector AC = −2i − j + 3k
And vector AD = i + 3j − 5k

Now, the formula for the volume of the tetrahedron is $\dfrac{1}{6}\times |scalar\,triple\,product\,of\,these\,three\,vectors|$ \[=\dfrac{1}{6}\times \left|\left( \overrightarrow{AB}\times \overrightarrow{AC} \right).\overrightarrow{AD}\right|\]

Scalar triple product = $\left| \begin{matrix} -5 & 2 & 6 \\ -2 & -1 & 3 \\ 1 & 3 & -5 \\ \end{matrix} \right|$

$\begin{align} & =-5\left( 5-9 \right)-2\left( 10-3 \right)+6\left\{ -6-\left( -1 \right) \right\} \\ & =-5\left( -4 \right)-2\left( 7 \right)+6\left( -6+1 \right) \\ & =20-14-30 \\ & =6-30=-24 \\ \end{align}$

Therefore, volume of the tetrahedron = $\dfrac{1}{6}\times |-24|=\dfrac{24}{6}=4$

Hence, the volume of the given tetrahedron is 4 cubic units.

Note: Here, it should be noted that we always apply the modulus while finding the volume of a tetrahedron. If the scalar triple product is negative, we have to make it positive, because volume is always a positive quantity.

Source: https://www.vedantu.com/question-answer/find-the-volume-of-tetrahedron-whose-vertices-class-10-maths-cbse-5eedf688ccf1522a9847565b

Volume of a tetrahedron built on vectors: online calculator

The volume of a tetrahedron equals one sixth of the absolute value of the scalar triple product of the vectors on which it is built: \(V = \frac{1}{6}\left|(\vec{a}\times\vec{b})\cdot\vec{c}\right|\). Because the scalar triple product can be negative while the volume of the tetrahedron cannot, one should take the magnitude of the triple product when calculating the volume of the geometric body. To calculate the volume of a tetrahedron built on vectors, one can use our online calculator with a step-by-step solution.

Source: https://mathforyou.net/en/online/vectors/volume/tetrahedron/

Source: https://www.varsitytutors.com/ssat_upper_level_math-help/how-to-find-the-volume-of-a-tetrahedron
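To double-check the corrected arithmetic in the worked example above, the scalar triple product can be computed directly (NumPy is used here for brevity):

```python
import numpy as np

A = np.array([1, 1, 0])
B = np.array([-4, 3, 6])
C = np.array([-1, 0, 3])
D = np.array([2, 4, -5])

# Volume = (1/6) |AB . (AC x AD)| = (1/6) |det[AB; AC; AD]|
triple = np.linalg.det(np.array([B - A, C - A, D - A]))
print(round(triple, 6))           # -24.0
print(round(abs(triple) / 6, 6))  # 4.0 cubic units
```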
Polyhedron with 4 faces. Not to be confused with tetrahedroid or Tetrahedron (journal).

Regular tetrahedron
- Type: Platonic solid
- Elements: F = 4, E = 6, V = 4 (χ = 2)
- Faces by sides: 4{3}
- Conway notation: T
- Schläfli symbols: {3,3}; also h{4,3}, s{2,4}, sr{2,2}
- Face configuration: V3.3.3
- Wythoff symbol: 3 | 2 3; | 2 2 2
- Symmetry: Td, A3, [3,3], (*332)
- Rotation group: T, [3,3]+, (332)
- References: U01, C15, W1
- Properties: regular, convex, deltahedron
- Dihedral angle: 70.528779° = arccos(1/3)
- Vertex figure: equilateral triangle
- Dual polyhedron: self-dual

In geometry, a tetrahedron (plural: tetrahedra or tetrahedrons), also known as a triangular pyramid, is a polyhedron composed of four triangular faces, six straight edges, and four vertex corners. The tetrahedron is the simplest of all the ordinary convex polyhedra and the only one that has fewer than 5 faces.[1]

The tetrahedron is the three-dimensional case of the more general concept of a Euclidean simplex, and may thus also be called a 3-simplex.

The tetrahedron is one kind of pyramid, which is a polyhedron with a flat polygon base and triangular faces connecting the base to a common point. In the case of a tetrahedron the base is a triangle (any of the four faces can be considered the base), so a tetrahedron is also known as a "triangular pyramid".

Like all convex polyhedra, a tetrahedron can be folded from a single sheet of paper. It has two such nets.[1]

For any tetrahedron there exists a sphere (called the circumsphere) on which all four vertices lie, and another sphere (the insphere) tangent to the tetrahedron's faces.[2]

Regular tetrahedron

A regular tetrahedron is a tetrahedron in which all four faces are equilateral triangles. It is one of the five regular Platonic solids, which have been known since antiquity. In a regular tetrahedron, all faces are the same size and shape (congruent) and all edges are the same length.

Figure: Five tetrahedra are laid flat on a plane, with the highest 3-dimensional points marked as 1, 2, 3, 4, and 5. These points are then attached to each other and a thin volume of empty space is left, where the five edge angles do not quite meet.

Regular tetrahedra alone do not tessellate (fill space), but if alternated with regular octahedra in the ratio of two tetrahedra to one octahedron, they form the alternated cubic honeycomb, which is a tessellation. Some tetrahedra that are not regular, including the Schläfli orthoscheme and the Hill tetrahedron, can tessellate.

The regular tetrahedron is self-dual, which means that its dual is another regular tetrahedron. The compound figure comprising two such dual tetrahedra forms a stellated octahedron or stella octangula.

Coordinates for a regular tetrahedron

The following Cartesian coordinates define the four vertices of a tetrahedron with edge length 2, centered at the origin, and two level edges:

(±1, 0, −1/√2) and (0, ±1, 1/√2)

Expressed symmetrically as 4 points on the unit sphere, centroid at the origin, with lower face level, the vertices are:

v1 = (√(8/9), 0, −1/3), v2 = (−√(2/9), √(2/3), −1/3), v3 = (−√(2/9), −√(2/3), −1/3), v4 = (0, 0, 1)

with the edge length of √(8/3).

Still another set of coordinates is based on an alternated cube or demicube with edge length 2. This form has Schläfli symbol h{4,3}. The tetrahedron in this case has edge length 2√2. Inverting these coordinates generates the dual tetrahedron, and the pair together form the stellated octahedron, whose vertices are those of the original cube.

Tetrahedron: (1,1,1), (1,−1,−1), (−1,1,−1), (−1,−1,1)
Dual tetrahedron: (−1,−1,−1), (−1,1,1), (1,−1,1), (1,1,−1)
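A quick numerical check of the demicube coordinate set and its inverted dual (all edges should be 2√2, and the eight vertices together should be the cube {−1, +1}³):

```python
import numpy as np
from itertools import combinations

tetra = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]])
dual = -tetra    # point inversion gives the dual tetrahedron

for name, pts in (("tetrahedron", tetra), ("dual", dual)):
    edges = {round(float(np.linalg.norm(p - q)), 6) for p, q in combinations(pts, 2)}
    print(name, edges)    # {2.828427}: every edge is 2*sqrt(2)

# Together the eight vertices are exactly those of the cube {-1, +1}^3
verts = {tuple(v) for v in np.vstack([tetra, dual]).tolist()}
print(len(verts) == 8)
```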
Figure: Regular tetrahedron ABCD and its circumscribed sphere

Angles and distances

For a regular tetrahedron of edge length a:

- Face area: (√3/4) a²
- Surface area[3]: √3 a²
- Height of pyramid[4]: √(2/3) a
- Centroid to vertex distance: √(3/8) a
- Edge to opposite edge distance: (1/√2) a
- Volume[3]: a³/(6√2) = (√2/12) a³
- Face-vertex-edge angle: arccos(1/√3) (approx. 54.7356°)
- Face-edge-face angle, i.e., "dihedral angle"[3]: arccos(1/3) (approx. 70.5288°)
- Vertex-Center-Vertex angle,[5] the angle between lines from the tetrahedron center to any two vertices: arccos(−1/3) (approx. 109.4712°). It is also the angle between Plateau borders at a vertex. In chemistry it is called the tetrahedral bond angle. This angle (in radians) is also the arc length of the geodesic segment on the unit sphere resulting from centrally projecting one edge of the tetrahedron to the sphere.
- Solid angle at a vertex subtended by a face: arccos(23/27) (approx. 0.55129 steradians, or approx. 1809.8 square degrees)
- Radius of circumsphere[3]: √(3/8) a
- Radius of insphere that is tangent to faces[3]: a/√24
- Radius of midsphere that is tangent to edges[3]: a/√8
- Radius of exspheres: a/√6
- Distance to exsphere center from the opposite vertex: √(3/2) a

With respect to the base plane the slope of a face (2√2) is twice that of an edge (√2), corresponding to the fact that the horizontal distance covered from the base to the apex along an edge is twice that along the median of a face. In other words, if C is the centroid of the base, the distance from C to a vertex of the base is twice that from C to the midpoint of an edge of the base. This follows from the fact that the medians of a triangle intersect at its centroid, and this point divides each of them in two segments, one of which is twice as long as the other (see proof).

For a regular tetrahedron with side length a, radius R of its circumscribing sphere, and distances di from an arbitrary point in 3-space to its four vertices, we have[6]

$$ \frac{d_{1}^{4}+d_{2}^{4}+d_{3}^{4}+d_{4}^{4}}{4} + \frac{16R^{4}}{9} = \left(\frac{d_{1}^{2}+d_{2}^{2}+d_{3}^{2}+d_{4}^{2}}{4} + \frac{2R^{2}}{3}\right)^{2} $$

Isometries of the regular tetrahedron

Figure: The proper rotations (order-3 rotation on a vertex and face, and order-2 on two edges) and reflection plane (through two faces and one edge) in the symmetry group of the regular tetrahedron

The vertices of a cube can be grouped into two groups of four, each forming a regular tetrahedron (see above, and also animation, showing one of the two tetrahedra in the cube). The symmetries of a regular tetrahedron correspond to half of those of a cube: those that map the tetrahedra to themselves, and not to each other. The tetrahedron is the only Platonic solid that is not mapped to itself by point inversion. The regular tetrahedron has 24 isometries, forming the symmetry group Td, [3,3], (*332), isomorphic to the symmetric group S4.
They can be categorized as follows:

- T, [3,3]+, (332), isomorphic to the alternating group A4 (the identity and 11 proper rotations), with the following conjugacy classes (in parentheses are given the permutations of the vertices, or correspondingly the faces, and the unit quaternion representation):
  - identity (identity; 1)
  - rotation about an axis through a vertex, perpendicular to the opposite plane, by an angle of ±120°: 4 axes, 2 per axis, together 8 ((1 2 3), etc.; (1 ± i ± j ± k)/2)
  - rotation by an angle of 180° such that an edge maps to the opposite edge: 3 ((1 2)(3 4), etc.; i, j, k)
- reflections in a plane perpendicular to an edge: 6
- reflections in a plane combined with 90° rotation about an axis perpendicular to the plane: 3 axes, 2 per axis, together 6; equivalently, they are 90° rotations combined with inversion (x is mapped to −x); the rotations correspond to those of the cube about face-to-face axes

Orthogonal projections of the regular tetrahedron

The regular tetrahedron has two special orthogonal projections, one centered on a vertex or equivalently on a face, and one centered on an edge. The first corresponds to the A2 Coxeter plane.

Cross section of regular tetrahedron

A central cross section of a regular tetrahedron is a square. The two skew perpendicular opposite edges of a regular tetrahedron define a set of parallel planes. When one of these planes intersects the tetrahedron the resulting cross section is a rectangle.[7] When the intersecting plane is near one of the edges the rectangle is long and skinny. When halfway between the two edges the intersection is a square. The aspect ratio of the rectangle reverses as you pass this halfway point. For the midpoint square intersection the resulting boundary line traverses every face of the tetrahedron similarly. If the tetrahedron is bisected on this plane, both halves become wedges.

Figure: A tetragonal disphenoid viewed orthogonally to the two green edges.

This property also applies for tetragonal disphenoids when applied to the two special edge pairs.

Spherical tiling

The tetrahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.

Helical stacking

Regular tetrahedra can be stacked face-to-face in a chiral aperiodic chain called the Boerdijk–Coxeter helix. In four dimensions, all the convex regular 4-polytopes with tetrahedral cells (the 5-cell, 16-cell and 600-cell) can be constructed as tilings of the 3-sphere by these chains, which become periodic in the three-dimensional space of the 4-polytope's boundary surface.

Other special cases

Figures: Tetrahedral symmetry subgroup relations; tetrahedral symmetries shown in tetrahedral diagrams

An isosceles tetrahedron, also called a disphenoid, is a tetrahedron where all four faces are congruent triangles. A space-filling tetrahedron packs with congruent copies of itself to tile space, like the disphenoid tetrahedral honeycomb. In a trirectangular tetrahedron the three face angles at one vertex are right angles. If all three pairs of opposite edges of a tetrahedron are perpendicular, then it is called an orthocentric tetrahedron. When only one pair of opposite edges is perpendicular, it is called a semi-orthocentric tetrahedron.
An isodynamic tetrahedron is one in which the cevians that join the vertices to the incenters of the opposite faces are concurrent, and an isogonic tetrahedron has concurrent cevians that join the vertices to the points of contact of the opposite faces with the inscribed sphere of the tetrahedron.

Isometries of irregular tetrahedra

The isometries of an irregular (unmarked) tetrahedron depend on the geometry of the tetrahedron, with 7 cases possible. In each case a 3-dimensional point group is formed. Two other isometries (C3, [3]+) and (S4, [2+,4+]) can exist if face or edge markings are included. Tetrahedral diagrams are included for each type below, with edges colored by isometric equivalence, and gray colored for unique edges.

- Regular tetrahedron: four equilateral triangles. It forms the symmetry group Td (Coxeter [3,3], orbifold *332), isomorphic to the symmetric group S4, with rotation subgroup T ([3,3]+, 332). A regular tetrahedron has Schläfli symbol {3,3}.
- Triangular pyramid: an equilateral triangle base and three equal isosceles triangle sides. It gives 6 isometries, corresponding to the 6 isometries of the base. As permutations of the vertices, these 6 isometries are the identity 1, (123), (132), (12), (13) and (23), forming the symmetry group C3v (Coxeter [3], orbifold *33), isomorphic to the symmetric group S3, with rotation subgroup C3 ([3]+, 33). A triangular pyramid has Schläfli symbol {3}∨( ).
- Mirrored sphenoid: two equal scalene triangles with a common base edge. This has two pairs of equal edges, (1,3), (1,4) and (2,3), (2,4), and otherwise no edges equal. The only two isometries are 1 and the reflection (34), giving the group Cs (= C1h = C1v; Coxeter [ ], orbifold *, order 2), also isomorphic to the cyclic group Z2.
- Irregular tetrahedron (no symmetry): four unequal triangles. Its only isometry is the identity, and the symmetry group is the trivial group C1 ([ ]+, orbifold 1, order 1). An irregular tetrahedron has Schläfli symbol ( )∨( )∨( )∨( ).

Disphenoids (four equal triangles):

- Tetragonal disphenoid: four equal isosceles triangles. It has 8 isometries. If edges (1,2) and (3,4) are of different length than the other 4, then the 8 isometries are the identity 1, reflections (12) and (34), 180° rotations (12)(34), (13)(24), (14)(23), and improper 90° rotations (1234) and (1432), forming the symmetry group D2d (Schönflies D2d, S4; Coxeter [2+,4], [2+,4+]; orbifold 2*2, 2×; order 8). A tetragonal disphenoid has Schläfli symbol s{2,4}.
- Rhombic disphenoid: four equal scalene triangles. It has 4 isometries: 1 and the 180° rotations (12)(34), (13)(24), (14)(23). This is the Klein four-group V4 or Z2², present as the point group D2 ([2,2]+, orbifold 222, order 4). A rhombic disphenoid has Schläfli symbol sr{2,2}.

Generalized disphenoids (2 pairs of equal triangles):

- Digonal disphenoid: two pairs of equal isosceles triangles. This gives two opposite edges (1,2) and (3,4) that are perpendicular but of different lengths; the 4 isometries are then 1, reflections (12) and (34), and the 180° rotation (12)(34). The symmetry group is C2v, isomorphic to the Klein four-group V4. A digonal disphenoid has Schläfli symbol { }∨{ }.
- Phyllic disphenoid: two pairs of equal scalene or isosceles triangles. This has two pairs of equal edges, (1,3), (2,4) and (1,4), (2,3), but otherwise no edges equal. The only two isometries are 1 and the rotation (12)(34), giving the group C2 ([2]+, orbifold 22, order 2), isomorphic to the cyclic group Z2.

General properties

Volume

The volume of a tetrahedron is given by the pyramid volume formula:

$$ V = \frac{1}{3} A_{0}\, h $$

where A0 is the area of the base and h is the height from the base to the apex.
This applies for each of the four choices of the base, so the distances from the apexes to the opposite faces are inversely proportional to the areas of these faces.

For a tetrahedron with vertices a = (a1, a2, a3), b = (b1, b2, b3), c = (c1, c2, c3), and d = (d1, d2, d3), the volume is (1/6)|det(a − d, b − d, c − d)|, or any other combination of pairs of vertices that form a simply connected graph. This can be rewritten using a dot product and a cross product, yielding

$$ V = \frac{|(\mathbf{a}-\mathbf{d})\cdot((\mathbf{b}-\mathbf{d})\times(\mathbf{c}-\mathbf{d}))|}{6} $$

If the origin of the coordinate system is chosen to coincide with vertex d, then d = 0, so

$$ V = \frac{|\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})|}{6} $$

where a, b, and c represent three edges that meet at one vertex, and a · (b × c) is a scalar triple product. Comparing this formula with that used to compute the volume of a parallelepiped, we conclude that the volume of a tetrahedron is equal to 1/6 of the volume of any parallelepiped that shares three converging edges with it.

The absolute value of the scalar triple product can be represented as the absolute value of a determinant,

$$ 6V = \left|\det\begin{pmatrix} a_{1} & a_{2} & a_{3} \\ b_{1} & b_{2} & b_{3} \\ c_{1} & c_{2} & c_{3} \end{pmatrix}\right| $$

where a, b, c are expressed as row or column vectors. Expressing the entries in terms of the edge lengths a, b, c at vertex d and the angles between those edges gives

$$ V = \frac{abc}{6}\sqrt{1+2\cos\alpha\cos\beta\cos\gamma-\cos^{2}\alpha-\cos^{2}\beta-\cos^{2}\gamma} $$

where α, β, γ are the plane angles occurring at vertex d. The angle α is the angle between the two edges connecting the vertex d to the vertices b and c. The angle β does so for the vertices a and c, while γ is defined by the position of the vertices a and b. If we do not require that d = 0, then

$$ 6V = |\det(\mathbf{a}-\mathbf{d},\; \mathbf{b}-\mathbf{d},\; \mathbf{c}-\mathbf{d})| $$

Given the distances between the vertices of a tetrahedron, the volume can be computed using the Cayley–Menger determinant (a small implementation is sketched at the end of this section):

$$ 288\,V^{2} = \begin{vmatrix} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & d_{12}^{2} & d_{13}^{2} & d_{14}^{2} \\ 1 & d_{12}^{2} & 0 & d_{23}^{2} & d_{24}^{2} \\ 1 & d_{13}^{2} & d_{23}^{2} & 0 & d_{34}^{2} \\ 1 & d_{14}^{2} & d_{24}^{2} & d_{34}^{2} & 0 \end{vmatrix} $$

where the subscripts i, j ∈ {1, 2, 3, 4} represent the vertices {a, b, c, d} and dij is the pairwise distance between them, i.e., the length of the edge connecting the two vertices. A negative value of the determinant means that a tetrahedron cannot be constructed with the given distances. This formula, sometimes called Tartaglia's formula, is essentially due to the painter Piero della Francesca in the 15th century, as a three-dimensional analogue of the 1st-century Heron's formula for the area of a triangle.[8]

Denote by a, b, c the three edges that meet at a point, and by x, y, z the opposite edges; let V be the volume of the tetrahedron. A closed-form expression for V in terms of these six edge lengths is given in [9]. That formula uses six edge lengths, and a companion formula uses three edge lengths and three angles.

Heron-type formula for the volume of a tetrahedron

Figure: Six edge lengths of a tetrahedron

If U, V, W, u, v, w are lengths of edges of the tetrahedron (the first three form a triangle; u is opposite to U and so on), a Heron-type formula for the volume is given in [10].

Volume divider

Any plane containing a bimedian (connector of opposite edges' midpoints) of a tetrahedron bisects the volume of the tetrahedron.[11]

Non-Euclidean volume

For tetrahedra in hyperbolic space or in three-dimensional elliptic geometry, the dihedral angles of the tetrahedron determine its shape and hence its volume. In these cases, the volume is given by the Murakami–Yano formula.[12] However, in Euclidean space, scaling a tetrahedron changes its volume but not its dihedral angles, so no such formula can exist.

Distance between the edges

Any two opposite edges of a tetrahedron lie on two skew lines, and the distance between the edges is defined as the distance between the two skew lines. Let d be the distance between the skew lines formed by opposite edges a and b − c, as calculated here. Then another volume formula is given by

$$ V = \frac{d\,|\mathbf{a}\times(\mathbf{b}-\mathbf{c})|}{6} $$
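A direct implementation of the Cayley–Menger determinant is a handy sanity check for the formulas above; a minimal sketch:

```python
import numpy as np
from itertools import combinations

def tetra_volume_cm(p1, p2, p3, p4):
    """Volume from pairwise distances via the Cayley-Menger determinant:
    288 V^2 = det of the bordered matrix of squared distances."""
    pts = [np.asarray(p, dtype=float) for p in (p1, p2, p3, p4)]
    M = np.zeros((5, 5))
    M[0, 1:] = M[1:, 0] = 1.0
    for (i, a), (j, b) in combinations(enumerate(pts), 2):
        M[i + 1, j + 1] = M[j + 1, i + 1] = np.sum((a - b) ** 2)
    det = np.linalg.det(M)
    if det < 0:
        raise ValueError("distances do not embed in 3-space")
    return np.sqrt(det / 288.0)

# Regular tetrahedron with edge 2*sqrt(2): volume should be a^3/(6*sqrt(2)) = 8/3
print(tetra_volume_cm((1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)))  # ~2.6667
```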
Properties analogous to those of a triangle

The tetrahedron has many properties analogous to those of a triangle, including an insphere, circumsphere, medial tetrahedron, and exspheres. It has respective centers such as incenter, circumcenter, excenters, Spieker center, and points such as a centroid. However, there is generally no orthocenter in the sense of intersecting altitudes.[13]

Gaspard Monge found a center that exists in every tetrahedron, now known as the Monge point: the point where the six midplanes of a tetrahedron intersect. A midplane is defined as a plane that is orthogonal to an edge joining any two vertices and that contains the centroid of the opposite edge formed by joining the other two vertices. If the tetrahedron's altitudes do intersect, then the Monge point and the orthocenter coincide, giving the class of orthocentric tetrahedra. An orthogonal line dropped from the Monge point to any face meets that face at the midpoint of the line segment between that face's orthocenter and the foot of the altitude dropped from the opposite vertex.

A line segment joining a vertex of a tetrahedron with the centroid of the opposite face is called a median, and a line segment joining the midpoints of two opposite edges is called a bimedian of the tetrahedron. Hence there are four medians and three bimedians in a tetrahedron. These seven line segments are all concurrent at a point called the centroid of the tetrahedron.[14] In addition, the four medians are divided in a 3:1 ratio by the centroid (see Commandino's theorem). The centroid of a tetrahedron is the midpoint between its Monge point and circumcenter. These points define the Euler line of the tetrahedron, which is analogous to the Euler line of a triangle.

The nine-point circle of the general triangle has an analogue in the circumsphere of a tetrahedron's medial tetrahedron. It is the twelve-point sphere; besides the centroids of the four faces of the reference tetrahedron, it passes through four substitute Euler points, one third of the way from the Monge point toward each of the four vertices. Finally, it passes through the four base points of the orthogonal lines dropped from each Euler point to the face not containing the vertex that generated the Euler point.[15]

The center T of the twelve-point sphere also lies on the Euler line. Unlike its triangular counterpart, this center lies one third of the way from the Monge point M toward the circumcenter. Also, an orthogonal line through T to a chosen face is coplanar with two other orthogonal lines to the same face. The first is an orthogonal line passing through the corresponding Euler point to the chosen face. The second is an orthogonal line passing through the centroid of the chosen face. This orthogonal line through the twelve-point center lies midway between the Euler point orthogonal line and the centroidal orthogonal line. Furthermore, for any face, the twelve-point center lies at the midpoint of the corresponding Euler point and the orthocenter for that face. The radius of the twelve-point sphere is one third of the circumradius of the reference tetrahedron.

There is a relation among the angles made by the faces of a general tetrahedron, given by[16]

$$ \begin{vmatrix} -1 & \cos\alpha_{12} & \cos\alpha_{13} & \cos\alpha_{14} \\ \cos\alpha_{12} & -1 & \cos\alpha_{23} & \cos\alpha_{24} \\ \cos\alpha_{13} & \cos\alpha_{23} & -1 & \cos\alpha_{34} \\ \cos\alpha_{14} & \cos\alpha_{24} & \cos\alpha_{34} & -1 \end{vmatrix} = 0 $$

where αij is the angle between the faces i and j.

The geometric median of the vertex position coordinates of a tetrahedron and its isogonic center are associated, under circumstances analogous to those observed for a triangle.
Lorenz Lindelöf found that, corresponding to any given tetrahedron, there is a point now known as an isogonic center, O, at which the solid angles subtended by the faces are equal, having a common value of π sr, and at which the angles subtended by opposite edges are equal.[17] A solid angle of π sr is one quarter of that subtended by all of space. When all the solid angles at the vertices of a tetrahedron are smaller than π sr, O lies inside the tetrahedron, and because the sum of distances from O to the vertices is a minimum, O coincides with the geometric median, M, of the vertices. In the event that the solid angle at one of the vertices, v, measures exactly π sr, then O and M coincide with v. If, however, a tetrahedron has a vertex, v, with solid angle greater than π sr, M still corresponds to v, but O lies outside the tetrahedron.

Geometric relations

A tetrahedron is a 3-simplex. Unlike the case of the other Platonic solids, all the vertices of a regular tetrahedron are equidistant from each other (they are the only possible arrangement of four equidistant points in 3-dimensional space). A tetrahedron is a triangular pyramid, and the regular tetrahedron is self-dual.

A regular tetrahedron can be embedded inside a cube in two ways such that each vertex is a vertex of the cube, and each edge is a diagonal of one of the cube's faces. For one such embedding, the Cartesian coordinates of the vertices are (+1, +1, +1); (−1, −1, +1); (−1, +1, −1); (+1, −1, −1). This yields a tetrahedron with edge length 2√2, centered at the origin. For the other tetrahedron (which is dual to the first), reverse all the signs. These two tetrahedra's vertices combined are the vertices of a cube, demonstrating that the regular tetrahedron is the 3-demicube. The volume of this tetrahedron is one-third the volume of the cube. Combining both tetrahedra gives a regular polyhedral compound called the compound of two tetrahedra or stella octangula. The interior of the stella octangula is an octahedron, and correspondingly, a regular octahedron is the result of cutting off, from a regular tetrahedron, four regular tetrahedra of half the linear size (i.e., rectifying the tetrahedron).

The above embedding divides the cube into five tetrahedra, one of which is regular; a numerical check of this decomposition is sketched below. In fact, five is the minimum number of tetrahedra required to compose a cube. To see this, start from a base tetrahedron with 4 vertices: each added tetrahedron adds at most 1 new vertex, so at least 4 more must be added to make a cube, which has 8 vertices. Inscribing tetrahedra inside the regular compound of five cubes gives two more regular compounds, containing five and ten tetrahedra.

Regular tetrahedra cannot tessellate space by themselves, although this result seems likely enough that Aristotle claimed it was possible. However, two regular tetrahedra can be combined with an octahedron, giving a rhombohedron that can tile space. Several irregular tetrahedra are known of which copies can tile space, for instance the disphenoid tetrahedral honeycomb; the complete list remains an open problem.[18]

If one relaxes the requirement that the tetrahedra be all the same shape, one can tile space using only tetrahedra in many different ways. For example, one can divide an octahedron into four identical tetrahedra and combine them again with two regular ones. (As a side note: these two kinds of tetrahedron have the same volume.)

The tetrahedron is unique among the uniform polyhedra in possessing no parallel faces.
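The claims that the inscribed regular tetrahedron has one-third of the cube's volume, and that the cube decomposes into it plus four corner tetrahedra, can be checked numerically (a sketch):

```python
import numpy as np

def vol(p0, p1, p2, p3):
    """Tetrahedron volume = |det(p1-p0, p2-p0, p3-p0)| / 6."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    return abs(np.linalg.det(np.array([p1 - p0, p2 - p0, p3 - p0]))) / 6.0

regular = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
corners = [(-1, -1, -1), (1, 1, -1), (1, -1, 1), (-1, 1, 1)]  # remaining cube vertices

def neighbors(c):
    """Vertices of the regular tetrahedron adjacent to cube corner c
    (they differ from c in exactly one coordinate)."""
    return [v for v in regular if sum(a != b for a, b in zip(c, v)) == 1]

v_reg = vol(*regular)
v_corners = sum(vol(c, *neighbors(c)) for c in corners)
print(v_reg, v_corners, v_reg + v_corners)   # 8/3 + 4*(4/3) = 8, the cube volume
print(v_reg / 8)                             # 1/3 of the cube
```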
A law of sines for tetrahedra and the space of all shapes of tetrahedra

Main article: Trigonometry of a tetrahedron

A corollary of the usual law of sines is that in a tetrahedron with vertices O, A, B, C, we have

$$ \sin\angle OAB\cdot\sin\angle OBC\cdot\sin\angle OCA = \sin\angle OAC\cdot\sin\angle OCB\cdot\sin\angle OBA $$

Source: https://en.wikipedia.org/wiki/Tetrahedron
How does the bypass air provide thrust?

To my knowledge, bypass air produces 80% of total thrust, but I don't understand how it does that. By accelerating the air, i.e. increasing its speed, or by increasing its pressure? Is it doing this by Bernoulli's principle or something like that?

turbofan thrust theory bypass-ratio – Yakup şeker

The bypass air is accelerated by the fan at the front of the turbofan engine. This changes its velocity and therefore its momentum, and the rate of change of momentum is the definition of a force (in this case: thrust):

$$ F = \frac{\text{d}}{\text{d}t} p = m \frac{\text{d}}{\text{d}t} v = m \cdot a $$

This thrust contributes to the total thrust of the engine. How much will depend on the bypass ratio of the engine and other parameters, but 80% is plausible for a high bypass turbofan. Bernoulli's principle has nothing to do with this. You can see the accelerated airflow in the following animation: [animation of airflow through a turbofan engine; source: Wikimedia] The air coming from the engine core will move even faster, but there is less of it in a high bypass turbofan, resulting in less total thrust coming from the core: "The fan airflow, referred to as the cold air stream, is accelerated by the fan and passes through the engine remaining outside of the engine core. The cold air stream moves much slower than the hot stream gas flow passing through the engine core." (skybrary.aero) – Bianfable

Comment: "The air coming from the engine core will move even faster, but there is less of it in a high bypass turbofan resulting in less total thrust coming from the core" To expand on that: a jet engine's efficiency is (speaking simply) related to the change in velocity of the airstream. The greater the difference in velocity between intake and exhaust, the lower the efficiency (but the greater the thrust). Therefore, it is more efficient (and therefore cheaper) to give a much smaller velocity increase to the air, and use a much larger volume of air so that it adds up to the same overall thrust. – anaximander Dec 16 '19 at 10:54

A bypass fan provides thrust in the same way a propeller provides thrust: by increasing the energy content of the gas mass passing through the disk. The added energy is most effectively converted into thrust by allowing the gas to expand until its internal pressure equals ambient pressure, so that all added energy is converted into kinetic energy. This expansion takes place in the cowling behind the fan. But the initial energy addition is a mixture of both increased pressure and increased velocity: the gas requires time to accelerate to the outlet velocity. – Koyovis

It doesn't bypass everything, just the combustion chamber; notably, the air still goes through a fan. High-bypass turbofan engines make most of their thrust from the ducted fan, very much like a turboprop except that the blades are smaller and enclosed by the cowling. @Harper's answer on "What is the difference between turbojet and turbofan engines?" explains this nicely: a high-bypass turbofan extracts most of the exhaust energy from the jet part and uses it to spin the fan, instead of exhausting really high-speed air from the jet part directly. Low-bypass turbofans have a mix of thrust from fan + jet, while a pure turbojet has zero bypass, just compressor blades and no fan. – Peter Cordes
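To put rough numbers on the momentum argument made in the answers above, here is a minimal Python sketch. All figures are made-up, illustrative values, not data for any real engine. It applies F = (dm/dt) · Δv to each stream separately and shows how a large, gently accelerated bypass stream can dominate the total thrust.

```python
def thrust(mass_flow, v_in, v_out):
    """Momentum thrust in newtons: F = (dm/dt) * (v_out - v_in)."""
    return mass_flow * (v_out - v_in)

# Assumed cruise speed of the aircraft (m/s); purely illustrative.
v0 = 250.0

# Core stream: small mass flow (kg/s), large velocity change.
core = thrust(mass_flow=80.0, v_in=v0, v_out=600.0)     # 28 kN
# Bypass stream: large mass flow, small velocity change.
bypass = thrust(mass_flow=700.0, v_in=v0, v_out=310.0)  # 42 kN

total = core + bypass
print(f"bypass share of total thrust: {bypass / total:.0%}")  # 60% here
```

Because the kinetic energy added to a stream grows with the square of its exhaust velocity while thrust grows only linearly, the slow, massive bypass stream delivers its newtons more cheaply than the fast core jet, which is the efficiency point made in the comment above.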
Antipyretic, anti-inflammatory and analgesic activity of Acacia hydaspica R. Parker and its phytochemical analysis

Tayyaba Afsar1, Muhammad Rashid Khan1, Suhail Razak2, Shafi Ullah1 & Bushra Mirza1

BMC Complementary and Alternative Medicine volume 15, Article number: 136 (2015)

Inflammation and pain underlie several pathological conditions. The synthetic drugs used for the management of these conditions carry severe toxic effects. Globally, efforts are ongoing to introduce novel medicinal plants in order to develop effective, economic and innocuous drugs. The current study was aimed at investigating the antipyretic, anti-inflammatory and analgesic activity of the methanol extract of A. hydaspica aerial parts (AHM) and its active fraction. Furthermore, identification and isolation of polyphenolic compounds were carried out to identify the active principles. Yeast-induced pyrexia, paw edema, acetic acid-induced writhing and hot plate tests were carried out in vivo. HPLC-DAD analysis and a combination of different chromatographic techniques, involving vacuum liquid chromatography (VLC) and flash chromatography (FC), were carried out for chemical characterization. The structural heterogeneity of the flavanols was characterized by ESI-MS, 1H NMR, 13C NMR and 2D NMR spectroscopic analyses, and also by comparison with the reported literature. Oral administration of the A. hydaspica methanol extract (AHM) and the A. hydaspica ethyl acetate fraction (AHE) produced a dose- and time-dependent decrease in body temperature in yeast-induced pyrexia, comparable to the standard, paracetamol. AHM and AHE (150 mg/kg) significantly (p < 0.001) inhibited pain sensation in various pain models, i.e. the acetic acid induced writhing and hot plate tests. Similarly, AHM and AHE demonstrated an anti-inflammatory effect in carrageenan-induced paw edema in rats, with the 150 mg/kg dose being distinctly more effective (91.92% inhibition). When studied on prostaglandin E2 (PGE2)-induced edema in rats, AHM and AHE showed maximum inhibition of edema at 150 mg/kg after 4 h. The HPLC chromatogram of AHM revealed the presence of gallic acid, catechin, rutin and caffeic acid. Chromatographic separation and structure characterization of AHE led to the identification of three flavan-3-ol derivatives, including 7-O-galloyl catechin, (+)-catechin and methyl gallate, which have been reported for the first time in A. hydaspica. These results reveal that the presence of bioactive compounds in A. hydaspica might be responsible for its pharmacological activities, confirming the indigenous utility of A. hydaspica against inflammatory disorders.

For centuries, people of developing countries like Pakistan, India and China have relied on traditional medicinal systems for the cure of various ailments as substitute health care, owing to the safety and cost-effectiveness of herbal medications. In different regions of Pakistan, the local practice of using medicinal plants to cure a number of diseases is very common. In Pakistan, medicinal plant prescribers called Tabib/Hakim use approximately 600–1000 medicinal plants of the country based on their experience, without scientific validation, for the treatment of a wide range of disorders [1]. Certainly the local practice of plant medicine is unrestricted in developing countries, but it is obligatory to ascertain the pharmaceutically vital agents responsible for protection against fatal diseases. Currently, the developed world is also inclined towards complementary and alternative medicines, specifically those derived from natural sources.
At present, a variety of herbal plants are extensively used globally as curative agents for various infectious diseases. It is estimated that about one quarter of approved modern medicines have been derived from botanicals [2]. Inflammation, pain and pyrexia underlie several pathological conditions. Synthetic drugs, i.e. NSAIDs, opioids and corticosteroids, are the clinically most important drugs used for the treatment of inflammatory disorders; however, their long-term use may induce toxic effects including gastrointestinal ulcers, bleeding and renal disorders [3,4]. Globally, efforts are ongoing to introduce novel medicinal plants to develop effective, economic and innocuous drugs [5]. Medicinal plants are believed to be an important source of useful compounds with potential therapeutic effects. Research on plants with apparent folkloric use as pain relievers and anti-inflammatory agents should therefore be regarded as a fruitful and rational research strategy in the search for new anti-inflammatory drugs [6].

A. hydaspica R. Parker belongs to the family Leguminosae. This species is reported to be common in Iran, India and Pakistan, where it is commonly used as fodder, fuel and wood [7]. It is treated as a synonym of A. eburnea [8]. The bark and seeds are a source of tannins, and the plant is locally used as an antiseptic. The traditional healers of India use various parts of the plant for the treatment of diarrhea; the leaves and the bark are useful in arresting secretion or bleeding. The pods are helpful in removing catarrhal matter and phlegm from the bronchial tubes. The gum relieves irritation of the skin and soothes the inflamed membranes of the pharynx, alimentary canal and genito-urinary organs (http://trade.indiamart.com/details.mp?offer=6763150691).

Different species of Acacia have been evaluated for their anti-inflammatory, antipyretic and analgesic activities in various animal models. The aqueous extract of the bark of A. karroo showed remarkable anti-inflammatory activity against carrageenan- and histamine-induced edema, and analgesic activity in the acetic acid induced writhing model, in experimental animals [9]. Bukhari et al. evaluated the analgesic, anti-inflammatory and antiplatelet activities of the methanol extract of A. modesta using acetic acid, formalin, hot plate and carrageenan-induced edema models in rodents [10]. The acute (xylene-induced) and chronic (cotton pellet-induced) anti-inflammatory effects of A. nilotica have been investigated in the rat [11]. Petroleum ether, chloroform and methanol extracts of A. cornigera were evaluated against croton oil induced dermatitis in mice [12]. The ethanol extract of the seeds of A. suma was evaluated against carrageenan induced hind paw edema in a rat model, whereas its analgesic activity was evaluated by acetic acid induced writhing and formalin induced licking tests in mice [13]. An A. nilotica extract showed an inhibitory effect on carrageenan induced paw edema and yeast induced pyrexia in rats [14]. To date, no pharmacological study has been conducted to evaluate the antipyretic, anti-inflammatory and analgesic activity of A. hydaspica in support of the traditional uses of this plant in folklore medicine. Hence the present study was undertaken to evaluate the antipyretic, anti-inflammatory and analgesic activity of the methanol extract and its derived fractions using a rat model. Furthermore, HPLC fingerprinting and compound isolation were performed to identify the active principle compounds responsible for the various pharmacological activities.
Plant collection

The aerial parts (bark, twigs and leaves) of A. hydaspica were collected from the Kirpa area, Islamabad, Pakistan, in April 2011. After identification with the help of the relevant literature, a voucher specimen (0642531) was assigned and the herbarium specimen was deposited in the Herbarium of Pakistan, Museum of Natural History, Islamabad, for future reference. Shade-dried aerial parts (bark, twigs and leaves) of A. hydaspica were ground into powder (3 kg) and soaked for 14 days in crude methanol with occasional shaking. The extract was filtered through filter paper (Whatman filter paper number 45) and concentrated under vacuum using a rotary evaporator (Buchi R114, Switzerland) at 40°C, yielding 472 g of A. hydaspica crude methanol extract (AHM, 15.73% yield). Partial purification of AHM was done by solvent-solvent extraction: briefly, 12 g of AHM was suspended in 500 ml distilled water in a separating funnel (1000 ml) and successively partitioned with n-hexane, ethyl acetate, chloroform and n-butanol. Each extraction step was repeated three times with 500 ml of the respective solvent, and the whole process was repeated to obtain enough mass of each fraction for the various bioactivity tests and for chromatographic separation. These solvents of varying polarity partition different classes of plant constituents. The filtrates were concentrated on the rotary evaporator and weighed to determine the resulting mass. After this initial partitioning we obtained four soluble fractions, the A. hydaspica n-hexane fraction (AHH, 5.27% yield), ethyl acetate fraction (AHE, 27.77% yield), chloroform fraction (AHC, 1.94% yield) and n-butanol fraction (AHB, 41.66% yield), plus the remaining aqueous fraction (AHA, 8.05% yield). AHE proved to be the most active fraction; AHH, AHC, AHB and AHA were not active in any of the assays carried out in this study. AHM was subjected to high-performance liquid chromatography (HPLC) for compound fingerprinting, and AHE was subjected to chromatographic isolation for further fractionation and purification of polyphenolics.

Compositional analysis by HPLC

HPLC analysis of AHM was carried out on HPLC-DAD equipment (Agilent, Germany) using a Zorbax RX-C8 analytical column (Agilent, USA) with 5 μm particle size. The mobile phase consisted of eluent A (acetonitrile:methanol:water:acetic acid, 5:10:85:1) and eluent B (acetonitrile:methanol:acetic acid, 40:60:1). The gradient (A:B) was as follows: 0–20 min, 0 to 50% B; 20–25 min, 50 to 100% B; then isocratic 100% B (25–40 min), at a flow rate of 1 ml/min and an injection volume of 20 μl. Rutin and gallic acid were analyzed at 257 nm, catechin and apigenin at 279 nm, caffeic acid at 325 nm, and quercetin, myricetin and kaempferol at 368 nm. The column was reconditioned for 10 min before each analysis, and all samples were assayed in triplicate. Quantification was carried out by integration of the peak areas using the external standard method. All chromatographic operations were carried out at ambient temperature.
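As a concrete illustration of the external standard method mentioned above, the sketch below fits a linear calibration curve to the peak areas of a pure standard at known concentrations and reads the analyte concentration in the extract off that line. All numbers are hypothetical placeholders, not measurements from this study.

```python
import numpy as np

# Calibration data for a pure standard (e.g. gallic acid): assumed
# concentrations in ug/ml and the corresponding peak areas (arbitrary units).
std_conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
std_area = np.array([52.0, 101.0, 255.0, 498.0, 1003.0])

# Linear calibration: area = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_area, 1)

# Peak area measured for the analyte in the extract (assumed value).
sample_area = 310.0
sample_conc = (sample_area - intercept) / slope
print(f"analyte concentration: {sample_conc:.1f} ug/ml")  # ~30.8 ug/ml
```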
Isolation of compounds

The scheme of fractionation and isolation is shown in the flow chart (Figure 1). Briefly, 10 g of the A. hydaspica ethyl acetate fraction (AHE) was dissolved in DCM (dichloromethane), mixed with acid-washed filter aid (Super Cel NF) and dried down completely on the rotary evaporator. The dried extract sample was loaded on silica gel in a glass column packed with silica (200–400 mesh) attached to a vacuum line (vacuum liquid chromatography, VLC). The column was eluted with dichloromethane (DCM) and then DCM-methanol mixtures of increasing polarity, and 10 fractions (1 L each) were collected. Three major phenolic fractions (VLC-AHE/F4-F6), selected on the basis of TLC (silica gel 60 F254 plates, Merck) and 1H NMR spectral similarity, were combined and subjected to flash liquid chromatography on a Teledyne ISCO CombiFlash system, using a silica column eluted with DCM:methanol mixtures of increasing polarity. Spectra were monitored at all wavelengths (200–780 nm) with a peak width of 2 min and a threshold of 0.02 AU. The 146 fractions collected on the ISCO system were pooled into 27 fractions according to their TLC profiles and ISCO chromatogram spectral peaks. 1H NMR of the ISCO fractions indicated the presence of the chromatographically pure compounds IF9 (C1), IF7 (C2) and IF3 (C3); the structures of the isolated compounds were determined by NMR assignments.

Figure 1. Schematic representation of extraction and isolation of compounds from A. hydaspica.

Nuclear magnetic resonance spectroscopy (NMR)

1H and 13C NMR spectra of all compounds were recorded on a Varian 600 MHz instrument (1H and 13C frequencies of 599.664 and 150.785 MHz, respectively) at 25°C. Spectra of all compounds were obtained in methanol-d4 and DMSO-d6. Conventional 1D and 2D Fourier transform techniques were employed as necessary to achieve unequivocal signal assignments and structure proof for all compounds independently. Stereochemical assignments were made with ROESY and NOESY experiments. Detailed analysis of resolution-enhanced spectra (peak picking, integration and multiplet analysis) was performed using Varian NMR software and ACD/NMR Processor, Academic Edition. The NMR spectra and chemical shifts of the isolated compounds were compared with published data.

Sprague Dawley rats (180–220 g) of either sex, maintained at the primate facility of Quaid-i-Azam University Islamabad, Pakistan, were used for the study. The animals were kept under standard conditions of temperature (25 ± 1°C) and a 12/12 h light/dark cycle, with free access to standard laboratory feed and water. Six animals were used in each group. All experimental procedures involving animals were conducted in accordance with the NIH guidelines, and the study protocols were approved by the Ethical Committee of Quaid-i-Azam University Islamabad.

Acute toxicity study

The acute toxicity study was conducted as per guideline 425 of the Organization for Economic Cooperation and Development (OECD) for testing of chemicals for acute oral toxicity [15]. Rats (n = 6) of either sex were treated with different doses (50, 250, 500, 1000, 2000 and 3000 mg/kg, p.o.), while the control group received saline (10 ml/kg). All groups were observed for up to 6 h for any gross effect, and mortality was recorded 24 h after treatment.

Pharmacological activities

Antipyretic activity

The antipyretic activity of AHM and its active fraction AHE was evaluated in Sprague Dawley rats (180–220 g) of either sex using a previously described method [16]. The selected animals were healthy, and the normal body temperature of each rat was measured with a digital thermometer.
Pyrexia was induced in all rats by subcutaneous (s.c.) injection of a 20% aqueous suspension of brewer's yeast (Saccharomyces cerevisiae), 10 ml/kg. After the yeast injection, all animal groups (6/group) were fasted for 24 h with access to water only. The rectal temperature of each rat was then recorded, and pyrexia was confirmed by a rise in temperature of more than 1°C; animals showing less than a 1°C rise in temperature were excluded from the experiment. Group I received saline (10 ml/kg), group II received paracetamol (100 mg/kg) as the standard drug, while groups III-XII received 50, 100 and 150 mg/kg, p.o. doses (through feeding tubes) of AHM and AHE, respectively. The rectal temperature of each group was recorded at 1 h intervals for 5 h.

Anti-inflammatory activity

Carrageenan induced paw edema

The anti-inflammatory activity test was performed on rats of either sex (180–220 g). The normal paw volume of each rat was measured ab initio, and the animals were divided into groups of six [1]. Group I was treated with normal saline (10 ml/kg, i.p.), groups II, III and IV with the standard drugs diclofenac sodium, aspirin and fluoxetine (10 mg/kg), respectively, while the remaining groups were treated with AHM and AHE (50, 100 and 150 mg/kg, p.o.). Thirty minutes after these intraperitoneal and oral administrations, carrageenan (1%, 0.1 ml) was injected subcutaneously into the subplantar tissue of the right hind paw of each rat. The paw volume was measured using a digital plethysmometer before and at 1, 2, 3 and 4 h after carrageenan administration. The edema volume of the paw and the percent inhibition of edema were calculated using the following formulae:

$$ EV = PVA - PVI $$

where EV = edema volume, PVI = paw volume before carrageenan administration (i.e., initial paw volume) and PVA = paw volume after carrageenan administration;

$$ \text{Percent inhibition} = \frac{EV_c - EV_t}{EV_c} \times 100 $$

where EV_c = edema volume of control animals and EV_t = edema volume of test-drug animals.

Prostaglandin E2-induced paw edema

Rats of either sex were divided into groups (n = 6) and treated intraperitoneally with saline (control) or diclofenac sodium (10 mg/kg, i.p., reference standard), or orally with AHM and AHE (50, 100 and 150 mg/kg). After 30 min of treatment, 100 μl of prostaglandin E2 (0.01 μl/ml) was administered into the subplantar side of the right hind paw of each rat, and the edema size was determined as described above [17].

Analgesic activity

Acetic acid induced writhing test

The method used in this test has been described previously by Khan et al. [18]. The total number of writhing movements following i.p. administration of acetic acid solution (10 ml/kg, 1%) was recorded over a period of 10 min, starting 5 min after the acetic acid injection. Rats were treated with AHM and AHE (50, 100 and 150 mg/kg), vehicle (saline) or standard drugs (diclofenac sodium and aspirin, 10 mg/kg) 30 min before the acetic acid injection [19]. The number of writhing movements (constriction of the abdominal muscles together with stretching of the hind limbs) was counted in both untreated and treated groups, and the percentage inhibition of abdominal writhing was calculated as follows:
$$ \%\ \text{inhibition of abdominal writhing} = \frac{W_c - W_t}{W_c} \times 100 $$

where W = number of writhes, c = control and t = test.

Hot plate test

The procedure described by Muhammad et al. [1] was followed for this test. Sprague Dawley rats of either sex (n = 6) weighing 180–220 g were used. Animals were subjected to pre-testing on a hot plate (Harvard Apparatus) maintained at 55 ± 0.1°C, and animals having a latency time greater than 15 s on the hot plate during pre-testing were excluded. The animals were divided randomly into groups of six. Group I was treated with saline (10 ml/kg), groups II and III with diclofenac sodium and fluoxetine (10 mg/kg, i.p.), and groups IV-IX with oral doses of 50, 100 and 150 mg/kg of AHM and AHE, respectively. Diclofenac sodium and fluoxetine were used as reference drugs for comparison [19,20]. Thirty minutes after dose administration, each rat was dropped inside the cylinder onto the hot plate, and the latency time (the time for which the rat remains on the hot plate without licking or flicking a hind limb or jumping) was recorded in seconds. To prevent tissue damage, a cut-off time of 30 s was set for all animals. The latency time was recorded for each group at 0, 30, 60, 90 and 120 min after drug administration. Percent analgesia was calculated using the following formula:

$$ \%\ \text{Analgesia} = \frac{\text{test latency} - \text{control latency}}{\text{cut-off time} - \text{control latency}} \times 100 $$

All values are expressed as mean ± SEM. Data were analyzed in GraphPad using one-way analysis of variance followed by Tukey's multiple comparison test, and two-way analysis of variance (ANOVA) followed by Bonferroni's multiple comparison test; p < 0.05 was considered significant.
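The formulae above translate directly into code. The following is a minimal sketch with made-up group means (not data from this study), purely to show the arithmetic; the 30 s hot plate cut-off is the one stated in the methods.

```python
def edema_inhibition(ev_control, ev_treated):
    """Percent inhibition of paw edema, where each EV = PVA - PVI."""
    return (ev_control - ev_treated) / ev_control * 100.0

def writhing_inhibition(w_control, w_test):
    """% inhibition of abdominal writhing = (Wc - Wt) / Wc * 100."""
    return (w_control - w_test) / w_control * 100.0

def percent_analgesia(test_latency, control_latency, cutoff=30.0):
    """Hot plate analgesia, using the 30 s cut-off time set in this study."""
    return (test_latency - control_latency) / (cutoff - control_latency) * 100.0

# Illustrative (made-up) group means:
ev_c = 1.48 - 0.95  # control: paw volume after minus before (ml)
ev_t = 1.05 - 0.96  # treated group
print(f"{edema_inhibition(ev_c, ev_t):.1f}% edema inhibition")    # ~83%
print(f"{writhing_inhibition(58, 12):.1f}% writhing inhibition")  # ~79%
print(f"{percent_analgesia(21.5, 8.0):.1f}% analgesia")           # ~61%
```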
Estimation of acute toxicity

AHM and AHE were found to be safe at all tested doses (up to 3000 mg/kg) and did not produce any noxious symptoms in rats, such as sedation, convulsions, diarrhea or irritation. No mortality was observed during the 48 h assessment.

AHM and AHE significantly (p < 0.001) attenuated hyperthermia in rats. The mean increase in rectal temperature recorded 24 h after the yeast injection was 2°C–2.43°C. The inhibition was dose dependent and remained significant for up to 5 h after administration. The AHE 150 mg/kg dose showed the maximum antipyretic effect and returned body temperature to normal levels (p > 0.05) more efficiently than the standard drug paracetamol (Table 1).

Table 1 Effect of AHM and its fraction on brewer's yeast-induced pyrexia

In the carrageenan-induced edema model, the crude extract of A. hydaspica (AHM) and its fraction (AHE) induced a dose- and time-dependent reduction in paw edema. The anti-inflammatory activity became significant (p < 0.05) 2 h after carrageenan injection at the 150 mg/kg dose, while maximum inhibition was observed after 4 h. The percent inhibition of inflammation by AHM (150 mg/kg) was more pronounced than that of aspirin, while the anti-inflammatory activity of diclofenac sodium did not differ significantly from that of AHM (Table 2).

Table 2 Effect of AHM and its fraction on carrageenan-induced paw edema in rats

Prostaglandin induced paw edema

AHM and AHE revealed a dose-dependent (50–150 mg/kg) and time-dependent reduction in prostaglandin E2 (PGE2)-induced paw edema in rats. Reversal of the edema started 2 h after the injection of PGE2, and peak inhibition was seen after 4 h. AHE induced anti-inflammatory activity comparable to the standard reference drug (Table 3).

Table 3 Effect of AHM and its fraction on prostaglandin E2-induced paw edema in rats

Peripheral analgesic effect

The results showed that AHM and its derived fraction AHE significantly (p < 0.001) and dose-dependently (50, 100 and 150 mg/kg) reduced the number of abdominal constrictions induced by administration of 1% acetic acid solution. The inhibitory effect of diclofenac sodium was not significantly different from the protective effect of AHM and AHE at the 100–150 mg/kg doses (Table 4).

Table 4 Effect of AHM and its fraction on acetic acid induced writhing

Central analgesic effect

Hot plate method (thermal stimulation)

In this assay, the A. hydaspica methanol extract (AHM) and its ethyl acetate fraction (AHE) exhibited a dose-dependent increase in latency time and inhibited pain sensation in a pattern similar to the standard drug diclofenac sodium, while the effect of both AHM and AHE was more pronounced than that of fluoxetine at the 150 mg/kg dose (Figure 2, Table 5).

Figure 2. Percent effect of AHM, AHE, diclofenac sodium and fluoxetine in the hot plate test. Data were analyzed by two-way ANOVA followed by Bonferroni's comparison test. Asterisks (***) indicate values statistically significant (p < 0.001) relative to the control; ### indicates a statistically significant (p < 0.001) difference of AHM and AHE (150 mg/kg dose) from fluoxetine.

Table 5 Effect of AHM and its fraction in hot plate test

To establish the fingerprint chromatogram of AHM, gallic acid, catechin, caffeic acid, rutin, apigenin, kaempferol, myricetin and quercetin were used as markers. The HPLC chromatogram revealed the presence of gallic acid, catechin, caffeic acid and rutin by comparison with the retention times and UV absorbances of purified standards. The relative amounts of the four phenolic compounds found in AHM were in the order catechin (558.9 μg/100 mg dry sample) > gallic acid (300.051 μg/100 mg dry sample) > rutin (235.38 μg/100 mg dry sample) > caffeic acid (137.43 μg/100 mg dry sample) (Figure 3).

Figure 3. HPLC chromatogram of AHM, revealing the presence of gallic acid, rutin, catechin and caffeic acid.

Phytochemistry of AHE

The ethyl acetate fraction of the A. hydaspica methanol extract was fractionated by VLC and ISCO flash chromatography to afford several enriched fractions and three pure compounds, C1, C2 and C3. The isolated compounds were identified as 7-O-galloyl catechin (C1), (+)-catechin (C2) [20,21] and methyl gallate (C3) [22] by comparison of their 1D and 2D NMR spectral data with data reported in the literature. The A. hydaspica ethyl acetate extract (AHE) yielded 187.5 mg/g of compound 1, 100 mg/g of compound 2 and 37.5 mg/g of compound 3. Figure 4 shows the structures of the flavanols isolated from AHE.

Figure 4. Chemical structures of isolated polyphenols from the A. hydaspica ethyl acetate fraction (AHE).

To the best of our knowledge this is the first report on the antipyretic, analgesic and anti-inflammatory activities of the methanol extract of A. hydaspica aerial parts and its derived ethyl acetate fraction (AHE). The present investigation showed that AHM and AHE possess noticeable antipyretic, analgesic and anti-inflammatory properties with a reasonable safety profile.
HPLC analysis of the AHM extract showed the presence of the polyphenols gallic acid, catechin, caffeic acid and rutin, which were identified for the first time in the subject plant. Furthermore, chemical investigation of AHE resulted in the isolation of 7-O-galloyl catechin, (+)-catechin and methyl gallate. Gallic acid shows anti-inflammatory activity by interfering with the functioning of polymorphonuclear leukocytes (PMNs) and with the assembly of active NADPH-oxidase. The presence of gallic acid in AHM might thus contribute to the strong anti-inflammatory activity, as the O-dihydroxy group of gallic acid is important for the inhibitory activity in vitro [23]. Caffeic acid exerts both in vitro and in vivo anti-inflammatory effects, probably through modulation of iNOS expression and other inflammatory mediators. Catechin belongs to a class of flavonoids with potent cancer chemo-preventive, neuroprotective, anti-apoptotic and anti-inflammatory properties in clinical disorders [24,25]. Rutin is an important plant secondary metabolite, reported as a hepatoprotective, antioxidant and anti-inflammatory agent [25]. AHE was also found to possess the most significant pharmacological activities, which might be due to the presence of the isolated flavanols 7-O-galloyl catechin, (+)-catechin and methyl gallate; previous data also affirm that these compounds possess anti-inflammatory, analgesic and pain-relieving potential [26].

A subcutaneous injection of brewer's yeast induces pyrexia (known as pathogenic fever) by increasing the production of prostaglandins. Antipyretic activity is commonly mentioned as a characteristic of drugs or compounds that have an inhibitory effect on prostaglandin biosynthesis, and it is considered a useful test for screening plant materials as well as synthetic drugs for antipyretic potential [1]. AHM and its fraction AHE exhibited significant antipyretic activity at the 150 mg/kg dose, comparable to the standard drug paracetamol. Inhibition of prostaglandin synthesis by blocking cyclooxygenase activity, as with paracetamol, could be the mechanism of the antipyretic action. Besides this, the antipyretic effect might be governed by the ability of the extracts to reduce pro-inflammatory mediators, improve anti-inflammatory signals at sites of injury, or increase antipyretic messages within the brain [27]. The observed effects might be due to the presence of pharmacologically active metabolites that interfere with the release of prostaglandins. However, it must be noted that several biochemical events lead ultimately to the production of prostaglandins; it may therefore be worthwhile to investigate the exact point in this biochemical cascade at which the extract exerts its antipyretic effect.

Carrageenan-induced paw edema is an acceptable preliminary screening test for anti-inflammatory activity [28]. Carrageenan induces paw edema biphasically: the initial phase, extending from 0–2.5 h, results predominantly from the release of histamine, serotonin and bradykinin, whereas the COX enzyme is known to play a key role in the development of the later phase of inflammation by converting arachidonic acid into prostaglandins. This enzyme is a well-identified target of a variety of NSAIDs, such as aspirin and diclofenac sodium, which inhibit rat paw edema in the later phase following carrageenan injection [29-31]. Therefore, three standards were used for comparison.
AHM significantly (p < 0.001) inhibited paw edema in the later phase in a pattern similar to diclofenac sodium, whose mechanism of action is inhibition of the cyclooxygenase enzyme. Although the actual mechanism of action is not known, it is possible that the anti-inflammatory activity exhibited by the A. hydaspica extracts can be attributed to inhibition of the synthesis, release or action of inflammatory mediators. We therefore tested the AHM and AHE extracts against PGE2-induced paw edema in rats. The results showed that both extracts significantly reduced PGE2-induced edema, with maximum protection observed after 4 h. PGE2 is an important mediator of second-phase inflammation. These results are in concordance with the literature reported for other Acacia species, which possess anti-asthmatic, analgesic, anti-inflammatory and antioxidant properties [32]. The inhibition of inflammation by the extracts could be attributed to the presence of active constituents.

The acetic acid induced writhing test is a well-established method for evaluating medicinal agents for analgesic potential. Pain sensation in the acetic acid induced writhing paradigm is elicited by a localized inflammatory response: free arachidonic acid released from tissue phospholipids is converted via COX into prostaglandins, specifically PGE2 and PGF2α, and the level of lipoxygenase products may also increase in the peritoneal fluid [33-35]. These prostaglandins and lipoxygenase products cause swelling and pain by increasing capillary permeability and liberating endogenous substances that stimulate pain nerve endings. NSAIDs inhibit the COX enzyme in peripheral tissues and thereby affect the transduction mechanism of primary afferent nociceptors [36]. Our results from the acetic acid-induced abdominal constriction assay demonstrated a prominent reduction in the writhing reflex. The analgesic effect observed at the 150 mg/kg dose was comparable with that of the NSAID standard drug diclofenac sodium (Table 4) [37,38]. These findings strongly suggest that the AHM and AHE extracts of A. hydaspica have peripheral analgesic activity whose mechanism may be mediated through inhibition of local peritoneal receptors via cyclooxygenase inhibition.

Thermal nociception models such as the hot plate test were used to evaluate central analgesic activity. Both AHM and AHE showed a significant (p < 0.001) analgesic effect in the hot plate test, implying that the plant extracts may act as narcotic analgesics (Table 5). Diclofenac sodium induces its analgesic effect through activation of opioid receptors [39], and the apparent similarity between the results for the extracts and for diclofenac sodium indicates that they might reduce pain sensation in a similar manner. The profound analgesic activity of the A. hydaspica extracts might be due to interference of their active principle(s) with the release of pain mediators, as flavonoids increase the amount of endogenous serotonin or may interact with 5-HT2A and 5-HT3 receptors, which may be involved in the mechanism of central analgesic activity [38]. Diclofenac sodium, fluoxetine (10 mg/kg), AHM and AHE (150 mg/kg) raised the pain threshold within 30 min of administration.
AHM and AHE showed a more pronounced analgesic effect at the 150 mg/kg dose than fluoxetine (10 mg/kg); the stronger effect of the plant extracts compared with fluoxetine is also in line with the established view that medicinal plants possess combinations of constituents with different modes of action, offering synergistic effects with few side effects. However, the difference in the concentrations needed to reach the maximum analgesic effect could be explained by differences in the metabolic rate of each drug [40]. The presence of gallic acid, methyl gallate, 7-O-galloyl catechin, catechin, rutin and caffeic acid in other species of the genus Acacia, their effects against COX and 5-lipoxygenase, and the antipyretic action of the methanol extract of A. modesta leaves [10] corroborate the antipyretic, anti-inflammatory and analgesic activities of our tested extracts and validate the plant's ethno-medicinal use as an anti-inflammatory agent.

To conclude, the methanol extract of A. hydaspica aerial parts (AHM) and its derived ethyl acetate fraction (AHE) were shown to be a natural, safe remedy for the treatment of pyrexia, pain and inflammation. Our findings provide mechanistic evidence for why the indigenous people of Pakistan and India have found this plant useful for inflammatory disorders. Interestingly, the finding that the extracts exhibit both peripheral and central analgesic effects provides a rationale for developing an alternative to opioids, which cause vomiting and nausea. The observed pharmacological activities may be attributed to the presence of the active principles, although identification of their molecular targets is still needed.

References

Muhammad N, Saeed M, Khan H. Antipyretic, analgesic and anti-inflammatory activity of Viola betonicifolia whole plant. BMC Complement Altern Med. 2012;12(1):59.
Sahreen S, Khan MR, Khan RA, Alkreathy HM. Cardioprotective role of leaves extracts of Carissa opaca against CCl4 induced toxicity in rats. BMC Res Notes. 2014;7(1):224.
Griffin M, Marie R. Epidemiology of nonsteroidal anti-inflammatory drug-associated gastrointestinal injury. Am J Med. 1998;104(3):23S–9.
Cryer B, Dubois A. The advent of highly selective inhibitors of cyclooxygenase: a review. Prostaglandins Other Lipid Mediat. 1998;56(5):341–61.
Vane J, Botting R. New insights into the mode of action of anti-inflammatory drugs. Inflamm Res. 1995;44(1):1–10.
Mirjalili M, Tabatabaei S, Hadian J, Ebrahimi SN, Sonboli A. Phenological variation of the essential oil of Artemisia scoparia Waldst. et Kit. from Iran. J Essent Oil Res. 2007;19(4):326–9.
Zargari A. Medicinal plants. Tehran: Tehran University Publications; 1997;1(6):249–65.
Chakrabarty T, Gangopadhyay M. The genus Acacia P. Miller (Leguminosae: Mimosoideae) in India. J Econ Taxon Bot. 1996;20(3):599–633.
Adedapo AA, Sofidiya MO, Masika PJ, Afolayan AJ. Anti-inflammatory and analgesic activities of the aqueous extract of Acacia karroo stem bark in experimental animals. Basic Clin Pharmacol Toxicol. 2008;103(5):397–400.
Bukhari IA, Khan RA, Gilani AH, Ahmed S, Saeed SA. Analgesic, anti-inflammatory and anti-platelet activities of the methanolic extract of Acacia modesta leaves. Inflammopharmacology. 2010;18(4):187–96.
Sokeng SD, Koubé J, Dongmo F, Sonnhaffouo S, Nkono BLNY, Taïwé GS, et al. Acute and chronic anti-inflammatory effects of the aqueous extract of Acacia nilotica (L.) Del. (Fabaceae) pods. Academia J Med Plant. 2013;1(1):01–5.
Maldini M, Sosa S, Montoro P, Giangaspero A, Balick M, Pizza C, et al.
Screening of the topical anti-inflammatory activity of the bark of Acacia cornigera Willdenow, Byrsonima crassifolia Kunth, Sweetia panamensis Yakovlev and the leaves of Sphagneticola trilobata Hitchcock. J Ethnopharmacol. 2009;122(3):430–3.
Mondal S, Raja S, Suresh P, Kumar G. Analgesic, anti-inflammatory and antipyretic properties of Acacia suma stem bark. Int J Phytomed. 2013;5(3):302–7.
Dafallah AA, al-Mustafa Z. Investigation of the anti-inflammatory activity of Acacia nilotica and Hibiscus sabdariffa. Am J Chin Med. 1996;24(3–4):263–9.
OECD. OECD guideline for testing of chemicals 425: acute oral toxicity, up-and-down procedure. 2001;2:12–6.
Kang JY, Khan MN, Park NH, Cho JY, Lee MC, Fujii H, et al. Antipyretic, analgesic, and anti-inflammatory activities of the seaweed Sargassum fulvellum and Sargassum thunbergii in mice. J Ethnopharmacol. 2008;116(1):187–90.
Safaei-Ghomi J, Bamoniri A, Sarafraz MB, Batooli H. Volatile components from Artemisia scoparia Waldst. et Kit. growing in central Iran. Flavour Fragrance J. 2005;20(6):650–2.
Khan H, Saeed M, Gilani AH, Muhammad N, Haq I, Ashraf N, et al. Antipyretic and anticonvulsant activity of Polygonatum verticillatum: comparison of rhizomes and aerial parts. Phytother Res. 2013;27(3):468–71.
Singh VP, Jain NK, Kulkarni S. On the antinociceptive effect of fluoxetine, a selective serotonin reuptake inhibitor. Brain Res. 2001;915(2):218–26.
Yahagi T, Yakura N, Matsuzaki K, Kitanaka S. Inhibitory effect of chemical constituents from Artemisia scoparia Waldst. et Kit. on triglyceride accumulation in 3T3-L1 cells and nitric oxide production in RAW 264.7 cells. J Natural Med. 2013;68(2):1–7.
El-toumy SA, Mohamed SM, Hassan EM, Mossa A-TH. Phenolic metabolites from Acacia nilotica flowers and evaluation of its free radical scavenging activity. J Am Sci. 2011;7(3):287–95.
Yeung HC. Handbook of Chinese herbs and formulas. Los Angeles, CA: Institute of Chinese Medicine; 1985.
Han D, Tian M, Row KH. Isolation of four compounds from Herba Artemisiae Scopariae by preparative column HPLC. J Liq Chromatogr Relat Technol. 2009;32(16):2407–16.
Tan RX, Zheng W, Tang H. Biologically active substances from the genus Artemisia. Planta Med. 1998;64(04):295–302.
Shah NA, Khan MR, Naz K, Khan MA. Antioxidant potential, DNA protection, and HPLC-DAD analysis of neglected medicinal Jurinea dolomiaea roots. BioMed Res Int. 2014;2014:726241.
Singh R, Akhtar N, Haqqi TM. Green tea polyphenol epigallocatechin-3-gallate: inflammation and arthritis. Life Sci. 2010;86(25):907–18.
Khan MA, Khan H, Tariq SA, Pervez S. Urease inhibitory activity of aerial parts of Artemisia scoparia: exploration in an in vitro study. Ulcers. 2014;2014:184736.
Tian M, Row K. Separation of four bioactive compounds from Herba artemisiae scopariae by HPLC with ionic liquid-based silica column. J Anal Chem. 2011;66(6):580–5.
Gilani A-uH, Janbaz KH. Protective effect of Artemisia scoparia extract against acetaminophen-induced hepatotoxicity. Gen Pharmacol Vasc Syst. 1993;24(6):1455–8.
Blokhina O, Virolainen E, Fagerstedt KV. Antioxidants, oxidative damage and oxygen deprivation stress: a review. Ann Bot. 2003;91(2):179–94.
Birben E, Sahiner UM, Sackesen C, Erzurum S, Kalayci O. Oxidative stress and antioxidant defense. World Allergy Organ J. 2012;5(1):9.
Malviya S, Rawat S, Kharia A, Verma M. Int J Pharm Life Sci (IJPLS). 2011;2(6):830–7.
Granger DN. Role of xanthine oxidase and granulocytes in ischemia-reperfusion injury.
Am J Physiol Heart Circ Physiol. 1988;255(6):H1269–75.
Duarte I, Nakamura M, Ferreira S. Participation of the sympathetic system in acetic acid-induced writhing in mice. Braz J Med Biol Res. 1987;21(2):341–3.
Fenton H. LXXIII. Oxidation of tartaric acid in presence of iron. J Chem Soc Trans. 1894;65:899–910.
Scandalios JG. Genomic responses to oxidative stress. Molecular Medicine: Encyclopedia of Molecular Cell Biology. 2004;5(2):489–512.
Zulfiker A, Rahman MM, Hossain MK, Hamid K, Mazumder M, Rana MS. In vivo analgesic activity of ethanolic extracts of two medicinal plants, Scoparia dulcis L. and Ficus racemosa Linn. Biol Med. 2010;2(2):42–8.
Kaushik D, Kumar A, Kaushik P, Rana A. Analgesic and anti-inflammatory activity of Pinus roxburghii Sarg. Adv Pharmacol Sci. 2012;2012: Article ID 245431.
Brogden R, Heel R, Pakes G, Speight TM, Avery G. Diclofenac sodium: a review of its pharmacological properties and therapeutic use in rheumatic diseases and pain of varying origin. Drugs. 1980;20(1):24–48.
Gilani AH. Trends in ethnopharmacology. J Ethnopharmacol. 2005;100(1):43–9.

Acknowledgements

We acknowledge Dr. Christine Salomon, Assistant Professor and Assistant Director, Center for Drug Design, University of Minnesota, Minneapolis, MN 55455, for her help in the purification of compounds and in NMR data interpretation for structure elucidation, and Dr. Ihsan Ul Haq, Department of Pharmacy, Faculty of Biological Sciences, Quaid-i-Azam University, Islamabad 45320, Pakistan, for his help in the HPLC-DAD analysis. We are grateful to the Higher Education Commission (HEC) of Pakistan for funding this research work.

Department of Biochemistry, Faculty of Biological Sciences, Quaid-i-Azam University, Islamabad, Pakistan: Tayyaba Afsar, Muhammad Rashid Khan, Shafi Ullah & Bushra Mirza
Center for Research in Experimental and Applied Medicine (CREAM), Army Medical College, NUST, Pakistan: Suhail Razak

Correspondence to Tayyaba Afsar.

TA made significant contributions to the conception, design, experimentation, acquisition and interpretation of data, and drafting of the manuscript. MRK made substantial contributions to analyzing and revising the manuscript for intellectual content. SR and SU contributed to revising and editing the manuscript. BM contributed to the HPLC experimentation and analysis. All authors read and approved the final manuscript.

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Afsar, T., Khan, M.R., Razak, S. et al. Antipyretic, anti-inflammatory and analgesic activity of Acacia hydaspica R. Parker and its phytochemical analysis. BMC Complement Altern Med 15, 136 (2015). doi:10.1186/s12906-015-0658-8

Received: 26 November 2014. Accepted: 20 April 2015.

Keywords: A. hydaspica
Recurring obscuration in NGC 3783 (1805.03538)
J.S. Kaastra, M. Mehdipour, E. Behar, S. Bianchi, G. Branduardi-Raymont, L. Brenneman, M. Cappi, E. Costantini, B. De Marco, L. di Gesu, J. Ebrero, G.A. Kriss, J. Mao, U. Peretz, P.-O. Petrucci, G. Ponti, D. Walton
May 9, 2018 astro-ph.HE

Obscuration of the continuum emission from active galactic nuclei by streams of gas with relatively high velocity (> 1000 km/s) and column density (> $3\times10^{25}$ m$^{-2}$) has been seen in a few Seyfert galaxies. This obscuration has a transient nature. In December 2016 we witnessed such an event in NGC 3783. The frequency and duration of these obscuration events are poorly known. Here we study archival data of NGC 3783 in order to constrain this duty cycle. We use archival Chandra/NuSTAR spectra taken in August 2016, and we also study the hardness ratio of all Swift XRT spectra taken between 2008 and 2017. In August 2016, NGC 3783 also showed evidence for obscuration. While the column density of the obscuring material was ten times lower than in December 2016, the opacity was still sufficient to block a significant fraction of the ionising X-ray and EUV photons. From the Swift hardness ratio behaviour we find several other epochs with obscuration; obscuration with columns > $10^{26}$ m$^{-2}$ may take place about half of the time. Also in archival X-ray data taken by ASCA in 1993 and 1996 we find evidence for obscuration. Obscuration of the ionising photons in NGC 3783 thus occurs more frequently than previously thought. This may not always have been recognised, due to low spectral resolution observations, too limited spectral bandwidth, or confusion with underlying continuum variations.

Multi-wavelength campaign on NGC 7469: III. Spectral energy distribution and the AGN wind photoionisation modelling, plus detection of diffuse X-rays from the starburst with Chandra HETGS (1803.08525)
M. Mehdipour, J.S. Kaastra, E. Costantini, E. Behar, G.A. Kriss, S. Bianchi, G. Branduardi-Raymont, M. Cappi, J. Ebrero, L. Di Gesu, S. Kaspi, J. Mao, B. De Marco, R. Middei, U. Peretz, P.-O. Petrucci, G. Ponti, F. Ursini
March 22, 2018 astro-ph.HE

We investigate the physical structure of the AGN wind in the Seyfert-1 galaxy NGC 7469 through high-resolution X-ray spectroscopy with Chandra HETGS and photoionisation modelling. Contemporaneous data from Chandra, HST, and Swift are used to model the optical-UV-X-ray continuum and determine the spectral energy distribution (SED) at two epochs, 13 years apart. For our investigation we use new observations taken in December 2015–January 2016, and historical ones taken in December 2002. We study the impact of a change in the SED shape, seen between the two epochs, on the photoionisation of the wind. The HETGS spectroscopy shows that the AGN wind in NGC 7469 consists of four ionisation components, with their outflow velocities ranging from -400 to -1800 km/s. From our modelling we find that the change in the ionising continuum shape between the two epochs results in some variation in the ionisation state of the wind components. However, for the main ions detected in X-rays, the sum of their column densities over all four components remains in practice unchanged. For two of the four components, which are found to be thermally unstable in both epochs, we obtain 2 < r < 31 pc and 12 < r < 29 pc using the cooling and recombination timescales. For the other two thermally stable components, we obtain r < 31 pc and r < 80 pc from the recombination timescale.
The results of our photoionisation modelling and thermal stability analysis suggest that the absorber components in NGC 7469 are consistent with being a thermally-driven wind from the AGN torus. Finally, from analysis of the zeroth-order ACIS/HETG data, we discover that the X-ray emission between 0.2–1 keV is spatially extended over 1.5–12". This diffuse soft X-ray emission is explained by coronal emission from the nuclear starburst ring in NGC 7469.

Multi-wavelength campaign on NGC 7469 IV. The broad-band X-ray spectrum (1803.07334)
R. Middei, S. Bianchi, M. Cappi, P-O. Petrucci, F. Ursini, N. Arav, E. Behar, G. Branduardi-Raymont, E. Costantini, B. De Marco, L. Di Gesu, J. Ebrero, J. Kaastra, S. Kaspi, G. A. Kriss, J. Mao, M. Mehdipour, S. Paltani, U. Peretz, G. Ponti
March 20, 2018 astro-ph.GA, astro-ph.HE

We conducted a multi-wavelength six-month campaign to observe the Seyfert galaxy NGC 7469, using the space-based observatories HST, Swift, XMM-Newton and NuSTAR. Here we report the results of the spectral analysis of the 7 simultaneous XMM-Newton and NuSTAR observations. The source shows significant flux variability within each observation, but the average flux is less variable among the different pointings of our campaign. Our spectral analysis reveals a prominent narrow neutral Fe K$\alpha$ emission line in all the spectra, with weaker contributions from Fe K$\beta$, neutral Ni K$\alpha$ and ionised iron. We find no evidence for variability or relativistic effects acting on the emission lines, which indicates that they originate from distant material. Analysing the XMM-Newton and NuSTAR data jointly, a constant photon index is found ($\Gamma = 1.78 \pm 0.02$), together with a high-energy cut-off $E_{\rm cut} = 170^{+60}_{-40}$ keV. Adopting a self-consistent Comptonization model, these values correspond to an average coronal electron temperature of $kT = 45^{+15}_{-12}$ keV and, assuming a spherical geometry, an optical depth $\tau = 2.6 \pm 0.9$. The reflection component is consistent with being constant, with a reflection fraction in the range $R = 0.3$–$0.6$. A prominent soft excess dominates the spectra below 4 keV. This is best fit with a second Comptonization component, arising from a "warm corona" with an average $kT = 0.67 \pm 0.03$ keV and a corresponding optical depth $\tau = 9.2 \pm 0.2$.

Anatomy of the AGN in NGC 5548 IX. Photoionized emission features in the soft X-ray spectra (1712.07852)
Junjie Mao, J. S. Kaastra, M. Mehdipour, Liyi Gu, E. Costantini, G. A. Kriss, S. Bianchi, G. Branduardi-Raymont, E. Behar, L. Di Gesu, G. Ponti, P.-O. Petrucci, J. Ebrero
Dec. 21, 2017 astro-ph.HE

The X-ray narrow emission line region (NELR) of the archetypal Seyfert 1 galaxy NGC 5548 has been interpreted as a single-phase photoionized plasma that is absorbed by some of the warm absorber components. This scenario requires those overlying warm absorber components to lie at a larger distance (from the central engine) than the X-ray NELR, which is not fully consistent with the distance estimates found in the literature. Therefore, we reanalyze the high-resolution spectra obtained in 2013–2014 with the Reflection Grating Spectrometer (RGS) aboard XMM-Newton to provide an alternative interpretation of the X-ray narrow emission features.
We find that the X-ray narrow emission features in NGC 5548 can be described by a two-phase photoionized plasma with different ionization parameters ($\log \xi = 1.3$ and $0.1$) and kinematics ($v_{\rm out} = -50$ and $-400~{\rm km~s^{-1}}$), and no further absorption by the warm absorber components. The X-ray and optical NELR might be the same multi-phase photoionized plasma. Both the X-ray and optical NELR have comparable distances and asymmetric line profiles, and the underlying photoionized plasma is turbulent and compact in size. The X-ray NELR is not the counterpart of the UV/X-ray absorber outside the line of sight, because their distances and kinematics are not consistent. In addition, the X-ray broad emission features that we find in the spectrum can be accounted for by a third photoionized emission component. The RGS spectrum obtained in 2016 is analyzed as well; the luminosities of the most prominent emission lines (the O VII forbidden line and the O VIII Ly$\alpha$ line) are the same (at a 1$\sigma$ confidence level) as in 2013–2014.

Chandra imaging of the ~kpc extended outflow in 1H 0419-577 (1710.04714)
L. Di Gesu, E. Costantini, E. Piconcelli, J.S. Kaastra, M. Mehdipour, S. Paltani
Oct. 10, 2017 astro-ph.HE

The Seyfert 1 galaxy 1H 0419-577 hosts a ~kpc extended outflow that is evident in the [O III] image and that is also detected as a warm absorber in the UV/X-ray spectrum. Here, we analyze a ~30 ks Chandra-ACIS X-ray image, with the aim of resolving the diffuse extranuclear X-ray emission and investigating its relationship with the galactic outflow. Thanks to its sub-arcsecond spatial resolution, Chandra resolves the circumnuclear X-ray emission, which extends up to a projected distance of at least ~16 kpc from the center. The morphology of the diffuse X-ray emission is spherically symmetric. We could not recover a morphological resemblance between the soft X-ray emission and the ionization bicone that is traced by the [O III] outflow. We argue that the photoionized gas nebula must be distributed mostly along the polar directions, outside our line of sight. In this geometry, the X-ray/UV warm absorber must trace a different gas component, physically disconnected from the emitting gas and located closer to the equatorial plane.

Multi-wavelength campaign on NGC 7469 II. Column densities and variability in the X-ray spectrum (1709.09633)
U. Peretz, E. Behar, G. A. Kriss, J. Kaastra, N. Arav, S. Bianchi, G. Branduardi-Raymont, M. Cappi, E. Costantini, B. De Marco, L. Di Gesu, J. Ebrero, S. Kaspi, M. Mehdipour, R. Middei, S. Paltani, P.O. Petrucci, G. Ponti, F. Ursini
Sept. 27, 2017 astro-ph.HE

We investigate the ionic column density variability of the ionized outflows associated with NGC 7469, to estimate their location and power. This could allow a better understanding of the galactic feedback of AGN to their host galaxies. An analysis of seven XMM-Newton grating observations from 2015 is reported. We use an individual-ion spectral fitting approach, and compare different epochs to accurately determine variability on time-scales of years, months, and days. We find no significant column density variability over a 10-year period, implying that the outflow is far from the ionizing source.
The ionization distribution of column density is reconstructed from measured column densities, nicely matching results of two 2004 observations, with one large high ionization parameter ($\xi$) component at $2<\log \xi<3.5$, and one at $0.5<\log \xi<1$ in cgs units. The strong dependence of the expression for kinetic power, $\propto1/\xi$, hampers tight constraints on the feedback mechanism of outflows with a large range in ionization parameter, which is often observed and indicates a non-conical outflow. The kinetic power of the outflow is estimated here to be within 0.4 and 60 \% of the Eddington luminosity, depending on the ion used to estimate $\xi$. Density diagnostics of ionized outflows in active galactic nuclei: X-ray and UV absorption lines from metastable levels in Be-like to C-like ions (1707.09552) Junjie Mao, J. S. Kaastra, M. Mehdipour, A. J. J. Raassen, Liyi Gu, J. M. Miller Aug. 15, 2017 astro-ph.HE Ionized outflows in Active Galactic Nuclei (AGN) are thought to influence their nuclear and local galactic environment. However, the distance of the outflows with respect to the central engine is poorly constrained, which limits our understanding of their kinetic power as a cosmic feedback channel. Therefore, the impact of AGN outflows on their host galaxies is uncertain. However, when the density of the outflows is known, their distance can be immediately obtained from their modelled ionization parameter. With the new self-consistent PhotoIONization (PION) model in the SPEX code, we are able to calculate detailed level populations, including the ground and metastable levels. This enables us to determine under what physical conditions the metastable levels are significantly populated. We then identify characteristic lines from these metastable levels in the 1 -- 2000 {\AA} wavelength range. In the large density range of $n_H \in (10^6, 10^{20} m^{-3}$, the metastable levels 2s 2p $(^3P_{0-2})$ in Be-like ions can be significantly populated. For B-like ions, merely the first excited level 2s$^2$ 2p $(^2P_{3/2})$ can be used as a density probe. For C-like ions, the first two excited levels 2s$^2$ 2p$^2$ ($^3P_1$ and $^3P_2$) are better density probes than the next two excited levels 2s$^2$ 2p$^2$ ($^1S_0$ and $^1D_2$). Different ions in the same isoelectronic sequence cover not only a wide range of ionization parameter, but also a wide range of density. On the other hand, within the same isonuclear sequence, less ionized ions probe lower density and smaller ionization parameter. Finally, we re-analyzed the high-resolution grating spectra of NGC 5548 observed with Chandra in January 2002, using a set of PION components to account for the ionized outflow. We derive lower (or upper) limits of plasma density in five out of six PION components, based on the presence (or absence) of the metastable absorption lines. Chasing obscuration in type-I AGN: discovery of an eclipsing clumpy wind at the outer broad-line region of NGC 3783 (1707.04671) M. Mehdipour, J.S. Kaastra, G.A. Kriss, N. Arav, E. Behar, S. Bianchi, G. Branduardi-Raymont, M. Cappi, E. Costantini, J. Ebrero, L. Di Gesu, S. Kaspi, J. Mao, B. De Marco, G. Matt, S. Paltani, U. Peretz, B.M. Peterson, P.-O. Petrucci, C. Pinto, G. Ponti, F. Ursini, C.P. de Vries, D.J. Walton July 15, 2017 astro-ph.GA, astro-ph.HE In 2016 we carried out a Swift monitoring program to track the X-ray hardness variability of eight type-I AGN over a year. 
The purpose of this monitoring was to find intense obscuration events in AGN, and thereby to study them by triggering joint XMM-Newton, NuSTAR, and HST observations. We successfully accomplished this for NGC 3783 in December 2016. We found heavy X-ray absorption produced by an obscuring outflow in this AGN. As a result of this obscuration, interesting absorption features appear in the UV and X-ray spectra that are not present in previous epochs. Namely, the obscuration produces broad and blue-shifted UV absorption lines of Ly$\alpha$, C IV, and N V, together with a new high-ionisation component producing Fe XXV and Fe XXVI absorption lines. In soft X-rays, only narrow emission lines stand out above the diminished continuum, as they are not absorbed by the obscurer. Our analysis shows that the obscurer partially covers the central source with a column density of a few $10^{23}$ cm$^{-2}$, outflowing with a velocity of a few thousand km s$^{-1}$. The obscuration in NGC 3783 is variable and lasts for about a month. Unlike the commonly seen warm-absorber winds at pc-scale distances from the black hole, the eclipsing wind in NGC 3783 is located at about 10 light days. Our results suggest the obscuration is produced by an inhomogeneous and clumpy medium, consistent with clouds in the base of a radiatively driven disk wind at the outer broad-line region of the AGN.

Science with hot astrophysical plasmas (1707.01251)
J.S. Kaastra, L. Gu, J. Mao, M. Mehdipour, F. Mernier, J. de Plaa, A.J.J. Raassen, I. Urdampilleta
July 5, 2017 astro-ph.HE
We present some recent highlights and prospects for the study of hot astrophysical plasmas. Hot plasmas can be studied primarily through their X-ray emission and absorption. Most astrophysical objects, from solar system bodies to the largest scale structures of the Universe, contain hot gas. In general we can distinguish collisionally ionised gas and photoionised gas. We introduce several examples of both classes and show where the frontiers of this research in astrophysics can be found. We also put this in the context of the current and future generation of X-ray spectroscopy satellites. The data coming from these missions challenge the models that we have for the calculation of X-ray spectra.

X-ray emission from thin plasmas. Collisional ionization for atoms and ions of H to Zn (1702.06007)
I. Urdampilleta, J. S. Kaastra, M. Mehdipour
March 3, 2017 astro-ph.GA, astro-ph.HE
Every observation of astrophysical objects involving a spectrum requires atomic data for the interpretation of line fluxes, line ratios, and the ionization state of the emitting plasma. One of the processes that determine the ionization state is collisional ionization. In this study an update of the direct ionization (DI) and excitation-autoionization (EA) processes is discussed for the H to Zn-like isoelectronic sequences. In recent years new laboratory measurements and theoretical calculations of ionization cross sections have become available. We provide an extension and update of previously published reviews in the literature. We include the most recent experimental measurements and fit the cross sections of all individual shells of all ions from H to Zn. These data are described using an extension of Younger's and Mewe's formula, suitable for integration over a Maxwellian velocity distribution to derive the subshell ionization rate coefficients. These ionization rate coefficients are incorporated in the high-resolution plasma code and spectral fitting tool SPEX V3.0.
Systematic comparison of photoionised plasma codes with application to spectroscopic studies of AGN in X-rays (1610.03080)
M. Mehdipour, J.S. Kaastra, T. Kallman
Atomic data and plasma models play a crucial role in the diagnosis and interpretation of astrophysical spectra, thus influencing our understanding of the universe. In this investigation we present a systematic comparison of the leading photoionisation codes to determine how much their intrinsic differences impact X-ray spectroscopic studies of hot plasmas in photoionisation equilibrium. We carry out our computations using the Cloudy, SPEX, and XSTAR photoionisation codes, and compare their derived thermal and ionisation states for various ionising spectral energy distributions. We examine the resulting absorption-line spectra from these codes for the case of ionised outflows in active galactic nuclei. By comparing the ionic abundances as a function of ionisation parameter $\xi$, we find that on average there is about 30% deviation between the codes in the $\xi$ where ionic abundances peak. For H-like to B-like sequence ions alone, this deviation in $\xi$ is smaller, about 10% on average. The comparison of the absorption-line spectra in the X-ray band shows that there is on average about 30% deviation between the codes in the optical depth of lines produced at $\log \xi \sim 1$ to 2, reducing to about 20% deviation at $\log \xi \sim 3$. We also simulate spectra of the ionised outflows with the current and upcoming high-resolution X-ray spectrometers on board XMM-Newton, Chandra, Hitomi, and Athena. From these simulations we obtain the deviation in the best-fit model parameters arising from the use of different photoionisation codes, which is about 10 to 40%. We compare these modelling uncertainties with the observational uncertainties from the simulations. The results highlight the importance of continuous development and enhancement of photoionisation codes for the upcoming era of X-ray astronomy with Athena.

Multiwavelength campaign on Mrk 509 XV. A global modeling of the broad emission lines in the Optical, UV and X-ray bands (1606.06579)
E. Costantini, S. Bianchi, M. Mehdipour, K.C. Steenbrugge (Leiden Univ., INAF-IASF Bologna, CNRS-IPAG)
June 21, 2016 astro-ph.CO, astro-ph.GA, astro-ph.HE
We model the broad emission lines present in the optical, UV, and X-ray spectra of Mrk 509, a bright type 1 Seyfert galaxy. The broad lines were simultaneously observed during a large multiwavelength campaign, using the XMM-Newton OM for the optical lines, HST-COS for the UV lines, and the XMM-Newton RGS and EPIC for the X-ray lines, respectively. We also used FUSE archival data for the broad lines observed in the far-ultraviolet. The goal is to find a physical connection among the lines measured at different wavelengths and to determine the size and the distance from the central source of the emitting gas components. We used the "locally optimally emitting cloud" (LOC) model, which interprets the emissivity of the broad line region (BLR) as regulated by power-law distributions of both gas density and distance from the central source. We find that one LOC component cannot model all the lines simultaneously. In particular, we find that the X-ray and UV lines likely originate in the more internal part of the AGN, at radii in the range $\sim5\times10^{14}$–$3\times10^{17}$ cm, while the optical lines and part of the UV lines likely originate further out, at radii $\sim3\times10^{17}$–$3\times10^{18}$ cm.
These two gas components are parametrized by a radial distribution of the luminosities with slopes $\gamma$ of ~1.15 and ~1.10, respectively, each covering at least 60% of the source. This simple parameterization points to a structured broad line region, with the more highly ionized emission coming from closer in, while the emission of the low-ionization lines is concentrated more in the outskirts of the broad line region.

Anatomy of the AGN in NGC 5548 VIII. XMM-Newton's EPIC detailed view of an unexpected variable multilayer absorber (1604.01777)
M. Cappi, B. De Marco, G. Ponti, F. Ursini, P.-O. Petrucci, S. Bianchi, J.S. Kaastra, G.A. Kriss, M. Mehdipour, M. Whewell, N. Arav, E. Behar, R. Boissay, G. Branduardi-Raymont, E. Costantini, J. Ebrero, L. Di Gesu, F.A. Harrison, S. Kaspi, G. Matt, S. Paltani, B.M. Peterson, K.C. Steenbrugge, D.J. Walton
April 6, 2016 astro-ph.GA, astro-ph.HE
In 2013 we conducted a large multi-wavelength campaign on the archetypical Seyfert 1 galaxy NGC 5548. Unexpectedly, this usually unobscured source appeared strongly absorbed in the soft X-rays during the entire campaign, and signatures of new and strong outflows were present in the almost simultaneous UV HST/COS data. Here we carry out a comprehensive spectral analysis of all available XMM-Newton observations of NGC 5548 (for a total of ~763 ks) in combination with three simultaneous NuSTAR observations. We obtain a best fit composed of (i) a weakly varying flat (Gamma~1.5-1.7) power-law component; (ii) a constant, cold reflection (Fe K + continuum) component; (iii) a soft excess, possibly due to thermal Comptonization; and (iv) a constant, ionized, scattered emission-line dominated component. Our main finding is that, during the 2013 campaign, the first three of these components appear to be partially covered by a heavy and variable obscurer located along the line of sight (l.o.s.) that is consistent with a multilayer of cold and mildly ionized gas. We characterize in detail the short-timescale (~ks-to-days) spectral variability of this new obscurer, and find it is mostly due to a combination of column density (N_H) and covering fraction (C_f) variations, on top of intrinsic power-law variations. In addition, our best-fit spectrum is left with several (but marginal) absorption features at rest-frame energies ~6.7-6.9 keV and ~8 keV, as well as a weak broad emission line feature redwards of the 6.4 keV emission line. These could indicate a more complex underlying model, e.g. a P-Cygni-type profile if we allow for a large-velocity, wide-angle outflow. These findings are consistent with a picture where the obscurer represents the manifestation along the l.o.s. of a multilayer of gas that is likely outflowing at high speed, simultaneously producing heavy obscuration and scattering in the X-rays and broad absorption features in the UV.

Anatomy of the AGN in NGC 5548: VII. Swift study of obscuration and broadband continuum variability (1602.03017)
M. Mehdipour, J.S. Kaastra, G.A. Kriss, M. Cappi, P.-O. Petrucci, B. De Marco, G. Ponti, K.C. Steenbrugge, E. Behar, S. Bianchi, G. Branduardi-Raymont, E. Costantini, J. Ebrero, L. Di Gesu, G. Matt, S. Paltani, B.M. Peterson, F. Ursini, M. Whewell
Feb. 9, 2016 astro-ph.GA, astro-ph.HE
We present our investigation into the long-term variability of the X-ray obscuration and the optical-UV-X-ray continuum in the Seyfert 1 galaxy NGC 5548. In 2013 and 2014, the Swift observatory monitored NGC 5548 on average every day or two, with archival observations reaching back to 2005, totalling about 670 ks of observing time.
Both broadband spectral modelling and temporal rms variability analysis are applied to the Swift data. We disentangle the variability caused by absorption, due to an obscuring weakly-ionised outflow near the disk, from variability of the intrinsic continuum components (the soft X-ray excess and the power-law) originating from the disk and its associated coronae. The spectral model that we apply to this extensive Swift data set is the global model that we derived for NGC 5548 from analysis of the stacked spectra from our multi-satellite campaign of summer 2013 (including XMM-Newton, NuSTAR, and HST). The results of our Swift study show that changes in the covering fraction of the obscurer are the primary and dominant cause of variability in the soft X-ray band on timescales of 10 days to ~5 months. The covering fraction of the X-ray source by the obscurer is found to range between 0.7 and nearly 1.0. The contribution of the soft excess component to the X-ray variability is often much less than that of the obscurer, but it becomes comparable when the optical-UV continuum flares up. We find that the soft excess is consistent with being the high-energy tail of the optical-UV continuum and can be explained by warm Comptonisation: up-scattering of the disk seed photons in a warm, optically thick corona forming part of the inner disk. To date, the Swift monitoring of NGC 5548 shows that the obscurer has been continuously present in our line of sight for at least 4 years (since at least February 2012).

Anatomy of the AGN in NGC 5548. VI. Long-term variability of the warm absorber (1601.02385)
J. Ebrero, J. S. Kaastra, G. A. Kriss, L. Di Gesu, E. Costantini, M. Mehdipour, S. Bianchi, M. Cappi, R. Boissay, G. Branduardi-Raymont, P.-O. Petrucci, G. Ponti, F. Pozo-Nunez, H. Seta, K. C. Steenbrugge, M. Whewell
Jan. 11, 2016 astro-ph.HE
(Abridged) The archetypal Seyfert 1 galaxy NGC 5548 was observed in 2013-2014 in the context of an extensive multiwavelength campaign, which revealed the source to be in an extraordinary state of persistent heavy obscuration. We re-analyzed the archival grating spectra obtained by XMM-Newton and Chandra between 1999 and 2007 in order to characterize the classic warm absorber (WA) using consistent models and up-to-date photoionization codes and atomic physics databases, and to construct a baseline model that can be used as a template for the WA in the 2013 observations. The WA in NGC 5548 is composed of six distinct ionization phases outflowing in four kinematic regimes, in the form of a stratified wind with several layers intersected by our line of sight. If the changes in the WA are solely due to ionization or recombination processes in response to variations in the ionizing flux among the different observations, we are able to estimate lower limits on the density of the WA, finding that the farthest components are less dense and have a lower ionization. These limits are used to put stringent upper limits on the distances of the WA components from the central ionizing source: the lowest ionization phases lie at <50, <20, and <5 pc, respectively, while the intermediately ionized components lie at <3.6 and <2.2 pc from the center, respectively. The highest ionization component is located at ~0.6 pc or closer to the AGN central engine. The mass outflow rate summed over all WA components is ~0.3 Msun/yr, about six times the nominal accretion rate of the source. The total kinetic luminosity injected into the ISM is a small fraction (~0.03%) of the bolometric luminosity of the source.
After adding the contribution of the UV absorbers, this value increases to ~0.2% of the bolometric luminosity, well below the minimum amount of energy required by current feedback models to regulate galaxy evolution.

Anatomy of the AGN in NGC 5548: V. A clear view of the X-ray narrow emission lines (1509.00274)
M. Whewell, G. Branduardi-Raymont, J.S. Kaastra, M. Mehdipour, K.C. Steenbrugge, S. Bianchi, E. Behar, J. Ebrero, M. Cappi, E. Costantini, B. De Marco, L. Di Gesu, G.A. Kriss, S. Paltani, B.M. Peterson, P.-O. Petrucci, C. Pinto, G. Ponti
Sept. 1, 2015 astro-ph.HE
Context. Our consortium performed an extensive multi-wavelength campaign of the nearby Seyfert 1 galaxy NGC 5548 in 2013-14. The source appeared unusually heavily absorbed in the soft X-rays, and signatures of outflowing absorption were also present in the UV. He-like triplets of neon, oxygen, and nitrogen, and radiative recombination continuum (RRC) features were found to dominate the soft X-ray spectrum owing to the low continuum flux. Aims. Here we focus on characterising these narrow emission features using data obtained from the XMM-Newton RGS (770 ks stacked spectrum). Methods. We use SPEX for our initial analysis of these features. Self-consistent photoionisation models from Cloudy are then compared with the data to characterise the physical conditions of the emitting region. Results. Outflow velocity discrepancies within the O VII triplet lines can be explained if the X-ray narrow-line region (NLR) in NGC 5548 is absorbed by at least one of the six warm absorber components found by previous analyses. The RRCs allow us to directly calculate a temperature of the emitting gas of a few eV ($\sim10^{4}$ K), favouring photoionised conditions. We fit the data with a Cloudy model of $\log \xi = 1.45 \pm 0.05$ erg cm s$^{-1}$, $\log N_H = 22.9 \pm 0.4$ cm$^{-2}$, and $\log v_{\rm turb} = 2.25 \pm 0.5$ km s$^{-1}$ for the emitting gas; this is the first time the X-ray NLR gas in this source has been modelled so comprehensively. This allows us to estimate the distance from the central source to the illuminated face of the emitting clouds as $13.9 \pm 0.6$ pc, consistent with previous work.

Line absorption of He-like triplet lines by Li-like ions. Caveats of using line ratios of triplets for plasma diagnostics (1505.06034)
M. Mehdipour, J. S. Kaastra, A. J. J. Raassen
June 3, 2015 astro-ph.HE
He-like ions produce distinctive series of triplet lines under various astrophysical conditions. However, this emission can be affected by line absorption from Li-like ions in the same medium. We investigate this absorption of He-like triplets and present the implications for diagnostics of plasmas in photoionisation equilibrium using the line ratios of the triplets. Our computations were carried out for the O VI and Fe XXIV absorption of the O VII and Fe XXV triplet emission lines, respectively. The fluorescent emission by the Li-like ions and continuum absorption of the He-like ion triplet lines are also investigated. We determine the absorption of the triplet lines as a function of Li-like ion column density and the velocity dispersion of the emitting and absorbing medium. We find that O VI line absorption can significantly alter the O VII triplet line ratios in optically thin plasmas, by primarily absorbing the intercombination lines and, to a lesser extent, the forbidden line.
Because of intrinsic line absorption by O VI inside a photoionised plasma, the predicted ratio of forbidden to intercombination line intensity for the O VII triplet increases from 4 up to an upper limit of 16. This process can explain triplet line ratios that are higher than expected, as seen in some X-ray observations of photoionised plasmas. For the Fe XXV triplet, line absorption by Fe XXIV is less apparent owing to significant fluorescent emission by Fe XXIV. Without taking the associated Li-like ion line absorption into account, density diagnostics of photoionised plasmas using the observed line ratios of He-like ion triplet emission lines can be unreliable, especially for low-Z ions.

Anatomy of the AGN in NGC 5548 IV. The short-term variability of the outflows (1505.02562)
L. Di Gesu, E. Costantini, J. Ebrero, M. Mehdipour, J.S. Kaastra, F. Ursini, P.O. Petrucci, M. Cappi, G.A. Kriss, S. Bianchi, G. Branduardi-Raymont, B. De Marco, A. De Rosa, S. Kaspi, S. Paltani, C. Pinto, G. Ponti, K.C. Steenbrugge, M. Whewell
May 11, 2015 astro-ph.HE
During an extensive multiwavelength campaign that we performed in 2013-14, the prototypical Seyfert 1 galaxy NGC 5548 was found in an unusual condition of heavy and persistent obscuration. The newly discovered "obscurer" absorbs most of the soft X-ray continuum along our line of sight and lowers the ionizing luminosity received by the classical warm absorber. Here we present the analysis of the high-resolution X-ray spectra collected with XMM-Newton and Chandra throughout the campaign, which are suitable for investigating the variability of both the obscurer and the classical warm absorber. The time separation between these X-ray observations ranges from 2 days to 8 months. On these timescales the obscurer is variable both in column density and in covering fraction. This is consistent with the picture of a patchy wind. The most significant variation occurred in September 2013, when the source brightened for two weeks. A higher and steeper intrinsic continuum and a lower obscurer covering fraction are both required to explain the spectral shape during the flare. We suggest that a geometrical change of the soft X-ray source behind the obscurer causes the observed drop in the covering fraction. Due to the higher soft X-ray continuum level, the September 2013 Chandra spectrum is the only X-ray spectrum of the campaign in which individual features of the warm absorber could be detected. The spectrum shows absorption from the Fe UTA, O IV, and O V, consistent with the lower-ionization counterpart of the historical NGC 5548 warm absorber. Hence, we confirm that the warm absorber has responded to the drop in the ionizing luminosity caused by the obscurer.

The Swift-UVOT ultraviolet and visible grism calibration (1501.02433)
N. P. M. Kuin, W. Landsman, A. A. Breeveld, M. J. Page, C. James, H. Lamoureux, M. Mehdipour, M. Still, V. Yershov, P. J. Brown, M. Carter, K. O. Mason, T. Kennedy, F. Marshall, P. W. A. Roming, M. Siegel, S. Oates, P. J. Smith, M. De Pasquale
Feb. 23, 2015 astro-ph.IM
We present the calibration of the two Swift UVOT grisms, which provide low-resolution field spectroscopy in the ultraviolet and optical bands, respectively.
The UV grism covers the range 1700-5000 Angstrom with a spectral resolution of 75 at 2600 Angstrom for source magnitudes of u=10-16 mag, while the visible grism covers the range 2850-6600 Angstrom with a spectral resolution of 100 at 4000 Angstrom for source magnitudes of b=12-17 mag. This calibration extends over all detector positions, for all modes used during operations. The wavelength accuracy (1-sigma) is 9 Angstrom in the UV grism clocked mode, 17 Angstrom in the UV grism nominal mode, and 22 Angstrom in the visible grism. The range below 2740 Angstrom in the UV grism and below 5200 Angstrom in the visible grism never suffers from overlap by higher spectral orders. The flux calibration of the grisms includes a correction we developed for coincidence loss in the detector. The error in the coincidence loss correction is less than 20%. The position of the spectrum on the detector only affects the effective area (sensitivity) by a few percent in the nominal modes, but varies substantially in the clocked modes. The error in the effective area ranges from 9% in the UV grism clocked mode to 15% in the visible grism clocked mode.

Anatomy of the AGN in NGC 5548: III. The high-energy view with NuSTAR and INTEGRAL (1501.03426)
F. Ursini, R. Boissay, P.-O. Petrucci, G. Matt, M. Cappi, S. Bianchi, J. Kaastra, F. Harrison, D. J. Walton, L. di Gesu, E. Costantini, B. De Marco, G. A. Kriss, M. Mehdipour, S. Paltani, B. M. Peterson, G. Ponti, K. C. Steenbrugge
We describe the analysis of the seven broad-band X-ray continuum observations of the archetypal Seyfert 1 galaxy NGC 5548 that were obtained with XMM-Newton or Chandra, simultaneously with high-energy (>10 keV) observations with NuSTAR and INTEGRAL. These data were obtained as part of a multiwavelength campaign undertaken from the summer of 2013 until early 2014. We find evidence of a high-energy cut-off in at least one observation, which we attribute to thermal Comptonization, and a constant reflected component that is likely due to neutral material at least a few light months away from the continuum source. We confirm the presence of strong, partial-covering X-ray absorption as the explanation for the sharp decrease in flux through the soft X-ray band. The obscurers appear to be variable in column density and covering fraction on time scales as short as weeks. A fit of the average spectrum over the range 0.3-400 keV with a realistic Comptonization model indicates the presence of a hot corona with a temperature of 40(+40,-10) keV and an optical depth of 2.7(+0.7,-1.2) if a spherical geometry is assumed.

Accretion, ejection and reprocessing in supermassive black holes (1501.02768)
A. De Rosa, S. Bianchi, M. Giroletti, A. Marinucci, M. Paolillo, I. Papadakis, G. Risaliti, F. Tombesi, M. Cappi, B. Czerny, M. Dadina, B. De Marco, M. Dovciak, V. Karas, D. Kunneriath, F. La Franca, A. Manousakis, A. Markowitz, I. McHardy, M. Mehdipour, G. Miniutti, S. Paltani, M. Perez-Torres, B. Peterson, E. Piconcelli, P. O. Petrucci, G. Ponti, A. Rozanska, M. Salvati, M. Sobolewska, J. Svoboda, D. Trevese, P. Uttley, F. Vagnetti, S. Vaughan, C. Vignali, F. H. Vincent, A. Zdziarski
This is a White Paper in support of the mission concept of the Large Observatory for X-ray Timing (LOFT), proposed as a medium-sized ESA mission. We discuss the potential of LOFT for the study of active galactic nuclei. For a summary, we refer to the paper.

Anatomy of the AGN in NGC 5548: I. A global model for the broadband spectral energy distribution (1501.01188)
M. Mehdipour, J. S.
Kaastra, G. A. Kriss, M. Cappi, P.-O. Petrucci, K. C. Steenbrugge, N. Arav, E. Behar, S. Bianchi, R. Boissay, G. Branduardi-Raymont, E. Costantini, J. Ebrero, L. Di Gesu, F. A. Harrison, S. Kaspi, B. De Marco, G. Matt, S. Paltani, B. M. Peterson, G. Ponti, F. Pozo Nuñez, A. De Rosa, F. Ursini, C. P. de Vries, D. J. Walton, M. Whewell
Jan. 6, 2015 astro-ph.GA, astro-ph.HE
An extensive multi-satellite campaign on NGC 5548 has revealed this archetypal Seyfert-1 galaxy to be in an exceptional state of persistent heavy absorption. Our observations, taken in 2013-2014 with XMM-Newton, Swift, NuSTAR, INTEGRAL, Chandra, HST, and two ground-based observatories, have together enabled us to establish that this unexpected phenomenon is caused by an outflowing stream of weakly ionised gas (called the obscurer), extending from the vicinity of the accretion disk to the broad-line region. In this work we present the details of our campaign and the data obtained by all the observatories. We determine the spectral energy distribution of NGC 5548 from the near-infrared to hard X-rays by establishing the contribution of the various emission and absorption processes taking place along our line of sight towards the central engine. We thus uncover the intrinsic emission and produce a broadband continuum model for both obscured (average summer 2013 data) and unobscured (< 2011) epochs of NGC 5548. Our results suggest that the intrinsic NIR/optical/UV continuum is a single Comptonised component, with its higher-energy tail creating the 'soft X-ray excess'. This component is compatible with emission from a warm, optically thick corona forming part of the inner accretion disk. We then investigate the effects of the continuum on the ionisation balance and thermal stability of photoionised gas for the unobscured and obscured epochs.

Anatomy of the AGN in NGC 5548: II. The Spatial, Temporal and Physical Nature of the Outflow from HST/COS Observations (1411.2157)
N. Arav, C. Chamberlain, G.A. Kriss, J.S. Kaastra, M. Cappi, M. Mehdipour, P.-O. Petrucci, K.C. Steenbrugge, E. Behar, S. Bianchi, R. Boissay, G. Branduardi-Raymont, E. Costantini, J.C. Ely, J. Ebrero, L. di Gesu, F.A. Harrison, S. Kaspi, J. Malzac, B. De Marco, G. Matt, K.P. Nandra, S. Paltani, B.M. Peterson, C. Pinto, G. Ponti, F. Pozo Nuñez, A. De Rosa, H. Seta, F. Ursini, C.P. de Vries, D.J. Walton, M. Whewell
Nov. 8, 2014 astro-ph.GA
(Abridged) Our deep multiwavelength campaign on NGC 5548 revealed an unusually strong X-ray obscuration. The resulting dramatic decrease in incident ionizing flux allowed us to construct a comprehensive physical, spatial, and temporal picture for the long-studied AGN wind in this object. Here we analyze the UV spectra of the outflow acquired during the campaign as well as from four previous epochs. We find that a simple model based on a fixed total-column-density absorber, reacting to changes in ionizing illumination, matches the very different ionization states seen in five spectroscopic epochs spanning 16 years. Absorption troughs from C III* appeared for the first time during our campaign. From these troughs, we infer that the main outflow component is situated at 3.5±1 pc from the central source. Three other components are situated between 5 and 70 pc, and two are further than 100 pc. The wealth of observational constraints and the disparate relationship between the observed X-ray and UV flux in different epochs make our physical model a leading contender for interpreting trough variability data of quasar outflows.
Multiwavelength campaign on Mrk 509 XIV. Chandra HETGS spectra (1409.2021)
J.S. Kaastra, J. Ebrero, N. Arav, E. Behar, S. Bianchi, G. Branduardi-Raymont, M. Cappi, E. Costantini, G.A. Kriss, B. De Marco, M. Mehdipour, S. Paltani, P.-O. Petrucci, C. Pinto, G. Ponti, K.C. Steenbrugge, C.P. de Vries
Sept. 6, 2014 astro-ph.GA, astro-ph.HE
We present in this paper the results of a 270 ks Chandra HETGS observation in the context of a large multiwavelength campaign on the Seyfert galaxy Mrk 509. The HETGS spectrum allows us to study the high-ionisation warm absorber and the Fe-K complex in Mrk 509. We search for variability in the spectral properties of the source with respect to previous observations in this campaign, as well as for evidence of ultra-fast outflow signatures. The Chandra HETGS X-ray spectrum of Mrk 509 was analysed using the SPEX fitting package. We confirm the basic structure of the warm absorber found in the 600 ks XMM-Newton RGS observation taken three years earlier, consisting of five distinct ionisation components in a multi-kinematic regime. We find little or no variability in the physical properties of the different warm absorber phases with respect to previous observations in this campaign, except for component D2, which has a higher column density at the expense of component C2 at the same outflow velocity (-240 km/s). Contrary to prior reports, we find no -700 km/s outflow component. The O VIII absorption line profiles show an average covering factor of 0.81 +/- 0.08 for outflow velocities faster than -100 km/s, similar to those measured in the UV. This supports the idea of a patchy wind. The relative metal abundances in the outflow are close to proto-solar. The narrow component of the Fe Kalpha emission line shows no changes with respect to previous observations, which confirms its origin in distant matter. The narrow line has a red wing that can be interpreted as a weak relativistic emission line. We find no significant evidence of ultra-fast outflows in our new spectrum down to the sensitivity limit of our data.

A fast and long-lived outflow from the supermassive black hole in NGC 5548 (1406.5007)
J.S. Kaastra, G.A. Kriss, M. Cappi, M. Mehdipour, P.-O. Petrucci, K.C. Steenbrugge, N. Arav, E. Behar, S. Bianchi, R. Boissay, G. Branduardi-Raymont, C. Chamberlain, E. Costantini, J.C. Ely, J. Ebrero, L. Di Gesu, F.A. Harrison, S. Kaspi, J. Malzac, B. De Marco, G. Matt, K. Nandra, S. Paltani, R. Person, B.M. Peterson, C. Pinto, G. Ponti, F. Pozo Nuñez, A. De Rosa, H. Seta, F. Ursini, C.P. de Vries, D.J. Walton, M. Whewell
Supermassive black holes in the nuclei of active galaxies expel large amounts of matter through powerful winds of ionized gas. The archetypal active galaxy NGC 5548 has been studied for decades, and high-resolution X-ray and UV observations have previously shown a persistent ionized outflow. An observing campaign in 2013 with six space observatories shows the nucleus to be obscured by a long-lasting, clumpy stream of ionized gas never seen before. It blocks 90% of the soft X-ray emission and causes simultaneous deep, broad UV absorption troughs. The outflow velocities of this gas are up to five times faster than those in the persistent outflow, and at a distance of only a few light days from the nucleus, it likely originates from the accretion disk.
Formula to calculate password cracking time in years, taking into account Moore's law and known adversary guessing power [closed]

We know that the biggest human rights violators in human history are capable of one trillion password guesses per second as of approximately January 2013. Assume that the 1 trillion guesses per second is not a dictionary attack, but a brute force search of all possible permutations of the available characters in the password. Assume this rate of guessing holds regardless of the details of their computing equipment: they have special hardware, e.g. GPUs/ASICs, capable of performing the industry-standard password-based key derivation function (PBKDF2-SHA2 with a large number of iterations) for each guess and can still test 1 trillion password combinations per second. Their actual hardware therefore does not enter the equation, just the 1 trillion guesses per second they can perform.

Discard the assumption of weak passwords and assume the password is very strong, made up of uniformly and randomly selected characters available on a standard US keyboard layout, including special characters (95 possible characters total).

We also know from Moore's law that the transistor count on an integrated circuit doubles every two years, which loosely translates to a doubling of computing power every two years. So in January 2015 they will be able to guess 2 trillion passwords per second, in January 2017, 4 trillion per second, and so on. Assume this trend continues, regardless of speculation that the law may come to an end, and factor it into the formula.

On average, a successful brute-force search requires trying only about half the keyspace ($2^{n-1}$ attempts for an $n$-bit password). Please factor this into the formula as well.

What I would like is a reusable formula which takes into account the known adversary power of 1 trillion guesses per second as of January 2013 and its future power with regard to Moore's law. I would like to dynamically enter the total number of password characters and the current date, and have the formula return how many years the password will remain secure against brute-force search from that date.

Apologies if this is the incorrect Stack Exchange forum, but I think it is in the right place as I am after a correct mathematical formula which I can then turn into a software function. Feel free to move it if that is more appropriate.

permutations computer-science computational-complexity combinations mathematical-physics

eichroys

closed as unclear what you're asking by RE60K, Najib Idrissi, kjetil b halvorsen, graydad, Narasimham Apr 11 '15 at 20:13

I'm thinking this is best for Security.SE... – Mario Carneiro Apr 11 '15 at 10:20
There's no mathematicians there! – eichroys Apr 11 '15 at 10:20
Not so sure about that, from the answers I've seen in the past. If you get a good answer, I think that this level of math will not pose a problem over there.
For this to be a good fit for math.SE, I would recommend cutting out all the text and just giving the model; as it is, there is way too much "word problem" stuff. – Mario Carneiro Apr 11 '15 at 10:21
I put the question into the context of a security question, but it reduces to a simple maths problem, i.e. how many years it takes to try every combination of 95 characters in a string of a given length, based on a given number of operations per second at this date, taking into account a doubling of operations per second every two years. – eichroys Apr 11 '15 at 10:25
I don't think the question is relevant. Most systems throw you out if you make too many incorrect guesses. Your reasoning only applies if the device has been physically taken (e.g. a stolen encrypted hard disk) and it doesn't have a limit on guesses. But no one expects it to be invulnerable in those situations anyway. – user117644 Apr 11 '15 at 10:41

Let $t_0$ be the current time in years from January 2013, and $n$ be the number of bits in the password. If $y$ is the number of attempts since the NSA started trying to hack your password, then we have the equation $$\frac{dy}{dt}=10^{12}\cdot60\cdot60\cdot24\cdot365\cdot 2^{t/2}=:k\,2^{t/2}.$$ The big number $k$ converts the $10^{12}$ attacks-per-second figure into attacks per year, and the derivative is because this is measuring the accumulation of attacks done. Now, we want to find the number of attacks that occur between now and some future time $t_0+t$, which we obtain by integrating this: $$y(t_0+t)=\int_{t_0}^{t_0+t}k\,2^{x/2}\,dx=\left.\frac{2k}{\log 2}2^{x/2}\right|_{t_0}^{t_0+t}=\frac{2k}{\log 2}2^{t_0/2}(2^{t/2}-1).$$ Now, we are interested in finding when this number of attacks exceeds the maximum that our password can tolerate, which is roughly $2^n$ (I'm ignoring the factor-of-two distinction between searching the whole keyspace and the expected half, since the difference is negligible compared to other approximations of the model): $$y(t_0+t)=2^n\implies2^{t/2}=\frac{2^n\log 2}{2k\,2^{t_0/2}}+1\implies t=2\log_2\left[\frac{2^n\log 2}{2k\,2^{t_0/2}}+1\right].$$ And there is your formula, given inputs $t_0$ and $n$, and the constant $k$. (Edit: I notice I have not factored the information about keyboard layouts into this analysis, since the "$2^{n-1}$ attempts" part is already enough to answer the question. If the passwords are not bit strings but instead strings of characters from an alphabet of $95$ symbols, replace $2^n$ with ${95}^n$; nothing else is affected.)

Discussion: Looking at the form of the formula, we can get a feel for the implications of Moore's law in action. The denominator involves the factors $k$ and $t_0$ that you can't do much about (we can't choose the era we live in), but $n$ is of course under our control, so it helps to isolate that part. Note that we can rewrite the equation as $t=2\log_2(2^{n+a}+1)$, where $a=\log_2\frac{\log 2}{2k\,2^{t_0/2}}$ is a constant; thus the overall speed of computing can be offset simply by adding a constant amount to your password length. For large $n$, the $+1$ term becomes negligible, and we get $t\sim 2\log_2(2^{n+a})=2(n+a)$. Thus the time to break the password goes up by about 2 years for every extra bit in the password. If we are working from a $95$-symbol alphabet, this changes to $t\sim 2\log_2(95^{n+a'})\approx13.1(n+a')$, so that each extra US keyboard character adds about 13 years to the password strength. – Mario Carneiro

While the mathematics of this problem are well covered by Mario Carneiro, the reality in the OP is rather distorted.
Let us set aside the problems with Moore's law itself. The main issue you will run into very soon is the energy consumed. To further simplify the problem we ignore the fact that energy costs money and just focus on the actual limits of how much energy we can produce. And then, to make it easier on myself, we crib Bruce Schneier's work from Applied Cryptography, as reposted here, so I don't have to do all the calculations myself.

One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than $kT$, where $T$ is the absolute temperature of the system and $k$ is the Boltzmann constant. (Stick with me; the physics lesson is almost over.) Given that $k = 1.38 \cdot 10^{-16}~\mathrm{erg/K}$, and that the ambient temperature of the universe is $3.2~\mathrm{K}$, an ideal computer running at $3.2~\mathrm{K}$ would consume $4.4 \cdot 10^{-16}$ ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.

Now, the annual energy output of our sun is about $1.21 \cdot 10^{41}$ ergs. This is enough to power about $2.7 \cdot 10^{56}$ single-bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all of its energy for 32 years, without any loss, we could power a computer to count up to $2^{192}$. Of course, it wouldn't have the energy left over to perform any useful calculations with this counter.

But that's just one star, and a measly one at that. A typical supernova releases something like $10^{51}$ ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.

These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.

Now, to look at a more realistic example (and do at least a little work), let us look at the output of the world as given by Wikipedia for the 2008 world energy supply. That gives us a base supply of 143,851 terawatt-hours, which is $5.178636 \times 10^{27}$ ergs. Let's round up and call it $10^{28}$ ergs, and round the bit-setting energy down to $10^{-16}$ ergs per bit. That gives us $10^{44}$ bit flips, enough to run a counter of about 146 bits through all its states ($10^{44} \approx 2^{146}$; the unrounded figures give about 143 bits). Notice that we are using ALL of the world's energy for a year and making super-unrealistic assumptions (like being able to run a computer at the ambient temperature of the universe).

Altogether, already below 128 bits of entropy it is orders of magnitude easier and cheaper to bribe or beat the key or password out of someone than to try to crack it. Moreover, this assessment will not change in the near future unless some world-changing physics happens. – DRF

GREAT! :D Amazing info – RE60K Apr 11 '15 at 13:33
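Since the question explicitly asks for something that can be turned into a software function, here is a minimal Python sketch of the formula derived in Mario Carneiro's answer, with the thermodynamic sanity check from DRF's answer alongside. The constants encode the question's assumptions (10^12 guesses per second in January 2013, a doubling every two years, a 95-character alphabet); the function and variable names are ours, not taken from either answer.

```python
import math

GUESSES_PER_SEC_2013 = 1e12                     # adversary rate assumed in the question
K = GUESSES_PER_SEC_2013 * 60 * 60 * 24 * 365   # guesses per year at t0 = 0 (January 2013)

def years_until_cracked(password_len: int, years_since_2013: float,
                        alphabet_size: int = 95) -> float:
    """Years from t0 = years_since_2013 until the whole keyspace is searched,
    with guessing power doubling every two years:
        t = 2 * log2(keyspace * ln(2) / (2 * K * 2**(t0 / 2)) + 1)
    """
    keyspace = alphabet_size ** password_len    # 95**L possible passwords
    c = keyspace * math.log(2) / (2 * K * 2 ** (years_since_2013 / 2))
    return 2 * math.log2(c + 1)

def landauer_bit_flips(energy_ergs: float, temp_kelvin: float = 3.2) -> float:
    """Maximum number of bit operations the given energy allows at kT per bit."""
    k_boltzmann = 1.38e-16                      # erg per kelvin
    return energy_ergs / (k_boltzmann * temp_kelvin)

if __name__ == "__main__":
    # A 20-character random US-keyboard password, evaluated in January 2015:
    print(years_until_cracked(20, years_since_2013=2.0))    # ~128 years
    # Bits countable with the 2008 world energy supply (~5.18e27 ergs):
    print(math.log2(landauer_bit_flips(5.178636e27)))       # ~143 bits
```

Because the guessing rate doubles every two years, searching half the keyspace (the expected case, per the $2^{n-1}$ point in the question) takes only about two years less than the full-search figure for large $n$.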
A Novel Bilayer Wound Dressing Composed of a Dense Polyurethane/Propolis Membrane and a Biodegradable Polycaprolactone/Gelatin Nanofibrous Scaffold

Asghar Eskandarinia1, Amirhosein Kefayat2, Maria Agheb1, Mohammad Rafienia1,3, Moloud Amini Baghbadorani1, Sepehr Navid4, Karim Ebrahimpour5, Darioush Khodabakhshi1 & Fatemeh Ghahremani6

Scientific Reports volume 10, Article number: 3063 (2020)

Subjects: Preclinical research

One-layer wound dressings cannot meet all clinical needs, due to their individual characteristics and shortcomings. Therefore, bilayer wound dressings, which are composed of two layers with different properties, have gained a lot of attention. In the present study, a polycaprolactone/gelatin (PCL/Gel) scaffold was electrospun onto a dense membrane composed of polyurethane and ethanolic extract of propolis (PU/EEP). The PU/EEP membrane was used as the top layer to protect the wound area from external contamination and dehydration, while the PCL/Gel scaffold was used as the sublayer to facilitate cells' adhesion and proliferation. The bilayer wound dressing was investigated regarding its microstructure, mechanical properties, surface wettability, anti-bacterial activity, biodegradability, and biocompatibility, as well as its efficacy in an animal wound model with histopathological analyses. Scanning electron micrographs exhibited uniform morphology and a bead-free structure of the PCL/Gel scaffold, with an average fiber diameter of 237.3 ± 65.1 nm.
Significant anti-bacterial activity was observed against Staphylococcus aureus (5.4 ± 0.3 mm), Escherichia coli (1.9 ± 0.4 mm), and Staphylococcus epidermidis (1.0 ± 0.2 mm) according to the inhibition zone test. The bilayer wound dressing exhibited high hydrophilicity (51.1 ± 4.9°), biodegradability, and biocompatibility. The bilayer wound dressing could significantly accelerate wound closure and collagen deposition in the Wistar rat skin wound model. Taken together, the PU/EEP-PCL/Gel bilayer wound dressing can be a potential candidate for biomedical applications due to its remarkable mechanical properties, biocompatibility, antibacterial features, and wound healing activities.

Skin is constantly exposed to different types of damage1. Severe skin damage can be life-threatening due to the loss of body fluids, electrolytes, and nutritional components from the wound area. Therefore, wound dressings have gained a lot of attention2. An ideal wound dressing should protect the wound from external contaminants and facilitate the healing process. However, one-layer wound dressings cannot meet all clinical needs, due to their individual characteristics and shortcomings. Therefore, bilayer wound dressings, which are composed of two layers with different properties, have gained considerable attention1,3. The difference in the structure and characteristics of each layer can provide several advantages4. A dense top layer can protect the wound from infection and mechanical stress. In addition, this layer can prevent wound dehydration and provide a moist environment at the wound area5. The sublayer is in direct contact with the wound area and should mimic the structure of the extracellular matrix to facilitate cells' adhesion and accelerate their proliferation6. The materials used in a wound dressing can deeply affect its efficacy. Therefore, the selection of appropriate materials for the top layer and sublayer is a determinative step in designing an efficient wound dressing. Chitosan, alginate, collagen, gelatin, polyurethane, and polycaprolactone are widely used to prepare different kinds of wound dressings7,8,9. Gelatin (Gel), produced by collagen hydrolysis, is one of the most biocompatible and biodegradable polymers. Its arginine-glycine-aspartic acid sequence is highly appropriate for cell adhesion10. Moreover, it contains large amounts of hydroxyproline, glycine, and proline amino acids, which potentially accelerate the wound healing process11. However, natural polymers like gelatin exhibit rapid degradation. Blending natural polymers with synthetic polymers can therefore improve the structural stability of the scaffold, owing to the slow degradation rate of the synthetic component. In addition, these blends exhibit better mechanical properties and cell-scaffold interactions in comparison with scaffolds purely composed of natural or synthetic polymers12. Polycaprolactone (PCL) is a synthetic polyester characterized by a slow degradation rate and high plasticity13,14. Recent studies have demonstrated the high efficacy of PCL/Gel scaffolds for skin tissue engineering applications15,16. However, these scaffolds exhibit several weaknesses, including poor mechanical properties, an unsuitable water vapor transmission rate, and poor anti-bacterial properties. Therefore, using a protective membrane as the top layer can significantly enhance the efficacy of these scaffolds as wound dressings. The top layer is responsible for controlling the wound microenvironment.
A moist, incubator-like microenvironment is important for accelerating the healing process and decreasing scar formation. A dense membrane can prevent wound dehydration and conserve wound moisture. One of the most well-characterized polymers for the synthesis of such membranes is polyurethane (PU)17,18. Wound dressings composed of PU have exhibited high efficacy in maintaining wound moisture19,20. The top layer should also exhibit anti-bacterial properties21. Propolis is a natural substance with strong anti-bacterial and pro-wound-healing properties. It is composed of plant exudates, beeswax, and the salivary secretions of bees22. Its anti-bacterial properties can be attributed to its content of different compounds, including ketones, alcohols, steroids, flavonoids, phenolic acids, phenolic aldehydes, and some inorganic compounds23. Its anti-fungal properties have also been demonstrated by previous studies24,25. Propolis is a safe and biocompatible natural product with extremely rare reports of allergic incidents. Moreover, propolis can quench and neutralize free radicals at the wound site26,27. In the present study, a PCL/Gel scaffold (sublayer) was electrospun onto a PU/EEP membrane (top layer) to produce a bilayer wound dressing. The top layer was a membrane with a dense structure and anti-bacterial properties to protect the wound from bacteria and external contaminants, while the sublayer was a scaffold with a high ability to improve cells' adhesion and proliferation. The prepared bilayer wound dressing was investigated for its morphological, structural, mechanical, antibacterial, and biological properties through in vitro and in vivo experiments.

Method and Materials
The propolis was collected from beehives in Shahr-e Kord, Iran. PU (Tecoflex EG-80A) was purchased from Noveon, Inc. (Germany). The PU was a medical-grade commercial elastomer, an aliphatic poly(ether-urethane) prepared from poly(tetramethylene glycol) (PTMG), HMDI, and BDO. PTMG is a polyether polyol with good flexibility and biocompatibility. Tetrahydrofuran (THF) and dimethylformamide (DMF) were purchased from Merck (Germany). Gelatin type A (300 Bloom, from porcine skin), poly(ε-caprolactone) (PCL, MW 80,000), and the solvent 2,2,2-trifluoroethanol (TFE) were all purchased from Sigma-Aldrich (USA). The cell culture materials, including RPMI, fetal bovine serum, 0.05% trypsin/EDTA, and phosphate buffered saline (PBS), were purchased from Sigma-Aldrich (USA). Double-distilled water, used as a solvent throughout the experiments, was also obtained from Sigma-Aldrich (USA). The L929 fibroblast cell line and Wistar rats were purchased from the Pasteur Institute of Tehran, Iran. The MTT assay kit was purchased from Sigma-Aldrich (USA). Staphylococcus aureus (ATCC 25923), Staphylococcus epidermidis (ATCC 25925), Escherichia coli (ATCC 25922), and Pseudomonas aeruginosa (ATCC 27853) were purchased from the Pasteur Institute of Tehran, Iran.

Preparation of ethanolic extract of propolis (EEP)
Beehives in the suburbs of Shahr-e Kord were selected for the collection of propolis. The ethanolic extract of propolis (EEP) was prepared according to our previous study28. The harvested propolis was frozen at −20 °C for 24 h and then crushed in a blender. A 70% ethanol solution was used to dissolve the propolis at a ratio of 1:10 (25 g of propolis in 250 mL of ethanol). The product was kept in a dark incubator at 37 °C for 14 days. After filtering the suspension multiple times using Whatman No.
4 filter papers, the solvent was removed using a rotary evaporator at 40 °C. The product was stored at 4 °C until further use.

Preparation of the bilayer wound dressing
The solvent casting technique was used to prepare the PU/EEP membrane. PU was dissolved in DMF/THF (volume ratio 50:50) and stirred for 3 h at 25 °C to prepare the polymer solution. The polymer concentration was 10% w/v. Then, EEP was added to the solution to reach a concentration of 0.5% w/w, and the solution was mixed for one more hour. The mixed solution was poured into a PTFE mold. The cast films were kept at room temperature for 24 h for solvent evaporation. Subsequently, Gel (10% w/v) and PCL (10% w/v) were separately dissolved in TFE, and their solutions were mixed together and stirred at room temperature until a completely transparent solution was achieved29. The solution consisting of Gel and PCL (50:50) was loaded into a syringe with a metal needle (G22, diameter = 0.41 mm) and then electrospun onto the prepared PU/EEP membranes using an electrospinning machine (Sabz Inc., Tehran, Iran). Figure 1 schematically illustrates the preparation steps of the bilayer wound dressing.

Figure 1: Schematic illustration of the fabrication and preparation of the bilayer wound dressing.

Attenuated total reflectance/Fourier transform infrared spectroscopy (ATR/FTIR)
The chemical composition of the layers and the possible interactions between their components were investigated by infrared spectroscopy. The analysis was performed using an attenuated total reflectance (ATR) cell on an FTIR-4200 type A spectrophotometer (JASCO, USA), over the range 500–4000 cm−1, at 4 cm−1 resolution with 64 scans.

Gas chromatography/mass spectrometry
The EEP was analyzed using gas chromatography-mass spectrometry (7890A, Agilent Technologies, Inc.). About 5 mg of the EEP was mixed with 50 μL of dry pyridine and 75 μL of bis(trimethylsilyl)trifluoroacetamide, heated at 80 °C for 20 min, and analyzed by GC-MS. Operating conditions were set according to our previous study30. The spectrum was analyzed and compounds were identified using the NIST05 data library.

Fiber morphology observation
The morphology and microstructure of the fabricated PCL/Gel scaffolds and the PU/EEP-PCL/Gel dressing were observed using a scanning electron microscope (SEM, TESCAN-Vega 3, Czech Republic) at 10 kV accelerating voltage. Prior to the analysis, the samples were coated with a thin layer of gold in a sputter coater (Q150R-ES, Quorum Technologies, UK). The ImageJ and MATLAB software packages were used to calculate the average diameter and porosity of the fibers, respectively.

Mechanical properties
An Instron Universal Testing Machine (Instron Engineering Corporation, USA) was used to investigate the tensile strength (TS) and elongation at break (E%) of the PCL/Gel scaffolds, PU/EEP membranes, and PCL/Gel-PU/EEP dressings. The PCL/Gel, PU/EEP, and PU/EEP-PCL/Gel samples were cut using the ASTM standard dumbbell-shaped template to obtain dumbbell-shaped specimens 30 mm long and 5 mm wide. The tensile properties were examined by stretching the samples to break at a crosshead speed of 5 mm/min. The test was repeated five times.

Hydrolytic and enzymatic degradation
The stability of the PU/EEP and PCL/Gel wound dressings against hydrolytic and enzymatic degradation was investigated according to our previous study30.
Briefly, the PU/EEP (weight: 0.25284 ± 0.02658 g, dimensions: 50 mm × 15 mm, n = 9) and PCL/Gel (weight: 0.03618 ± 0.00334 g, dimensions: 50 mm × 15 mm, n = 9) samples were immersed in 10 mL of PBS solution (pH 7.4, 37 °C) for 28 days to evaluate their stability in an aqueous medium (hydrolytic degradation). The enzymatic degradation behavior of the PU/EEP and PCL/Gel samples was investigated by immersing them in falcon tubes containing 10 mL of PBS solution (pH 7.4, 37 °C) with collagenase (0.2 mg/mL) for 28 days. Nine samples were used for each dressing under both the hydrolytic and enzymatic degradation conditions (n = 9).

Contact angle measurement
Water contact angle measurement was used to investigate the wettability of the PU/EEP-PCL/Gel dressing, the PCL/Gel scaffold, and the PU/EEP membrane. An XCA-50 contact angle meter (USA) was employed to perform the sessile drop technique: a 4 µL droplet of distilled water was placed on the surface of the samples. Triplicate individual measurements were carried out to calculate an average value.

Propolis release
The propolis release from the PU/EEP membrane was measured according to our previous study30. Briefly, the membrane was immersed in PBS under oscillation, and the propolis concentration was measured using high-performance liquid chromatography at different time points. The experiment was done in triplicate.

Anti-bacterial activity
The anti-bacterial activity of the PU/EEP membrane (top layer) was analyzed using the zone of inhibition (ZOI) test against common wound pathogens. Staphylococcus aureus (ATCC 25923), Staphylococcus epidermidis (ATCC 25925), Escherichia coli (ATCC 25922), Pseudomonas aeruginosa (ATCC 27853), and Klebsiella pneumoniae (ATCC 27553) were purchased from the Pasteur Institute of Tehran, Iran. Nutrient agar medium was prepared and sterilized according to our previous study31. A glass L-rod was used to spread 100 µL of the overnight-cultured bacterial suspension over the nutrient agar medium. The PU/EEP samples were placed on the medium and incubated under standard conditions for 24 h. The plates were monitored to measure the clearance zones around the discs. The test was done in triplicate.

MTT assay
To determine the biocompatibility of the PCL/Gel, PU/EEP, and PU/EEP-PCL/Gel dressings with normal fibroblast cells, L929 murine fibroblast cells were purchased from the Pasteur Institute of Tehran, Iran. The PCL/Gel, PU/EEP, and PU/EEP-PCL/Gel samples (n = 3) were sterilized by UV irradiation for 30 min and placed at the bottom of 24-well cell culture plates. At least three wells were used for each sample (n = 3). Then, home-made Teflon inserts were sterilized by autoclave and used to fix the samples at the bottom of the wells. The Teflon inserts defined a circular seeding area with a diameter of 8 mm. 10⁴ L929 cells were seeded in each well, and 400 µL of RPMI culture medium supplemented with 10% fetal bovine serum (Sigma, USA) and 1% penicillin/streptomycin (Sigma, USA) was added to each well. L929 cells were also seeded into three wells containing culture medium without any dressing as the control wells. The plates were incubated in a humidified incubator at 37 °C in a 5% CO2 atmosphere, and the culture medium was refreshed every 48 h. Cell survival was evaluated on the 1st, 4th, and 7th day of culture using the MTT assay according to the kit manufacturer's instructions (Sigma-Aldrich, USA).
The optical density was recorded at 590 nm by a microplate reader (Bio-RAD 680, USA). The test was performed in triplicate. Cells and the sublayer interactions The behavior of L929 fibroblasts on the surface of the PCL/Gel scaffold was studied to evaluate the sublayer cytocompatibility. After UV sterilization, the PCL/Gel scaffold was fitted in a 24-well cell culture plate. Each plate was immersed in the culture medium and seeded with 10⁴ L929 cells. Then, the plates were incubated in a cell culture incubator for 7 days. Subsequently, PBS solution was used to remove non-adherent cells through multiple washes. The samples were then fixed by incubation with 2 vol% glutaraldehyde aqueous solution for 1 h. After multiple PBS washes, the fixed samples were dehydrated in a graded series of ethanol (30, 50, 70, 80, 90, 95, 100%) and dried. The dried samples were immediately sputter-coated with gold, and the cell morphology was examined under a scanning electron microscope (SEM). This test was repeated three times. Animal model and histological evaluation The animal experiments were done according to our previous studies30,32. Briefly, 24 female Wistar rats (6–8 weeks old, 150–180 g) were purchased from the Pasteur Institute of Tehran, Iran. The rats were maintained under standard conditions with complete access to standard rodents' chow and water. The animals were acclimated for 14 days before entering any experiment. Then, the rats were anesthetized with an intraperitoneal injection of Ketamine-Xylazine (Ketamine: 191.25 mg/kg, Xylazine: 4.25 mg/kg) solution. The wounds were created on the rats' dorsal skin using a punch-biopsy needle with 11 mm diameter after careful shaving and disinfection of the target area. Subsequently, the wounded rats were randomly divided into three groups (n = 8): control (no treatment), PU/EEP, and PU/EEP-PCL/Gel. The scaffolds were applied precisely on the wound site. The wounds were covered by sterile gauze in the control group. To manage post-operative pain, Ketoprofen (5 mg/kg) was administered subcutaneously until 72 h after the operation. The wound healing progression was monitored by continuous measurement of the wound diameters using a digital caliper every five days (1st, 5th, 10th, and 15th day). The remaining wound area percentage was calculated based on Eq. (1) below. Also, photographs of the wounds were captured on the specified days (1st, 5th, 10th, and 15th day after operation)33.

$$\text{Remaining wound area percentage} = \frac{\text{wound area day } 0 \,-\, \text{contracted wound area day } (n)}{\text{wound area day } 0} \times 100 \qquad (1)$$

On the 15th day after wound creation, the rats were sacrificed by an overdose of the Ketamine-Xylazine mixture (KX). Then, full-thickness skin excisions were made from the wound area (n = 8). The obtained specimens were fixed in 10% neutral buffered formalin solution for 24 h. The fixed specimens were processed overnight using an automatic tissue processor (Sakura, Japan). Then, the specimens were embedded in paraffin blocks, and a microtome (Leica Biosystems, Germany) was employed to obtain multiple sections with 4 µm thickness. The tissue sections were stained by the Hematoxylin & Eosin (H&E)34 and Masson's trichrome35 methods, separately. The mounted slides were observed by a digital light microscope equipped with an Olympus DP70 digital camera (Olympus, Japan).
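To make Eq. (1) concrete, here is a minimal Python sketch of the calculation, assuming circular punch-biopsy wounds (so the area follows from the caliper-measured diameter) and reading "contracted wound area day (n)" as the area healed by day n; the example diameters are hypothetical.

```python
import math

def circle_area(d_mm):
    # Area of a circular punch-biopsy wound of diameter d_mm.
    return math.pi * (d_mm / 2.0) ** 2

def remaining_wound_area_pct(d_day0_mm, d_dayn_mm):
    # Eq. (1): (A_day0 - contracted area at day n) / A_day0 * 100,
    # where the contracted (healed) area at day n is A_day0 - A_dayn.
    a0 = circle_area(d_day0_mm)
    contracted = a0 - circle_area(d_dayn_mm)
    return (a0 - contracted) / a0 * 100.0

# Hypothetical caliper readings (mm) for an 11 mm wound over 15 days.
for day, d in [(1, 11.0), (5, 8.5), (10, 6.6), (15, 4.7)]:
    print(f"day {day:2d}: {remaining_wound_area_pct(11.0, d):5.1f}% remaining")
```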
All animal experiments were carried out in accordance with the Arak University of Medical Sciences' Guidelines for the Care and Use of Laboratory Animals, which refer to the American Association for Laboratory Animal Science and the guidelines laid down by the NIH. All experimental protocols were approved by the ethics committee of Arak University of Medical Sciences (IR.ARAKMU.REC.1398.147). In this study, female Wistar rats were used for the animal skin wound model. The minimum number of animals required to obtain reliable results and precise statistical analyses was used, preventing unnecessary repetition of experiments. The rats were maintained under standard conditions with complete access to standard rodents' chow and water. The surgical procedures were done under anesthesia, induced through intraperitoneal injection of Ketamine-Xylazine (KX) solution, and under completely aseptic conditions. To manage post-operative pain, Ketoprofen (5 mg/kg) was administered subcutaneously until 72 h after the operation. If any signs of post-operative pain, massive necrosis, wound infection or bleeding, failure to eat and drink for over 3 days, or inability to move the limbs were observed during any step of the study, the animals were sacrificed by an overdose of KX solution. No human subject was used in this study. Statistical analysis The statistical analyses were performed by employing JMP 11.0 software (SAS Institute, Japan) and using one-way analysis of variance (ANOVA) with Tukey's post-hoc test. The results were displayed as the mean ± standard deviation (SD). The difference was considered statistically significant if P < 0.05 (*P < 0.05). ATR-FTIR analysis ATR-FTIR analyses were carried out for characterization of the PCL, Gel, and PCL/Gel scaffolds (Fig. 2A). As Fig. 2A illustrates, the characteristic bands of PCL appeared at 2949 cm−1 (asymmetric CH2 stretching), 2865 cm−1 (symmetric CH2 stretching), 1727 cm−1 (carbonyl stretching), 1293 cm−1 (C–O and C–C stretching), 1240 cm−1 (asymmetric C–O–C stretching), and 1170 cm−1 (symmetric C–O–C stretching)36. The characteristic bands of Gel were observed at approximately 1650 cm−1 (amide I) and 1540 cm−1 (amide II). The 1540 cm−1 peak is related to the coupling of the N–H bending bond and the C–N stretching bond. Also, the 1650 cm−1 peak can be attributed to the stretching vibrations of the C=O bond, which can be observed in both the Gel and PCL/Gel spectra37. In addition, the PU, EEP, and PU/EEP spectra are illustrated in Fig. 2B. In the EEP spectrum, stretching vibrations of the C–H bonds of the CH2 and CH3 groups caused the formation of high-intensity peaks at 2930 cm−1 and 2870 cm−1, respectively. The spectrum of the PU membrane exhibited characteristic absorption bands at 3320, 2960, 1710, 1530, 1220, 1110, and 777 cm−1, which represent the N–H, C–H, and C–O bonds and substituted benzene38. The 3000–3700 cm−1 band showed the presence of the O–H group in EEP. Also, a 3000–3500 cm−1 band was observed in the PU/EEP spectrum, which was wider than its counterpart in the PU spectrum. The ATR-FTIR spectra of (A) the sublayer and (B) the top layer. The TGA graphs of (C) the sublayer and (D) the top layer. Thermo-gravimetric analysis (TGA) TGA was carried out for the PCL, Gel, and PCL/Gel scaffolds to determine changes in weight in relation to changes in temperature. The thermal gravimetric curves exhibited a large loss of mass in all samples between 260 and 420 °C, which is related to the characteristic thermal behavior of Gel and PCL (Fig. 2C).
The remaining masses were 19%, 8%, and 13% for the Gel, PCL, and PCL/Gel scaffolds, respectively. The PCL/Gel scaffold exhibited remarkable mass loss in the initial decomposition temperature range (~ 40–100 °C) in comparison with PCL. Gel has a high capacity for absorbing moisture; therefore, this observation can be related to dehydration and loss of the absorbed moisture39. Figure 2D shows the TGA curves of the PU and PU/EEP membranes. For the PU sample, the initial decomposition temperature was about 230 °C. As Fig. 2D illustrates, the PU/EEP membrane displayed single-stage thermal degradation. The mass losses were 18% and 11% for the PU and PU/EEP membranes, respectively. The decomposition temperature of the PU/EEP membrane was lower than that of PU. This observation can be attributed to the incorporation of propolis, which has low thermal stability. Moreover, incorporation of EEP into the PU matrix can decrease the orientation of the polymer chains and the crystallinity40. GC-MS analysis Propolis is a natural product with promising anti-microbial properties. Its bioactivities are strongly dependent on its chemical composition. In the current study, the chemical composition of the utilized propolis was analyzed using GC-MS (Fig. 3). The identified components belonged to different groups of chemicals. The identified components with significant anti-bacterial properties are listed in Table 1. Many aromatic compounds with demonstrated anti-bacterial, anti-fungal, anti-viral, and anti-inflammatory properties were detected in the utilized propolis41,42,43. The most important phenolic acids and flavonoid derivatives were 1,2-benzenedicarboxylic acid (2.66%), caffeic acid (1.81%), stearic acid (8.84%), 5,7-dihydroxy-2-phenyl-4H-1-benzopyran-4-one (32.65%), pinocembrin (2.21%), icosanoic acid (1.39%), 1-(2,6-dihydroxy-4-methoxyphenyl)-3-phenyl- (1.95%), and naringenin (4.52%). The anti-bacterial activity of propolis can be attributed to the synergistic effect of its various components with significant anti-bacterial properties. The chromatogram of the EEP according to GC-MS analysis. The figure displays the full-length chart with no cropping. Table 1 GC-MS analyses of the utilized EEP. Morphological observation The surface morphology of the bilayer wound dressing was investigated by SEM (Fig. 4). The SEM images of the PCL/Gel scaffolds under optimized conditions exhibited a continuous and homogeneous fibrous structure without bead formation (Fig. 4A,B). The average diameter of the PCL/Gel nanofibers was 237.3 ± 65.1 nm, calculated from 50 random measurements. The diameters of the PCL/Gel fibers were in the range of 150–400 nm (Fig. 4C). Ramalingam et al. reported fiber diameters for an electrospun PCL/Gel hybrid composite nanofibrous wound dressing ranging from 150 to 250 nm, with an average of 234 ± 52 nm44. Ajmal et al. studied the effect of electrospun PCL/Gel fibers on full-thickness wounds, and the fabricated PCL/Gel fibers' mean diameter was 234.1 ± 98.2 nm45. In addition, they reported about 74% porosity for the fabricated PCL/Gel dressings. In our study, the porosity was estimated to be 82.2%, which is appropriate for skin tissue engineering applications. Also, the average pore size of the PCL/Gel scaffolds was determined to be 3.2 ± 0.9 µm. Ghasemi-Mobarakeh et al. reported no significant difference between the mean pore sizes of electrospun PCL/Gel scaffolds with 50:50 (0.8 ± 0.2 µm) and 70:30 (1.0 ± 0.3 µm) PCL/Gel ratios.
Also, the mean fiber diameter was estimated to be 113 ± 33 nm and 189 ± 56 nm for PCL/Gel 50:50 and 70:30, respectively46. According to previous studies, 60–90% porosity is ideal for scaffolds to facilitate fibroblast cell penetration and proliferation within their structure47,48. Also, the porous structure can maintain homeostasis at the wound area by ensuring sufficient gas and nutrient exchange49. The PCL/Gel scaffolds create more space for cell migration due to the gradual dissolution of the gelatin component. Moreover, the appropriate elongation and deformation features of gelatin facilitate space opening for cell penetration into the scaffold structure50. In cross-sectional analyses of the wound dressing (Fig. 4D,E,F), the thickness was determined to be 143.9 ± 6.1 µm for the top layer (PU/EEP membrane) and 52.3 ± 3.4 µm for the sublayer (PCL/Gelatin scaffold). Also, the PU/EEP membrane exhibited a homogeneous surface without cracks. SEM micrographs of the PCL/Gel nanofibers at (A) 20,400× and (B) 35,700× magnifications. (C) The fiber diameter distribution chart. The cross-sections of (D) the PU/EEP and the PU/EEP-PCL/Gel bilayer wound dressing at (E) 410× and (F) 710× magnifications. Mechanical properties The mechanical properties of the PU/EEP, PCL/Gel, and PU/EEP-PCL/Gel samples were investigated (Fig. 5A,B). The PCL/Gel scaffold exhibited an elongation at maximum load of 45.9 ± 3.3% and a tensile strength of 1.7 ± 0.9 MPa. Other studies reported almost the same range of tensile strength, such as 2.14 MPa51 and 1.29 MPa52 for electrospun PCL/Gel scaffolds. Yao et al. reported the mechanical properties of PCL/Gel fibers electrospun with different blend ratios of 4:1, 2:1, 1:1, 1:2, and 1:4. The highest tensile strength and elongation at break values for these scaffolds were 2.9 MPa and ~80%, respectively53. As Fig. 5 illustrates, the PU/EEP membrane exhibited significantly higher TS and E% values (TS: 5.8 ± 0.6 MPa, E%: 342.1 ± 23.9%) in comparison with the PCL/Gel scaffold (TS: 1.7 ± 0.9 MPa, E%: 45.9 ± 3.3%). On the other hand, the PU/EEP-PCL/Gel bilayer dressing exhibited almost the same values of TS (5.6 ± 0.6 MPa) and E% (333.2 ± 12.4%) as the PU/EEP membrane (P > 0.05). Therefore, the mechanical properties of the PU/EEP-PCL/Gel bilayer dressing can be attributed to the PU/EEP membrane (top layer), while the PCL/Gel scaffold (sublayer) has no significant effect (P > 0.05) on these parameters. Taken together, the PU/EEP-PCL/Gel wound dressing exhibited appropriate mechanical properties due to the presence of the PU/EEP membrane as the top layer, which can protect the wound from mechanical damage. Wang et al. designed a bilayer wound dressing based on the collagen-rich extracellular matrix of the porcine small intestine submucosa (SIS). This bilayer wound dressing consisted of an SIS membrane as the top layer and an SIS cryogel as the sublayer. They reported 35% extensibility for the SIS bilayer wound dressing54, which is significantly lower than that of the PU/EEP-PCL/Gel bilayer dressing. Thu et al. reported alginate-based bilayer hydrocolloid films for wound dressing, which exhibited ~59% extensibility. However, the reported tensile strength for this bilayer dressing was 27.2 MPa, which is significantly higher than that of the PU/EEP-PCL/Gel bilayer dressing55. Notably, the ideal tensile strength range for wound dressing and skin cell culture is 0.8–18 MPa56. (A) The tensile strength and (B) elongation at break parameters of the PU/EEP, PCL/Gel, and PU/EEP-PCL/Gel samples.
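As a side note on how TS and E% follow from the raw tensile test, the sketch below converts a load-extension trace into tensile strength (maximum load over cross-sectional area) and elongation at maximum load. The trace itself, the 0.196 mm combined thickness, and the gauge length are illustrative assumptions loosely based on the specimen geometry described in the Methods, not measured data.

```python
import numpy as np

# Hypothetical load-extension trace for a dumbbell specimen
# (gauge length 30 mm, width 5 mm; thickness assumed 0.196 mm,
# roughly the combined top layer + sublayer thickness reported above).
width_mm, thickness_mm, gauge_mm = 5.0, 0.196, 30.0
extension_mm = np.linspace(0.0, 100.0, 500)        # crosshead travel
load_N = 5.5 * (1 - np.exp(-extension_mm / 20.0))  # synthetic curve

area_mm2 = width_mm * thickness_mm
ts_mpa = load_N.max() / area_mm2                   # TS = F_max / A (N/mm^2 = MPa)
e_pct = extension_mm[load_N.argmax()] / gauge_mm * 100.0  # elongation at max load

print(f"TS ~ {ts_mpa:.1f} MPa, E ~ {e_pct:.0f}%")
```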
The water contact angle measurement is a proper technique to measure surface wettability (Fig. 6A,B). Appropriate hydrophilicity is necessary for the wound dressing's surface that is in direct contact with the wound area57. Therefore, the hydrophilicity of the top layer and sublayer was investigated. The contact angles of the PU/EEP membrane (top layer), PCL/Gel scaffold (sublayer), and PU/EEP-PCL/Gel bilayer dressing were measured to be 98.3 ± 5.8°, 51.1 ± 4.9°, and 50.1 ± 2.1°, respectively. As Fig. 6A illustrates, no significant difference was observed between the contact angles of the PCL/Gel scaffold and the PU/EEP-PCL/Gel dressing (P > 0.05). The results indicated that the PCL/Gel scaffold has significantly higher hydrophilicity in comparison with the PU/EEP membrane (P < 0.05). This hydrophilicity can be attributed to the multiple hydrophilic functional groups of Gel58. Other studies have reported higher hydrophilicity of electrospun PCL/Gel scaffolds (contact angles: ~40–60°) in comparison with PCL, which is moderately hydrophobic (contact angle: ~90–130°)59,60,61,62. In the study of a PCL/Gel-based nanofiber wound dressing by Ajmal et al., the contact angle of PCL was reported as 100.1 ± 3.1°, which decreased to 55.5 ± 2.1° after gelatin incorporation in the PCL/Gel scaffold. Therefore, the incorporation of gelatin significantly increases the hydrophilicity of the scaffolds because of the amine and carboxyl functional groups in the gelatin structure63. The hydrophilic properties of the sublayer can facilitate cell adhesion to the wound dressing's surface and accelerate wound closure and the healing process64. (A) The water contact angles of the top layer, the sublayer, and the bilayer wound dressing. (B) Schematic illustration of the water contact angles of the top layer and sublayer of the bilayer wound dressing. (C) The hydrolytic and (D) enzymatic degradation of the PU/EEP membrane (top layer) and the PCL/Gel scaffold (sublayer). Data are expressed as mean ± SD (n = 5). The hydrolytic degradation of the PU/EEP membrane and PCL/Gel scaffold is shown in Fig. 6C. The samples were immersed in PBS for 28 days for evaluation of hydrolytic degradation. The PCL/Gel scaffold lost 36.9%, 58.2%, and 76% of its initial weight by the 7th, 14th, and 28th day, respectively (Fig. 6C). The PU/EEP membrane exhibited a significantly slower degradation rate in comparison with the PCL/Gel. After 28 days, only 1.9% weight loss was observed for the PU/EEP membrane, which can be attributed to the release of EEP into the PBS, and no significant changes were observed in its appearance. The PCL/Gel scaffold exhibited a faster degradation rate in comparison with the PU/EEP membrane in the hydrolytic degradation test. PCL is a crystalline polymer and Gel is an amorphous polymer in the PCL/Gel scaffold's structure; the amorphous regions degrade more rapidly than the crystalline regions during hydrolytic degradation of a scaffold65. The enzymatic degradation of the PU/EEP membrane and PCL/Gel scaffold is illustrated in Fig. 6D. Collagenase enzymes are often used for evaluating the enzymatic degradation of wound dressings66,67. The PCL/Gel scaffolds immersed in the collagenase enzyme solution exhibited 46% and 78% weight loss after 7 and 14 days, respectively. However, the PU/EEP membrane was significantly resistant to both hydrolytic and enzymatic degradation.
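The gravimetric degradation percentages above follow from a simple mass-loss calculation; a short sketch, using the PCL/Gel initial mass from the Methods and hypothetical residual masses chosen to reproduce the reported hydrolytic losses:

```python
def weight_loss_pct(w0_g, wt_g):
    # Percentage of the initial dry mass lost after immersion.
    return (w0_g - wt_g) / w0_g * 100.0

# w0 from the Methods; the residual masses here are hypothetical values
# consistent with the reported ~36.9%, 58.2%, and 76% losses.
w0 = 0.03618
for day, wt in [(7, 0.02283), (14, 0.01512), (28, 0.00868)]:
    print(f"day {day:2d}: {weight_loss_pct(w0, wt):4.1f}% lost")
```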
Anti-bacterial activity and propolis release profile A wound dressing should prevent bacterial penetration and proliferation at the wound area through the release of anti-bacterial agents (Fig. 7A). As Fig. 7B–F illustrates, the best anti-bacterial activity of the PU/EEP membrane (top layer) was observed against Staphylococcus aureus (5.4 ± 0.3 mm), Escherichia coli (1.9 ± 0.4 mm), and Staphylococcus epidermidis (1.0 ± 0.2 mm). However, S. epidermidis exhibited a smaller inhibition zone in comparison with E. coli and S. aureus (Fig. 7G). This observation can be attributed to the slime layer of this strain, which surrounds the bacterial cells and prevents antibiotics and anti-bacterial agents from entering them68. P. aeruginosa was another strain resistant to the anti-bacterial effects of the PU/EEP membrane. This resistance can be related to the excretion of alginate exopolysaccharide by this strain to form a mucoid layer, which renders the bacteria resistant to anti-bacterial agents28. Some studies have incorporated anti-microbial agents into bilayer wound dressings. Hypericum perforatum oil-incorporated bilayer films exhibited effective anti-microbial activity against E. coli, S. aureus, and C. albicans69. Neto et al. reported that a chitosan/konjac glucomannan bilayer film can exhibit anti-bacterial activity against both Gram-positive and Gram-negative bacteria70. Mi et al. reported significant anti-bacterial activity of a bilayer chitosan wound dressing with sustainable silver sulfadiazine release. The release of sulfadiazine from the bilayer chitosan dressing displayed a two-phase profile: burst release during the first days followed by slow release. However, the release of silver from this bilayer wound dressing exhibited a slow release profile. The bilayer wound dressing exhibited high anti-microbial activity and growth inhibition against P. aeruginosa and S. aureus in agar plates in vitro and at infected wound sites in vivo71. Also, Sripriya et al. designed a collagen bilayer dressing with ciprofloxacin. This bilayer wound dressing created a 33 ± 3 mm inhibition zone against a mixed culture of S. aureus and P. aeruginosa, and the inhibition zones were maintained for more than 72 h72. (A) Schematic illustration of the prevention of bacterial penetration to the wound area by the PU/EEP membrane. The anti-bacterial activity of the PU/EEP membrane (top layer) against (B) S. epidermidis, (C) S. aureus, (D) P. aeruginosa, (E) E. coli, and (F) K. pneumoniae according to the inhibition zone method. (G) Anti-microbial activity of the PU/EEP membrane (top layer) against different bacterial species according to the inhibition zone method (n = 3). (H) The propolis release profile of the PU/EEP membrane (n = 3). The EEP release profile of the PU/EEP membrane was investigated by HPLC after immersion in PBS (Fig. 7H). As Fig. 7H illustrates, two different release phases were detected. The first phase had a high slope and continued for the first 8 h; this phase is due to the burst release of the propolis loaded on the membrane's surface. The second phase occurred after the initial burst release and exhibited significantly slower release, continuing its upward trend up to 48 h. Therefore, the EEP, as the main anti-bacterial agent of the wound dressing, is slowly released into the wound area, which can provide a long-term anti-bacterial condition in the wound environment.
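One common way to summarize such a two-phase (burst plus sustained) release curve is to fit a biexponential model to the cumulative-release data; a sketch under that assumption, with hypothetical data points standing in for values digitized from a curve like Fig. 7H:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative-release data (%), not the measured values.
t_h = np.array([0.5, 1, 2, 4, 8, 16, 24, 48])
rel = np.array([12, 20, 32, 45, 60, 72, 80, 92])

def biphasic(t, f_burst, k1, k2):
    # Burst fraction released with fast rate k1, remainder with slow rate k2.
    return 100 * (f_burst * (1 - np.exp(-k1 * t))
                  + (1 - f_burst) * (1 - np.exp(-k2 * t)))

popt, _ = curve_fit(biphasic, t_h, rel, p0=[0.5, 0.5, 0.05], bounds=(0, [1, 5, 1]))
print("burst fraction %.2f, k1 = %.2f /h, k2 = %.3f /h" % tuple(popt))
```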
Cell viability assay The MTT assay was carried out in order to analyze the biocompatibility of the PU/EEP membrane, PCL/Gel scaffold, and the bilayer wound dressing. The cells' viability was investigated after 1, 4, and 7 days' incubation with the samples (Fig. 8A). No toxic effect was observed for any of the samples. The PCL/Gel and PCL/Gel-PU/EEP samples exhibited more pro-proliferative effects on the fibroblast cells in comparison with the PU/EEP, which became significant from the 4th day of incubation. In our previous study, 0.5 wt% EEP exhibited the most appropriate biocompatibility with L929 fibroblast cells in comparison with the 0.25 wt% and 1 wt% concentrations28; therefore, a 0.5 wt% EEP concentration was used in this study. Figure 8B,C show SEM images of the fibroblast cells on the surface of the bilayer wound dressing on the 7th day. As the SEM images show, the L929 cells can easily attach to and extend on the surface of the PCL/Gel scaffolds. The presence of Gel in the scaffold structure (PCL/Gel) can enhance cell proliferation. According to previous studies73, compositing a natural polymer like Gel with PCL can improve the hydrophilicity and cellular affinity of the obtained scaffold, which provides a favorable environment for cell attachment and proliferation. (A) Investigation of the PCL/Gel scaffolds' biocompatibility with the L929 fibroblasts according to the cell viability assay after 1, 4, and 7 days of incubation (*P < 0.05). (B) SEM images of the fibroblast cells on the surface of the PCL/Gel scaffold 7 days after cell seeding. In vivo wound healing and histopathology analyses The efficacy of the PU/EEP-PCL/Gel wound dressing was evaluated in comparison with the control and PU/EEP groups in the Wistar rat skin wound model. The PU/EEP and PU/EEP-PCL/Gel dressings were placed on the wound area, and wound closure was monitored for 15 days. All the rats survived throughout the experiment, and they were sacrificed on the 15th day for histopathological examinations. The remaining wound area percentage was calculated from the wound diameters at the defined time points (1, 5, 10, and 15 days after operation). Figure 9A illustrates the macroscopic photographs of wounds in the control, PU/EEP, and PU/EEP-PCL/Gel groups over time. The wounds in the control group were just covered by gauze. On the 15th day, the PU/EEP group exhibited a higher wound closure rate in comparison with the control, which can be mainly attributed to maintaining a moist environment at the wound site74. In addition, the PU/EEP-PCL/Gel-treated group exhibited almost completely healed and closed wounds by this day. As Fig. 9B shows, the remaining wound area of the control group was 100%, 60.3%, 36.3%, and 17.9% on the 1st, 5th, 10th, and 15th day, respectively. Also, the remaining wound areas of the PU/EEP-treated wounds on the 1st, 5th, 10th, and 15th day were 98.2%, 42.1%, 25.4%, and 8.5%, respectively. The remaining wound area percentages of the PU/EEP-PCL/Gel-treated group at the same time points were 99.0%, 20.1%, 7.9%, and 2.6%, respectively. These observations demonstrate the PU/EEP-PCL/Gel's potential to accelerate the wound healing process. Figure 9C illustrates the histopathological evaluation of skin specimens from the healed wound area on the 15th day after wound creation. The PU/EEP-PCL/Gel group's specimens exhibited a significantly more developed dermis in comparison with the PU/EEP and Ctrl specimens, due to the presence of a lower number of inflammatory cells and the development of more hair follicles75,76.
Also, the specimens were stained with Masson's trichrome to analyze collagen deposition. Collagen deposition at the wound matrix plays a critical role in the healing process, as it provides scaffolds for wound-healing cells35. More accumulation of collagen fibers and collagen deposition was observed in the PU/EEP-PCL/Gel group's specimens in comparison with the PU/EEP membrane and Ctrl groups. Moreover, more densely packed collagen fibers with a parallel arrangement were observed in the extracellular matrix of the PU/EEP-PCL/Gel group's specimens in comparison with the other groups. These observations demonstrate the appropriate wound healing activity of the PU/EEP-PCL/Gel wound dressings according to the histopathological analyses. Yao et al. investigated a keratin-gelatin composite bilayer dressing in vivo. They created full-thickness rectangular wounds (1.5 cm × 1.5 cm) on the backs of adult male Sprague-Dawley rats. The reported bilayer wound dressing caused complete wound closure by the 14th day post-surgery. They reported accelerated dermis development and early formation of hair follicles and sebaceous glands in the bilayer wound dressing groups77. Xu et al. investigated their microporous silicone rubber membrane bilayer in a BALB/c mouse wound model. Full-thickness defects (10 mm × 10 mm) were created on the backs of the mice and monitored for 7 days. It was observed that on the 7th day, the mean remaining wound areas of the control, vaseline gauze, and bilayer wound dressing groups were 70%, 62.8%, and 35.8%, respectively. However, the amount of wound closure was the same on the 1st and 3rd days78. Thu et al. used alginate-based bilayer hydrocolloid films for the treatment of a Sprague-Dawley rat skin wound model. Full-thickness skin excision wounds were created by a punch-biopsy needle (6 mm diameter and about 1 mm depth), and a 1 cm2 film dressing was applied on each wound. Although the normal saline group exhibited 20% mean remaining wound area after 10 days, the bilayer-dressing-treated wounds attained full closure and significant re-epithelialization55. (A) Macroscopic photographs of the wounds at the defined time points (1st, 5th, 10th, and 15th day after wound creation) showing wound closure progression in the different groups (Ctrl, PU/EEP, and PU/EEP-PCL/Gel). (B) Wound closure progression according to calculation of the remaining wound area at the defined time points (1st, 5th, 10th, and 15th day after wound creation). The data are presented as mean ± SD (n = 8). (C) Photographs of the H&E and Masson's trichrome stained sections of the healed skin specimens on the 15th day after wound creation. Conclusions In this study, a PCL/Gel solution was successfully electrospun into a continuous, uniform, and bead-free nanofibrous scaffold over a PU/EEP membrane to form a bilayer wound dressing. This wound dressing exhibited appropriate biocompatibility, biodegradability, and mechanical properties. Also, remarkable anti-bacterial activity against common wound infection bacteria was observed due to the presence of the top layer (PU/EEP). In addition, animal studies revealed that the PU/EEP-PCL/Gel bilayer wound dressing can significantly accelerate wound healing progression and shorten wound closure time. Taken together, the PU/EEP-PCL/Gel bilayer wound dressing can be a potential candidate for biomedical applications due to its high biocompatibility and significant anti-bacterial and wound healing activities. References Yao, C.-H., Lee, C.-Y., Huang, C.-H., Chen, Y.-S. & Chen, K.-Y.
Novel bilayer wound dressing based on electrospun gelatin/keratin nanofibrous mats for skin wound repair. Materials Science and Engineering: C 79, 533–540 (2017). Vig, K. et al. Advances in skin regeneration using tissue engineering. International journal of molecular sciences 18, 789 (2017). Xie, Y., Yi, Z.-x, Wang, J.-x, Hou, T.-g & Jiang, Q. Carboxymethyl konjac glucomannan-crosslinked chitosan sponges for wound dressing. International journal of biological macromolecules 112, 1225–1233 (2018). Franco, R. A., Min, Y. K., Yang, H. M. & Lee, B. T. Fabrication and biocompatibility of novel bilayer scaffold for skin tissue engineering applications. Journal of biomaterials applications 27, 605–615, https://doi.org/10.1177/0885328211416527 (2013). Hinrichs, W., Lommen, E., Wildevuur, C. R. & Feijen, J. Fabrication and characterization of an asymmetric polyurethane membrane for use as a wound dressing. Journal of applied biomaterials 3, 287–303 (1992). Zhang, L. & Webster, T. J. Nanotechnology and nanomaterials: promises for improved tissue regeneration. Nano today 4, 66–80 (2009). Naseri-Nosar, M. & Ziora, Z. M. Wound dressings from naturally-occurring polymers: A review on homopolysaccharide-based composites. Carbohydrate Polymers 189, 379–398 (2018). Suarato, G., Bertorelli, R. & Athanassiou, A. J. Borrowing from Nature: biopolymers and biocomposites as smart wound care materials. Frontiers in Bioengineering and Biotechnology 6, 137, https://doi.org/10.3389/fbioe.2018.00137 (2018). Mele, E. J. et al. Electrospinning of natural polymers for advanced wound care: towards responsive and adaptive dressings. Journal of Materials Chemistry B 4, 4801–4812 (2016). Rosellini, E., Cristallini, C., Barbani, N., Vozzi, G. & Giusti, P. Preparation and characterization of alginate/gelatin blend films for cardiac tissue engineering. Journal of Biomedical Materials Research Part A: An Official Journal of The Society for Biomaterials, The Japanese Society for Biomaterials, and The Australian Society for Biomaterials and the Korean Society for Biomaterials 91, 447–453 (2009). Shi, M. et al. Antibiotic-releasing porous polymethylmethacrylate/gelatin/antibiotic constructs for craniofacial tissue engineering. Journal of controlled release 152, 196–205 (2011). Li, M., Mondrinos, M. J., Chen, X. & Lelkes, P. I. In Engineering in Medicine and Biology Society, 2005 (IEEE-EMBS 2005), 27th Annual International Conference of the IEEE, 5858–5861 (IEEE). Labet, M. & Thielemans, W. Synthesis of polycaprolactone: a review. Chemical Society reviews 38, 3484–3504, https://doi.org/10.1039/b820162p (2009). Ceonzo, K. et al. Polyglycolic acid-induced inflammation: role of hydrolysis and resulting complement activation. Tissue engineering 12, 301–308 (2006). Gomes, S., Rodrigues, G., Martins, G., Henriques, C. & Silva, J. C. Evaluation of nanofibrous scaffolds obtained from blends of chitosan, gelatin and polycaprolactone for skin tissue engineering. International journal of biological macromolecules 102, 1174–1185 (2017). Dulnik, J., Denis, P., Sajkiewicz, P., Kołbuk, D. & Choińska, E. Biodegradation of bicomponent PCL/gelatin and PCL/collagen nanofibers electrospun from alternative solvent system. Polymer Degradation and Stability 130, 10–21 (2016). Ozkaynak, M. U., Atalay‐Oral, C., Tantekin‐Ersolmaz, S. B. & Güner, F. S. In Macromolecular symposia, 177–184 (Wiley Online Library). Lee, S. M. et al. Physical, morphological, and wound healing properties of a polyurethane foam-film dressing.
Biomaterials Research 20, 15 (2016). Mi, F. L. et al. Control of wound infections using a bilayer chitosan wound dressing with sustainable antibiotic delivery. Journal of Biomedical Materials Research: An Official Journal of The Society for Biomaterials, The Japanese Society for Biomaterials, and The Australian Society for Biomaterials and the Korean Society for Biomaterials 59, 438–449 (2002). Khil, M. S. et al. Electrospun nanofibrous polyurethane membrane as wound dressing. Journal of Biomedical Materials Research Part B Applied Biomaterials 67, 675–679 (2003). Simões, D. et al. Recent advances on antimicrobial wound dressing: A review. European Journal of Pharmaceutics and Biopharmaceutics (2018). Roy, N. et al. Biogenic synthesis of Au and Ag nanoparticles by Indian propolis and its constituents. Colloids and Surfaces B: Biointerfaces 76, 317–325 (2010). Banskota, A. H., Tezuka, Y. & Kadota, S. Recent progress in pharmacological research of propolis. Phytotherapy research 15, 561–571 (2001). Kim, D. M., Lee, G. D., Aum, S. H. & Kim, H. J. Preparation of propolis nanofood and application to human cancer. Biological & pharmaceutical bulletin 31, 1704–1710 (2008). Ota, C., Unterkircher, C., Fantinato, V. & Shimizu, M. J. M. Antifungal activity of propolis on different species of Candida 44, 375–378 (2001). Martinotti, S. & Ranzato, E. Propolis: a new frontier for wound healing? Burns & Trauma 3, 9 (2015). Oryan, A., Alemzadeh, E. & Moshiri, A. Potential role of propolis in wound healing: Biological properties and therapeutic activities. Biomedicine & Pharmacotherapy 98, 469–483 (2018). Eskandarinia, A. et al. Cornstarch-based wound dressing incorporated with hyaluronic acid and propolis: In vitro and in vivo studies. Carbohydrate Polymers 216, 25–35 (2019). Feng, B., Tu, H., Yuan, H., Peng, H. & Zhang, Y. Acetic-acid-mediated miscibility toward electrospinning homogeneous composite nanofibers of GT/PCL. Biomacromolecules 13, 3917–3925 (2012). Kalirajan, C., Hameed, P., Subbiah, N. & Palanisamy, T. A Facile Approach to Fabricate Dual Purpose Hybrid Materials for Tissue Engineering and Water Remediation. Scientific Reports 9, 1040 (2019). Khodabakhshi, D. et al. In vitro and in vivo performance of a propolis-coated polyurethane wound dressing with high porosity and antibacterial efficacy. Colloids and surfaces B: Biointerfaces 178, 177–184 (2019). Muthukumar, T., Anbarasu, K., Prakash, D. & Sastry, T. P. Effect of growth factors and pro-inflammatory cytokines by the collagen biocomposite dressing material containing Macrotyloma uniflorum plant extract—In vivo wound healing. Colloids and Surfaces B: Biointerfaces 121, 178–188 (2014). Foroushani, M. S. et al. Folate-graphene chelate manganese nanoparticles as a theranostic system for colon cancer MR imaging and drug delivery. In-vivo examinations. 54, 101223 (2019). Chen, X. et al. SIKVAV-Modified Chitosan Hydrogel as a Skin Substitutes for Wound Closure in Mice. Molecules 23, 2611 (2018). Gautam, S., Dinda, A. K. & Mishra, N. C. Fabrication and characterization of PCL/gelatin composite nanofibrous scaffold for tissue engineering applications by electrospinning method. Materials Science and Engineering: C 33, 1228–1235 (2013). Shirani, K., Nourbakhsh, M. S. & Rafienia, M. Electrospun polycaprolactone/gelatin/bioactive glass nanoscaffold for bone tissue engineering. International Journal of Polymeric Materials 68(10), 1–9 (2018). Unnithan, A. R. et al. 
Electrospun polyurethane-dextran nanofiber mats loaded with Estradiol for post-menopausal wound dressing. International Journal of Biological Macromolecules 77, 1–8 (2015). Rajzer, I., Menaszek, E., Kwiatkowski, R., Planell, J. A. & Castano, O. Electrospun gelatin/poly (ε-caprolactone) fibrous scaffold modified with calcium phosphate for bone tissue engineering. Materials Science and Engineering: C 44, 183–190 (2014). Kim, J. I., Pant, H. R., Sim, H.-J., Lee, K. M. & Kim, C. S. Electrospun propolis/polyurethane composite nanofibers for biomedical applications. Materials Science and Engineering: C 44, 52–57 (2014). Przybyłek, I. & Karpiński, T. M. Antibacterial Properties of Propolis. Molecules 24, 2047 (2019). Afrouzan, H., Tahghighi, A., Zakeri, S. & Es-haghi, A. Chemical Composition and Antimicrobial Activities of Iranian Propolis. Iranian Biomedical Journal 22, 50 (2018). Probst, I. et al. Antimicrobial activity of propolis and essential oils and synergism between these natural products. Journal of Venomous Animals and Toxins including Tropical Diseases 17, 159–167 (2011). Ramalingam, R. et al. Poly-ε-Caprolactone/Gelatin Hybrid Electrospun Composite Nanofibrous Mats Containing Ultrasound Assisted Herbal Extract: Antimicrobial and Cell Proliferation Study. Nanomaterials 9, 462 (2019). Ajmal, G. et al. Biomimetic PCL-gelatin based nanofibers loaded with ciprofloxacin hydrochloride and quercetin: A potential antibacterial and anti-oxidant dressing material for accelerated healing of a full thickness wound. International Journal of Pharmaceutics 567, 118480 (2019). Ghasemi-Mobarakeh, L., Prabhakaran, M. P., Morshed, M., Nasr-Esfahani, M.-H. & Ramakrishna, S. Electrospun poly (ɛ-caprolactone)/gelatin nanofibrous scaffolds for nerve tissue engineering. Biomaterials 29, 4532–4539 (2008). Sundaramurthi, D., Krishnan, U. M. & Sethuraman, S. Electrospun nanofibers as scaffolds for skin tissue engineering. Polymer Reviews 54, 348–376 (2014). Cai, L., Shi, H., Cao, A. & Jia, J. Characterization of gelatin/chitosan polymer films integrated with docosahexaenoic acids fabricated by different methods. Scientific reports 9 (2019). Chong, E. J. et al. Evaluation of electrospun PCL/gelatin nanofibrous scaffold for wound healing and layered dermal reconstitution. Acta biomaterialia 3, 321–330, https://doi.org/10.1016/j.actbio.2007.01.002 (2007). Zhang, Y., Ouyang, H., Lim, C. T., Ramakrishna, S. & Huang, Z. M. Electrospinning of gelatin fibers and gelatin/PCL composite fibrous scaffolds. Journal of Biomedical Materials Research Part B: Applied Biomaterials: An Official Journal of The Society for Biomaterials, The Japanese Society for Biomaterials, and The Australian Society for Biomaterials and the Korean Society for Biomaterials 72, 156–165 (2005). Adeli-Sardou, M., Yaghoobi, M. M., Torkzadeh-Mahani, M. & Dodel, M. Controlled release of lawsone from polycaprolactone/gelatin electrospun nano fibers for skin tissue regeneration. International Journal of Biological Macromolecules 124, 478–491 (2019). Zhang, Y. et al. Electrospinning of gelatin fibers and gelatin/PCL composite fibrous scaffolds. Journal of Biomedical Materials Research Part B Applied Biomaterials 72, 156–165 (2005). Yao, R., He, J., Meng, G., Jiang, B. & Wu, F. Electrospun PCL/Gelatin composite fibrous scaffolds: mechanical properties and cellular responses. Journal of Biomaterials Science, Polymer Edition 27, 824–838 (2016). Wang, L. et al. Novel bilayer wound dressing composed of SIS membrane with SIS cryogel enhanced wound healing process.
Materials Science and Engineering C 85, 162–169 (2018). Thu, H.-E., Zulfakar, M. H. & Ng, S.-F. Alginate based bilayer hydrocolloid films as potential slow-release modern wound dressing. International Journal of Pharmaceutics 434, 375–383 (2012). Gomes, S. R. et al. In vitro and in vivo evaluation of electrospun nanofibers of PCL, chitosan and gelatin: A comparative study. Materials Science and Engineering C 46, 348–358 (2015). Kim, J. I., Pant, H. R., Sim, H. J., Lee, K. M. & Kim, C. S. Electrospun propolis/polyurethane composite nanofibers for biomedical applications. Materials science & engineering. C. Materials for biological applications 44, 52–57, https://doi.org/10.1016/j.msec.2014.07.062 (2014). Kim, S. E. et al. Electrospun gelatin/polyurethane blended nanofibers for wound healing. Biomedical materials (Bristol, England) 4, 044106, https://doi.org/10.1088/1748-6041/4/4/044106 (2009). Fu, W. et al. Electrospun gelatin/PCL and collagen/PLCL scaffolds for vascular tissue engineering. International journal of nanomedicine 9, 2335 (2014). Başaran, İ. & Oral, A. Grafting of poly (ε-caprolactone) on electrospun gelatin nanofiber through surface-initiated ring-opening polymerization. International Journal of Polymeric Materials and Polymeric Biomaterials 67, 1051–1058 (2018). Srikanth, M., Asmatulu, R., Cluff, K. & Yao, L. Material Characterization and Bioanalysis of Hybrid Scaffolds of Carbon Nanomaterial and Polymer Nanofibers. ACS omega 4, 5044–5051 (2019). Unalan, I. et al. Physical and Antibacterial Properties of Peppermint Essential Oil Loaded Poly (ε-caprolactone)(PCL) Electrospun Fiber Mats for Wound Healing. Frontiers in Bioengineering and Biotechnology 7 (2019). Xue, J. et al. Drug loaded homogeneous electrospun PCL/gelatin hybrid nanofiber structures for anti-infective tissue regeneration membranes. Biomaterials 35, 9395–9405 (2014). Unnithan, A. R. et al. Electrospun polyurethane-dextran nanofiber mats loaded with Estradiol for post-menopausal wound dressing. Int. J Biol. Macromol. 77, 1–8, https://doi.org/10.1016/j.ijbiomac.2015.02.044 (2015). Sattary, M., Khorasani, M. T., Rafienia, M. & Rozve, H. S. Incorporation of nanohydroxyapatite and vitamin D3 into electrospun PCL/Gelatin scaffolds: The influence on the physical and chemical properties and cell behavior for bone tissue engineering. Polymers for Advanced Technologies 29, 451–462 (2018). Agarwal, T. et al. Gelatin/carboxymethyl chitosan based scaffolds for dermal tissue engineering applications. International journal of biological macromolecules 93, 1499–1506 (2016). Lee, S. B., Kim, Y. H., Chong, M. S., Hong, S. H. & Lee, Y. M. Study of gelatin-containing artificial skin V: fabrication of gelatin scaffolds using a salt-leaching method. Biomaterials 26, 1961–1968 (2005). Patrick, C., Plaunt, M., Hetherington, S. & May, S. Role of the Staphylococcus epidermidis slime layer in experimental tunnel tract infections. Infection and immunity 60, 1363–1367 (1992). Gunes, S., Tamburaci, S. & Tihminlioglu, F. A novel bilayer zein/MMT nanocomposite incorporated with H. perforatum oil for wound healing. Journal of Materials Science: Materials in Medicine 31, 7 (2020). Neto, R. J. G. et al. Characterization and in vitro evaluation of chitosan/konjac glucomannan bilayer film as a wound dressing. Carbohydrate polymers 212, 59–66 (2019). Mi, F. L. et al. Control of wound infections using a bilayer chitosan wound dressing with sustainable antibiotic delivery 59, 438–449 (2002). Sripriya, R., Kumar, M.
S., Ahmed, M. R. & Sehgal, P. K. Collagen bilayer dressing with ciprofloxacin, an effective system for infected wound healing. Journal of Biomaterials Science Polymer Edition 18, 335–351 (2007). Liverani, L. et al. Electrospun patterned porous scaffolds for the support of ovarian follicles growth: a feasibility study. Scientific reports 9 (2019). Wathoni, N. et al. Enhancing effect of γ-cyclodextrin on wound dressing properties of sacran hydrogel film. International journal of biological macromolecules 94, 181–186 (2017). Rezapour-Lactoee, A. et al. Thermoresponsive polyurethane/siloxane membrane for wound dressing and cell sheet transplantation: in-vitro and in-vivo studies. Materials Science and Engineering: C 69, 804–814 (2016). Yao, C.-H. et al. Novel bilayer wound dressing based on electrospun gelatin/keratin nanofibrous mats for skin wound repair. Materials Science and Engineering: C 79, 533–540 (2017). Xu, R. et al. Novel bilayer wound dressing composed of silicone rubber with particular micropores enhanced wound re-epithelialization and contraction. Biomaterials 40, 1–11 (2015). This research was supported by Arak University of Medical Sciences (grant number: 3404). Department of Biomaterials, Tissue Engineering and Nanotechnology, School of Advanced Medical Technologies, Isfahan University of Medical Sciences, Isfahan, Iran Asghar Eskandarinia, Maria Agheb, Mohammad Rafienia, Moloud Amini Baghbadorani & Darioush Khodabakhshi Department of Oncology, Cancer Prevention Research Center, Isfahan University of Medical Sciences, Isfahan, Iran Amirhosein Kefayat Biosensor Research Center, Isfahan University of Medical Sciences, Isfahan, Iran Mohammad Rafienia Department of Microbiology, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran Sepehr Navid Department of Environmental Health Engineering, School of Health, Isfahan University of Medical Sciences, Isfahan, Iran Karim Ebrahimpour Department of Medical Physics and Radiotherapy, Arak School of Paramedicine, Arak University of Medical Sciences, Arak, Iran Fatemeh Ghahremani A. Eskandarinia, A. Kefayat, M. Agheb, M. Rafienia, D. Khodabakhshi, M. Amini Baghbadorani, and F. Ghahremani designed and fabricated the wound dressing. The bilayer wound dressing characterizations were carried out by A. Eskandarinia, A. Kefayat, and F. Ghahremani. The antibacterial tests were carried out by S. Navid. Also, gas chromatography and mass spectrometry experiments were carried out by K. Ebrahimpour. In addition, in vitro and in vivo experiments and histopathology analyses were carried out by A. Kefayat, F. Ghahremani, A. Eskandarinia, and D. Khodabakhshi. The data collection and statistical analyses, manuscript writing, graphical abstract and figure design, and manuscript revisions were done by A. Eskandarinia, A. Kefayat, and F. Ghahremani. Correspondence to Darioush Khodabakhshi or Fatemeh Ghahremani. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Eskandarinia, A., Kefayat, A., Agheb, M. et al. A Novel Bilayer Wound Dressing Composed of a Dense Polyurethane/Propolis Membrane and a Biodegradable Polycaprolactone/Gelatin Nanofibrous Scaffold. Sci Rep 10, 3063 (2020).
https://doi.org/10.1038/s41598-020-59931-2
Transactions of the American Mathematical Society Published by the American Mathematical Society, the Transactions of the American Mathematical Society (TRAN) is devoted to research articles of the highest quality in all areas of pure and applied mathematics. Determination of bounds for the solutions to those binary Diophantine equations that satisfy the hypotheses of Runge's theorem by David Lee Hilliker and E. G. Straus Trans. Amer. Math. Soc. 280 (1983), 637-657 In 1887 Runge [13] proved that a binary Diophantine equation $F(x,y) = 0$, with $F$ irreducible, in a class including those in which the leading form of $F$ is not a constant multiple of a power of an irreducible polynomial, has only a finite number of solutions. It follows from Runge's method of proof that there exists a computable upper bound for the absolute value of each of the integer solutions $x$ and $y$. Runge did not give such a computation. Here we first deduce Runge's Theorem from a more general theorem on Puiseux series that may be of interest in its own right. Second, we extend the Puiseux series theorem and deduce from the generalized version a generalized form of Runge's Theorem in which the solutions $x$ and $y$ of the polynomial equation $F(x,y) = 0$ are integers, satisfying certain conditions, of an arbitrary algebraic number field. Third, we compute bounds for the solutions $(x,y) \in {{\mathbf {Z}}^2}$ in terms of the height of $F$ and the degrees in $x$ and $y$ of $F$. William J. Ellison, Variations sur un thème de Carl Runge, Séminaire Delange-Pisot-Poitou (13e année:1971/72), Théorie des nombres, Fasc. 1, Exp. No. 9, Secrétariat Mathématique, Paris, 1973, pp. 4 (French). MR 0419348 E. Heine, Handbuch der Kugelfunctionen. Theorie und Anwendungen. Vols. I, II, 2nd ed., G. Reimer, Berlin, 1878; 1881. See Jbuch. 10, 332; 13, 390-391. David Lee Hilliker, An algorithm for solving a certain class of Diophantine equations. I, Math. Comp. 38 (1982), no. 158, 611–626. MR 645676, DOI 10.1090/S0025-5718-1982-0645676-6 —, An algorithm for solving a certain class of Diophantine equations. II (to be submitted). —, An algorithm for computing the values of the ramification index in the Puiseux series expansions of an algebraic function (to be submitted). David Lee Hilliker and E. G. Straus, On Puiseux series whose curves pass through an infinity of algebraic lattice points, Bull. Amer. Math. Soc. (N.S.) 8 (1983), no. 1, 59–62. MR 682822, DOI 10.1090/S0273-0979-1983-15083-8 William Judson LeVeque, Topics in number theory. Vols. 1 and 2, Addison-Wesley Publishing Co., Inc., Reading, Mass., 1956. MR 0080682 Edmond Maillet, Sur les équations indéterminées à deux et trois variables qui n'ont qu'un nombre fini de solutions en nombres entiers, J. Math. Pures Appl. 6 (1900), 261-277. An abstract appeared in C. R. Acad. Sci. Paris 128 (1899), 1383-1395. See Jbuch. 30, 188-189; 31, 190-191. —, Sur une catégorie d'équations indéterminées n'ayant en nombres entiers qu'un nombre fini de solutions, Nouv. Ann. Math. 18 (1918), 281-292. See Jbuch. 46, 210. L. J. Mordell, Diophantine equations, Pure and Applied Mathematics, Vol. 30, Academic Press, London-New York, 1969. MR 0249355 Harry Pollard, The Theory of Algebraic Numbers, Carus Monograph Series, no.
9, Mathematical Association of America, Buffalo, N.Y., 1950. MR 0037319 G. Pólya and G. Szegö, Problems and theorems in analysis, Vols. I, II, Revised and enlarged transl. of 4th German ed., Die Grundlehren der Math. Wissenschaften, Bands 193, 216, Springer-Verlag, New York and Berlin, 1972, 1976; 1st German ed., Aufgaben und Lehrsätze aus der Analysis, Julius Springer, Berlin, 1925; 4th German ed., 1970, 1971; 1st German ed. also published in two volumes by Dover, New York, 1945. C. Runge, Über ganzzahlige Lösungen von Gleichungen zwischen zwei Veränderlichen, J. Reine Angew. Math. 100 (1887), 425-435. See Jbuch. 19, 76-77. A. Schinzel, An improvement of Runge's theorem on Diophantine equations, Comment. Pontificia Acad. Sci. 2 (1969), no. 20, 1–9 (English, with Latin summary). MR 276174 Carl Siegel, Approximation algebraischer Zahlen, Math. Z. 10 (1921), no. 3-4, 173–213 (German). MR 1544471, DOI 10.1007/BF01211608 —, Über einige Anwendungen diophantischer Approximationen, Abh. Preuss. Akad. Wiss. Phys. Math. Natur. K1. 1 (1929). Also in Gesammelte Abhandlungen Vol. I, Springer-Verlag, Berlin and New York. 1966, pp. 209-266. See Jbuch. 56, 180-184. Th. Skolem, Über ganzzahlige Lösungen einer Klasse unbestimmter Gleichungen, Norsk mat. Foren. Akrifter, Ser. I 10 (1922). See Jbuch. 48, 139. —, Diophantische Gleichungen, Verlag von Julius Springer, Berlin, 1938, reprinted by Chelsea, New York, 1950. Journal: Trans. Amer. Math. Soc. 280 (1983), 637-657 MSC: Primary 11D41
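For quick reference, the finiteness statement summarized in the abstract above can be written compactly as follows; this is a LaTeX sketch assuming an amsthm theorem environment, and the paper's actual hypotheses cover the stated class of leading forms and somewhat more.

```latex
\begin{theorem}[Runge, 1887; cf. the abstract above]
Let $F \in \mathbf{Z}[x,y]$ be irreducible, and suppose the leading form of $F$
is not a constant multiple of a power of an irreducible polynomial. Then
\[
  F(x,y) = 0
\]
has only finitely many solutions $(x,y) \in \mathbf{Z}^{2}$, and
$\max(|x|,|y|)$ admits an upper bound computable from the height of $F$
and the degrees of $F$ in $x$ and $y$.
\end{theorem}
```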
EURASIP Journal on Advances in Signal Processing Multitarget tracking in cluttered environment for a multistatic passive radar system under the DAB/DVB network Yi Fang Shi1, Seung Hyo Park1 & Taek Lyul Song1 EURASIP Journal on Advances in Signal Processing volume 2017, Article number: 11 (2017) Abstract Target tracking using multistatic passive radar in a digital audio/video broadcast (DAB/DVB) network with illuminators of opportunity faces two main challenges. The first is that one has to solve the measurement-to-illuminator association ambiguity in addition to the conventional association ambiguity between the measurements and targets, which introduces a significantly complex three-dimensional (3-D) data association problem among targets, measurements, and illuminators; this is because all the illuminators transmit signals at the same carrier frequency, so signals transmitted by different illuminators but reflected via the same target become indistinguishable. The other challenge is that only the bistatic range and range-rate measurements are available, while the angle information is unavailable or of very poor quality. In this paper, the authors propose a new target tracking algorithm that operates directly in three-dimensional (3-D) Cartesian coordinates with the capability of track management, using the probability of target existence as a track quality measure. The proposed algorithm is termed sequential processing-joint integrated probabilistic data association (SP-JIPDA), and it applies a modified sequential processing technique to resolve the additional association ambiguity between measurements and illuminators. The SP-JIPDA algorithm sequentially operates the JIPDA tracker to update each track for each illuminator with all the measurements in the common measurement set at each time. For reasons of fair comparison, the existing modified joint probabilistic data association (MJPDA) algorithm, which addresses the 3-D data association problem via "supertargets" using gate grouping and provides tracks directly in 3-D Cartesian coordinates, is enhanced by incorporating the probability of target existence as an effective track quality measure for track management. Both algorithms deal with nonlinear observations using extended Kalman filtering. A simulation study is performed to verify the superiority of the proposed SP-JIPDA algorithm over the MJIPDA in this multistatic passive radar system. Introduction In a multistatic passive radar system, illuminators of opportunity such as radio or television transmitters can be used. The transmitted signals are not under the control of the receiver; thus, the receiver can remain hidden. One can measure the time difference of arrival (TDOA) and Doppler shift between the signal received directly from the illuminator and delayed copies reflected from potential targets. As the receiver and transmitter in the multistatic passive radar system are completely separated, the receiver only needs to process the received signals, whereas in some other radar systems the receiver has to serve the transmitter in a feedback manner and consumes much more power; therefore, the multistatic passive radar system is more economical. Because the non-cooperative illuminators transmit low-RF signals, the chance of detecting stealth and low-altitude targets increases [1]; however, this causes a problem in that the measurement of the azimuth angle is often of very poor quality or not even available in some extreme situations.
Therefore, in this multistatic passive radar system, the range and range rate are measured, while the angle information is assumed to be unavailable due to the low RF frequencies of the illuminating signals. In this paper, we focus on target tracking from preprocessed detections originating from television or radio broadcasting signals that are modulated according to the digital audio broadcasting (DAB) or digital video broadcasting (DVB) standards [2, 3]. The use of DAB/DVB signals delivers a number of advantages compared to analog signals, such as improved detection performance [4], more effective signal processing [5], and more easily estimated multipath. As a result, a number of widely spaced transmitters that broadcast the DAB/DVB signals on the same carrier frequency, each responsible for a small and overlapping subscriber footprint, are used to cover a large surveillance space. However, there are two main challenges in this multistatic passive radar system with a DAB/DVB network. One is that there is a new association ambiguity between the measurements and the illuminators on top of the conventional ambiguity between the measurements and targets, which results in three-dimensional (3-D) data association and adds significant data association complexity. This is because all illuminators in this system transmit broadcasting signals at the same frequency, and it is not possible for the receiver to distinguish the signals received from the different illuminators. The other challenge is that more than one illuminator is needed to locate a single target due to the absence of angle information, which inevitably generates many ghost targets. There has been a considerable amount of research focused on the problems in this multistatic passive radar system with DAB/DVB networks. The track-before-detect (TBD) algorithms [6, 7], which estimate the target positions directly from the unprocessed DAB/DVB signals, can be used to ease the measurement association problem, since the target is observed over several consecutive scans, which reduces the number of possible associations. The TBD algorithms ensure better detection and estimation performance than conventional algorithms at the price of an increased computational load, although empirical techniques can be adopted to reduce the complexity of the TBD algorithms; one can refer to [8–10], wherein the proposed algorithms have a complexity linear in the number of integrated scans and in the time on target. As opposed to working on the unprocessed signals, most other researchers focus on target position estimation from the preprocessed detections (measurements), which requires less computational resources than the TBD algorithms but has to address the data association problem. The authors in [11] propose a multi-dimensional assignment approach to solve the transmitter ambiguity for bistatic range, range rate, and precise azimuth measurements. In [12], the authors propose a multi-hypothesis tracking (MHT)-based three-stage approach that includes primary tracking directly on the measurements and a two-dimensional (2-D) estimate to address the association problem that is later resolved into 3-D. Another algorithm employing the likelihood ratio test to remove "ghost" tracks is investigated in [13]. Recently, Choi et al. [14, 15] proposed two groups of algorithms for multitarget tracking directly in Cartesian coordinates.
One group consists of the extended Kalman filter (EKF) and unscented Kalman filter (UKF) [16] based modified joint probabilistic data association (MJPDA), which resolves the additional ambiguity between measurements and illuminators, while the other group consists of the bootstrap particle filter (BPF) and auxiliary particle filter (APF) [17] based data association under the probabilistic multi-hypothesis tracker (PMHT) measurement model. However, these preprocessed-detection-based tracking algorithms do not provide an effective track quality measure for track management, which motivates the authors to consider the probability of target existence as a track quality measure for multitarget tracking from preprocessed detections. This paper focuses on multitarget tracking algorithms that work directly in three-dimensional (3-D) Cartesian coordinates using the preprocessed detections. Motivated by the techniques presented in [18] and [19, 20], the authors propose a new algorithm, entitled sequential processing-joint integrated probabilistic data association (SP-JIPDA), which provides tracks directly in 3-D Cartesian coordinates and enables track management using the probability of target existence as a track quality measure. To the best knowledge of the authors, the only existing multitarget tracking algorithms working directly in 3-D Cartesian coordinates under the multistatic DAB/DVB passive radar system are proposed in [14, 15]: the modified joint probabilistic data association (MJPDA) and particle filtering under the probabilistic multiple hypothesis tracker (PMHT) model, with the MJPDA delivering more robust tracking performance and requiring fewer computational resources. However, both of the existing algorithms lack an effective track quality measure for track management. Therefore, in order to compare the proposed algorithm (SP-JIPDA) to the existing ones, the MJPDA algorithm is enhanced by incorporating the probability of target existence as a track quality measure for track management, and is termed modified joint integrated probabilistic data association (MJIPDA). The proposed SP-JIPDA algorithm avoids the extra data association ambiguity between the measurements and illuminators; moreover, the measurement information from the various illuminators is utilized in a more effective way, delivering much better tracking performance compared with the MJIPDA algorithm. The paper is organized as follows. Section 2 describes the problem statement, and the modified JIPDA algorithm is presented in Section 3. Section 4 presents the sequential processing JIPDA algorithm, the simulation is discussed in Section 5, and the conclusions are given in Section 6.

The tracking algorithms presented in this paper are based on the infinite sensor resolution and point target assumptions. Denote a track, or a potential target followed by a certain track, by superscript τ, where the interpretation is clear from the context. Let \(\chi _{k}^{\tau }\) denote the random existence event for a target being followed by track τ at time k. The event \(\chi _{k}^{\tau }\) propagates as a Markov process [21–23]. Denote the trajectory state with position and velocity components in 3-D Cartesian coordinates of target τ at time k by \(\mathbf {x}_{k}^{\tau }={[\mathbf {x}_{k}^{\tau,p}\;\;\;\mathbf {x}_{k}^{\tau,v}]^{T}}\), with \(\mathbf {x}_{k}^{\tau,p}={[x^{\tau }(k)\;\; y^{\tau }(k)\;\; z^{\tau }(k)]^{T}}\) and \(\mathbf {x}_{k}^{\tau,v}={[\dot {x}^{\tau }(k)\;\; \dot {y}^{\tau }(k)\;\; \dot {z}^{\tau }(k)]^{T}}\).
The trajectory state is assumed to propagate linearly by $$ \mathbf{x}_{k + 1}^{\tau} = \mathbf{F}\mathbf{x}_{k}^{\tau} + \mathbf{v}_{k}^{\tau}, $$ where \(\mathbf {v}_{k}^{\tau }\) is a white Gaussian noise sequence with zero mean and covariance Q k , and F denotes the state transition matrix, given by $$ \mathbf{F} = \left[ \begin{array}{ll} 1&T\\ 0&1 \end{array} \right] \otimes {\mathbf{I}_{3}}, $$ where T is the sampling interval between two consecutive scans, ⊗ denotes the Kronecker product, and I 3 is the identity matrix of size 3. For simplicity, and to concentrate on the essence of the problems posed by multitarget tracking in clutter for a multistatic passive radar system under the DAB/DVB network, we assume a constant-velocity target trajectory model. There are two feasible ways to handle possible changes of the target trajectory. One is to increase the target trajectory plant noise covariance matrix (Q k ) when the target trajectory changes slightly; the increased plant noise covariance can account for a certain amount of mismatch between the assumed trajectory model and the actual one. However, if the target trajectory changes dramatically or even maneuvers among different motion models, one can resort to the interacting multiple model (IMM) algorithm [24], which performs well on systems characterized by multiple models of target behavior. Each existing target τ at time k creates at most one detection with probability of detection P D . In this multistatic passive radar system of a DAB/DVB network, there are multiple illuminators located at x s =[x s y s z s ]T for s=1,⋯,N s and only one receiver located at x r =[x r y r z r ]T. The receiver can measure the bistatic range γ k and range rate \({\dot \gamma _{k}}\) at time k, which are given by $$ {\gamma_{k}}\left({\mathbf{x}_{k}^{\tau}},{\mathbf{x}_{s}}\right) = \left\| {\mathbf{x}_{k}^{\tau,p} - {\mathbf{x}_{r}}} \right\| + \left\| {\mathbf{x}_{k}^{\tau,p} - {\mathbf{x}_{s}}} \right\|, $$ $$ {\dot \gamma_{k}}\left({\mathbf{x}_{k}^{\tau}},{\mathbf{x}_{s}}\right) = \frac{{{{\left(\mathbf{x}_{k}^{\tau,p} - {\mathbf{x}_{r}}\right)}^{T}}\mathbf{x}_{k}^{\tau,v}}}{{\left\| {\mathbf{x}_{k}^{\tau,p} - {\mathbf{x}_{r}}} \right\|}} + \frac{{{{\left(\mathbf{x}_{k}^{\tau,p} - {\mathbf{x}_{s}}\right)}^{T}}\mathbf{x}_{k}^{\tau,v}}}{{\left\| {\mathbf{x}_{k}^{\tau,p} - {\mathbf{x}_{s}}} \right\|}}. $$ Thus, the observation with respect to illuminator s is given by $$ {\mathbf{z}_{k}} = h_{k}^{s}\left({\mathbf{x}_{k}^{\tau}},\mathbf{x}_{s}\right) + \mathbf{w}_{k}, $$ where \({\mathbf {z}_{k}}={[{\gamma _{k}}\;\;{\dot \gamma _{k}}]^{T}}\), \(h_{k}^{s}({\mathbf {x}_{k}^{\tau }},\mathbf {x}_{s})={[{\gamma _{k}}({\mathbf {x}_{k}^{\tau }},{\mathbf {x}_{s}})\;\;{\dot \gamma _{k}}({\mathbf {x}_{k}^{\tau }},{\mathbf {x}_{s}})]^{T}}\), and w k is a white Gaussian noise sequence with zero mean and covariance R k , uncorrelated with the plant noise sequence \(\mathbf {v}_{k}^{\tau }\). At each time k, the receiver receives a random set of measurements Z k without prior information on the origin of each measurement. Each measurement has only one source, either a target or clutter, but it can originate from any illuminator. Denote the set of measurements up to and including time k by Z k={Z k−1, Z k }, and let Z k,i denote the ith measurement of Z k . At each scan, the sensor also receives a random number of clutter (false) measurements.
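To make the motion and observation model above concrete, here is a minimal Python sketch of the constant-velocity transition matrix and the noise-free bistatic measurement function. The function names and the [position; velocity] state ordering are our own illustrative choices, not code from the paper.

```python
import numpy as np

def transition_matrix(T):
    """Constant-velocity transition matrix F = [[1, T], [0, 1]] Kronecker I_3."""
    return np.kron(np.array([[1.0, T], [0.0, 1.0]]), np.eye(3))

def bistatic_measurement(x, x_s, x_r):
    """Noise-free bistatic range and range rate h_k^s for a target state
    x = [position; velocity] (6-vector), illuminator position x_s and
    receiver position x_r (3-vectors)."""
    p, v = x[:3], x[3:]
    d_r = np.linalg.norm(p - x_r)          # target-to-receiver distance
    d_s = np.linalg.norm(p - x_s)          # target-to-illuminator distance
    rng = d_r + d_s                        # bistatic range gamma_k
    rng_rate = (p - x_r) @ v / d_r + (p - x_s) @ v / d_s  # bistatic range rate
    return np.array([rng, rng_rate])

# One prediction step x_{k+1} = F x_k (plant noise v_k omitted for clarity).
F = transition_matrix(T=3.0)
x = np.array([1e3, 2e3, 3e3, 10.0, -5.0, 0.0])
z = bistatic_measurement(F @ x, x_s=np.array([5e3, 0.0, 0.0]), x_r=np.zeros(3))
```

The Kronecker product reproduces the block structure of F, so positions advance by T times the velocity while velocities stay constant, exactly as in the equations above.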
The number of clutter measurements in the surveillance space is usually modeled by a Poisson distribution [24] with known intensity, termed the clutter measurement density. Denote the clutter measurement density of measurement Z k,i by the shorthand \({\rho _{k,i}} \buildrel \Delta \over = \rho ({\mathbf {Z}_{k,i}})\). Assuming the volume of the surveillance space at time k is V k , the probability that the number of clutter measurements in V k at time k equals m follows the Poisson distribution $$ u_{F}(m) = e^{- \int\limits_{{V_{k}}} {{\rho_{k,i}}dV}}\frac{\left(\int\limits_{{V_{k}}} \rho_{k,i}dV\right)^{m}}{{m!}}, $$ where the clutter measurement density is \(\rho _{k,i}=\frac {P_{fa}}{V_{rc}}\) [24, 25], with P fa and V rc denoting the probability of false alarm and the sensor resolution cell volume, respectively. As can easily be seen, the probability of false alarm significantly affects the number of clutter measurements: if the probability of false alarm P fa increases, the clutter measurement density ρ k,i also increases, and so does the mean number of clutter measurements in the surveillance space V k , resulting in an increased number of clutter measurements at each time k. In target tracking, the clutter measurement density is either known a priori or estimated adaptively from the current measurements [26].

Multistatic passive radar in a DAB/DVB network

As the focus of this paper is on target tracking from detections of the preprocessed signals in the multistatic passive radar system of a DAB/DVB network, there are mainly two challenges. One is that there is a new association ambiguity between measurements and illuminators besides the conventional one between measurements and targets, which results in 3-D data association among targets, measurements, and illuminators. This is because each of the multiple illuminators transmits the same-frequency digital signal, albeit with different delays due to the illuminator-target-receiver geometry; the received signals are composed of multiple unlabeled delays per target, and there is no useful information to discriminate the origin of a measurement. As the number of 3-D association events increases dramatically [15] even for a small number of measurements, targets, and illuminators, it is computationally unrealistic to directly apply conventional 2-D data association approaches to the 3-D data association problem in this passive radar system. The other challenge is that, owing to the broadcasting signal frequencies used by passive radars and the type of receivers, the angle information is realistically of poor quality, and target tracking using only the range and range rate without angle information inevitably generates ghost tracks. A bistatic range measurement locates a target on an ellipsoid in 3-D Cartesian coordinates, and the intersection of two ellipsoids is an ellipse, so a third bistatic range measurement is necessary to (possibly) locate the target, resulting in the generation of multiple ghosts. Furthermore, in the presence of measurement noise and clutter measurements, the situation is even more severe. In the following two sections, two track maintenance algorithms working directly in 3-D Cartesian coordinates are introduced. The first, presented in Section 3, is the modified JIPDA algorithm, which is enhanced by incorporating the probability of target existence as a track quality measure.
The second algorithm, proposed in Section 4, is the sequential processing JIPDA algorithm, which is the main contribution of this paper.

Modified JIPDA (MJIPDA)

Supertarget and gate grouping

The JPDA algorithm [24] enumerates and probabilistically evaluates every validated measurement-to-target association event, and the target states are estimated using the marginal association probabilities. In this multistatic passive radar system, as discussed in Section 2.3, the computational cost of evaluating all possible 3-D association events is much higher than the cost for 2-D association events. The authors in [15] proposed the suboptimal idea of a supertarget \(\tilde \tau = \{ \tau,s\}\), a hypothetical target consisting of a pair of target τ and illuminator s, and succeeded in recasting the 3-D association among measurements, targets, and illuminators as a 2-D list-matching problem between measurements and supertargets. Given that the number of association events grows with the number of targets (supertargets) and measurements involved, the gate grouping concept is introduced [27]. Denote the set of gating-validated measurements at time k with respect to supertarget \(\tilde \tau \) by \(\mathbf {z}_{k}^{\tilde \tau }\), with corresponding set cardinality \(N_{k}^{\tilde \tau }\); \(\mathbf {z}_{k,i}^{\tilde \tau }\) is the ith measurement of \(\mathbf {z}_{k}^{\tilde \tau }\). $${} \begin{aligned} \mathbf{z}_{k}^{\tilde \tau} = \left\{ {\mathbf{Z}_{k,i}} \in {\mathbf{Z}_{k}}:{\left({\mathbf{Z}_{k,i}} - h_{k}^{s}\left(\mathbf{x}_{k|k - 1}^{\tau},\mathbf{x}_{s}\right)\right)^{T}} {\left(\mathbf{S}_{k}^{\tau,s}\right)^{- 1}}\right.\\ \left.\left({\mathbf{Z}_{k,i}} - h_{k}^{s}\left(\mathbf{x}_{k|k - 1}^{\tau},\mathbf{x}_{s}\right)\right) < \kappa \right\} \end{aligned} $$ with the predicted measurement \(h_{k}^{s}(\mathbf {x}_{k|k - 1}^{\tau },\mathbf {x}_{s})\) and its associated covariance \(\mathbf {S}_{k}^{\tau,s}\) with respect to supertarget \(\tilde \tau = \{ \tau,s\}\), where κ is the gating threshold. The supertargets sharing at least one measurement in their corresponding validation gates are classified into one group. Thus, all of the 3-D association events can be separated into groups of 2-D association events, which greatly decreases the number of association events. Note that the concept of a group here is slightly different from that of a cluster, in the sense that supertargets in different groups may come from the same target, since one target creates N s supertargets by combining with the N s illuminators. Table 1 gives an example of four supertargets, formed by combining two targets (τ 1,τ 2) and two illuminators (s 1,s 2), together with their corresponding validated measurements. There are two groups: the first group is \(({\tilde \tau _{1},\tilde \tau _{3}})\) with shared validated measurements (z 2,z 4); the other group is \(({\tilde \tau _{2},\tilde \tau _{4}})\) with shared validated measurements (z 1,z 3). Table 1 Example of supertargets

Modified JIPDA using supertargets

The modified JPDA algorithm proposed in [15] declares a track lost if no measurement falls within the target gates for several consecutive scans or the gate becomes too large.
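As a rough illustration of the gating and gate-grouping step described above, the following hypothetical Python sketch validates measurements with a Mahalanobis gate and groups supertargets that share validated measurements via connected components. All names are made up for illustration; this is not the authors' implementation.

```python
import numpy as np
from collections import defaultdict

def in_gate(z, z_pred, S, kappa):
    """Gating test: squared Mahalanobis distance of measurement z from the
    predicted measurement z_pred with innovation covariance S."""
    d = z - z_pred
    return d @ np.linalg.solve(S, d) < kappa

def group_supertargets(gates):
    """gates[st] = set of validated measurement indices for supertarget st.
    Supertargets sharing at least one measurement end up in the same group
    (connected components, computed with a small union-find)."""
    parent = {st: st for st in gates}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    owner = {}
    for st, meas in gates.items():
        for m in meas:
            if m in owner:
                parent[find(st)] = find(owner[m])  # union via shared measurement
            else:
                owner[m] = st
    groups = defaultdict(list)
    for st in gates:
        groups[find(st)].append(st)
    return list(groups.values())

# The Table 1 example: four supertargets, two groups.
gates = {"st1": {2, 4}, "st3": {2, 4}, "st2": {1, 3}, "st4": {1, 3}}
print(group_supertargets(gates))   # [['st1', 'st3'], ['st2', 'st4']]
```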
In this section, the modified JPDA is enhanced by incorporating the probability of target existence as a track quality measure for track management, which uses the probability of target existence to confirm or terminate tracks; the result is termed modified joint integrated probabilistic data association (MJIPDA). The MJIPDA algorithm recursively calculates the hybrid track state at each scan. Let us start the recursion from the predicted track state of track τ at time k, which consists of the track existence event \(\chi _{k}^{\tau }\) (a discrete event) and the trajectory state \(\mathbf {x}_{k}^{\tau }\) (a continuous variable), $$ p\left(\chi_{k}^{\tau},\mathbf{x}_{k}^{\tau}\left|{\mathbf{Z}^{k - 1}}\right.\right) = P\left(\chi_{k}^{\tau} \left|{\mathbf{Z}^{k - 1}}\right.\right)p\left(\mathbf{x}_{k}^{\tau} \left|\chi_{k}^{\tau},{\mathbf{Z}^{k - 1}}\right.\right), $$ $$ p\left(\mathbf{x}_{k}^{\tau} \left|\chi_{k}^{\tau},{\mathbf{Z}^{k - 1}}\right.\right) = \mathcal{N}\left(\mathbf{x}_{k}^{\tau}; \hat{\mathbf{x}}_{k|k - 1}^{\tau},\mathbf{P}_{k|k - 1}^{\tau} \right). $$ The trajectory state pdf \(p(\mathbf {x}_{k}^{\tau } |\chi _{k}^{\tau },{\mathbf {Z}^{k - 1}})\) is conditioned on the target existence \(\chi _{k}^{\tau }\); for the rest of this paper, this conditioning is left implicit.

Data association

In this section, the multitarget data association operation is presented for one group; the other groups follow the same procedure. A feasible joint event (FJE) is an allocation of all measurements to all supertargets in the group such that each supertarget is assigned zero or one measurement and each measurement is allocated to zero (clutter) or one supertarget [24]. Let ξ j denote the jth FJE, where T 0(ξ j ) and T 1(ξ j ) are the sets of supertargets allocated no measurement and one measurement in the FJE ξ j , respectively. The posterior probability of FJE ξ j is given by $${} \begin{aligned} P\left({\xi_{j}}\left|{\mathbf{Z}^{k}}\right.\right) &= {c_{k}} {\underset{\tilde \tau \in {T_{0}}\left({\xi_{j}}\right)}{\Pi}} \!\left(\!1 - {P_{D}}{P_{G}}P\left(\chi_{k}^{\tilde \tau }|{\mathbf{Z}^{k - 1}}\right)\right) \\ &\quad\times {\underset{\tilde \tau \in {T_{1}}\left({\xi_{j}}\right)}{\Pi}}\left({P_{D}}{P_{G}}P\left(\chi_{k}^{\tilde \tau }|{\mathbf{Z}^{k - 1}}\right)\frac{{p_{k,i}^{\tilde \tau }}}{{{\rho_{k,i}}}}\right), \end{aligned} $$ where P G is the gating probability and c k is the normalizing constant. \(P(\chi _{k}^{\tilde \tau }|{\mathbf {Z}^{k - 1}}) = P(\chi _{k}^{\tau } |{\mathbf {Z}^{k - 1}})\) since \(\tilde \tau = \{ \tau,s\}\). The likelihood of measurement \(\mathbf {z}_{k,i}^{\tilde \tau }\) allocated to supertarget \(\tilde \tau \) in FJE ξ j is obtained by $$ p_{k,i}^{\tilde \tau} = \frac{{\mathcal{N}\left({\mathbf{z}_{k,i}^{\tilde \tau}};\;h_{k}^{s}\left(\mathbf{x}_{k|k - 1}^{\tau},\mathbf{x}_{k}^{s}\right),\mathbf{S}_{k}^{\tau,s}\right)}}{{{P_{G}}}}. $$ Denote by \(\Xi (\tilde \tau,i)\) the set of FJEs that allocate measurement i to supertarget \(\tilde \tau \), and denote the event that the selected measurement i is the detection of supertarget \(\tilde \tau \) at time k by \(\chi _{k,i}^{\tilde \tau }\) (i=0 means no measurement is the supertarget \(\tilde \tau \) detection).
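Before these FJE probabilities are marginalized below, it may help to see how FJEs can be enumerated for a small group. The following is a hypothetical sketch that assumes every listed measurement is validated for every supertarget in the group; it is an illustration only, not the authors' implementation.

```python
def feasible_joint_events(supertargets, measurements):
    """Enumerate all feasible joint events (FJEs): each supertarget is
    assigned zero or one measurement, and each measurement is used by at
    most one supertarget.  An FJE is returned as a dict mapping a
    supertarget to its assigned measurement; an absent key means no
    measurement (the i = 0 case)."""
    events = [dict()]
    for st in supertargets:
        extended = []
        for ev in events:
            extended.append(dict(ev))        # st assigned no measurement
            for m in measurements:
                if m not in ev.values():     # each measurement used at most once
                    e = dict(ev)
                    e[st] = m
                    extended.append(e)
        events = extended
    return events

# Group (tau~2, tau~4) from Table 1 with shared measurements {z1, z3}:
fjes = feasible_joint_events(["st2", "st4"], ["z1", "z3"])
print(len(fjes))   # 7 feasible joint events
```

For two supertargets and two shared measurements there are seven FJEs, which illustrates how the event count grows combinatorially with the group size.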
The posterior probability that no measurement originates from the supertarget \(\tilde \tau \) detection is $$ P\left(\chi_{k,0}^{\tilde \tau }\left|{\mathbf{Z}^{k}}\right.\right) = \sum\limits_{{\xi_{j}} \in \Xi (\tilde \tau,0)} {P\left({\xi_{j}}\left|{\mathbf{Z}^{k}}\right.\right)}, $$ the posterior probability that no measurement in the group is a supertarget \(\tilde \tau \) detection and that supertarget \(\tilde \tau \) exists is $${} P\left(\chi_{k}^{\tilde \tau},\chi_{k,0}^{\tilde \tau }\left|{\mathbf{Z}^{k}}\!\right.\right) = \frac{{\left(1 - {P_{D}}{P_{G}}\right)P\left(\chi_{k}^{\tilde \tau }\left|{\mathbf{Z}^{k - 1}}\right.\right)}}{{1 - {P_{D}}{P_{G}}P\left(\chi_{k}^{\tilde \tau }\left|{\mathbf{Z}^{k - 1}}\right.\right)}}P\!\left(\chi_{k,0}^{\tilde \tau}\left|{\mathbf{Z}^{k}}\right.\right), $$ and the posterior probability that measurement \(\mathbf {z}_{k,i}^{\tilde \tau }\) in the group is the supertarget \(\tilde \tau \) detection and that supertarget \(\tilde \tau \) exists is $$ P\left(\chi_{k}^{\tilde \tau },\chi_{k,i}^{\tilde \tau }\left|{\mathbf{Z}^{k}}\right.\right) = \sum\limits_{{\xi_{j}} \in \Xi \left(\tilde \tau,i\right)} {P\left({\xi_{j}}\left|{\mathbf{Z}^{k}}\right.\right)}; $$ thus, the posterior probability of supertarget \(\tilde \tau \) existence is $$ P\left(\chi_{k}^{\tilde \tau }\left|{\mathbf{Z}^{k}}\right.\right) = \sum\limits_{i \ge 0} {P\left(\chi_{k}^{\tilde \tau}, \chi_{k,i}^{\tilde \tau }\left|{\mathbf{Z}^{k}}\right.\right)}. $$ The posterior data association probability for supertarget \(\tilde \tau \) is $$ {\beta_{i\tilde \tau }} = P\left(\chi_{k,i}^{\tilde \tau }\left|\chi_{k}^{\tilde \tau },{\mathbf{Z}^{k}}\right.\right) = \frac{{P\left(\chi_{k,i}^{\tilde \tau },\chi_{k}^{\tilde \tau }\left|{\mathbf{Z}^{k}}\right.\right)}}{{P\left(\chi_{k}^{\tilde \tau }\left|{\mathbf{Z}^{k}}\right.\right)}},i \ge 0. $$

Track state update

Since each supertarget \(\tilde \tau \) is created by combining each track τ with each illuminator s (a total of N s illuminators), the track state output for τ at time k should involve all the supertargets \(\tilde \tau \) originating from track τ. Let E(τ) denote the set of supertargets originating from track τ, with set cardinality N s . $$ p\left(\chi_{k}^{\tau},\mathbf{x}_{k}^{\tau}\left|{\mathbf{Z}^{k}}\right.\right) = P\left(\chi_{k}^{\tau} \left|{\mathbf{Z}^{k}}\right.\right)p\left(\mathbf{x}_{k}^{\tau} \left|{\mathbf{Z}^{k}}\right.\right). $$ The probability density function of the track τ trajectory state is assumed to be a single Gaussian distribution, $$ p\left(\mathbf{x}_{k}^{\tau} \left|{\mathbf{Z}^{k}}\right.\right) = \mathcal{N}\left(\mathbf{x}_{k}^{\tau}; \hat{\mathbf{x}}_{k|k}^{\tau},\mathbf{P}_{k|k}^{\tau} \right), $$ $$ \hat{\mathbf{x}}_{k|k}^{\tau} = \sum\limits_{\tilde \tau \in \mathrm{E}(\tau)} {\sum\limits_{j = 0}^{N_{k}^{\tilde \tau }} {{\tilde{c}_{k} \beta_{j\tilde \tau }}} \hat{\mathbf{x}}_{k|k,j}^{\tilde \tau }}, $$ $${} \mathbf{P}_{k|k}^{\tau} =\! \sum\limits_{\tilde{\tau} \in \mathrm{E}(\tau)}{\sum\limits_{j = 0}^{N_{k}^{\tilde{\tau}}}{{\tilde{c}_{k} \beta_{j\tilde \tau }}\!\left(\mathbf{\!P}_{k|k,j}^{\tilde \tau } \!+ \hat{\mathbf{x}}_{k|k,j}^{\tilde \tau }{{\left(\!\hat{\mathbf{x}}_{k|k,j}^{\tilde \tau }\!\right)}^{T}}\!\right)}} - \hat{\mathbf{x}}_{k|k}^{\tau} {\left(\hat{\mathbf{x}}_{k|k}^{\tau}\!\right)^{T}}.
$$ where \(\tilde {c_{k}}\) is the renormalization factor, which satisfies \(\sum \limits _{\tilde \tau \in \mathrm {E}(\tau)} {\sum \limits _{j = 0}^{N_{k}^{\tilde \tau }} {{\tilde {c_{k}}\beta _{j\tilde \tau }}}} = 1\). The mean and covariance updated by measurement \(\mathbf {z}_{k,i}^{\tilde \tau }\) with respect to supertarget \(\tilde \tau =(\tau,s)\) are calculated by $${} \left[\hat{\mathbf{x}}_{k|k,i}^{\tilde{\tau}}\;\;\mathbf{P}_{k|k,i}^{\tilde{\tau}}\right] \,=\, \left\{ \begin{array}{ll} \left[\hat{\mathbf{x}}_{k|k - 1}^{\tau} \;\;\mathbf{P}_{k|k - 1}^{\tilde \tau }\right],&\!i = 0\\ {\mathbf{EKF}}_{\mathrm{U}}\!\left({\mathbf{z}_{k,i}^{\tilde \tau}},{\mathbf{R}_{k}},\hat{\mathbf{x}}_{k|k - 1}^{\tau}, \mathbf{P}_{k|k - 1}^{\tau}, h_{k}^{s}\right),& \!i > 0, \end{array} \right. $$ where E K F U is the extended Kalman filter update procedure, and the predicted measurement function is \(h_{k}^{s} = h_{k}^{s}\left (\hat {\mathbf {x}}_{k|k - 1}^{\tau },{\mathbf {x}_{s}}\right)\). The posterior probability of target existence with respect to track τ at time k is obtained by $$ P\left(\chi_{k}^{\tau} |{\mathbf{Z}^{k}}\right) = \sum\limits_{\tilde \tau \in \mathrm{E}(\tau)} {P\left(\chi_{k}^{\tilde \tau }\left|{\mathbf{Z}^{k}}\right.\right)} /{N_{s}}. $$

Sequential processing JIPDA (SP-JIPDA)

The authors propose another methodology, named sequential processing JIPDA (SP-JIPDA), for target tracking in clutter in this multistatic passive radar system. It avoids the association between measurements and illuminators by sequentially processing the tracks for each illuminator with all the measurements in the common measurement set, and it recursively calculates the probability of target existence as a track quality measure for track management. Note that the concept of the supertarget is not needed in this approach.

Sequential processing framework

The sequential implementation of multisensor multitarget tracking in clutter was introduced in [19, 20]; it processes the measurements from one sensor at a time, and the measurements of the next sensor are then used to further improve the intermediate state estimate. However, since the measurements of different illuminators in the multistatic passive radar system in a single-frequency (DAB/DVB) network are indistinguishable, the typical sequential approach needs to be modified slightly. In the sequential processing method proposed in this section, the hybrid state of each track is updated sequentially for each illuminator with all the measurements in the common measurement set. As shown in Fig. 1, each track state is first updated for illuminator 1 with all the measurements of measurement set Z k to obtain the first intermediate track state; the intermediate track state is then improved by the sequential update for the next illuminator with the common measurement set Z k . The track state update continues until the corresponding intermediate track state has been updated for the last illuminator with the common measurement set Z k . This sequential processing implementation avoids the 3-D association among targets, measurements, and illuminators, and tangibly improves the track state estimation.

Sequential processing JIPDA implementation

The JIPDA [18, 28] is a multitarget tracking algorithm with single-scan data association and the capability of track management using the probability of target existence.
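The control flow of the sequential processing framework just described can be sketched as follows. This is a structural outline only: the `jipda_update` stub stands in for the full JIPDA selection/association/update equations of this section, and all names are our own, not the paper's.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Track:
    x: np.ndarray        # trajectory state estimate (6-vector)
    P: np.ndarray        # state covariance (6x6)
    existence: float     # probability of target existence

def jipda_update(track, Z_k, illuminator):
    """Stub for one JIPDA update step [18, 28] with respect to a single
    illuminator: measurement selection, joint data association, EKF update
    of (x, P), and update of the probability of target existence.  The full
    equations are those given in this section and are not reproduced here."""
    pass

def sp_jipda_scan(tracks, Z_k, illuminators, F, Q, p11=0.98):
    """One scan of SP-JIPDA: predict each track once, then refine it
    sequentially for every illuminator using the common measurement set."""
    for trk in tracks:
        trk.x = F @ trk.x                    # EKF time prediction
        trk.P = F @ trk.P @ F.T + Q
        trk.existence = p11 * trk.existence  # Markov propagation of existence
        for s in illuminators:               # sequential refinement, s = 1..N_s
            jipda_update(trk, Z_k, s)        # always with the full set Z_k
    return tracks
```

The key point the sketch makes explicit is that every illuminator update reuses the same common measurement set Z k, so no measurement-to-illuminator assignment is ever enumerated.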
The sequential processing JIPDA (SP-JIPDA) algorithm is implemented by sequentially operating the JIPDA tracker to update each track τ for each illuminator with the common measurement set Z k . The proposed algorithm starts the recursion with the updated probability density function (pdf) of the track state at time k−1, which consists of the target existence and track trajectory state, as $${} p\left(\chi_{k - 1}^{\tau}, \mathbf{x}_{k - 1}^{\tau}\left|{\mathbf{Z}^{k - 1}}\right.\right) = P\left(\chi_{k - 1}^{\tau} \left|{\mathbf{Z}^{k - 1}}\right.\right)p\left(\mathbf{x}_{k - 1}^{\tau} \left|{\mathbf{Z}^{k - 1}}\right.\right), $$ $$ p\left(\mathbf{x}_{k - 1}^{\tau} |{\mathbf{Z}^{k - 1}}\right) = \mathcal{N}\left(\mathbf{x}_{k - 1}^{\tau} ;\hat{\mathbf{x}}_{k - 1|k - 1}^{\tau},\mathbf{P}_{k - 1|k - 1}^{\tau}\right). $$ The track τ state at time k is updated sequentially for each illuminator s using the common measurement set Z k in a recursive manner. The recursion includes track state propagation; measurement selection and likelihood evaluation; and multitarget data association. The propagated track state of track τ at time k with respect to illuminator s equals $${} p\left(\!\chi_{k}^{\tau} (s),\mathbf{x}_{k}^{\tau} (s)\left|{\mathbf{Z}^{k - 1}}\!\right.\right) = P\left(\chi_{k}^{\tau} (s)\left|{\mathbf{Z}^{k - 1}}\right.\right)p\left(\mathbf{x}_{k}^{\tau} (s)\left|{\mathbf{Z}^{k - 1}}\right.\right), $$ $$ p\left(\mathbf{x}_{k}^{\tau} (s)\left|{\mathbf{Z}^{k - 1}}\right.\right) = \mathcal{N}\left(\mathbf{x}_{k}^{\tau} (s);\hat{\mathbf{x}}_{k|k - 1}^{\tau} (s),\mathbf{P}_{k|k - 1}^{\tau} (s)\right). $$ Within the sequential processing framework, the propagated track state for the first illuminator is the predicted track state \(p(\chi _{k}^{\tau },\mathbf {x}_{k}^{\tau } |{\mathbf {Z}^{k - 1}})\), and the propagated track state for any other illuminator s equals the posterior track state with respect to the previous illuminator, \(p(\chi _{k}^{\tau } (s - 1),\mathbf {x}_{k}^{\tau } (s - 1)|{\mathbf {Z}^{k}})\). The propagated probability of target existence with respect to illuminator s is obtained by $$ P\left(\chi_{k}^{\tau} (s)|{\mathbf{Z}^{k - 1}}\right) = \left\{ \begin{array}{ll} P\left(\chi_{k}^{\tau} \left|{\mathbf{Z}^{k - 1}}\right.\right),&s = 1\\ P\left(\chi_{k}^{\tau} (s - 1)\left|{\mathbf{Z}^{k}}\right.\right),&s \ne 1, \end{array} \right. $$ $$ P\left(\chi_{k}^{\tau} \left|{\mathbf{Z}^{k - 1}}\right.\right) = {p_{1,1}}P\left(\chi_{k - 1}^{\tau} \left|{\mathbf{Z}^{k - 1}}\right.\right). $$ Here p 1,1 is the probability of target existence at time k given that it exists at time k−1 [21], and \(P(\chi _{k}^{\tau } (s - 1)|{\mathbf {Z}^{k}})\) is the posterior probability of target existence with respect to illuminator s−1. The propagated track trajectory state for illuminator s is given by $${} \left[\hat{\mathbf{x}}_{k|k - 1}^{\tau} (s),\mathbf{P}_{k|k - 1}^{\tau} (s)\right] =\! \left\{ \begin{array}{ll} \left[\hat{\mathbf{x}}_{k|k - 1}^{\tau},\mathbf{P}_{k|k - 1}^{\tau} \right],&\!s = 1 \\ \left[\hat{\mathbf{x}}_{k|k}^{\tau} (s - 1), \mathbf{P}_{k|k}^{\tau} (s - 1)\right],&\!s \ne 1,
\end{array} \right.
$$ $$ {\left[\hat{\mathbf{x}}_{k|k - 1}^{\tau},\mathbf{P}_{k|k - 1}^{\tau} \right]} = {\mathbf{EKF}}_{P}\left(\hat{\mathbf{x}}_{k - 1|k - 1}^{\tau},\mathbf{P}_{k - 1|k - 1}^{\tau}, \mathbf{F},{\mathbf{Q}_{k}}\right), $$ where E K F P denotes the extended Kalman filter prediction procedure, and \(\hat {\mathbf {x}}_{k|k}^{\tau } (s - 1)\) and \(\mathbf {P}_{k|k}^{\tau } (s - 1)\) are the posterior mean and covariance of the track trajectory state with respect to illuminator s−1 at time k. To save computational resources, a measurement selection procedure is performed by each track τ with respect to each illuminator s to select a subset of measurements \(\mathbf {z}_{k}^{\tau } (s)\) with cardinality \(m_{k}^{\tau } (s)\): $$ \begin{aligned} &\mathbf{z}_{k}^{\tau} (s) = \\ &\left\{{\mathbf{Z}_{k,i}} \in {\mathbf{Z}_{k}}:{\left({\mathbf{Z}_{k,i}} - h_{k}^{s}\left(\mathbf{x}_{k|k - 1}^{\tau} (s),\mathbf{x}_{s}\right)\right)^{T}} {\left(\mathbf{S}_{k}^{\tau} (s)\right)^{- 1}}\right.\\ &\left.\left({\mathbf{Z}_{k,i}} - h_{k}^{s}\left(\mathbf{x}_{k|k - 1}^{\tau} (s),\mathbf{x}_{s}\right)\right) < \kappa \right\} \end{aligned} $$ $$ \mathbf{S}_{k}^{\tau} (s) = \mathbf{H}_{k}^{\tau} (s)\mathbf{P}_{k|k - 1}^{\tau} (s){\left(\mathbf{H}_{k}^{\tau} (s)\right)^{T}} + {\mathbf{R}_{k}}, $$ where \(\mathbf {H}_{k}^{\tau } (s)\) is the Jacobian matrix of the measurement function \(h_{k}^{s}(.)\) evaluated at \(\hat {\mathbf {x}}_{k|k - 1}^{\tau } (s)\). Denote the ith measurement of \(\mathbf {z}_{k}^{\tau } (s)\) by \(\mathbf {z}_{k,i}^{\tau } (s)\). The likelihood of measurement \(\mathbf {z}_{k,i}^{\tau } (s)\) with respect to track τ and illuminator s is then calculated by $$ p_{k,i}^{\tau} (s) = \frac{{\mathcal{N}\left(\mathbf{z}_{k,i}^{\tau} (s);\;h_{k}^{s}\left(\hat{\mathbf{x}}_{k|k - 1}^{\tau} (s),\mathbf{x}_{k}^{s}\right),\mathbf{S}_{k}^{\tau} (s)\right)}}{{{P_{G}}}}. $$ In multitarget tracking, as the measurement origins are no longer independent, the allocation between measurements and targets must be considered jointly, or globally. As the number of feasible joint events for measurement-to-target allocation grows combinatorially, tracks are separated into clusters of tracks that share selected measurements. The multitarget data association operations are performed on each cluster of tracks simultaneously and independently. As the multitarget data association equations can easily be found in [28], they are represented here by the pseudo-function of joint multitarget data association (JMTDA): $$ \begin{aligned} &\left[ {{{\left\{ P(\chi_{k}^{\tau} (s)|{\mathbf{Z}^{k}}),{{\left\{ \beta_{k,i}^{\tau} (s)\right\} }_{i \ge 0}}\right\} }_{\tau} }} \right] = {\mathbf{JMTDA}}\\ &\left[ {{{\left\{ P\left(\chi_{k}^{\tau} (s)|{\mathbf{Z}^{k - 1}}\right),{{\left\{ p_{k,i}^{\tau} (s)\right\} }_{i > 0}}\right\} }_{\tau} }} \right].
\end{aligned} $$

Track update

The updated track state at time k of track τ with respect to illuminator s equals $$ p\left(\chi_{k}^{\tau} (s),\mathbf{x}_{k}^{\tau} (s)\left|{\mathbf{Z}^{k}}\right.\right) = P\left(\chi_{k}^{\tau} (s)\left|{\mathbf{Z}^{k}}\right.\right)p\left(\mathbf{x}_{k}^{\tau} (s)\left|{\mathbf{Z}^{k}}\right.\right), $$ where the track τ trajectory state is approximated by a single Gaussian, $${} \begin{aligned} p\left(\mathbf{x}_{k}^{\tau} (s)\left|{\mathbf{Z}^{k}}\right.\right) &= \mathcal{N}\left(\mathbf{x}_{k}^{\tau} (s);\hat{\mathbf{x}}_{k|k}^{\tau} (s),\mathbf{P}_{k|k}^{\tau} (s)\right)\\ &= \sum\limits_{i = 0}^{m_{k}^{\tau} (s)} {\beta_{k,i}^{\tau} (s)} \mathcal{N}\left(\mathbf{x}_{k}^{\tau} (s);\hat{\mathbf{x}}_{k|k,i}^{\tau} (s),\mathbf{P}_{k|k,i}^{\tau} (s)\right), \end{aligned} $$ where the track state estimate \(\left [\hat {\mathbf {x}}_{k|k,i}^{\tau } (s),\mathbf {P}_{k|k,i}^{\tau } (s)\right ]\) is expressed, for i=0, as $$ \left[ {\hat{\mathbf{x}}_{k|k,i}^{\tau} (s),\mathbf{P}_{k|k,i}^{\tau} (s)} \right] = \left[ {\mathbf{x}_{k|k - 1}^{\tau} (s),\mathbf{P}_{k|k - 1}^{\tau} (s)} \right], $$ and, for i>0, as $${} \left[ \!{\hat{\mathbf{x}}_{k|k,i}^{\tau} (s),\mathbf{P}_{k|k,i}^{\tau} (s)} \!\right] \,=\,{\mathbf{EKF}}_{\mathrm{U}}{\left(\!\mathbf{z}_{k,i}^{\tau} (s),{\mathbf{R}_{k}},\mathbf{x}_{k|k - 1}^{\tau} (s),\mathbf{P}_{k|k - 1}^{\tau} (s),h_{k}^{s}\right)}. $$ This recursive procedure with respect to each track τ at time k operates for each illuminator s=1,…,N s .

Track output

The posterior track τ state at time k is the track τ state updated for the last (N s th) illuminator using the common measurement set Z k , $$ p\left(\chi_{k}^{\tau},\mathbf{x}_{k}^{\tau} \left|{\mathbf{Z}^{k}}\right.\right) = P\left(\chi_{k}^{\tau} (N_{s})\left|{\mathbf{Z}^{k}}\right.\right)p\left(\mathbf{x}_{k}^{\tau} (N_{s})\left|{\mathbf{Z}^{k}}\right.\right). $$

Simulation validation

In this section, numerical experiments for multitarget tracking in a multistatic passive radar system under the DAB/DVB network are discussed; they validate the superiority of the proposed algorithm (SP-JIPDA) over the MJIPDA and show that track management using the probability of target existence as a track quality measure is effective in both algorithms.

Scenario description in the DAB/DVB network

We consider a 3-D scenario of a single DAB/DVB network consisting of four illuminators and one receiver; the illuminators and receiver are all statically located on the ground (x−y plane). The multitarget geometry of this scenario is shown in Fig. 2. Five targets are simulated with constant velocity. Targets 1, 2, and 3 move close to one another and cross each other at a small intersection angle; targets 4 and 5 start moving from the same position. All targets appear simultaneously at scan 1 except target 3, which is born at scan 12, and all five targets survive until the last scan (scan 40). Targets 1, 2, and 3 move towards the illuminator-receiver plane, while targets 4 and 5 move away from it. (Fig. 2: Target geometry in a DAB/DVB network.) Each illuminator-receiver pair measures the targets independently. The probability of detection P D of each illuminator is modeled according to the transmission loss, which depends on the distances among illuminator, target, and receiver; for simplicity, in this scenario the detection probability P D is assumed equal for all targets and illuminators.
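A hypothetical sketch of such a scenario setup is given below; the paper does not list exact coordinates or velocities, so all numbers are illustrative placeholders, not the values behind Fig. 2.

```python
import numpy as np

# Illustrative geometry only: Fig. 2 shows four illuminators and one receiver
# on the ground plane, but exact coordinates are not given in the text, so
# the numbers below are our own placeholders.
illuminators = np.array([[ 5000.0,  5000.0, 0.0],
                         [-5000.0,  5000.0, 0.0],
                         [ 5000.0, -5000.0, 0.0],
                         [-5000.0, -5000.0, 0.0]])
receiver = np.zeros(3)

T, n_scans = 3.0, 40      # sampling interval [s] and scans per run

def cv_trajectory(p0, v0, n_scans=n_scans, T=T):
    """Positions of a constant-velocity target over n_scans scans."""
    return np.array([p0 + k * T * v0 for k in range(n_scans)])

# e.g., target 3 is born at scan 12 and survives to scan 40:
target3 = cv_trajectory(np.array([2e3, 1e3, 3e3]),
                        np.array([20.0, -10.0, 0.0]),
                        n_scans=40 - 12 + 1)
```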
To make the simulation challenging, the target moving space is corrupted with heavy clutter measurements, as shown in Fig. 3, where each target is measured by four illuminator-receiver pairs. The maximum detection bistatic range is 14 km and the minimum detection bistatic range is 1 km; the maximum bistatic range rate is 50 m/s and the minimum bistatic range rate is −50 m/s. The number of clutter measurements follows a Poisson distribution, and the clutter measurements are assumed to be uniformly distributed in this bistatic range and bistatic range-rate surveillance space. N=100 Monte Carlo runs are performed; each run has 40 scans, and the sampling interval between scans is 3 s. The measurement noise follows a Gaussian distribution, \(N\left (r;0,\sigma _{r}^{2}\right)\) for the range and \(N\left ({\dot r};0,\sigma _{\dot r}^{2}\right)\) for the range rate. (Fig. 3: Example of received measurements for one scan with clutter density ρ=0.001.)

Performance measure criterion

Due to the association ambiguity between measurements and illuminators as well as the unavailability of angle information, track initiation from measurements in the multistatic passive radar with the DAB/DVB network is computationally very expensive and usually fails to give satisfactory performance; it is therefore outside the scope of this paper. Preliminary work can be found in [29], where the track initiation performance is quite sensitive to the clutter and target detection probability, and works only with a very small number of clutter measurements and a high target detection probability. In this simulation, the target ground truth information is used to initiate a track [15]. The initiated track position and velocity are generated from the true target position and velocity with a certain disturbance (assumed to be Gaussian), i.e., the initiation position of target τ is \(\hat {\mathbf {x}}_{0}^{\tau,p} = \mathbf {x}_{0}^{\tau,p} + N\left (p_{0};0,\sigma _{I}^{2}\right)\), and the initiation velocity is \(\hat {\mathbf {x}}_{0}^{\tau,v} = \mathbf {x}_{0}^{\tau,v} + N\left (v_{0};0, {10^{-4}}\sigma _{I}^{2}\right)\), where \(\mathbf {x}_{0}^{\tau,p}\) and \(\mathbf {x}_{0}^{\tau,v}\) are the true initial position and velocity of target τ, respectively. An important performance measure for target tracking in clutter is track management. In this simulation, both the MJIPDA and the SP-JIPDA algorithms calculate the probability of target existence as the track quality measure for track management. Each initiated track is given an initial value of the probability of target existence, which is recursively updated by the measurements in the subsequent scans. The track management procedure confirms a track if its updated probability of target existence exceeds a predefined confirmation threshold, and the track maintains the confirmed status until it is terminated when the updated probability of target existence falls below a predefined termination threshold. Since the tracks are initialized based on the targets' ground truth information at the targets' birth times, only tracks following targets are initialized. In these experiments, the average number of confirmed tracks (ANCT) is used as the track management measure. It is averaged over N Monte Carlo runs and is obtained by $$ \text{ANCT}(N)=\frac{1}{N}\sum\limits_{i = 1}^{N}{N_{c}^{i}}, $$ where \({N_{c}^{i}}\) is the number of confirmed tracks in the ith Monte Carlo run.
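As a small illustration of the clutter model used in this scenario, the following sketch draws a Poisson-distributed number of false measurements uniformly over the stated range/range-rate region. The density-to-mean-count convention (mean = ρ × region volume) is our reading of the Poisson model above; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Surveillance region from the scenario description: bistatic range in
# [1, 14] km, bistatic range rate in [-50, 50] m/s.
R_MIN, R_MAX = 1.0e3, 14.0e3
RR_MIN, RR_MAX = -50.0, 50.0

def sample_clutter(rho):
    """Clutter for one scan: the number of false measurements is Poisson
    with mean rho * V_k (V_k = volume of the range/range-rate region), and
    each false measurement is uniform over that region."""
    V_k = (R_MAX - R_MIN) * (RR_MAX - RR_MIN)
    m = rng.poisson(rho * V_k)
    r = rng.uniform(R_MIN, R_MAX, size=m)
    rdot = rng.uniform(RR_MIN, RR_MAX, size=m)
    return np.column_stack([r, rdot])     # m x 2 array of [range, range rate]

clutter = sample_clutter(rho=0.001)       # case-1 clutter density
```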
For a fair comparison of track maintenance, both the SP-JIPDA and MJIPDA algorithms are given the same track management parameters (the initial probability of target existence, the track confirmation threshold, and the termination threshold).

Trajectory average estimation error

The root mean square error (RMSE) is used as the trajectory average estimation error measure. It is averaged over all N Monte Carlo runs and can be calculated by $$ \text{RMSE}(k)=\sqrt{\frac{1}{N}\sum\limits_{i = 1}^{N}{\left\lVert \hat{\mathbf{x}}_{k}^{\tau,i}-\mathbf{x}_{k}^{\tau} \right\rVert^{2}}}, $$ where \(\hat{\mathbf{x}}_{k}^{\tau,i}\) is the trajectory estimate of track τ at time k in the ith Monte Carlo run and \(\mathbf{x}_{k}^{\tau}\) is the true target state.

Numerical results

To verify the robustness of the proposed algorithm, two cases of experiments with different environment parameters are simulated. Case 1: detection probability P D =0.9, measurement noise deviations σ r =10 m and \(\sigma _{\dot r}=0.1\;m/s\), clutter measurement density ρ=0.001 (number/m 2·s −1), Gaussian disturbance deviation σ I =10 m. The track management performance of the proposed algorithms is compared using the average number of confirmed tracks and is shown in Fig. 4. As can be seen, in both cases the two proposed algorithms succeed in confirming all the targets after a delay of several scans, which firmly validates that the probability of target existence recursively calculated in both algorithms is an effective track quality measure for track management. Moreover, in each case the proposed SP-JIPDA algorithm delivers better track management performance than the MJIPDA algorithm under the given simulation scenario, i.e., the SP-JIPDA algorithm confirms the targets earlier and faster than the MJIPDA algorithm. In case 1, the SP-JIPDA starts to confirm tracks at scan 2 and thereafter takes 4 scans to confirm all four targets completely, whereas the MJIPDA starts to confirm tracks at scan 4 and needs 8 scans to confirm all four targets; moreover, when the fifth target appears at scan 12, the SP-JIPDA algorithm takes only 4 scans to confirm it, rather than the 9 scans needed by the MJIPDA. In case 2, the track management performance of both the SP-JIPDA and MJIPDA algorithms degrades slightly due to the increased measurement noise and larger initialization error compared with case 1, but the SP-JIPDA algorithm still confirms the targets earlier and faster than the MJIPDA algorithm. (Fig. 4: Average number of confirmed tracks.) Meanwhile, the position RMSEs of the confirmed tracks that follow the targets are presented in Figs. 5, 6, 7, 8, and 9, respectively. As can be seen there, the SP-JIPDA algorithm performs significantly better than the MJIPDA algorithm in both cases 1 and 2. In case 1, the position RMSE of the SP-JIPDA with respect to each target is about 1 m less than that of the MJIPDA. In case 2, the position RMSEs of both algorithms increase significantly due to the increased measurement noise and initialization errors, but the SP-JIPDA algorithm still outperforms the MJIPDA, with over 2 m less position estimation error. (Fig. 5: RMSE of target 1.) As a consequence, the SP-JIPDA algorithm delivers much better tracking performance in terms of both track management and trajectory estimation.
This can be explained by the fact that the track state at each scan in the SP-JIPDA algorithm is improved multiple times via sequential updating with respect to the various illuminators, so that the target measurements detected by different illuminator-receiver pairs update increasingly accurate predicted track states within a single scan; therefore, the likelihood of the target measurements obtained by the SP-JIPDA algorithm is higher than that obtained by the MJIPDA algorithm, which gives a faster rate of increase of the probability of target existence. The execution times in experiment 1 are given in Table 2, measured on a PC with an Intel(R) Core(TM) i7-6700 CPU at 3.4 GHz and 16 GB RAM, running Windows 7 and MATLAB R2013a. As shown in Table 2, the SP-JIPDA algorithm requires less computation time than the MJIPDA algorithm, and both algorithms are capable of real-time application, since their execution times are much smaller than the entire simulation time. Table 2 Execution time [sec.]

This paper investigates two solutions for multitarget tracking in clutter directly in 3-D Cartesian coordinates using a multistatic passive radar with a DAB/DVB network, where both algorithms are capable of track management using the probability of target existence as the track quality measure. The MJIPDA algorithm is developed by incorporating the probability of target existence into the MJPDA algorithm as a track quality measure for track management; the MJPDA algorithm addresses the target-measurement-illuminator association ambiguity via "supertargets" using gate grouping. The SP-JIPDA algorithm sequentially operates the JIPDA tracker to update each track for each illuminator with the common measurement set at each scan. Compared with the MJIPDA algorithm, the SP-JIPDA enhances the target's track multiple times within a single scan by sequential processing with respect to the various illuminators; therefore, the target measurements are utilized in a more efficient way to update the target's track state. The simulation validates the efficiency of the proposed algorithm and also shows the superior performance of the SP-JIPDA over the MJIPDA algorithm. Several aspects are worthy of further work: the availability and robustness of the proposed algorithms should be validated on real data obtained from a realistic setup of a multistatic passive radar system under the DAB/DVB network; more computationally efficient versions of the proposed multitarget tracking algorithms would be attractive, since the proposed algorithms employ the optimal Bayesian multitarget joint data association approach, which may suffer from numerical explosion when many closely spaced targets are present in the surveillance space; and the extension of the proposed algorithms to tracking multiple maneuvering targets in a cluttered environment is also valuable.

A. List of acronyms

DAB/DVB Digital audio/video broadcast
JPDA Joint probabilistic data association
JIPDA Joint integrated probabilistic data association
MJIPDA Modified joint integrated probabilistic data association
SP-JIPDA Sequential processing-joint integrated probabilistic data association
TDOA Time difference of arrival
EKF Extended Kalman filter
UKF Unscented Kalman filter
BPF Bootstrap particle filter
APF Auxiliary particle filter
TBD Track-before-detect
PMHT Probabilistic multi-hypothesis tracker
FJE Feasible joint event
JMTDA Joint multitarget data association
ANCT Average number of confirmed tracks
RMSE Root mean square error

B.
List of symbols

∥x∥ The Euclidean norm of vector x
x T The transpose of vector x
⊗ The Kronecker product
I 3 The identity matrix of size 3
s An illuminator indexed by s
τ A track, or a target followed by a certain track, denoted by superscript τ
\(\tilde {\tau }\) A hypothetical target (supertarget) consisting of a pair of target τ and illuminator s
\(\chi _{k}^{\tau }\) Event that target τ exists at time k
\(\chi _{k}^{\tau }(s)\) Event that target τ exists at time k for illuminator s
\(\mathbf {x}_{k}^{\tau }\) The trajectory state in 3-D Cartesian coordinates of target τ at time k
\(\mathbf {x}_{k}^{\tau,p}\) The position component of \(\mathbf {x}_{k}^{\tau }\)
\(\mathbf {x}_{k}^{\tau,v}\) The velocity component of \(\mathbf {x}_{k}^{\tau }\)
\(\mathbf {x}_{k}^{\tau }(s)\) Trajectory state in 3-D Cartesian coordinates of target τ at time k for illuminator s
x s The position of illuminator s in 3-D Cartesian coordinates
x r The position of the only receiver in 3-D Cartesian coordinates
N s The total number of illuminators
γ k Received bistatic range measurement at time k
\({\dot \gamma _{k}}\) Received bistatic range-rate measurement at time k
F Target trajectory transition matrix
Q k Target trajectory plant noise covariance matrix
R k Measurement noise covariance matrix
P D Target detection probability
P G Gating probability that the (true) measurement falls in the gate
Z k Set of measurements received at time k
Z k,i The ith measurement of Z k
Z k Set of measurements up to and including time k
ρ k,i Clutter measurement density of Z k,i
\(\mathbf {z}_{k}^{\tilde \tau }\) Set of selected measurements at time k with respect to supertarget \(\tilde \tau \)
\(N_{k}^{\tilde \tau }\) The cardinality of \(\mathbf {z}_{k}^{\tilde \tau }\)
\(\mathbf {z}_{k,i}^{\tilde \tau }\) The ith measurement of \(\mathbf {z}_{k}^{\tilde \tau }\)
ξ j The jth feasible joint event
\(\mathbf {z}_{k}^{\tau }(s)\) Set of selected measurements at time k with respect to track τ for illuminator s
\(m_{k}^{\tau }(s)\) The cardinality of \(\mathbf {z}_{k}^{\tau }(s)\)
\(\mathbf {z}_{k,i}^{\tau }(s)\) The ith measurement of \(\mathbf {z}_{k}^{\tau }(s)\)
T 0(ξ j ) The set of supertargets allocated no measurement in ξ j
T 1(ξ j ) The set of supertargets allocated one measurement in ξ j
\(\Xi (\tilde \tau,i)\) The set of FJEs that allocate measurement i to supertarget \(\tilde {\tau }\)
\(\chi _{k,i}^{\tilde \tau }\) Event that the selected measurement i is the detection of supertarget \(\tilde {\tau }\) at time k
\(\beta _{i\tilde {\tau }}\) Posterior association probability that measurement i is the supertarget \(\tilde {\tau }\) detection
\(\beta _{k,i}^{\tau }(s)\) Posterior association probability that measurement i is the target τ detection for illuminator s
E(τ) The set of supertargets originating from track τ
p 1,1 The probability of target existence at time k given that it exists at time k−1
T The sampling interval between two consecutive scans
P fa The probability of false alarm
V rc The sensor resolution cell volume

References

H Kuschel, J Heckenbach, S Muller, R Appel, in Radar Conference. On the potentials of passive multistatic low frequency radars to counter stealth and detect low flying targets (IEEE, Rome, 2008).
U Reimers, Digitale Fernsehtechnik (Springer, Berlin, 1995).
H Kuschel, VHF/UHF radar. Part 1: characteristics. Electron. Commun. Eng. J. 14(2), 61–72 (2002).
PE Howland, Target tracking using television-based bistatic radar. IEE Proceedings—Radar, Sonar Navig. 146(3), 166–174 (1999).
CR Berger, B Demissie, J Heckenbach, P Willett, SL Zhou, Signal processing for passive radar using OFDM waveforms. IEEE J. Sel. Top. Sign. Process. 4(1), 226–238 (2010).
MH Cai, F He, LN Wu, in Image and Signal Processing. Application of UKF algorithm for target tracking in DTV-based passive radar (IEEE, Tianjin, 2009), pp. 1–4.
XF Yin, T Pedersen, P Blattnig, A Jaquier, BH Fleury, in 13th Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop (DSP/SPE). A single-stage target tracking algorithm for multistatic DVB-T passive radar systems (IEEE, Florida, 2009), pp. 518–523.
D Orlando, L Venturino, M Lops, G Ricci, Track-before-detect strategies for STAP radars. IEEE Trans. Signal Process. 58(2), 933–938 (2010).
D Orlando, G Ricci, Y Bar-Shalom, Track-before-detect algorithms for targets with kinematic constraints. IEEE Trans. Aerosp. Electron. Syst. 47(3), 1837–1849 (2011).
F Ehlers, D Orlando, G Ricci, Batch tracking algorithm for multistatic sonars. IET Radar, Sonar Navig. 6(8), 746–752 (2012).
R Tharmarasa, N Nandakumaran, M McDonald, T Kirubarajan, in SPIE Optical Engineering + Applications. On the potentials of passive multistatic low frequency radars to counter stealth and detect low flying targets (2009).
M Daun, W Koch, in Radar Conference, 2008. RADAR '08. IEEE. Multistatic target tracking for non-cooperative illumination by DAB/DVB-T (IEEE, 2008), pp. 1–6.
M Daun, U Nickel, W Koch, Tracking in multistatic passive radar systems using DAB/DVB-T illumination. Signal Process. 92(6), 1365–1386 (2012).
S Choi, CR Berger, DF Crouse, P Willett, SL Zhou, in Optical Engineering + Applications. Target tracking for multistatic radar with transmitter uncertainty (SPIE, California, 2009).
S Choi, DF Crouse, P Willett, SL Zhou, Approaches to Cartesian data association passive radar tracking in a DAB/DVB network. IEEE Trans. Aerosp. Electron. Syst. 50(1), 649–663 (2014).
B Ristic, S Arulampalam, N Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications (Artech House, London, 2004).
C Hue, JP Le Cadre, P Pérez, Tracking multiple objects with particle filtering. IEEE Trans. Aerosp. Electron. Syst. 38(3), 791–812 (2002).
D Mušicki, R Evans, Joint integrated probabilistic data association—JIPDA. IEEE Trans. Aerosp. Electron. Syst. 40(3), 1093–1099 (2004).
Y Bar-Shalom, T Fortmann, Tracking and Data Association (Academic Press, Cambridge, 1988).
LY Pao, CW Frei, in Proceedings of the 1995 American Control Conference. A comparison of parallel and sequential implementations of a multisensor multitarget tracking algorithm (IEEE, Seattle, 1995), pp. 1683–1687.
D Mušicki, R Evans, S Stanković, Integrated probabilistic data association (IPDA). IEEE Trans. Autom. Control 39(6), 1237–1241 (1994).
U Khan, YF Shi, TL Song, Fixed lag smoothing target tracking in clutter for a high pulse repetition frequency radar. EURASIP J. Adv. Signal Process. 2015(1), 1–11 (2015).
D Mušicki, B La Scala, R Evans, The integrated track splitting filter—efficient multi-scan single target tracking. IEEE Trans. Aerosp. Electron. Syst. 43(4), 1405–1429 (2007).
Y Bar-Shalom, P Willett, X Tian, Tracking and Data Association: A Handbook of Algorithms (YBS Publishing, Storrs, 2011).
T Hanselmann, D Mušicki, in 2005 7th International Conference on Information Fusion. Optimal signal detection for false track discrimination (2005).
TL Song, D Mušicki, Adaptive clutter measurement density estimation for improved target tracking. IEEE Trans.
Aerosp. Electron. Syst. 47(2), 1457–1466 (2011).
DF Crouse, Y Bar-Shalom, P Willett, L Svensson, in Defense, Security, and Sensing. The JPDAF in practical systems: computation and snake oil (SPIE, Orlando, 2010).
S Challa, R Evans, M Morelande, Fundamentals of Object Tracking (Cambridge University Press, Cambridge, 2011).
S Choi, DF Crouse, P Willett, SL Zhou, Multistatic target tracking for passive radar in a DAB/DVB network: initiation. IEEE Trans. Aerosp. Electron. Syst. 51(3), 2460–2469 (2015).

Acknowledgements: This work was supported by the Agency for Defense Development, Republic of Korea (Grant UD160001DD).

Authors' contributions: YS made the main contributions to the conception and design of the tracking algorithms, as well as drafting the article. SP mainly designed the simulation validation and analyzed the results. TS offered critical suggestions on the algorithms' design, provided significant revisions of important intellectual content, and gave final approval of the current version to be submitted. All authors read and approved the final manuscript.

Author information: Department of Electronic Systems Engineering, Hanyang University, 55 Hanyangdaehak-ro, Ansan, Republic of Korea. Yi Fang Shi, Seung Hyo Park & Taek Lyul Song. Correspondence to Taek Lyul Song.

Cite this article: Shi, Y., Park, S. & Song, T. Multitarget tracking in cluttered environment for a multistatic passive radar system under the DAB/DVB network. EURASIP J. Adv. Signal Process. 2017, 11 (2017). https://doi.org/10.1186/s13634-017-0445-4

Received: 25 September 2016. Accepted: 13 January 2017.

Keywords: Multistatic passive radar
Change of basis in the many worlds interpretation?

Say we have two orthogonal states $|A\rangle$ and $|B\rangle$. In the many worlds interpretation, we can imagine two parallel universes in which we are either in state $A$ or $B$. But now if we change the basis so that we have $|C\rangle = \frac{1}{\sqrt{2}}(|A\rangle+|B\rangle)$ and $|D\rangle = \frac{1}{\sqrt{2}}(|A\rangle-|B\rangle)$, it seems we then have to adjust our interpretation of the parallel universes. The only way out seems to be if the universes $A$ and $B$ correspond to universes containing a person whose brain believes or disbelieves a true/false statement, since one cannot believe something is both true and false. But then don't we arrive back at the Copenhagen interpretation, with an observer collapsing a wave function?

quantum-mechanics quantum-interpretations

zooby

There seem to be three unrelated ideas here: (1) the phase relationship between A and B (which ends up being irrelevant because of decoherence), (2) the preferred basis problem, and (3) the fact that an observer can't consciously experience the fact of being in a superposition of states. One could certainly draw connections among these ideas, but it's not obvious to me from reading the question what connection you have in mind. – Ben Crowell Jun 18 at 13:30

A measurement is an interaction that results in a record that can be copied and examined with arbitrarily high accuracy. In quantum physics, the information that can be copied in this way in a particular measurement consists of the eigenvalues of an observable: https://arxiv.org/abs/1212.3245 In the many worlds interpretation (MWI), the whole of physical reality consists of a structure in which, to a good approximation, information flows in channels, each of which resembles the universe as described by classical physics: https://arxiv.org/abs/quant-ph/0104033 So the way the multiverse is actually divided into universes is constrained by quantum physics, so that each universe corresponds to eigenvalues of particular measured observables. We can understand this process to any precision you like in the MWI because there is a particular set of equations of motion that can be understood and modeled. This will coincide with the states of observers, because observers have to be able to copy information to create knowledge, do observations, and so on, but those observers are not put in a separate category from other physical processes. The Copenhagen interpretation (CI) avoids making any clear statement about what exists in reality. There is no prospect of the MWI exactly agreeing with the CI, because the CI doesn't make clear statements or predictions.

alanf
How did Lorentz transformations get their modern definition?

Historically, Special Relativity was motivated by apparent inconsistencies between Maxwell's electrodynamics and Newtonian mechanics. In Einstein's well-known paper "On the electrodynamics of moving bodies" he explains his motivations quite well. Central objects of the theory are the Lorentz transformations. If one forgets motivations, history, and intuition, the Lorentz transformations are formally defined as the linear transformations $\Lambda:\mathbb{R}^4\to \mathbb{R}^4$ such that $$\eta(\Lambda v,\Lambda w)=\eta(v,w),$$ where $\eta = \operatorname{diag}(-1,1,1,1)$. Furthermore, it seems that before this definition they were defined as the transformations which keep the speed of light the same in all frames. My question is: how did Lorentz transformations get this modern definition? How were they first defined, how did they relate to Einstein's paper, and how did they get the modern definition as "transformations which preserve the spacetime inner product"? Specifically, I'm interested in knowing how, from the motivations for relativity, physicists got to the definition of Lorentz transformations as the transformations $\Lambda$ such that $\eta(\Lambda x,\Lambda y) = \eta(x,y)$.

physics theoretical-physics relativity-theory electromagnetism

This question doesn't really make much sense. You discuss two mathematically equivalent definitions, and ask when one gave way to the other. Since they're mathematically equivalent, there is no reason that one has to give way to the other. This is just a matter of a particular author's preferences regarding how to present the subject.

I believe the wording came out in a confused manner. I'm not asking why one would pick the latter instead of the former; I agree that it is a matter of preference. But as far as I know, the first definition used was the one based on Einstein's postulates, which appear in his paper. The other definition, equivalent to the first, I believe appeared later. What I'm asking here is how physicists got to the second definition: how, from the first approach, which is what Einstein presented, was it discovered that this other definition could do the same? It is not a question about which definition to pick.

Wikipedia has a very adequate and well-sourced article on the History of Lorentz transformations. Voigt formulated not quite the modern ones back in 1887, of which Lorentz was unaware, and he had to work out his own version independently. This might have been just as well, since he later said he would have used Voigt's if he had known about them. He presented them partially (without the time dilation) in 1895; the first complete version is due to Larmor (1897). Lorentz was apparently unaware of that either, and supplied his own full version in 1899; see What made Einstein believe (or know) that time was affected by speed and gravity?. None of them viewed the transformations algebraically or kinematically; they were seen as describing dynamic effects on bodies moving at high speeds. Larmor even supplemented a hypothesis that molecular forces are of electromagnetic nature, which would explain the effects. But as Poincaré showed in 1905, purely electromagnetic forces could not account for the stability of the electron. He had to conjecture an additional stabilizing non-electromagnetic force that nonetheless obeyed the same transformation laws, which made it ad hoc.
The first algebraic observation, that the transformations form a group, was made by Poincare in his 1904-1906 papers on the dynamics of the electron, but according to Weinstein "Poincaré did not associate this quadratic form with propagation of light in order to define a null interval like Einstein or a metric like Minkowski". This is particularly surprising because the groups involved in Kleinian geometries are usually obtained by considering all transformations that preserve a quadratic form, as Poincare well knew. As alluded to by Weinstein, it was Einstein in 1905 who first characterized them kinematically, as the transformations that preserve the speed of light in all frames (i.e. preserve the null interval), and only Minkowski, inspired by Einstein's paper, gave the modern geometric formulation of them as the transformations that preserve a (pseudo) metric, in 1907-1909; see What was the motivation for Minkowski spacetime before special relativity?
Transformations must preserve the structure of Maxwell's equations, which then automatically preserves the speed of light. Voigt, 1887, was the first, but there were several independent derivations. Note that Voigt's transformations are not quite the same as those of Lorentz; the latter are consistent with the Principle of Relativity. – Peter Diehr
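As an aside, the modern definition stated in the question is easy to verify numerically. Below is a minimal sketch (assuming numpy; the rapidity value is arbitrary, and sign conventions for boosts vary) checking that a standard boost $\Lambda$ satisfies $\Lambda^T \eta \Lambda = \eta$ and therefore preserves $\eta(v, w)$:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])       # Minkowski form, signature (-,+,+,+)

phi = 0.7                                   # an arbitrary rapidity
ch, sh = np.cosh(phi), np.sinh(phi)
boost = np.array([[ch, sh, 0, 0],           # boost mixing the t and x coordinates
                  [sh, ch, 0, 0],
                  [0,  0,  1, 0],
                  [0,  0,  0, 1]])

# Lambda preserves eta  <=>  Lambda^T eta Lambda = eta
assert np.allclose(boost.T @ eta @ boost, eta)

# Equivalently, eta(Lambda v, Lambda w) = eta(v, w) for arbitrary vectors:
v, w = np.random.randn(4), np.random.randn(4)
assert np.isclose((boost @ v) @ eta @ (boost @ w), v @ eta @ w)
```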
Most general Lorentz transformations Lagrange (1773), Gauss (1798–1818), Jacobi (1827, 1833/34), Lebesgue (1837), Bour (1856), Somov (1863), Killing (1878–1893), Poincaré (1881), Cox (1881–1883), Hill (1882), Picard (1882-1884), Callandreau (1885), Lie (1885-1890), Gérard (1892), Hausdorff (1899), Woods (1901-05), Liebmann (1904–05) LT via imaginary orthogonal transformation Euler (1771), Wessel (1799), Cauchy (1829), Lie (1871), Minkowski (1907–1908), Sommerfeld (1909) LT via hyperbolic functions Riccati (1757), Lambert (1768–1770), Taurinus (1826), Cayley (1859-84), Beltrami (1868), Klein (1871), Laisant (1874), Escherich (1874), Glaisher (1878), Günther (1880/81), Schur (1885/86, 1900/02), Lindemann (1890–91), Gérard (1892), Killing (1893,97), Woods (1903), Whitehead (1897/98), Liebmann (1904–05), Herglotz (1909/10) LT via velocity Euler (1735), Beltrami (1868), Schur (1885/86, 1900/02), Lipschitz (1885–86), Voigt (1887), Heaviside (1888), Thomson (1889), Searle (1896), Lorentz (1892, 1895), Larmor (1897, 1900), Lorentz (1899, 1904), Poincaré (1900, 1905), Einstein (1905), Minkowski (1907–1908), Sommerfeld (1909), Herglotz (1909/10), Varićak (1910), Ignatowski (1910), Noether (1910), Klein (1910), Conway (1911), Silberstein (1911), Ignatowski (1910/11), Herglotz (1911), Borel (1913–14), Gruner (1921) LT via conformal, spherical wave, and Laguerre transformation Lie (1871), Klein & Pockels & Bôcher (1871-91), Laguerre (1882), Stephanos (1883), Darboux (1887), Scheffers (1899), Smith (1900), Bateman and Cunningham (1909–1910) LT via Cayley–Hermite transformation Euler (1771), Cayley (1846–1884), Hermite (1853, 1854), Bachmann (1869), Laguerre (1882), Darboux (1887), Smith (1900), Borel (1913–14) LT via Cayley–Klein parameters, Möbius and spin transformations Lagrange (1773), Gauss (1800), Cayley (1854), Klein (1871–97), Selling (1873–74), Poincaré /1881-86), Bianchi (1888-93), Fricke (1891–97), Woods (1895), Herglotz (1909/10) LT via quaternions and hyperbolic numbers Euler (1771), Hamilton (1844/45), Cayley (1845), Cockle (1848), Cox (1882), Stephanos (1883), Buchheim (1884–85), Lipschitz (1885/86), Vahlen (1901/02), Noether (1910), Klein (1910), Conway (1911), Silberstein (1911) LT via trigonometric functions Bianchi (1886–1893), Darboux (1881/94), Scheffers (1899), Eisenhart (1905), Varićak (1910), Gruner (1921) LT via squeeze mappings Laisant (1874), Lie (1879-84), Günther (1880/81), Laguerre (1882), Darboux (1883–1891), Lipschitz (1885/86), Bianchi (1886–1893), Lindemann (1890/91), Smith (1900), Eisenhart (1905) BatiatusBatiatus Not the answer you're looking for? Browse other questions tagged physics theoretical-physics relativity-theory electromagnetism or ask your own question. What was the motivation for Minkowski spacetime before special relativity? How did the 'Poincaré patches' get their name? Were there doubts that Voigt's time dilation was correct rather than Einstein's? How did physicists deal with the variance of electromagnetism before special relativity? Did Lorentz remain an ether advocate till his death?
The Kernel Trick for Content-Based Media Retrieval in Online Social Networks
Guang-Ho Cha
Abstract: Nowadays, online or mobile social network services (SNS) are very popular and widely spread in our society and daily lives, used to instantly share, disseminate, and search information. In particular, SNS such as YouTube, Flickr, Facebook, and Amazon allow users to upload billions of images or videos and also provide vast amounts of multimedia information to users. Information retrieval in multimedia-rich SNS is a very useful but challenging task. Content-based media retrieval (CBMR) is the process of obtaining the relevant image or video objects for a given query from a collection of information sources. However, CBMR suffers from the dimensionality curse due to the inherently high-dimensional features of media data. This paper investigates the effectiveness of the kernel trick in CBMR, specifically the kernel principal component analysis (KPCA), for dimensionality reduction. KPCA is a nonlinear extension of linear principal component analysis (LPCA) that discovers nonlinear embeddings using the kernel trick. The fundamental idea of KPCA is to map the input data into a high-dimensional feature space through a nonlinear kernel function and then compute the principal components in that mapped space. This paper investigates the potential of KPCA in CBMR for feature extraction and dimensionality reduction. Using the Gaussian kernel in our experiments, we compute the principal components of an image dataset in the transformed space and then use them as new feature dimensions for the image dataset. Moreover, KPCA can be applied to many other domains besides CBMR in which LPCA has been used to extract features and in which a nonlinear extension would be effective. Our results from extensive experiments demonstrate that the potential of KPCA is very encouraging compared with LPCA in CBMR.
Keywords: Content-Based Retrieval, Dimensionality Curse, Nearest Neighbor Query, Online Social Network, Kernel Method, Kernel Principal Component Analysis, Similarity Search, Social Network Service
1. Introduction
Ahmad and Ali [1] categorize SNS into three categories: (1) services based on social interaction, such as Facebook, MySpace, LinkedIn, Twitter, etc.; (2) services based on multimedia, such as YouTube, Flickr, etc.; and (3) services based on modern question answering, such as Yahoo! Answers, Stack Overflow, Quora, etc. Information based on multimedia, including images, audios, and videos, is shared by individuals on Instagram, Facebook, Flickr, YouTube, and so on. The explosion of the amount of digital multimedia in online social networks has brought about the need for content-based media retrieval (CBMR) services that find the images, audios, and videos required by users. In recent years this demand has made CBMR a very active research area. In CBMR, images and videos are usually searched using visual features such as shape, color, texture, and so on. The visual features are extracted and stored as n-dimensional feature vectors. During the search, the feature vector of the query image is extracted and compared with the feature vectors in the database. The images returned by query processing should be similar to the images given by the query.
In this case, if feature vectors have moderate dimensionalities, say below 10, this similarity search problem can be efficiently solved using conventional multidimensional indexing structures such as the R*-tree [2] and the VP-tree [3,4], or space-filling-curve-based indexing methods such as the Hilbert R-tree [5] and the HG-tree [6]. But, unfortunately, there have been no efficient solutions so far for this problem in high dimensions, for example over 100. Thus, the major issue in this area is to overcome the problem of the dimensionality curse, a phenomenon whereby indexing and retrieval performance degrades drastically with dimensionality. To overcome this problem, indexing techniques based on approximation have been developed, for example, vector-approximation techniques such as the VA-file [7] and the LPC-file [8], and approximate hashing-based methods such as locality-sensitive hashing [9,10]. However, in high dimensions, in theory and in practice, indexing methods based on approximation have an inherent weakness: their performance is affected by the size of the dataset.
In this paper, we approach the problem from the viewpoint of dimensionality reduction. Approaches to reducing the dimensionality of feature vectors have been attempted using techniques such as principal component analysis (PCA) [11,12] and singular value decomposition (SVD) [13,14]. PCA is a well-known method to identify patterns in a dataset and to express the dataset in terms of these patterns. In CBMR, PCA is a powerful tool to extract a smaller number of dimensions from the original dataset and to represent the dataset on these reduced dimensions. However, PCA does not always detect the distinguishing patterns in a given dataset; with the use of a suitable nonlinear technique, we can extract more salient patterns. In this work, we extend the conventional linear PCA (LPCA) to discover nonlinear embeddings using the kernel method and investigate the effectiveness of the kernel PCA (KPCA) in reducing the dimensionality of datasets in CBMR.
In the next section, we introduce KPCA, a nonlinear extension of LPCA. In Section 3, we describe the process of reducing the dimensionality based on KPCA. In Section 4, we discuss the measures used to evaluate and analyze the performance of KPCA. Section 5 provides the results of experiments comparing the performance of KPCA and LPCA. Finally, we conclude with a discussion of the significance of the work in Section 6.
2. Kernel Principal Component Analysis
The kernel trick is a method to extend algorithms into a nonlinearly mapped feature space; the kernel makes an algorithm work in the kernel-transformed space. If F(x) is a transformation of a point x from the original data space to the feature space, then the kernel f calculates the pairwise scalar product (or similarity) in the feature space of two data points from the original space:
$$f(x, y)=\langle F(x), F(y)\rangle$$
This paper investigates the efficacy of the kernel trick, specifically KPCA, for feature extraction and dimensionality reduction in CBMR. PCA conducts an orthogonal basis transformation of the coordinate system in which the original dataset is described. The principal components are the newly computed orthogonal variables by which the original dataset is represented.
A number of principal components smaller than the dimensionality of the data space is usually sufficient to describe the major features in the dataset. With KPCA, we find principal components of a dataset that are nonlinearly related to the structure inherent in the dataset; among these are feature variables obtained by applying arbitrary higher-order correlations between sample feature vectors.
The new principal components are obtained by diagonalizing the covariance matrix C of a given dataset $\left\{x_{i} \in R^{N} \mid i=1, \ldots, n\right\}$, which is centered. It is defined as follows:
$$C=\frac{1}{n} \sum_{i=1}^{n} x_{i} x_{i}^{t}, \quad \sum_{i=1}^{n} x_{i}=0$$
To find the principal components, one has to solve the eigenvector problem
$$\lambda v=C v$$
for eigenvectors $v \in R^{N}-\{0\}$ and eigenvalues $\lambda \geq 0$. The principal components are the coordinates along the eigenvectors v, and they are represented by linear combinations of the variables standardized in the original data space. The eigenvalue $\lambda$ associated with an eigenvector v is equal to the variance in the direction of v. Moreover, the first p eigenvectors, associated with the largest p eigenvalues, give the p orthogonal directions covering the largest sum of variances of the dataset. In many real applications, the most interesting information is represented in those directions. For example, in data compression or in data denoising, one projects the original data onto the directions with the largest variances, to retain as much information as possible, and deliberately drops the directions with small variances.
In most cases, it cannot be asserted that LPCA detects most of the structure in a given dataset. Furthermore, LPCA may be seriously affected by the existence of outliers (or wild data). By applying a suitable nonlinear transform, we can draw more effective information from the dataset. KPCA is very appropriate for drawing informative nonlinear embeddings from a dataset [15]. Therefore, we adopt KPCA for dimensionality reduction in our work and investigate its efficacy.
In KPCA, all data points are first mapped into a feature space P using a nonlinear function F, and then all computations are performed on the transformed data. In fact, KPCA adopts Mercer kernels instead of directly performing the mapping F, because the mapped feature space P might be very high dimensional. A Mercer kernel is a function f(x, y) that constitutes a positive matrix $G_{i j}=f\left(x_{i}, x_{j}\right)$ for every dataset $\left\{x_{i}\right\}$ [15,16]. Using the kernel function f instead of applying a scalar product in input space corresponds to transforming the data with some transformation function F to a feature space P, i.e., $f(x, y)=(F(x) \cdot F(y))$, which allows us to calculate the value of the scalar product in P without having to carry out the mapping F directly. After transforming the original data into the high-dimensional feature space P via F, we perform LPCA, just as when performing PCA in the original input space.
To perform PCA in the feature space P, we have to find eigenvalues $\lambda>0$ and eigenvectors $v \in P-\{0\}$ that satisfy $\lambda v=C v$ with the covariance matrix C in P, defined as follows:
$$C=\frac{1}{n} \sum_{i=1}^{n} F\left(x_{i}\right) \cdot F\left(x_{i}\right)^{t}, \quad \sum_{i=1}^{n} F\left(x_{i}\right)=0$$
Every solution eigenvector v must lie in the span of the F-images of the original data. Thus, we may equivalently consider the equation
$$\lambda\left(F\left(x_{i}\right) \cdot v\right)=\left(F\left(x_{i}\right) \cdot C v\right) \text { for all } i=1, \ldots, n \tag{1}$$
and there exist coefficients $c_{1}, \ldots, c_{n}$ such that
$$v=\sum_{i=1}^{n} c_{i} F\left(x_{i}\right) \tag{2}$$
Substituting C and (2) into (1) and defining the $n \times n$ Gram matrix $G_{i j}=\left(F\left(x_{i}\right) \cdot F\left(x_{j}\right)\right)=f\left(x_{i}, x_{j}\right)$, we obtain a problem expressed entirely in terms of scalar products: solve the eigenvector problem
$$n \lambda c=G c \tag{3}$$
for non-zero eigenvalues $\lambda$ and eigenvectors $c=\left(c_{1}, \ldots, c_{n}\right)^{t}$, with $\lambda_{p}$ being the last non-zero eigenvalue, subject to the normalization condition $\lambda_{m}\left(c^{m} \cdot c^{m}\right)=1$ for all m = 1, …, p. To compute the nonlinear principal components for the F-image of an input point x, we calculate the projection onto the m-th component by the following equation:
$$v^{m} \cdot F(x)=\sum_{i=1}^{n} c_{i}^{m} f\left(x, x_{i}\right)=\sum_{i=1}^{n} c_{i}^{m} F(x) \cdot F\left(x_{i}\right) \tag{4}$$
In fact, KPCA corresponds to LPCA in the high-dimensional feature space. As a result, all statistical and mathematical properties of LPCA are applicable to KPCA, with the modification that they become equations on the set of points $F\left(x_{i}\right), i=1, \ldots, n$, in the feature space rather than in the original data space. Moreover, it can be shown that most forms of nonlinear embeddings appear as the leading eigenvectors of similarity matrices and are therefore special cases of KPCA. With the following properties, KPCA is actually an orthogonal basis transformation in F [15,16]: (1) the first $p(p \in\{1, \ldots, n\})$ principal components have more variance than any other p orthogonal directions, (2) in representing the input vectors, the approximation error caused by the first p eigenvectors is minimal, (3) the extracted eigenvectors are uncorrelated, and (4) the first p eigenvectors have maximal information with regard to the input vectors.
3. Image Feature Extraction and Dimensionality Reduction
Fig. 1 illustrates the procedural architecture of KPCA for image feature extraction and feature vector comparison, which consists of three layers that take different actions. The query image is given at the bottom layer, and its feature vector x is computed. The input (query) feature vector x and the feature vectors (sample patterns) $x_{i}$ are nonlinearly mapped via the function F into the high-dimensional feature space P, where scalar products (or similarities) are calculated. Using the kernel function f, these two layers are actually calculated in one step, without evaluating each mapped feature value. The comparison results are then combined linearly using the weights $c_i^l$, computed by solving the eigenvalue problem, and the result is the l-th nonlinear principal component corresponding to F. Therefore, when the eigenvectors are sorted in descending order of their corresponding eigenvalues, the first p principal components constitute the p-dimensional feature vector for an image.
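To make Eqs. (3) and (4) concrete, here is a minimal numpy sketch of the procedure with the Gaussian kernel. The function and variable names are ours, not the paper's; feature-space centering, which the paper assumes, is applied to the training Gram matrix, while centering of the test kernel rows is omitted for brevity:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # f(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all pairs of rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kpca_fit(X, p, sigma=1.0):
    # Solve the eigenproblem n*lambda*c = G*c of Eq. (3) on the centered Gram matrix.
    n = len(X)
    G = gaussian_kernel(X, X, sigma)
    J = np.eye(n) - np.ones((n, n)) / n      # centers the F-images of the data
    G = J @ G @ J
    mu, C = np.linalg.eigh(G)                # eigenvalues ascending; mu = n * lambda
    mu, C = mu[::-1][:p], C[:, ::-1][:, :p]  # keep the p largest eigenvalues
    lam = mu / n
    C = C / np.sqrt(np.maximum(lam, 1e-12))  # normalization lambda_m (c^m . c^m) = 1
    return C

def kpca_project(X_train, X_new, C, sigma=1.0):
    # Nonlinear components via Eq. (4): v^m . F(x) = sum_i c_i^m f(x, x_i).
    return gaussian_kernel(X_new, X_train, sigma) @ C

# Toy usage: reduce 50 random 30-dimensional "feature vectors" to p = 5 components.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 30))
C = kpca_fit(X, p=5)
Z = kpca_project(X, X, C)                    # 50 x 5 reduced representation
```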
By selecting a suitable kernel function f, various transformation functions F can be induced indirectly. In this paper, we employ the Gaussian radial basis kernel $f\left(x, x^{\prime}\right)=\exp \left(-\left(\left\|x-x^{\prime}\right\|^{2} / 2 \sigma^{2}\right)\right)$ as our kernel because it is commonly used in pattern recognition and CBMR [17,18].
Image feature extraction and comparison architecture with KPCA. At the bottom layer, the input query feature vector is given. At the upper layer, the query feature vector is compared to the sample feature vectors using the chosen kernel function (the Gaussian radial basis function in our work). Each output is then linearly combined with weights computed by solving the eigenvector problem. The functions of the network can be considered as projections onto the eigenvectors of the similarity matrix in the high-dimensional feature space.
To perform KPCA, we carry out the following steps. We first calculate the Gram (or similarity) matrix $G_{i j}=f\left(x_{i}, x_{j}\right)$. We then diagonalize the Gram matrix G and solve Eq. (3). We normalize the eigenvector coefficients $c^{m}$ such that $\lambda_{m}\left(c^{m} \cdot c^{m}\right)=1$. For the F-image of an input image x, in order to obtain the nonlinear principal components corresponding to the kernel function f, we make projections onto the eigenvectors by Eq. (4).
Unlike LPCA, KPCA permits extracting a number of principal components that exceeds the dimensionality of the input data, because it diagonalizes the $n \times n$ Gram matrix $G$, $G_{i j}=f\left(x_{i}, x_{j}\right)$, instead of the covariance matrix C of the original data. For example, let us suppose that the number of inputs n exceeds the input dimensionality N. Even when LPCA is based on the $n \times n$ scalar product matrix, it can compute at most N non-zero eigenvalues, and they are identical to the non-zero eigenvalues computed from the $N \times N$ covariance matrix. Contrary to LPCA, KPCA can find up to n non-zero eigenvalues, which shows that KPCA cannot be performed directly on an $N \times N$ covariance matrix.
4. Performance Analysis
4.1 Dimensionality Reduction
In order to provide insight into how KPCA in the feature space P behaves in the input space I, we present experimental results with an image dataset consisting of 13,000 images of US postage stamps with 256 colors, as shown in Fig. 2. The images in the dataset are usually not randomly distributed but have very skewed distributions in high-dimensional space. Postage stamps usually come in series (for example, flowers, states, persons, birds, etc.) with related designs and similar colors, and the US Postal Service (USPS) has usually used common colors or designs for many long-running postage stamps. Consequently, the image dataset used in our experiments reveals highly clustered and skewed distributions. In other words, the dimensionality of the transformed feature space may be much smaller than that of the input data space, and thus it is appropriate to reduce the dimensionality of the original image dataset. In our work, we retain a sufficient number of principal components so that we can account for at least 80% of the variance in each original variable.
Image dataset of US postage stamps.
MNIST dataset of handwritten digit images.
4.2 Recognition of Handwritten Digit Characters
In our experiments, with the use of KPCA, we extract nonlinear principal components from a handwritten digit character dataset. We choose the MNIST dataset, which contains 120,000 handwritten digit images of $28 \times 28$ black-and-white pixels, as shown in Fig. 3. The Modified National Institute of Standards and Technology (MNIST) database is a large image dataset of handwritten digit characters. It has been widely used for training many computer vision systems and is also commonly used for training and testing in the machine learning field. The MNIST dataset is also widely used as a classifier benchmark, and many techniques have been evaluated using this dataset. The feature of each digit image in the MNIST dataset is represented by a visual feature vector with $784(=28 \times 28)$ dimensions. For computational reasons, our experiments use only the first 5,000 images from the MNIST dataset. To assess the effectiveness of KPCA, we perform k-nearest neighbor (k-NN) searches to find the k objects most similar to the given query objects, after transforming the image dataset based on LPCA and on KPCA.
4.3 Performance Analysis
In order to evaluate the performance of KPCA, the measures used to assess performance should be considered. In traditional information retrieval for documents, the recall and precision measures are commonly used to assess performance. Recall is the fraction of the relevant objects that has been retrieved; that is, it evaluates the ability to retrieve useful objects. Precision is the fraction of the retrieved objects that is relevant; that is, it evaluates the ability to reject useless objects. Consider an example query for explanation. Let R be the number of relevant objects in a dataset, $R_{a}$ be the number of relevant objects that have been retrieved, and $R_{r}$ be the number of objects that have been retrieved. Then recall is defined by $R_{a} / R$ and precision by $R_{a} / R_{r}$.
In CBMR, recall and precision can also be used as suitable measures for performance evaluation. Normalized recall and precision were suggested in IBM QBIC [19], which performs similarity search (or k-NN search) instead of exact search. These reflect the positions in the retrieved sequence where the relevant objects appear. In the case of ideal retrieval, if there are R relevant objects in a dataset, then all R relevant objects should appear as the first R retrievals (in any order). In [20], this is captured by the ideal average rank of relevant objects (IAVRR). IAVRR is attained when all relevant objects are retrieved and ranked at the top; in this case, IAVRR is computed as $(0+1+\ldots+(R-1)) / R$, where the first (or topmost) position starts at the 0th place. The ratio of the average rank of relevant objects (AVRR) to IAVRR (AVRR/IAVRR) gives a measure of the average retrieval accuracy over the experimental trials. In the case of ideal retrieval, that is, perfect performance, this ratio would be 1 (i.e., AVRR/IAVRR = 1). As an example, assume that the relevant objects for the query object $Q$ are: $Q46, Q18, Q101, Q52, Q35, Q120$, where R = 6, and that the system returns the query result in the following order: $Q85, Q120, Q109, Q50, Q18, Q74, Q46, Q52, Q57, Q17, Q35, Q63, Q16, Q58, Q101$; then the relevant objects appear at positions 1, 4, 6, 7, 10 and 14.
The AVRR for this is therefore $(1+4+6+7+10+14) / 6=7$, and the IAVRR is computed as $(0+1+2+3+4+5) / 6=2.5$. Therefore, AVRR/IAVRR = 7/2.5 = 2.8.
When the retrieval order is significant, we can use the measure of Kendall's tau to represent the correlation between two rankings [21]. Kendall's tau indicates the disorder in the sequence of retrieved objects. As an example, let us examine the two rankings below, where both retrievals select the same five objects but position them in different orders:
1 2 3 4 5 (user's choice)
2 4 1 3 5 (system's choice)
For explanation: the user's first choice in the first row is ranked second in the second row, chosen by the CBMR system. Kendall's tau is computed as follows: (number of concordant pairs − number of discordant pairs) / (total number of pair combinations). In the above example, 2 in the second row (system's choice) is followed by 4, 1, 3, and 5. The pair (2, 4) is in order, so it scores +1; (2, 1) is out of order, so it scores −1; and (2, 3) and (2, 5) are in order, so they score +1 each. Likewise, 4 is followed by 1 and 3, both out of order, scoring −1 each, while (4, 5) is in order and scores +1. 1 is followed by 3 and 5; both are in order, scoring +1 each. Finally, (3, 5) is in order and scores +1. The number of in-order pairs is seven, and the number of out-of-order pairs is three, so the total is +7 − 3 = +4. The total number of possible pairs, $_{N}C_{2}$ with N = 5, is 10. Thus, the value of tau is 4/10 = 0.4. In other words, Kendall's tau presents a measure of the difference or dissimilarity between two query result rankings. It is in the range [−1, +1]: the value −1 denotes complete disagreement, the value 0 represents no correlation between the two rankings, and the value +1 means complete agreement.
In our experimental evaluation, we conduct 100 k-NN queries, average the results, and evaluate the performance based on the above performance measures. In the case of k-NN queries, recall and precision are the same, since $R=R_{r}$; we cannot actually distinguish between the relevant and the irrelevant objects in k-NN queries, because the judgment is based not on exactness but on similarity. Therefore, as a representative of the two, we calculate only the precision measure. The ratio AVRR/IAVRR takes the ranks of relevant objects into account and measures how close the ranking results are to the top ranks. Kendall's tau measures the order (or disorder) of query results for the k-NN search.
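Both worked examples above can be checked mechanically. The following small Python sketch (the function names are ours) reproduces AVRR/IAVRR = 2.8 and tau = 0.4:

```python
from itertools import combinations

def avrr_over_iavrr(result, relevant):
    ranks = [result.index(r) for r in relevant]        # 0-based retrieval positions
    avrr = sum(ranks) / len(relevant)
    iavrr = sum(range(len(relevant))) / len(relevant)  # ideal: relevant ranked on top
    return avrr / iavrr

def kendall_tau(ranking):
    # ranking = system's order of the user's choices 1..N; a pair is concordant
    # when the earlier element is the smaller one, discordant otherwise.
    pairs = list(combinations(ranking, 2))
    score = sum(1 if a < b else -1 for a, b in pairs)
    return score / len(pairs)

result = ["Q85", "Q120", "Q109", "Q50", "Q18", "Q74", "Q46", "Q52",
          "Q57", "Q17", "Q35", "Q63", "Q16", "Q58", "Q101"]
relevant = ["Q46", "Q18", "Q101", "Q52", "Q35", "Q120"]
print(avrr_over_iavrr(result, relevant))   # -> 2.8  (AVRR 7.0, IAVRR 2.5)
print(kendall_tau([2, 4, 1, 3, 5]))        # -> 0.4
```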
5. Performance Evaluation
Using two datasets (a set of 13,000 images with 256 colors of US postage stamps and a set of 6,000 handwritten digit images from the MNIST dataset), we conduct extensive experiments for LPCA and KPCA, evaluate their performance based on the aforementioned measures, and compare them. In the experiments, we employ 4 MPEG-7 [22] visual feature descriptors to extract feature vectors from the dataset of US stamps. These visual features are general descriptors widely used in CBMR. They are: (1) the 256-dimensional color structure, based on color histograms and the localized color distribution; (2) the 30-dimensional homogeneous texture, which characterizes the regional texture using the mean and the standard deviation of the input image; (3) the 80-dimensional edge histogram, which presents the local edge distribution of sub-images in an image; and (4) the 35-dimensional region-based shape, which considers all pixels constituting the shape in the image, that is, both the interior and the boundary pixels.
To reduce the dimensionalities of the above four feature vector datasets, consisting of the four MPEG-7 visual descriptors extracted from the database of US stamps, we apply KPCA and LPCA to each of the four datasets. We conduct k-NN queries on two categories of datasets, i.e., (1) the datasets whose dimensionality is reduced by KPCA and (2) the datasets whose dimensionality is reduced by LPCA. We choose the number k in the k-NN queries as 20, 40, 60, 80 and 100 in all experiments. In each experiment, we process 100 random k-NN queries and average their results. The results of the experiments using the four MPEG-7 visual feature datasets are shown in Tables 1–4. In each table, the original dimensionality and the reduced dimensionalities of the datasets are presented in the first column, together with the rate of the variance retained in each original variable.
As shown in Tables 1–4, with regard to the three performance measures, i.e., Kendall's tau, AVRR/IAVRR, and precision, the performance of KPCA is superior to that of LPCA. In terms of Kendall's tau, the performance of KPCA is better than that of LPCA by more than 50%. With respect to AVRR/IAVRR and precision, the performance of KPCA is around 10%–20% better than that of LPCA. These experimental results demonstrate that it is desirable to extend LPCA so as to discover nonlinear embeddings in a dataset, and that KPCA can be adopted as an efficient nonlinear extension of LPCA in CBMR.
Experiments on homogeneous texture features (d: dimensionality); each cell lists Kendall's tau / AVRR-IAVRR ratio / precision:
Original, d = 30    LPCA                  KPCA
95%, d = 14         0.41 / 3.50 / 0.60    0.69 / 3.15 / 0.71
85%, d = 9          0.36 / 3.80 / 0.51    0.61 / 3.45 / 0.62
Experiments on edge histogram features (d: dimensionality)
Experiments on region-based shape features (d: dimensionality)
Experiments on color structure features (d: dimensionality); each cell lists Kendall's tau / AVRR-IAVRR ratio / precision:
Original, d = 256   LPCA                  KPCA
95%, d = 164        0.42 / 3.63 / 0.57    0.62 / 3.35 / 0.68
To estimate the performance of KPCA objectively, we regard the concept of a query as the category of images to which the digit character image belongs; that is, one of the labels from "0" to "9" is assigned to each category of digit images. We also assess the precision of k-NN queries, where k runs from 10 to 100, and calculate the precision as the ratio of the number of returned objects belonging to the query object's category to the k returned objects. We conduct each k-NN query 100 times and average the results. The query objects (images) are selected randomly from the MNIST database. We show the results of 100-NN queries in Figs. 4–9 to assess the performance of KPCA. The experimental results using single query images are presented in Figs. 4 and 5; the query image is shown in the top-left corner. As shown in Fig. 5, there are actually many digit images other than "9" when we use the LPCA algorithm. As shown in Fig. 4, where KPCA is applied, there are only two result images other than the query digit "9".
This result demonstrates the superiority of KPCA. The results of multi-object queries are shown in Figs. 6–9, where we use two digit images as query objects; the two query objects are shown in the top-left. The first query uses digit "4" as the input query image. In the second query, two images with digits "0" and "6" are used as query images. For this kind of multi-object query, we adopt the aggregate similarity metric used in FALCON [23], in which the constant α is chosen as −3. Figs. 6–9 show that several digit images other than the query objects are returned when we adopt the LPCA algorithm; the KPCA algorithm, however, returns a uniform result, and the precision is 99%. In this multi-object query, KPCA also demonstrates a better result.
The 100 images returned using the KPCA algorithm (precision = 98%). The query object is in the top-left corner.
The 100 images returned using the LPCA algorithm (precision = 88%).
The 100 images returned using the KPCA algorithm (precision = 99%). The query objects are the two images in the top-left corner.
The 100 images returned using the KPCA algorithm (precision = 100%). The query objects are the two images in the top-left corner.
The 100 objects returned using the LPCA algorithm (precision = 84%).
Precision for single-object queries.
Precision for multi-object queries.
Figs. 10 and 11 compare the precision of KPCA and LPCA for single-object and multi-object queries, respectively. Fig. 10 shows that KPCA achieves a precision of more than 90% for k-NN queries, whereas LPCA could not achieve this performance. As shown in Fig. 11, for multi-object k-NN queries, KPCA achieves a precision of more than 80% in every case, whereas LPCA could not achieve this efficiency.
In order to assess the statistical significance of the precision superiority of KPCA over LPCA, we also conduct a paired t-test on the average precisions of KPCA and LPCA. The symbols used for the paired t-test are defined as follows, where all k-NN queries are processed 100 times randomly and k = 10, 20, …, 100:
$a_{i}$ and $b_{i}$ for $1 \leq i \leq 10$: average precisions of KPCA and LPCA, respectively.
$a^{\prime}$ and $b^{\prime}$: means of the samples $a_{i}$ and $b_{i}$, respectively.
$m_{1}(=10)$ and $m_{2}(=10)$: numbers of samples taken from the average k-NN query precisions of KPCA and LPCA, respectively.
$\alpha^{\prime 2}$: the pooled estimator of the population variance, defined by
$$\alpha^{\prime 2}=\frac{1}{m_{1}+m_{2}-2}\left(\sum_{i=1}^{m_{1}}\left(a_{i}-a^{\prime}\right)^{2}+\sum_{i=1}^{m_{2}}\left(b_{i}-b^{\prime}\right)^{2}\right)$$
We also make the following assumptions: the populations of the two techniques (that is, the average precisions calculated from KPCA and LPCA) follow normal distributions, and the variances of the two populations are the same. Writing $\theta_1$ and $\theta_2$ for the population mean precisions of KPCA and LPCA, respectively, we test the null hypothesis $T_0: \theta_1 = \theta_2$ against the alternative $\theta_1 > \theta_2$ using the statistic
$$t=\frac{a^{\prime}-b^{\prime}}{\alpha^{\prime} \sqrt{1 / m_{1}+1 / m_{2}}}$$
$T_{0}$ is rejected at the 5% significance level, in other words $\theta_{1}>\theta_{2}$ is justified, if $t \geq t_{m_1+m_2-2}(0.10)$ in the t-distribution table. In our experiments, t and $t_{m_1+m_2-2}(0.10)$ are calculated as follows: t = 3.116 for Fig. 10,
t = 2.358 for Fig. 11, and $t_{m_1+m_2-2}(0.10)=1.732$. Thus, in all cases $t \geq t_{m_1+m_2-2}(0.10)$, and it is concluded that the performance of KPCA with respect to precision is better than that of LPCA.
6. Conclusion
This paper has presented the superiority of KPCA over LPCA for feature extraction and dimensionality reduction in CBMR, and the potential of the kernel trick. With the use of the Gaussian radial basis kernel, it was demonstrated that KPCA can be used to extract nonlinear embeddings and can be successfully utilized for dimensionality reduction in the high-dimensional feature spaces of large media databases. Compared to LPCA, KPCA demonstrated superior performance with regard not only to retrieval precision but also to retrieval quality in CBMR. Thus, it can be concluded that KPCA can be adopted effectively as a nonlinear extension of the functionality of LPCA. Compared with other algorithms for nonlinear feature extraction, KPCA has the merit that it only requires solving an eigenvector problem. Different kernels, such as Gaussian, sigmoid, and polynomial kernels, can lead to fine classification performance [18]. LPCA is widely used in many domains such as CBMR (i.e., image indexing and retrieval systems), natural image analysis, density estimation, and noise reduction. KPCA can be employed in all applications where traditional LPCA has been successfully employed and where a nonlinear extension of feature extraction is needed.
Acknowledgement
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. NRF-2017R1D1A1B03036561).
Guang-Ho Cha received the Ph.D. degree in computer science from the Korea Advanced Institute of Science and Technology, Daejeon, South Korea, in 1997. From 1999 to 2000, he was a visiting scientist at the IBM Almaden Research Center, San Jose, CA. He is currently a full professor in the Department of Computer Science and Engineering at the Seoul National University of Science and Technology, Seoul, South Korea. His research interests include content-based media indexing and retrieval, data mining, similarity search, and data management on new hardware.
References
1. W. Ahmad, R. Ali, "Information retrieval from social networks: a survey," in Proceedings of 2016 3rd International Conference on Recent Advances in Information Technology (RAIT), Dhanbad, India, 2016, pp. 631-635.
2. N. Beckmann, H. P. Kriegel, R. Schneider, B. Seeger, "The R*-tree: an efficient and robust access method for points and rectangles," in Proceedings of the 1990 ACM SIGMOD International Conference on Management of Data, Atlantic City, NJ, 1990, pp. 322-331.
3. A. W. C. Fu, P. M. S. Chan, Y. L. Cheung, Y. S. Moon, "Dynamic VP-tree indexing for n-nearest neighbor search given pair-wise distances," The VLDB Journal, vol. 9, no. 2, pp. 154-173, 2000. doi: 10.1007/PL00010672
4. S. S. Lee, M. Shishibori, C. Y. Han, "An improvement video search method for VP-tree by using a trigonometric inequality," Journal of Information Processing Systems, vol. 9, no. 2, pp. 315-332, 2013. doi: 10.3745/JIPS.2013.9.2.315
5. I. Kamel, C. Faloutsos, "Hilbert R-tree: an improved R-tree using fractals," in Proceedings of 20th International Conference on Very Large Data Bases, Santiago de Chile, Chile, 1994, pp. 500-509.
6. G. H. Cha, C. W. Chung, "A new indexing scheme for content-based image retrieval," Multimedia Tools and Applications, vol. 6, no. 3, pp.
263-288, 1998. doi: 10.1023/A:1009608331551
7. R. Weber, H. J. Schek, S. Blott, "A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces," in Proceedings of 24th International Conference on Very Large Data Bases, New York City, NY, 1998, pp. 194-205.
8. G. H. Cha, X. Zhu, P. Petkovic, C. W. Chung, "An efficient indexing method for nearest neighbor searches in high-dimensional image databases," IEEE Transactions on Multimedia, vol. 4, no. 1, pp. 76-87, 2002.
9. A. Andoni, P. Indyk, "Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions," Communications of the ACM, vol. 51, no. 1, pp. 117-122, 2008. doi: 10.1145/1327452.1327494
10. A. Andoni, P. Indyk, "Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions," in Proceedings of 2006 47th Annual IEEE Symposium on Foundations of Computer Science, Berkeley, CA, 2006, pp. 459-468.
11. I. Jolliffe, in Encyclopedia of Statistics in Behavioral Science. Chichester, UK: John Wiley & Sons, 2005.
12. J. Lever, M. Krzywinski, N. Altman, "Points of significance: principal component analysis," Nature Methods, vol. 14, no. 7, pp. 641-643, 2017.
13. G. Strang, Introduction to Linear Algebra, 5th ed. Wellesley, MA: Wellesley-Cambridge Press, 2016.
14. G. Strang, K. Borre, Linear Algebra, Geodesy, and GPS. Wellesley, MA: Wellesley-Cambridge Press, 1997.
15. N. Pfister, P. Buhlmann, B. Scholkopf, J. Peters, "Kernel-based tests for joint independence," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 80, no. 1, pp. 5-31, 2018.
16. B. Scholkopf, A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Cambridge, MA: MIT Press, 2018.
17. C. J. Simon-Gabriel, B. Scholkopf, "Kernel distribution embeddings: universal kernels, characteristic kernels and kernel metrics on distributions," The Journal of Machine Learning Research, vol. 19, no. 1, pp. 1-29, 2018.
18. B. Scholkopf, C. Burges, V. Vapnik, "Extracting support data for a given task," in Proceedings of the 1st International Conference on Knowledge Discovery and Data Mining (KDD), Montreal, Canada, 1995, pp. 252-257.
19. M. Flickner, H. Sawhney, W. Niblack, J. Ashley, Q. Huang, B. Dom, et al., "Query by image and video content: the QBIC system," Computer, vol. 28, no. 9, pp. 23-32, 1995.
20. C. Faloutsos, R. Barber, M. Flickner, J. Hafner, W. Niblack, D. Petkovic, W. Equitz, "Efficient and effective querying by image content," Journal of Intelligent Information Systems, vol. 3, no. 3-4, pp. 231-262, 1994. doi: 10.1007/BF00962238
21. J. Payne, L. Hepplewhite, T. J. Stonham, "Texture, human perception, and information retrieval measures," in Proceedings of ACM SIGIR MF/IR Workshop, Athens, Greece, 2000.
22. MPEG-7 [Online]. Available: https://mpeg.chiariglione.org/standards/mpeg-7
23. L. Wu, C. Faloutsos, K. Sycara, T. R. Payne, "FALCON: feedback adaptive loop for content-based retrieval," in Proceedings of 26th International Conference on Very Large Data Bases, Cairo, Egypt, 2000, pp. 297-306.
Received: June 30, 2020
Revision received: September 24, 2020
Accepted: December 1, 2020
Corresponding Author: Guang-Ho Cha, [email protected]
Guang-Ho Cha, Dept. of Computer Science and Engineering, Seoul National University of Science and Technology, Seoul, Korea, [email protected]
Arithmetic coding
Arithmetic coding is a form of entropy encoding used in lossless data compression. Normally, a string of characters such as the words "hello there" is represented using a fixed number of bits per character, as in the ASCII code. When a string is converted to arithmetic encoding, frequently used characters are stored with fewer bits and not-so-frequently occurring characters are stored with more bits, resulting in fewer bits used in total. Arithmetic coding differs from other forms of entropy encoding such as Huffman coding in that rather than separating the input into component symbols and replacing each with a code, arithmetic coding encodes the entire message into a single number, a fraction n where 0.0 ≤ n < 1.0.
An arithmetic coding example, assuming a fixed probability distribution over three symbols "A", "B", and "C": the probability of "A" is 50%, the probability of "B" is 33%, and the probability of "C" is 17%. Furthermore, we assume that the recursion depth is known in each step. In step one we code "B", which lies inside the interval [0.5, 0.83): the binary number "0.10x" is the shortest code that represents an interval entirely inside [0.5, 0.83). "x" means an arbitrary bit sequence, and there are two extreme cases: the smallest x stands for an infinite number of zeros, which represents the left side of the represented interval, dec(0.10) = 0.5; the greatest x stands for an infinite number of ones, which gives a number that converges towards dec(0.11) = 0.75. Therefore "0.10x" represents the interval [0.5, 0.75), which is inside [0.5, 0.83). Now we can leave out the "0." part, since all intervals begin with "0.", and we can ignore the "x" part, because no matter what bit sequence it represents, we will stay inside [0.5, 0.75).
Implementation details and examples
Equal probabilities
In the simplest case, the probability of each symbol occurring is equal. For example, consider a set of three symbols, A, B, and C, each equally likely to occur. Simple block encoding would require 2 bits per symbol, which is wasteful: one of the bit variations is never used. That is to say, A = 00, B = 01, and C = 10, but 11 is unused. A more efficient solution is to represent a sequence of these three symbols as a rational number in base 3, where each digit represents a symbol. For example, the sequence "ABBCAB" could become 0.011201 in base 3 (in arithmetic coding the numbers are between 0 and 1). The next step is to encode this ternary number using a fixed-point binary number of sufficient precision to recover it, such as 0.0010110001 in binary; this is only 10 bits, so 2 bits are saved in comparison with naïve block encoding. This is feasible for long sequences because there are efficient, in-place algorithms for converting the base of arbitrarily precise numbers.
To decode the value, knowing the original string had length 6, one can simply convert back to base 3, round to 6 digits, and recover the string.
Defining a model
In general, arithmetic coders can produce near-optimal output for any given set of symbols and probabilities (the optimal value is −log2(P) bits for each symbol of probability P; see the source coding theorem). Compression algorithms that use arithmetic coding start by determining a model of the data, basically a prediction of what patterns will be found in the symbols of the message. The more accurate this prediction is, the closer to optimal the output will be.
Example: a simple, static model for describing the output of a particular monitoring instrument over time might be:
60% chance of symbol NEUTRAL
20% chance of symbol POSITIVE
10% chance of symbol NEGATIVE
10% chance of symbol END-OF-DATA. (The presence of this symbol means that the stream will be 'internally terminated', as is fairly common in data compression; when this symbol appears in the data stream, the decoder will know that the entire stream has been decoded.)
Models can also handle alphabets other than the simple four-symbol set chosen for this example. More sophisticated models are also possible: higher-order modelling changes its estimation of the current probability of a symbol based on the symbols that precede it (the context), so that in a model for English text, for example, the percentage chance of "u" would be much higher when it follows a "Q" or a "q". Models can even be adaptive, so that they continually change their prediction of the data based on what the stream actually contains. The decoder must have the same model as the encoder.
Encoding and decoding: overview
In general, each step of the encoding process, except for the very last, is the same; the encoder has basically just three pieces of data to consider:
The next symbol that needs to be encoded
The current interval (at the very start of the encoding process, the interval is set to [0,1), but that will change)
The probabilities the model assigns to each of the various symbols that are possible at this stage (as mentioned earlier, higher-order or adaptive models mean that these probabilities are not necessarily the same in each step)
The encoder divides the current interval into sub-intervals, each representing a fraction of the current interval proportional to the probability of that symbol in the current context. Whichever interval corresponds to the actual symbol that is next to be encoded becomes the interval used in the next step. Example: for the four-symbol model above:
the interval for NEUTRAL would be [0, 0.6)
the interval for POSITIVE would be [0.6, 0.8)
the interval for NEGATIVE would be [0.8, 0.9)
the interval for END-OF-DATA would be [0.9, 1).
When all symbols have been encoded, the resulting interval unambiguously identifies the sequence of symbols that produced it. Anyone who has the same final interval and model that is being used can reconstruct the symbol sequence that must have entered the encoder to result in that final interval. It is not necessary to transmit the final interval, however; it is only necessary to transmit one fraction that lies within that interval. In particular, it is only necessary to transmit enough digits (in whatever base) of the fraction so that all fractions that begin with those digits fall into the final interval; this will guarantee that the resulting code is a prefix code.
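The overview above translates almost line for line into code. The following minimal Python sketch is illustrative only (a production coder uses the renormalized integer arithmetic discussed later); it encodes by narrowing the interval and decodes by the inverse search, and the decoding walk-through in the next subsection traces exactly what decode(0.538) does:

```python
MODEL = [("NEUTRAL", 0.6), ("POSITIVE", 0.2), ("NEGATIVE", 0.1), ("END-OF-DATA", 0.1)]

def encode(message):
    # Narrow the current interval once per symbol; return the final interval.
    low, high = 0.0, 1.0
    for symbol in message:
        width, cum = high - low, 0.0
        for s, p in MODEL:                       # walk the sub-intervals in order
            if s == symbol:
                low, high = low + cum * width, low + (cum + p) * width
                break
            cum += p
    return low, high

def decode(fraction):
    # Find the sub-interval containing the fraction, emit its symbol, rescale.
    out = []
    while True:
        cum = 0.0
        for s, p in MODEL:
            if cum <= fraction < cum + p:
                out.append(s)
                fraction = (fraction - cum) / p  # rescale back to [0, 1)
                break
            cum += p
        if out[-1] == "END-OF-DATA":
            return out

low, high = encode(["NEUTRAL", "NEGATIVE", "END-OF-DATA"])
print(low, high)        # ~ (0.534, 0.54); any fraction in here, e.g. 0.538, will do
print(decode(0.538))    # -> ['NEUTRAL', 'NEGATIVE', 'END-OF-DATA']
```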
Encoding and decoding: example
A diagram showing decoding of 0.538 (the circular point) in the example model: the region is divided into subregions proportional to symbol frequencies, then the subregion containing the point is successively subdivided in the same way.
Consider the process for decoding a message encoded with the given four-symbol model. The message is encoded in the fraction 0.538 (using decimal for clarity, instead of binary; also assuming that there are only as many digits as needed to decode the message). The process starts with the same interval used by the encoder, [0, 1), and using the same model, divides it into the same four sub-intervals that the encoder must have. The fraction 0.538 falls into the sub-interval for NEUTRAL, [0, 0.6); this indicates that the first symbol the encoder read must have been NEUTRAL, so this is the first symbol of the message. Next divide the interval [0, 0.6) into sub-intervals:
the interval for NEUTRAL would be [0, 0.36), 60% of [0, 0.6)
the interval for POSITIVE would be [0.36, 0.48), 20% of [0, 0.6)
the interval for NEGATIVE would be [0.48, 0.54), 10% of [0, 0.6)
the interval for END-OF-DATA would be [0.54, 0.6), 10% of [0, 0.6).
Since 0.538 is within the interval [0.48, 0.54), the second symbol of the message must have been NEGATIVE. Again divide our current interval into sub-intervals:
the interval for NEUTRAL would be [0.48, 0.516)
the interval for POSITIVE would be [0.516, 0.528)
the interval for NEGATIVE would be [0.528, 0.534)
the interval for END-OF-DATA would be [0.534, 0.540).
Now 0.538 falls within the interval of the END-OF-DATA symbol; therefore, this must be the next symbol. Since it is also the internal termination symbol, it means the decoding is complete. If the stream is not internally terminated, there needs to be some other way to indicate where the stream stops; otherwise, the decoding process could continue forever, mistakenly reading more symbols from the fraction than were in fact encoded into it.
Sources of inefficiency
The message 0.538 in the previous example could have been encoded by the equally short fractions 0.534, 0.535, 0.536, 0.537 or 0.539. This suggests that the use of decimal instead of binary introduced some inefficiency. This is correct; the information content of a three-digit decimal is approximately 9.966 bits; the same message could have been encoded in the binary fraction 0.10001010 (equivalent to 0.5390625 decimal) at a cost of only 8 bits. (The final zero must be specified in the binary fraction, or else the message would be ambiguous without external information such as the compressed stream size.) This 8-bit output is larger than the information content, or entropy, of the message, which is
$$\sum -\log _{2}(p_{i})=-\log _{2}(0.6)-\log _{2}(0.1)-\log _{2}(0.1)=7.381{\text{ bits}}$$
But an integer number of bits must be used, so an encoder for this message would use at least 8 bits, which the coding method achieves, resulting in a message 8.4% larger than the minimum. This inefficiency of at most 1 bit becomes less significant as the message size grows. Moreover, the claimed symbol probabilities were [0.6, 0.2, 0.1, 0.1], but the actual frequencies in this example are [0.33, 0, 0.33, 0.33].
If the intervals are readjusted for these frequencies, the entropy of the message would be 4.755 bits, and the same NEUTRAL NEGATIVE END-OF-DATA message could be encoded as the intervals [0, 1/3); [1/9, 2/9); [5/27, 6/27); and a binary interval of [0.00101111011, 0.00111000111). This is also an example of how statistical coding methods like arithmetic encoding can produce an output message that is larger than the input message, especially if the probability model is off.
Adaptive arithmetic coding
One advantage of arithmetic coding over other similar methods of data compression is the convenience of adaptation. Adaptation is the changing of the frequency (or probability) tables while processing the data. The decoded data matches the original data as long as the frequency table in decoding is replaced in the same way and in the same step as in encoding. The synchronization is usually based on a combination of symbols occurring during the encoding and decoding process. Adaptive arithmetic coding significantly improves the compression ratio compared to static methods; it may be 2 to 3 times as effective.
Precision and renormalization
The above explanations of arithmetic coding contain some simplification. In particular, they are written as if the encoder first calculated the fractions representing the endpoints of the interval in full, using infinite precision, and only converted the fraction to its final form at the end of encoding. Rather than try to simulate infinite precision, most arithmetic coders instead operate at a fixed limit of precision which they know the decoder will be able to match, and round the calculated fractions to their nearest equivalents at that precision. An example shows how this would work if the model called for the interval [0,1) to be divided into thirds, and this was approximated with 8-bit precision. Note that since the precision is now known, so are the binary ranges we'll be able to use.
Symbol  Probability (as fraction)  Interval reduced to 8-bit precision (as fractions)  Interval reduced to 8-bit precision (in binary)  Range in binary
A       1/3                        [0, 85/256)                                         [0.00000000, 0.01010101)                         00000000 – 01010100
B       1/3                        [85/256, 171/256)                                   [0.01010101, 0.10101011)                         01010101 – 10101010
C       1/3                        [171/256, 1)                                        [0.10101011, 1.00000000)                         10101011 – 11111111
A process called renormalization keeps the finite precision from becoming a limit on the total number of symbols that can be encoded. Whenever the range is reduced to the point where all values in the range share certain beginning digits, those digits are sent to the output. For however many digits of precision the computer can handle, it is now handling fewer than that, so the existing digits are shifted left, and at the right, new digits are added to expand the range as widely as possible. Note that this result occurs in two of the three cases from our previous example.
Symbol  Probability  Range                Digits that can be sent to output  Range after renormalization
A       1/3          00000000 – 01010100  0                                  00000000 – 10101001
B       1/3          01010101 – 10101010  None                               01010101 – 10101010
C       1/3          10101011 – 11111111  1                                  01010110 – 11111111
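The digit-emission step of renormalization can be sketched in a few lines of Python (a toy fragment, not a full coder; the variable names are ours). It reproduces the A and C rows of the table above:

```python
def renormalize(low, high, bits_out):
    # Whenever the 8-bit low/high bounds share their leading bit, emit that bit,
    # then shift: low shifts in a 0 and high shifts in a 1 to re-widen the range.
    while (low >> 7) == (high >> 7):
        bits_out.append(low >> 7)
        low = (low << 1) & 0xFF
        high = ((high << 1) & 0xFF) | 1
    return low, high, bits_out

# Row A: 00000000 - 01010100 shares leading bit 0; new range 00000000 - 10101001.
print(renormalize(0b00000000, 0b01010100, []))   # -> (0, 169, [0])
# Row C: 10101011 - 11111111 shares leading bit 1; new range 01010110 - 11111111.
print(renormalize(0b10101011, 0b11111111, []))   # -> (86, 255, [1])
```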
Arithmetic coding as a generalized change of radix
Recall that in the case where the symbols had equal probabilities, arithmetic coding could be implemented by a simple change of base, or radix. In general, arithmetic (and range) coding may be interpreted as a generalized change of radix. For example, we may look at any sequence of symbols, such as
$$DABDDB$$
as a number in a certain base, presuming that the involved symbols form an ordered set and each symbol in the ordered set denotes a sequential integer: A = 0, B = 1, C = 2, D = 3, and so on. This results in the following frequencies and cumulative frequencies:
Symbol  Frequency of occurrence  Cumulative frequency
A       1                        0
B       2                        1
D       3                        3
The cumulative frequency is the total of all frequencies below it in a frequency distribution (a running total of frequencies).
In a positional numeral system the radix, or base, is numerically equal to the number of different symbols used to express the number. For example, in the decimal system the number of symbols is 10, namely 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. The radix is used to express any finite integer in polynomial form, with powers of the radix as multipliers. For example, the number 457 is actually 4×10² + 5×10¹ + 7×10⁰, where base 10 is presumed but not shown explicitly.
Initially, we will convert DABDDB into a base-6 numeral, because 6 is the length of the string. The string is first mapped into the digit string 301331, which then maps to an integer by the polynomial:
$$6^{5}\times 3+6^{4}\times 0+6^{3}\times 1+6^{2}\times 3+6^{1}\times 3+6^{0}\times 1=23671$$
The result 23671 has a length of 15 bits, which is not very close to the theoretical limit (the entropy of the message), which is approximately 9 bits. To encode a message with a length closer to the theoretical limit imposed by information theory, we need to slightly generalize the classic formula for changing the radix. We will compute lower and upper bounds L and U and choose a number between them. For the computation of L we multiply each term in the above expression by the product of the frequencies of all previously occurring symbols:
$$\begin{aligned}\mathrm {L} ={}&(6^{5}\times 3)+{}\\&3\times (6^{4}\times 0)+{}\\&(3\times 1)\times (6^{3}\times 1)+{}\\&(3\times 1\times 2)\times (6^{2}\times 3)+{}\\&(3\times 1\times 2\times 3)\times (6^{1}\times 3)+{}\\&(3\times 1\times 2\times 3\times 3)\times (6^{0}\times 1){}\\={}&25002\end{aligned}$$
The difference between this polynomial and the polynomial above is that each term is multiplied by the product of the frequencies of all previously occurring symbols. More generally, L may be computed as:
$$\mathrm {L} =\sum _{i=1}^{n}n^{n-i}C_{i}\prod _{k=1}^{i-1}f_{k}$$
where $C_{i}$ are the cumulative frequencies and $f_{k}$ are the frequencies of occurrences. Indexes denote the position of the symbol in a message. In the special case where all frequencies $f_{k}$ are 1, this is the change-of-base formula.
The upper bound U will be L plus the product of all frequencies; in this case U = L + (3 × 1 × 2 × 3 × 3 × 2) = 25002 + 108 = 25110. In general, U is given by:
$$\mathrm {U} =\mathrm {L} +\prod _{k=1}^{n}f_{k}$$
Now we can choose any number from the interval [L, U) to represent the message; one convenient choice is the value with the longest possible trail of zeroes, 25100, since it allows us to achieve compression by representing the result as 251×10². The zeroes can also be truncated, giving 251, if the length of the message is stored separately. Longer messages will tend to have longer trails of zeroes.
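A short Python sketch (ours, using the frequencies and cumulative frequencies tabulated above) reproduces these bounds and, anticipating the decoding procedure described next, reverses the choice 25100 back into DABDDB:

```python
freq = {"A": 1, "B": 2, "D": 3}      # frequencies from the table above
cum  = {"A": 0, "B": 1, "D": 3}      # cumulative frequencies
msg, n = "DABDDB", 6

L, prod = 0, 1
for i, s in enumerate(msg):
    L += prod * cum[s] * n ** (n - 1 - i)   # term by term, as in the sum above
    prod *= freq[s]
U = L + prod                                # U = L + product of all frequencies
print(L, U)                                 # -> 25002 25110

value, decoded = 25100, []
for i in range(n):
    power = n ** (n - 1 - i)
    digit = value // power                  # floor of the division
    sym = max((s for s in cum if cum[s] <= digit), key=lambda s: cum[s])
    decoded.append(sym)                     # symbol whose interval holds the digit
    value = (value - cum[sym] * power) // freq[sym]
print("".join(decoded))                     # -> DABDDB
```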
To decode the integer 25100, the polynomial computation can be reversed as shown in the table below. At each stage the current symbol is identified, then the corresponding term is subtracted from the result.

Remainder | Identification | Identified symbol | Corrected remainder
25100 | 25100 / 6^5 = 3 | D | (25100 − 6^5 × 3) / 3 = 590
590 | 590 / 6^4 = 0 | A | (590 − 6^4 × 0) / 1 = 590
590 | 590 / 6^3 = 2 | B | (590 − 6^3 × 1) / 2 = 187
187 | 187 / 6^2 = 5 | D | (187 − 6^2 × 3) / 3 = 26
26 | 26 / 6^1 = 4 | D | (26 − 6^1 × 3) / 3 = 2
2 | 2 / 6^0 = 2 | B | —

During decoding we take the floor after dividing by the corresponding power of 6. The result is then matched against the cumulative intervals, and the appropriate symbol is selected from a lookup table. When the symbol is identified, the result is corrected. The process is continued for the known length of the message or while the remaining result is positive. The only difference compared to the classical change of base is that there may be a range of values associated with each symbol. In this example, A is always 0, B is either 1 or 2, and D is any of 3, 4, 5. This is in exact accordance with our intervals, which are determined by the frequencies. When all intervals are equal to 1 we have a special case of the classic base change.

Theoretical limit of compressed message

The lower bound L never exceeds $n^{n}$, where n is the size of the message, and so can be represented in $\log_2(n^{n})=n\log_2(n)$ bits. After the computation of the upper bound U and the reduction of the message by selecting a number from the interval [L, U) with the longest trail of zeros, we can presume that this length can be reduced by $\log_2\left(\prod_{k=1}^{n}f_{k}\right)$ bits. Since each frequency in the product occurs exactly the same number of times as the value of this frequency, we can use the size of the alphabet A to compute the product:

$$\prod_{k=1}^{n}f_{k}=\prod_{k=1}^{A}f_{k}^{f_{k}}$$

Applying log2 to the estimated number of bits in the message, the final message (not counting a logarithmic overhead for the message length and frequency tables) will match the number of bits given by entropy, which for long messages is very close to optimal:

$$n\log_{2}(n)-\sum_{i=1}^{A}f_{i}\log_{2}(f_{i})$$
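The decoding table can likewise be mechanized. This companion sketch (again an added illustration, not the canonical integer coder) floor-divides by successive powers of the radix, matches the digit against the cumulative intervals, and corrects the remainder:

```python
# Sketch of the matching decoder: reverse the polynomial computation.

def radix_decode(value, n, freq, alphabet):
    """Recover a message of length n from `value` given symbol frequencies."""
    # Build cumulative intervals: symbol s owns [cum[s], cum[s] + freq[s]).
    cum, running = {}, 0
    for s in alphabet:
        if s in freq:
            cum[s] = running
            running += freq[s]
    out = []
    for i in range(n):
        power = n ** (n - 1 - i)
        digit = value // power          # floor-divide by the radix power
        # Match the digit against the cumulative intervals.
        sym = next(s for s in cum if cum[s] <= digit < cum[s] + freq[s])
        out.append(sym)
        value = (value - power * cum[sym]) // freq[sym]  # correct remainder
    return "".join(out)

print(radix_decode(25100, 6, {"A": 1, "B": 2, "D": 3}, "ABCD"))  # DABDDB
```

Each loop iteration reproduces one row of the decoding table, ending with the original message DABDDB.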
p-adic interpretation of the arithmetic coding algorithm

Arithmetic coding expressed in terms of real numbers looks very natural and is easy to understand: it is nothing but a sequence of semi-intervals, each lying inside the previous one. But here is a problem: one has to use infinite-precision real numbers to implement this algorithm, and there is no such thing as effective infinite-precision real arithmetic. This problem was always considered a technical one, and the solution is simple: just use integers instead. There is a canonical implementation, first written in C [Witten], which was later reproduced in other languages, but no analysis of what happens to the algorithm after moving it from the real numbers to the integers was published. In fact, the integer variant of the algorithm looks very artificial and contains some magic rules: E1, E2 and E3. Though these rules work quite well, the question remains: do they have a natural mathematical explanation? The p-adic numbers provide a clear interpretation of the algorithm. In fact, all the intermediate data and the result can be seen as p-adic integers with p = 2.

The modified algorithm operates on p-adic semi-intervals in the same way as the original works with real semi-intervals. For example, the magic rules E1 and E2 mean that the current p-adic semi-interval lies completely in a p-adic ball; in this case the p-adic ball can be pushed out and the p-adic semi-interval rescaled. From this point of view, the Huffman algorithm is just a specific variant of arithmetic coding in which the semi-intervals are always p-adic balls. The algorithm can be extended to arbitrary p, and all the E1, E2, and E3 rules work in this case too. More information on the p-adic variant of arithmetic coding can be found in [Rodionov, Volkov 2007, 2010].

Connections with other compression methods

There is great similarity between arithmetic coding and Huffman coding; in fact, it has been shown that Huffman is just a specialized case of arithmetic coding. But because arithmetic coding translates the entire message into one number represented in base b, rather than translating each symbol of the message into a series of digits in base b, it will sometimes approach optimal entropy encoding much more closely than Huffman can.

In fact, a Huffman code corresponds closely to an arithmetic code where each of the frequencies is rounded to a nearby power of 1/2; for this reason Huffman deals relatively poorly with distributions where symbols have frequencies far from a power of 1/2, such as 0.75 or 0.375. This includes most distributions where there is either a small number of symbols (such as just the bits 0 and 1) or where one or two symbols dominate the rest.

For an alphabet {a, b, c} with equal probabilities of 1/3, Huffman coding may produce the following code:

a → 0: 50%
b → 10: 25%
c → 11: 25%

This code has an expected length of (1 + 2 + 2)/3 ≈ 1.667 bits per symbol, an inefficiency of 5 percent compared to log₂3 ≈ 1.585 bits per symbol for arithmetic coding.

For an alphabet {0, 1} with probabilities 0.625 and 0.375, Huffman encoding treats the two symbols as though they had probability 0.5 each, assigning 1 bit to each value, which does not achieve any compression over naive block encoding. Arithmetic coding approaches the optimal compression ratio of

$$1-\left[-0.625\log_{2}(0.625)-0.375\log_{2}(0.375)\right]\approx 4.6\%.$$

When the symbol 0 has a high probability of 0.95, the difference is much greater:

$$1-\left[-0.95\log_{2}(0.95)-0.05\log_{2}(0.05)\right]\approx 71.4\%.$$

One simple way to address this weakness is to concatenate symbols to form a new alphabet in which each symbol represents a sequence of symbols in the original alphabet. In the above example, grouping sequences of three symbols before encoding would produce new "super-symbols" with the following frequencies:

000: 85.7%
001, 010, 100: 4.5% each
011, 101, 110: 0.24% each
111: 0.0125%

With this grouping, Huffman coding averages 1.3 bits for every three symbols, or 0.433 bits per symbol, compared with one bit per symbol in the original encoding.
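The quoted efficiency figures are easy to verify numerically. This short sketch (added here; the variable names are arbitrary) recomputes the roughly 71.4% waste for p = 0.95 and the super-symbol probabilities for the three-symbol grouping:

```python
# Quick numeric check of the efficiency figures quoted above.
from itertools import product
from math import log2

p0 = 0.95  # probability of symbol 0, as in the text's example

# Entropy per symbol vs. Huffman's 1 bit/symbol for a binary alphabet.
H = -p0 * log2(p0) - (1 - p0) * log2(1 - p0)
print(f"entropy: {H:.3f} bits/symbol, waste: {1 - H:.1%}")  # ~71.4%

# Group into triples: probability of each 3-bit super-symbol.
probs = {"".join(bits): p0 ** bits.count("0") * (1 - p0) ** bits.count("1")
         for bits in product("01", repeat=3)}
for sym, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(sym, f"{p:.4%}")   # 000 -> 85.7375%, 001/010/100 -> 4.5125%, ...
```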
US patents

A variety of specific techniques for arithmetic coding have historically been covered by US patents, although various well-known methods have since passed into the public domain as the patents have expired. Techniques covered by patents may be essential for implementing the algorithms for arithmetic coding that are specified in some formal international standards. When this is the case, such patents are generally available for licensing under what are called "reasonable and non-discriminatory" (RAND) licensing terms (at least as a matter of standards-committee policy). In some well-known instances (including some involving IBM patents that have since expired) such licenses were available for free, and in other instances, licensing fees have been required. The availability of licenses under RAND terms does not necessarily satisfy everyone who might want to use the technology, as what may seem "reasonable" for a company preparing a proprietary software product may seem much less reasonable for a free software or open source project.

At least one significant compression software program, bzip2, deliberately discontinued the use of arithmetic coding in favor of Huffman coding due to the perceived patent situation at the time. Also, encoders and decoders of the JPEG file format, which has options for both Huffman encoding and arithmetic coding, typically only support the Huffman encoding option, originally because of patent concerns; the result is that nearly all JPEG images in use today use Huffman encoding,[1] although JPEG's arithmetic coding patents[2] have expired due to the age of the JPEG standard (the design of which was approximately completed by 1990).[3] Some archivers, like PackJPG, can losslessly convert Huffman-encoded JPEGs to JPEGs with arithmetic coding (with the custom file extension .pjg), showing up to 25% size savings.

Some US patents relating to arithmetic coding are listed below.

U.S. Patent 4,122,440 (IBM): filed 4 March 1977, granted 24 October 1978 (now expired)
U.S. Patent 4,286,256 (IBM): granted 25 August 1981 (now expired)
U.S. Patent 4,652,856 (IBM): granted 4 February 1986 (now expired)
U.S. Patent 4,891,643 (IBM): filed 15 September 1986, granted 2 January 1990 (now expired)
U.S. Patent 4,905,297 (IBM): filed 18 November 1988, granted 27 February 1990 (now expired)
U.S. Patent 4,933,883 (IBM): filed 3 May 1988, granted 12 June 1990 (now expired)
U.S. Patent 4,935,882 (IBM): filed 20 July 1988, granted 19 June 1990 (now expired)
U.S. Patent 4,989,000: filed 19 June 1989, granted 29 January 1991 (now expired)
U.S. Patent 5,099,440 (IBM): filed 5 January 1990, granted 24 March 1992 (now expired)
U.S. Patent 5,272,478 (Ricoh): filed 17 August 1992, granted 21 December 1993 (now expired)

Note: this list is not exhaustive; see the following link for a list of more patents.[4] The Dirac codec uses arithmetic coding and is not patent pending.[5]

Patents on arithmetic coding may exist in other jurisdictions; see software patents for a discussion of the patentability of software around the world.

Benchmarks and other technical characteristics

Every programmatic implementation of arithmetic encoding has a different compression ratio and performance. While compression ratios vary only a little (usually under 1%),[6] code execution time can vary by a factor of 10. Choosing the right encoder from a list of publicly available encoders is not a simple task, because performance and compression ratio depend also on the type of data, particularly on the size of the alphabet (the number of different symbols). One of two particular encoders may have better performance for small alphabets while the other may show better performance for large alphabets. Most encoders have limitations on the size of the alphabet, and many of them are specialized for alphabets of exactly two symbols (0 and 1).
Teaching aid

An interactive visualization tool for teaching arithmetic coding, dasher.tcl, was also the first prototype of the assistive communication system Dasher.

Notes

1. "What is JPEG?", comp.compression Frequently Asked Questions (part 1/3).
3. W. B. Pennebaker and J. L. Mitchell, JPEG Still Image Data Compression Standard, Kluwer Academic Press, 1992. ISBN 0-442-01272-1.
4. comp.compression Frequently Asked Questions (part 1/3), which includes a list of further arithmetic coding patents.
5. "Dirac video codec 1.0 released".
6. For instance, one cited paper discusses versions of arithmetic coding based on real-number ranges, integer approximations to those ranges, and an even more restricted type of approximation that the authors call binary quasi-arithmetic coding. They state that the difference between the real and integer versions is negligible, prove that the compression loss for their quasi-arithmetic method can be made arbitrarily small, and bound the compression loss incurred by one of their approximations as less than 0.06%.

References

Rodionov Anatoly, Volkov Sergey (2010), "p-adic arithmetic coding", Contemporary Mathematics, 508 (2010).
Rodionov Anatoly, Volkov Sergey (2007), "p-adic arithmetic coding", http://arxiv.org/abs/0704.0834v1

External links

Paul E. Black, "arithmetic coding", in the NIST Dictionary of Algorithms and Data Structures.
Newsgroup posting with a short worked example of arithmetic encoding (integer-only).
PlanetMath article on arithmetic coding.
Anatomy of Range Encoder: an article explaining both range and arithmetic coding, with code samples for three different arithmetic encoders and a performance comparison.
Introduction to Arithmetic Coding, 60 pages.
Eric Bodden, Malte Clasen and Joachim Kneis, "Arithmetic Coding Revealed", Technical Report 2007-5, Sable Research Group, McGill University.
Mark Nelson, "Arithmetic Coding + Statistical Modeling = Data Compression".
Carbon-nanotube geometries: Analytical and numerical results

February 2017, 10(1): 141-160. doi: 10.3934/dcdss.2017008

Edoardo Mainini 1,2, Hideki Murakawa 3, Paolo Piovano 2 and Ulisse Stefanelli 2,4

1. Dipartimento di Ingegneria meccanica, energetica, gestionale, e dei trasporti (DIME), Università degli Studi di Genova, Piazzale Kennedy 1, I-16129 Genova, Italy
2. Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria
3. Faculty of Mathematics, Kyushu University, 744 Motooka, Nishiku, Fukuoka, 819-0395, Japan
4. Istituto di Matematica Applicata e Tecnologie Informatiche "E. Magenes" - CNR, v. Ferrata 1, I-27100 Pavia, Italy

Received June 2015; Revised October 2015; Published December 2016

Abstract. We investigate carbon nanotubes from the perspective of geometry optimization. Nanotube geometries are assumed to correspond to atomic configurations which locally minimize Tersoff-type interaction energies. In the specific cases of so-called zigzag and armchair topologies, candidate optimal configurations are analytically identified and their local minimality is numerically checked. In particular, these optimal configurations correspond neither to the classical rolled-up model [5] nor to the more recent polyhedral model [3]. Finally, the elastic response of the structure under uniaxial testing is numerically investigated and the validity of the Cauchy-Born rule is confirmed.

Keywords: Carbon nanotubes, Tersoff energy, variational perspective, new geometrical model, stability, Cauchy-Born rule.

Mathematics Subject Classification: Primary: 82D25.

Citation: Edoardo Mainini, Hideki Murakawa, Paolo Piovano, Ulisse Stefanelli. Carbon-nanotube geometries: Analytical and numerical results. Discrete & Continuous Dynamical Systems - S, 2017, 10 (1): 141-160. doi: 10.3934/dcdss.2017008

References:

[1] P. M. Agrawal, B. S. Sudalayandi, L. M. Raff and R. Komandur, Molecular dynamics (MD) simulations of the dependence of C-C bond lengths and bond angles on the tensile strain in single-wall carbon nanotubes (SWCNT), Comp. Mat. Sci., 41 (2008), 450-456. doi: 10.1016/j.commatsci.2007.05.001.
[2] M. E. Budyka, T. S. Zyubina, A. G. Ryabenko, S. H. Lin and A. M. Mebel, Bond lengths and diameters of armchair single-walled carbon nanotubes, Chem. Phys. Lett., 407 (2005), 266-271.
[3] B. J. Cox and J. M. Hill, Exact and approximate geometric parameters for carbon nanotubes incorporating curvature, Carbon, 45 (2007), 1453-1462. doi: 10.1016/j.carbon.2007.03.028.
[4] M. S. Dresselhaus, G. Dresselhaus and R. Saito, Carbon fibers based on C60 and their symmetry, Phys. Rev. B, 45 (1992), 6234-6242.
[5] M. S. Dresselhaus, G. Dresselhaus and R. Saito, Physics of carbon nanotubes, Carbon Nanotubes, (1996), 27-35. doi: 10.1016/B978-0-08-042682-2.50009-6.
[6] W. E and D. Li, On the crystallization of 2D hexagonal lattices, Comm. Math. Phys., 286 (2009), 1099-1140. doi: 10.1007/s00220-008-0586-2.
[7] R. D. James, Objective structures, J. Mech. Phys. Solids, 54 (2006), 2354-2390. doi: 10.1016/j.jmps.2006.05.008.
[8] H. Jiang, P. Zhang, B. Liu, Y. Huang, P. H. Geubelle, H. Gao and K. C. Hwang, The effect of nanotube radius on the constitutive model for carbon nanotubes, Comp. Mat. Sci., 28 (2003), 429-442. doi: 10.1016/j.commatsci.2003.08.004.
[9] V. K. Jindal and A. N. Imtani, Bond lengths of armchair single-walled carbon nanotubes and their pressure dependence, Comp. Mat. Sci., 44 (2008), 156-162.
[10] R. A. Jishi, M. S. Dresselhaus and G. Dresselhaus, Symmetry properties and chiral carbon nanotubes, Phys. Rev. B, 47 (1993), 16671-16674.
[11] K. Kanamitsu and S. Saito, Geometries, electronic properties, and energetics of isolated single-walled carbon nanotubes, J. Phys. Soc. Japan, 71 (2002), 483-486. doi: 10.1143/JPSJ.71.483.
[12] A. Krishnan, E. Dujardin, T. W. Ebbesen, P. N. Yianilos and M. M. J. Treacy, Young's modulus of single-walled nanotubes, Phys. Rev. B, 58 (1998), 14013-14019. doi: 10.1103/PhysRevB.58.14013.
[13] J. Kurti, V. Zolyomi, M. Kertesz and G. Sun, The geometry and the radial breathing mode of carbon nanotubes: Beyond the ideal behaviour, New J. Phys., 5 (2003), 1-21.
[14] R. K. F. Lee, B. J. Cox and J. M. Hill, General rolled-up and polyhedral models for carbon nanotubes, Fullerenes, Nanotubes and Carbon Nanostructures, 19 (2011), 726-748. doi: 10.1080/1536383X.2010.494786.
[15] E. Mainini and U. Stefanelli, Crystallization in carbon nanostructures, Comm. Math. Phys., 328 (2014), 545-571. doi: 10.1007/s00220-014-1981-5.
[16] E. Mainini, H. Murakawa, P. Piovano and U. Stefanelli, Carbon-nanotube geometries as optimal configurations, preprint, 2016.
[17] L. Shen and J. Li, Transversely isotropic elastic properties of single-walled carbon nanotubes, Phys. Rev. B, 69 (2004), 045414; Erratum: Phys. Rev. B, 81 (2010), 119902. doi: 10.1103/PhysRevB.69.045414.
[18] L. Shen and J. Li, Equilibrium structure and strain energy of single-walled carbon nanotubes, Phys. Rev. B, 71 (2005), 165427. doi: 10.1103/PhysRevB.71.165427.
[19] F. H. Stillinger and T. A. Weber, Computer simulation of local order in condensed phases of silicon, Phys. Rev. B, 31 (1985), 5262-5271. doi: 10.1103/PhysRevB.31.5262.
[20] J. Tersoff, New empirical approach for the structure and energy of covalent systems, Phys. Rev. B, 37 (1988), 6991-7000. doi: 10.1103/PhysRevB.37.6991.
[21] M. M. J. Treacy, T. W. Ebbesen and J. M. Gibson, Exceptionally high Young's modulus observed for individual carbon nanotubes, Nature, 381 (1996), 678-680. doi: 10.1038/381678a0.
[22] M.-F. Yu, B. S. Files, S. Arepalli and R. S. Ruoff, Tensile loading of ropes of single wall carbon nanotubes and their mechanical properties, Phys. Rev. Lett., 84 (2000), 5552-5555. doi: 10.1103/PhysRevLett.84.5552.
[23] T. Zhang, Z. S. Yuan and L. H. Tan, Exact geometric relationships, symmetry breaking and structural stability for single-walled carbon nanotubes, Nano-Micro Lett., 3 (2011), 228-235. doi: 10.1007/BF03353677.
[24] X. Zhao, Y. Liu, S. Inoue, R. O. Jones and Y. Ando, Smallest carbon nanotube is 3 Å in diameter, Phys. Rev. Lett., 92 (2004), 125502. doi: 10.1103/PhysRevLett.92.125502.

Figures:

Figure 1. Rolling-up of nanotubes from a graphene sheet
Figure 2. Notation for bonds and bond angles
Figure 3. Zigzag nanotube
Figure 4. The construction of the function $\beta_z$
Figure 5. The angle $\beta_z$ as a function of the angle $\alpha$ (above) and a zoom (below) with the points $(\alpha^{\rm ru}_z,\beta_z(\alpha^{\rm ru}_z))$ and $(\alpha^{\rm ch}_z,\beta_z(\alpha^{\rm ch}_z))$ for $\ell=10$
Figure 6. The angle $\beta_a$ as a function of the angle $\alpha$ (above) and a zoom (below) with the points $(\alpha^{\rm ru}_a,\beta_a(\alpha^{\rm ru}_a))$ and $(\alpha^{\rm ch}_a,\beta_a(\alpha^{\rm ch}_a))$ for $\ell=10$
Figure 7. The energy-per-particle $\widehat E_i$ in the zigzag (above) and in the armchair (below) family, as a function of the angle $\alpha$ for $\ell=10$, together with a zoom about the minimum
Figure 8. Comparison between energies of the optimal configurations and energies of their perturbations in the cases Z1, Z2, Z3 (left, from the top) and A1, A2, A3 (right, from the top). The marker corresponds to the optimal configuration $\mathcal{F}_i^*$ and the value $\alpha$ represents the mean of all $\alpha$-angles in the configuration
Figure 9. Optimality of the configuration $(F^*_L,L)\in \mathscr{F}_z$ (bottom point) for all given $L$ in a neighborhood of $L^*$
Figure 10. Elastic response of the nanotube Z1 under uniaxial small (left) and large displacements (right). The function $L \mapsto E(F_L^*,L)$ (bottom) corresponds to the lower envelope of the random evaluations (top)
What is the longest distance that can be jumped after swinging from a rope?

In the movie Mission Impossible 3, the main character Ethan Hunt tries to enter a building in Shanghai by swinging through the sky, as shown below:

The jump consists of 2 sections: the red part, which is an arc, and the orange part, which is a parabola. A question naturally arises: for a target building with a given height, what is the furthest horizontal distance at which it can be placed such that we can still jump to its roof using this method?

Before answering this question, we must consider the following fact: observing the path when we release at different heights, we see that if we release very late, we can reach great heights, but with reduced horizontal distance. If we release early, we increase horizontal distance, but with reduced height. Because of this trade-off, the area that can be reached is restricted within the yellow line.

The question is then: (1) What is the equation x(z) that describes the orange line above? (2) For a given point (x,z) that is inside the region bounded by the orange line, what is the proper release height h(x,z) such that the parabola will go through (x,z)? The answer should be a set of h. This is the same as asking what the proper release height is if we want to reach a building with height z located a distance x away from the center building.

For ease of discussion, I propose that we use the following coordinate system and notation:

Here are some calculations that are already done. At the release point,
$$\text{Upward shooting angle}=\psi$$
$$v=\sqrt{2gh}$$
$$h(\psi)=r\cos\psi-H$$
$$\psi(h)=\arccos\frac{h+H}{r}$$
Upward and horizontal speeds are:
$$v_{u}=v\sin\psi$$
$$v_{h}=v\cos\psi$$
Time for Ethan to fly through the orange part is:
$$t=\frac{2v_{u}}{g}=\frac{2v\sin\psi}{g}$$
Thus
$$d_{h}=v_{h}\cdot t=v\cos\psi\cdot\frac{2v\sin\psi}{g}=\frac{v^{2}}{g}\sin 2\psi$$
If we were to use $h$ as the variable, then, using $\sin(2\arccos u)=2u\sqrt{1-u^{2}}$ and $v^{2}=2gh$,
$$d(h)=\frac{v^{2}}{g}\sin\left(2\arccos\frac{h+H}{r}\right)$$
$$d(h)=\frac{2gh}{g}\cdot 2\left(\frac{h+H}{r}\right)\sqrt{1-\left(\frac{h+H}{r}\right)^{2}}$$
$$d(h)=4h\left(\frac{h+H}{r}\right)\sqrt{1-\left(\frac{h+H}{r}\right)^{2}}$$

homework-and-exercises newtonian-mechanics projectile estimation string

Xiaowen Li
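A numerical way to trace the boundary asked about in question (1) is to sweep the release angle ψ and take the upper envelope of the resulting parabolas. The sketch below does exactly that; the values of r and H and the assumed release-point geometry (release at horizontal offset r sin ψ and height h from the pivot foot) are illustrative assumptions, not taken from the movie.

```python
# Numerical sketch of the reachable envelope: sweep the release angle psi,
# trace each parabolic trajectory, and keep the highest z reached at each x.
# r (rope length) and H (pivot offset) are illustrative assumptions.
import numpy as np

g, r, H = 9.81, 30.0, 5.0
x_grid = np.linspace(0.0, 60.0, 601)
envelope = np.full_like(x_grid, -np.inf)

for psi in np.linspace(0.01, np.pi / 2 - 0.01, 500):
    h = r * np.cos(psi) - H        # release height gained by the swing
    if h <= 0:
        continue
    v = np.sqrt(2 * g * h)         # speed at release (energy conservation)
    x0, z0 = r * np.sin(psi), h    # assumed release point
    t = (x_grid - x0) / (v * np.cos(psi))   # time to reach each x
    z = z0 + v * np.sin(psi) * t - 0.5 * g * t**2
    valid = t >= 0                 # only points downstream of the release
    envelope[valid] = np.maximum(envelope[valid], z[valid])

# envelope[i] is the maximum reachable height at horizontal distance x_grid[i];
# plotting it gives the boundary line of question (1).
print(envelope.max(), x_grid[np.argmax(envelope)])
```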
Amazingly, not only has someone recently done an analysis of the Tarzan swing, but they've posted it on the arXiv. See http://arxiv.org/abs/1208.4355 for the article or the Arxiv Blog for a summary. To quote the Arxiv Blog article:

In fact there is no simple rule for maximising the horizontal flight distance. It turns out this depends on a number of factors, such as the rope swing's distance off the ground, the length of the rope and the angle of the rope when Tarzan begins his swing, as well as the angle of the rope at the point of release. However, the optimal angle of release is always less than 45 degrees.

John Rennie

Comment: Thank you very much for the answer. The paper provided some very useful analysis. Now question (2) is kind of trivial (a purely mathematical problem), but I believe question (1) remains unanswered and interesting. – Xiaowen Li, Sep 24 '12 at 10:48

The above isn't drawn correctly, and it is far too complicated in my opinion. I think I was able to solve this using high school physics:

This bugged me a long time; I am surprised no one ever posted an answer on the internet. The length of his rope was 60.8 meters, assuming a drop above the target building of about 12 meters. The difference in height between the buildings is 64 meters. Using that fact and the distance between the buildings of 47.55 meters, the angle of the pendulum from the vertical is 36.6 degrees. Double-checking with the period of the pendulum, which covers the entire arc, we get 15.7 seconds of actual arc time. In the movie, he free-falls for a bit, but the clock doesn't start until he reaches the end of the rope. The time in the movie is about 12.5 seconds, so these calculations are close. I think the only reason he was able to have a rope longer than the distance between the buildings and still make it is that the target roof was angled down; the calculation used the top of the roof where the slope started. This was enough, but you could probably also calculate the force on the rope when his free fall stopped, and you could calculate how fast he was going, as his free-fall time was 14 seconds before he got to the end of the rope. The bottom line: the stunt was feasible, not impossible. I of course eliminated a bunch of variables: friction, weight of equipment, wind resistance, material strength of the fulcrum given the force of the stopping rope, etc.

cMX5
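The 15.7-second figure quoted above can be sanity-checked against the small-angle pendulum period formula. The short sketch below is my addition, not the answerer's; only the 60.8 m rope length is taken from the answer.

```python
# Sanity check of the quoted arc time using the simple pendulum period,
# T = 2*pi*sqrt(L/g). The 60.8 m rope length is taken from the answer above.
import math

L = 60.8          # rope length in meters (from the answer)
g = 9.81          # gravitational acceleration, m/s^2

T = 2 * math.pi * math.sqrt(L / g)
print(f"full period: {T:.1f} s")   # ~15.6 s, close to the quoted 15.7 s
print(f"half swing:  {T / 2:.1f} s")  # one side-to-side pass of the arc
```

Note that the answerer calls the entire arc the "period"; a single side-to-side swing would take half that, about 7.8 s, so the comparison with the movie's 12.5 s depends on how much of the arc is actually traversed.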
Active stereo platform: online epipolar geometry update

Abdulla Mohamed1 (ORCID: orcid.org/0000-0001-7626-9346), Phil Culverhouse1, Angelo Cangelosi1 & Chenguang Yang1,2

This paper presents a novel method to update a variable epipolar geometry platform directly from the motor encoders, based on mapping the motor encoder angle to the image space angle and thereby avoiding the use of feature detection algorithms. First, an offline calibration is performed to establish a relationship between the image space and the hardware space. Second, a transformation matrix is generated using the results from this mapping. The transformation matrix uses the updated epipolar geometry of the platform to rectify the images for further processing. The system has an overall error in the projection of ± 5 pixels, which drops to ± 1.24 pixels when the verge angle increases beyond 10°. The platform used in this project has three degrees of freedom to control the verge angle and the size of the baseline.

Stereo vision has been applied to many applications in different fields to make precise measurements and to extend the working volume. In industrial applications, stereo vision has been used in control measurement and deflection detection [1,2,3]. In agricultural applications, stereo vision is used intensively in collecting data on the locations of fruits [4, 5]. This paper presents work done on an active stereo vision platform that is integrated with a GummiArm robot [6] to identify the position and quality of fruits. The platform is used to track an object and reconstruct the 3D shape of the object by updating the epipolar geometry.

The calibration process in a stereo vision system consists of calculating the parameters of the system, both internal and external, such as the pixel size, focal length, and image size. External parameters define the orientation and position of the cameras in 3D space. In an orthogonal, or fixed, stereo system the calibration is well defined using Zhang's calibration algorithm [7]. The output of the calibration is used in the rectification process. Rectification is used to transform the left and right images to be parallel to the epipolar plane and co-linear with the baseline [8]. This transformation simplifies the next process, the correspondence, where the search across the scanning line becomes 1D instead of 2D.

Vergence cues are used by humans to focus visual attention on a target, i.e., by keeping both eyes focused on the same object. Disparities generated using active stereo vision depend on updates of the epipolar geometry. A feature-based algorithm can be used to compute the fundamental matrix [9,10,11,12]. Such studies focus on matching the features between the left and right images every time there is a change in the system to compute the fundamental matrix. A drawback to this method is the presence of failures when matching features, which leads to errors in the computation of the fundamental or homography matrix. Another approach combines the image features and the motor angle to correct the errors in feature matching. Thacker and Mayhew (1991) used a Kalman filter on the encoder readings to predict the position of the object in the next frame [13]. Changes in the epipolar geometry occur because of changes in the camera angle and the position of the camera. These changes are measured by shaft encoders and are used in the control of camera positions. Dankers et al. developed an online calibration process for the CeDAR head [14].
Their work was built on static system rectification [15], where the perspective projection matrix (PPM) found by the standard calibration process for the left and right cameras was used to locate the mapping between the two images. The PPM was decomposed, and a new PPM and transformation between the left and right images were used to make the epipolar lines parallel to the baseline. Dankers et al. modified the algorithm so that the rotation angle of the left and right images was replaced by the angle of the encoder of each camera [14]. Both the motor angle and the image were captured at the same time. However, even though this process was quite fast, it required a system with highly accurate manufacturing; it is very difficult to place the rotation axis of the motor so that it intersects the camera origin, and this can lead to an error in the baseline.

Kwon et al. designed another approach to calibrate active stereo vision [16]. Their method treats the system as a kinematic chain that links the camera to its pan and tilt joints. By creating a kinematic chain between the joints and the camera and initializing the system, a calibration at the zero position of the system can be used to generate calibration matrices for the new positions. The motor angle is transformed to the image coordinates via the transformation matrix between the image and the motor. Even though this method takes into account the position of the camera origin when it is not intersected by the rotation axis, the error accumulates during run time because of the integration of the differences computed between the old and new angles.

Hart et al. developed a calibration algorithm using a humanoid head and controlling the stereo verge angle [17]. The algorithm starts with an offline process where the essential matrix of each camera at two different orientations is computed, and then the properties of the system are decomposed from the fundamental matrix. The centers of each camera and the rotation matrix are calculated using Rodrigues' rotation formula [18]. These parameters are used at run time to compute a new epipolar geometry from the motor angle by inverting the offline process. An experiment was performed to evaluate the algorithm using the standard calibration process and to compare the result to the new algorithm. The result showed a mean difference between the two methods of 2.38 pixels. However, their algorithm used the difference between the encoder readings of each orientation and not the absolute angle. This led to an accumulation of errors with time.

Sapiens et al. investigated the parameters of the stereo vision system in real time during operation [19]. Their system maps the angle of the motor encoder to the image space by calibrating the system offline. The offline calibration finds a linear equation that maps the value of the motor angle to the image space angle. The algorithm in this study calculates the homography of each image (left and right) and decomposes the matrix to find the value of the angle in image space. This process was repeated at different motor angles, and the results were used to determine the relationship between the motor angle and the image space. In addition, they determined the properties of a common homography with the same features at different angles. All of these processes were performed offline. A linear equation was generated to map between the motor space and the image space using the motor angle as input.
During operation, the homographies of both the left and right images were calculated using this equation and the motor angle. From the homographies, the fundamental matrix was calculated and used to rectify the images. This equation works linearly within the range of − 20° to 20°, with an error of 1.03 pixels at 0° that increases to 3.28 pixels at 20°. These results were compared to the conventional calibration process. In this study, the model coefficient was fixed throughout the range of angles. This assumption requires a high-precision manufacturing process to keep the origin of the camera as close to the rotation axis as possible.

Both Kwon et al. [16] and Sapiens et al. [19] have performed similar studies of transforms from the motor angle to the image angle. Kwon et al. [16] worked with larger angles (from − 45° to 45°) compared to Sapiens et al. [19], whose work was limited to − 20° to 20° for each camera. However, Kwon et al. included the tilting angle, and their study better placed the origin of the camera [16]. Hart et al. used the angle of the motor encoder to estimate the essential matrix, which results in an error in the value of the matrix [17]; conversely, Sapiens et al. corrected the motor angle via pre-processing [19].

In our study, the active stereo vision platform requires an algorithm to update the epipolar geometry in real time with a measurement of the change made by the motors. We avoided the use of traditional methods that require finding features in both images and matching these features to compute the new epipolar geometry [12]. Such feature-based algorithms fail in many cases because of mismatched features or environments that contain too few features to compute a new geometry. Moreover, the working range of the platform was increased to ± 60° compared to [16].

In this study, the problem of updating the epipolar geometry in active stereo vision directly from a motor angle is solved using a PPM to rectify the images. An improvement to the algorithm used by Dankers et al. is presented in this paper [14]. The raw data of the system are extracted using the image space and the actual geometry data, and a linear relationship is drawn to perform conversions between the motor angles and the image angle, including the error from the manufacturing process. The configuration of the system is studied in depth to allow an accurate rectification process for the images generated by the system under different arrangements.

The rest of the paper is organized as follows. The epipolar geometry and the process of computing the parameters in image space are presented in Section 2. Section 3 presents the process of collecting the data using a stereo calibration algorithm and the setup used to evaluate the algorithm. In Section 4, the results and discussion are presented, and finally, the paper concludes in Section 5.

This section introduces the algorithm used to produce the disparity map and depth measurement while the camera tracks an object, without the need to constantly recalibrate the system. The process of updating the geometry online is described in this part. The method of updating the configuration of the system has two stages. The first stage is the offline calibration process using Zhang's calibration algorithm [7], where the output of this algorithm is the PPM and distortion matrix for each camera, as well as the translation and rotation matrices between the left and right cameras.
The PPM and distortion matrix contain the internal parameters for each camera, and these parameters are fixed at all times. The translation and rotation matrices contain the external parameters of the system and are constantly changing. Figure 1 shows the external parameters of the system. The origin of the system is set, as is frequently done in computer vision, at the left camera [20]. Therefore, the essential matrix describes the rotation and translation from the left image to the right image. In the offline calibration stage, the calibration was done under different geometric configurations. This process is used to find the relationship between the rotation angle in the image space and the platform space and to apply this to the translation.

Fig. 1. The relationship between the left and right cameras is described by the essential matrix, which contains the rotation and the translation measurements

The second stage of the calibration is online calibration, where the generated relationship between the image space and the platform space is used to update the essential matrix. The essential matrix is used in the rectification process.

Single-camera model

We start with a single-camera model that describes a pinhole camera system. This model is also used to describe the CMOS sensor in the cameras used in this project. The center of the camera is O, which is the center of the Euclidean coordinate system. The image plane π is perpendicular to the z-axis, and the distance between the origin and the image plane is the focal length f. Suppose a point W with coordinates W = [X Y Z]T is set in front of the image plane. A projection point w = [x y]T on the image plane will form when we draw a line from W to the origin of the camera O. This creates a mapping from 3D space to 2D space. Using homogeneous coordinates to map between points, we get Eq. (1):

$$ w= PW $$

where W = [X Y Z 1]T and w = [x y 1]T are homogeneous vectors and P is the camera projection matrix. The camera projection matrix P contains the internal and external parameters:

$$ P=A\left[R\mid t\right], $$

where A is a 3 × 3 matrix describing the internal properties of the camera (Eq. (3)), where αx and αy are the focal lengths in pixels in the x and y directions, respectively, and s is a skew parameter, which, in most new cameras, is zero [18]. R and t are external parameters that refer to the transformation between the camera and world coordinates, where R is a 3 × 3 rotation matrix of rank 3 and t is a translation vector.

$$ A=\left[\begin{array}{ccc}{\alpha}_x& s& {x}_0\\ {}0& {\alpha}_y& {y}_0\\ {}0& 0& 1\end{array}\right]. $$

The calibration process for a single camera depends on Eq. (1) to provide the point coordinates of w and W: the points in the image coordinate are found by applying corner detection, and the points in the world coordinate are given by measuring the distance between the corners on the checkerboard. By finding these points, the camera projection matrix can be determined using algebra. A well-known algorithm that can be used to find P is the algorithm of Zhang (2000) [7].

Stereo model

In the two-camera model, the same process as that for a single camera is applied. In this section, the parameters with subscript letters l and r are used to refer to the left and right camera models, respectively. Figure 2 shows the model that is studied in this section. The distance between the two camera origins is B and is referred to as the baseline.
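To make Eqs. (1)-(3) concrete before developing the two-camera geometry, here is a small numerical sketch (with illustrative values, not the paper's calibration output) that builds the intrinsic matrix A, composes P = A[R | t], and projects a homogeneous world point to pixel coordinates:

```python
# Sketch of the pinhole projection w = P W with P = A [R | t].
# The intrinsic values below are illustrative, not the paper's calibration.
import numpy as np

ax, ay = 800.0, 800.0          # focal lengths in pixels
x0, y0 = 320.0, 240.0          # principal point
A = np.array([[ax, 0.0, x0],
              [0.0, ay, y0],
              [0.0, 0.0, 1.0]])  # Eq. (3), skew s = 0

R = np.eye(3)                   # camera aligned with the world frame
t = np.zeros((3, 1))
P = A @ np.hstack((R, t))       # Eq. (2): 3x4 projection matrix

W = np.array([0.1, -0.05, 2.0, 1.0])  # homogeneous world point [X Y Z 1]
w = P @ W                        # Eq. (1)
w = w[:2] / w[2]                 # normalize to pixel coordinates [x y]
print(w)                         # -> [360. 220.]
```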
Suppose that both cameras look at the same point in the world, W = [X Y Z]T; a point w will be projected onto both image planes, wl = [xl yl] and wr = [xr yr].

Fig. 2. Stereo system model

From the model, a plane is formed when Ol, W, and Or are connected. This plane is called the epipolar plane. If we know wl, we can find wr by searching along the line lr = er × wr. This line is called the epipolar line. From the epipolar line, lr = er × wr = [er]× wr, where [er]× is the matrix form of the cross product, and because wr maps to wl, we get the relation wr = H wl. H is a 3 × 3 homography matrix of rank 3 that describes the mapping between two points. By combining both equations, we get lr = [er]× H wl = F wl, where F = [er]× H is called the fundamental matrix [21]. The fundamental matrix (F) can be extended to include the camera projection matrix, as shown in Eq. (4), where \( {P}_l^{+} \) is the pseudoinverse of Pl. The fundamental matrix defines the internal and external parameters of the stereo vision system. F is a 3 × 3 matrix of rank 2.

$$ F={\left[{e}_r\right]}_{\times} {P}_r\ {P}_l^{+}. $$

For a stereo vision rig, the projection camera matrices satisfy Eqs. (5) and (6), where R and t represent the rotation and translation between the left and right origins. Ol is the origin of the rig.

$$ {P}_l=\left[I\ |0\right] $$

$$ {P}_r=\left[R\ |t\right]. $$

The fundamental matrix should satisfy Eq. (7), where wl lies on the epipolar line lr = Fwl [21]:

$$ {w}_r^T F{w}_l=0. $$

With Eqs. (5) and (6) expressed in normalized coordinates, we obtain Eq. (8):

$$ E={\left[t\right]}_{\times }\ R=R{\left[{R}^Tt\right]}_{\times }. $$

The essential matrix (E) describes the transformation between the left and right origins in normalized image coordinates. The E matrix has properties similar to those of the F matrix in its correspondence between \( {\widehat{w}}_l \) and \( {\widehat{w}}_r \) in normalized coordinates [21]:

$$ {\widehat{w}}_r^T E{\widehat{w}}_l=0. $$

The essential matrix is used to compute the distance to the point W(X, Y, Z) seen by both cameras. Using the essential matrix means that there will be six degrees of freedom: three from the rotation and three from the translation. In our system, the rotation angle around the y-axis and the translation along the baseline are not fixed. These two parameters were selected because they change the visual view of the camera. The calibration process used in stereo vision is the same as for a single camera, with a checkerboard used as a reference for the points in the world coordinate and image processing used to find the points in the image coordinate. The calibration process is first done on each camera separately to find the projection camera matrix for each camera, and then these matrices are used to calculate the essential matrix and find the external geometry parameters between the cameras.

Rectification algorithm

The disparity is the difference in position between corresponding points in the left and right images. The calibration process generates the parameters used to rectify the images, where the rectification process is the transformation of the left and right images so that they share the same horizontal epipolar lines. The rectification process used in this study is based on Bouguet's algorithm [20]. The process starts by dividing the rotation matrix R, which is responsible for rotating the right image into the left image, into two rotation matrices, Rl and Rr, one for each image. These two rotation matrices rotate the left and right images by a half rotation each.
This rotation aligns both image planes with the baseline, but the images are not yet row-aligned. Therefore, we find a correction matrix that maps the epipoles to infinity and aligns the epipolar lines horizontally with the baseline. In the stereo model, it is assumed that the left camera is the origin of the system. Starting with the epipole point \( {e}_{1_l} \) in the left image and connecting it to the epipole point \( {e}_{1_r} \) in the right image, the point is translated along the baseline, which defines the translation vector T. This leads to Eq. (10):

$$ {e}_1=\frac{T}{\left\Vert T\right\Vert }. $$

The second vector, e2, is chosen orthogonal to e1 and to the principal ray, which can be obtained via a cross product; the result is shown in Eq. (11):

$$ {e}_2=\frac{{\left[-{T}_y\ {T}_x\ 0\right]}^T}{\sqrt{T_x^2+{T}_y^2}}. $$

The last vector is e3, which is orthogonal to e1 and e2 and can be calculated via a cross product:

$$ {e}_3={e}_1\times {e}_2. $$

Now, we assemble these vectors into the correction matrix Rcorr, which transforms the epipolar lines to infinity and makes them parallel with the baseline by rotating the image about the projection center.

$$ {R}_{\mathrm{corr}}=\left[\begin{array}{c}{e}_1^T\\ {}{e}_2^T\\ {}{e}_3^T\end{array}\right]. $$

Rcorr is multiplied by the split rotation matrices to form correction rotation matrices for the left and right images.

$$ {R}_{l_{\mathrm{corr}}}={R}_{\mathrm{corr}}\ {R}_l $$

$$ {R}_{r_{\mathrm{corr}}}={R}_{\mathrm{corr}}\ {R}_r. $$

This shows the importance of having the rotation and translation matrices available to rectify an image. The rotation and translation matrices are taken from the essential matrix; i.e., decomposing the essential matrix allows the rotation and translation matrices to be calculated.

Online geometry update

This subsection integrates the above discussion to generate a relationship between the image angle and the motor encoder angle. Mapping from motor space to image space leads to errors if we use the encoder angle directly as the image angle [22]. As explained in the above section, the process is divided into two parts: an offline calibration process and an online geometry update. The offline calibration calculates the essential matrix and the internal parameters of the cameras. The essential matrix is decomposed to generate the rotation and translation matrices. The translation matrix is a pure translation from the left to the right camera origin. In theory, the rotation matrix should be equal to a pure rotation around the y-axis. However, in reality, this assumption is not valid because of the actual installation of the camera on the platform and the installation of the camera sensor. The calibration result returns the rotation matrix including these small rotations around the x- and z-axes. Therefore, the rotation matrix returns three angles. The complete rotation matrix is the product of the individual rotation matrices in XYZ order:

$$ R={R}_x\left(\psi \right)\times {R}_{y}\left(\theta \right)\times {R}_z\left(\phi \right). $$

The rotation matrix is solved to return the individual angles. These angles are recorded as the image space angles. The most important angle is θimg, which changes the angle around the y-axis. The calibration process is done 30 times with different configurations (different verge angles), and each time the encoder verge angle θencoder is recorded. The complete 30-configuration calibration set constituted one run, and 20 runs were performed.
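Eqs. (10)-(16) translate directly into a few lines of code. The sketch below is an illustration under assumed values, not the platform's implementation: it builds Rcorr from a translation vector and recovers the three Euler angles of Eq. (16) from a rotation matrix, with θ the verge-related angle of interest.

```python
# Sketch of the rectification correction matrix of Eqs. (10)-(13) and the
# XYZ Euler decomposition of Eq. (16). The translation values are illustrative.
import numpy as np

def rcorr_from_translation(T):
    """Build R_corr so the epipolar lines become parallel to the baseline."""
    e1 = T / np.linalg.norm(T)                        # Eq. (10)
    e2 = np.array([-T[1], T[0], 0.0])                 # Eq. (11)
    e2 /= np.sqrt(T[0]**2 + T[1]**2)
    e3 = np.cross(e1, e2)                             # Eq. (12)
    return np.vstack((e1, e2, e3))                    # Eq. (13)

def euler_xyz(R):
    """Recover (psi, theta, phi) from R = Rx(psi) Ry(theta) Rz(phi)."""
    theta = np.arcsin(R[0, 2])           # rotation about y (the verge angle)
    psi = np.arctan2(-R[1, 2], R[2, 2])  # small rotation about x
    phi = np.arctan2(-R[0, 1], R[0, 0])  # small rotation about z
    return psi, theta, phi

T = np.array([-120.0, 0.5, 3.6])   # baseline-dominated translation, mm
Rcorr = rcorr_from_translation(T)
print(Rcorr)
print(euler_xyz(np.eye(3)))        # identity rotation -> all angles zero
```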
The data of the calibration process are used to generate a linear relationship between the encoder angle and the image angle:

$$ {\theta}_{\mathrm{img}}=e+\eta \times {\theta}_{\mathrm{encoder}}, $$

where e refers to the error due to mechanical misalignment and lens distortion and η is an estimated factor to correct the encoder angle.

After the rectification of the system, the generated left and right images are used to compute the disparity map. Correspondence is then established, following the extensive literature, for example [23]. The primary function of correspondence is to find the point in the right image that matches the point in the left image and then calculate the difference along the x-axis. This difference is called the disparity. The semi-global matching algorithm (SGM) [24] is used in this study to evaluate the disparity map of the rectified images. SGM is a global stereo matching algorithm using multiple direction searches (pixel-wise) to smoothen the output, where the matching cost used in SGM is mutual information to overcome issues with lighting, different exposure times, and reflection [25]. The pixel-wise method calculates the final disparity by summing the total cost of the disparities at different angles from the scan line. This approach ensures that there is some smoothness in the disparity.

$$ E(D)=\sum \limits_p\left(C\left(p,{D}_p\right)+\sum \limits_{q\in {N}_p}{P}_1T\left[\left|{D}_p-{D}_q\right|=1\right]+\sum \limits_{q\in {N}_p}{P}_2T\left[\left|{D}_p-{D}_q\right|>1\right]\right). $$

Equation (18) represents the cost function minimized by SGM, where p and q are pixel indices in the image, C(p, Dp) is the cost of disparity matching based on intensity, Np represents the neighbors of the pixel p, and P1 and P2 are constraints to penalize changes in the disparity, where P1 penalizes changes equal to 1 and P2 penalizes changes greater than 1 [26].

The disparity map is used to transform each pixel from the 2D image coordinate into a 3D world coordinate [X Y Z]T relative to the camera origin. This process is done using the triangulation approach of Eqs. (19)-(21), where x and y are the coordinates of the object in the image frame relative to the principal point, b is the baseline, d is the disparity, and f represents the focal length.

$$ Z=f\ast \frac{b}{d} $$

$$ X=x\ast \frac{Z}{f} $$

$$ Y=y\ast \frac{Z}{f}. $$
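As a concrete instance of Eqs. (19)-(21), the following sketch back-projects one pixel to camera coordinates; the focal length, baseline, and disparity values are illustrative, not measurements from the platform:

```python
# Sketch of the disparity-to-depth triangulation of Eqs. (19)-(21).

def reproject(x, y, d, f, b):
    """Back-project an image point (x, y) with disparity d to camera X, Y, Z."""
    Z = f * b / d          # Eq. (19): depth from disparity
    X = x * Z / f          # Eq. (20)
    Y = y * Z / f          # Eq. (21)
    return X, Y, Z

# Example: f = 800 px, baseline b = 120 mm, disparity d = 48 px
print(reproject(x=35.0, y=-12.0, d=48.0, f=800.0, b=120.0))
# -> (87.5, -30.0, 2000.0), in millimeters
```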
The platform used in this work is explained in detail in our previous work [27]. The setup of the experimental system was divided into two configurations. The first configuration collected the data for the calibration process to find the actual parameters of the platform. The second configuration evaluated the new calibration algorithm.

This section explains the process of obtaining the data used to extract the parameters of the active stereo vision system. Exploring the parameters of the system and comparing the image space to the motor space required generating data for different platform setups, which meant setting different verge angles and baselines; 30 configurations of varying verge angles and five configurations for the baseline were selected. In each configuration, a calibration process was performed as explained in Section 2.2 to find the parameters of the system using a calibration board. The board consists of an 8 × 6 grid of internal corners formed by black and white squares, each 34.5 mm in height and width. The algorithm used to find the corners on the checkerboard detected the 48 internal corners on the board.

For a robust calibration, 15 images were taken of the calibration board at various positions and orientations, as recommended by Bradski and Kaehler (2008). To accelerate and improve the collection of data, the calibration process was automated using a Baxter robot, as explained in [27]. Automating the calibration process reduced the time required to complete the calibration by a factor of three and improved the calibration result. Figure 3 shows the data collection setup, where the platform was installed in front of Baxter at a distance of 2 m, and the calibration board was fixed on the arm of the robot. A total of 40 positions and orientations of the board were pre-recorded using the Baxter teaching methods. A desktop PC was used to control Baxter, and a laptop was used to control the platform and perform the calibration process. A UDP connection was used to communicate between the PC and the laptop.

Fig. 3. Baxter holding the checkerboard while the rig works on the calibration (in the lower left of the figure)

Figure 4 shows a flowchart of the calibration process, where the process starts by setting the verge angle. The second step is to find the corners and to move the arm to a new position. This step is repeated until 15 sets of images have been taken successfully with the corners detected. Then, the calibration process is started, and the quality of the calibration is evaluated at the same time: when the output meets the requirement that the projection error is less than 0.1, the calibration is a success, and the system moves to a new configuration. If the projection error is larger than 0.1, the process repeats until it meets the requirement. This algorithm was repeated 20 times to generate data for the analysis. The same process was used to calibrate the baseline.

Fig. 4. Flowchart of the automated calibration process

The calibration algorithm results in rectified images where the epipolar lines of the left and right images become co-linear and parallel with the horizontal axis. To measure the performance of this rectification, a projection error measurement was used as described in [21]. The projection error is defined as the difference between the point's y-coordinate in the left image and the point's y-coordinate in the right image, as shown in Fig. 5 [28].

Fig. 5. Definition of the error generated in the rectified images

A calibration board was placed in different locations and orientations at distances between 1.5 and 2.5 m from the platform. This allowed us to obtain more data and to evaluate the calibration algorithm more accurately. As explained in Section 2.4, the geometry of the system needs to be updated when the configuration of the platform changes, and rectified images should then be generated. The rectified images are the output of the calibration algorithm, and these two images are used to evaluate the quality of the calibration. The evaluation algorithm uses the calibration board to detect the corners in the left and right images and then calculates the root mean square (RMS) error of Eq. (22) over the n matched corners. The output value is in units of pixels.

$$ \mathrm{error}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}{\left({y_l}_i-{y_r}_i\right)}^2} $$
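Eq. (22) amounts to a one-line computation over matched corners. A minimal sketch with made-up corner coordinates (not measurements from the experiment):

```python
# Sketch of the rectification-quality metric of Eq. (22): the RMS of the
# vertical offsets between matched checkerboard corners.
import numpy as np

yl = np.array([101.2, 153.8, 206.1])   # corner y-coordinates, left image
yr = np.array([100.9, 154.4, 205.8])   # matching corners, right image

rms = np.sqrt(np.mean((yl - yr) ** 2))  # Eq. (22), in pixels
print(f"projection error: {rms:.3f} px")
```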
Surface comparison

The data generated from the disparity map are used to create a 3D point cloud relative to the system origin, i.e., in the physical dimensions of the scene. These data are used to evaluate the quality of the point cloud generated by the system. A spherical object was placed in front of the system, and a 3D point cloud was generated for this sphere. These data were then compared with the ground truth of the sphere, which was generated from a 3D model. The iterative closest point (ICP) algorithm was used to translate and rotate the source point cloud onto the reference by minimizing the differences between them [29]; that is, ICP was used to align the two point clouds. ICP uses four steps in the alignment process, as described in the work of Rusinkiewicz and Levoy [30]:

1. Establish correspondences between the points; the selection strategy starts by picking points with a uniform distribution.
2. Use singular value decomposition to compute the rotation and translation between the reference and source point clouds.
3. Apply the rotation and translation to the registered point cloud.
4. Calculate the error between the corresponding points using the SSD.

These steps are repeated until the error reaches the threshold value.

To evaluate the generated sample (S) point cloud of the platform, it was compared with the reference point cloud generated from a model, which we refer to as the ground truth (G). The Euclidean distance, Eq. (23), is computed between each point in the source and the nearest point in the reference point cloud, and the differences between the sample and the ground truth are summarized as the RMS of these distances:

$$ \mathrm{RMS}_{\mathrm{error}} = \sqrt{ \left( S_x - G_x \right)^2 + \left( S_y - G_y \right)^2 + \left( S_z - G_z \right)^2 }. $$
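A minimal sketch of this comparison, assuming the clouds have already been aligned (e.g., by ICP) and using a k-d tree for the nearest-neighbour search; SciPy is an assumption here, not the authors' tooling.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_rms(sample, ground_truth):
    """RMS of the Euclidean distances (Eq. 23) from each sample point
    to its nearest ground-truth point. Inputs are (N, 3) arrays."""
    tree = cKDTree(ground_truth)
    dists, _ = tree.query(sample)      # nearest-neighbour distance per sample point
    return float(np.sqrt(np.mean(dists ** 2)))
```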
In the experiment, three spheres with different diameters (80, 120, and 150 mm) were used. CAD software was used to generate the ground truth, which was then converted to a point cloud with a subsampling distance between points of 1 mm in all directions (Fig. 6a).

Fig. 6 a Point cloud of the ground truth for a sphere with a diameter of 120 mm; b generated point cloud of a sphere with a diameter of 120 mm

The point cloud generated by the platform is shown in Fig. 6b prior to post-processing; the post-processing, which removes the surrounding points that do not belong to the sphere, was done using the Point Cloud Library [31]. The setup of the experiment is shown in Fig. 7.

Fig. 7 The setup for the shape reconstruction using a sphere with a diameter of 120 mm

The data were collected at different configurations (verge angles from −6° to 12° and baselines from 55 to 250 mm) while the ball was placed at different positions between 1 and 2.5 m from the platform. A set of 10 samples was taken at each configuration.

Offline calibration

The results of the offline calibration allow us to understand the geometry of the platform in depth; these data show the manufacturing tolerance and the repeatability of the motors. As explained in Section 2.4, the only variable axes are the verge angle (yaw) and the baseline (translation along the x-axis), whereas the other axes are fixed, i.e., the pitch and roll angles and the translations along the y- and z-axes; these should remain fixed across the different configurations. The values of the roll and pitch angles are shown in Fig. 8: the roll angle is 0.526°, with a margin of error of ±0.047°, and the pitch angle is −0.433°, with a margin of error of ±0.015°. These two values result from assembly misalignment of the platform and cameras; as a technical note, the Flea3 Point Grey cameras (FL3-U3-120S3C-C) have a sensor-assembly accuracy of ±0.5°. The same applies to the translations along the y- and z-axes (Fig. 9). As shown in Fig. 9, the z-axis reading is 3.6 mm, with a large margin of error of ±2.3 mm, which results from the difficulty of identifying the optical centers of the cameras. This would cause errors in distance measurement if the value were assumed to be fixed; to resolve the z-axis error, a relationship was computed from the calibration data to update the z-axis whenever the configuration changes.

Fig. 8 The result of the offline calibration process for the roll and pitch angles

Fig. 9 The result of the offline calibration process for the translations along the y- and z-axes

Theoretically, the verge angle is directly correlated with the motor angle. After processing the data from the offline calibration, the raw verge-angle data were plotted against the sum of the encoder angles (Fig. 10). As shown in Fig. 10, the image angle generated by the offline calibration and the encoder angle exhibit a linear relationship with a coefficient of determination of 99.93%. From the data, η is equal to 0.9641 and the error value e is equal to 0.5786; inserting these values into Eq. (17) yields Eq. (24):

$$ \theta_{\mathrm{img}} = 0.5786 + 0.9641 \, \theta_{\mathrm{encoder}}. $$

Fig. 10 The image angle versus the motor angle. The image angle was calculated using the stereo calibration process, and the motor angle was measured using the encoders

Accordingly, Eq. (24) was used to update the image angle from the encoder-angle reading of the motor, which improved the updates of the geometry of the system. Compared with the result of Dankers et al. [14], the epipolar geometry is updated by a more accurate process here, because the platform was studied in more detail before starting the online update. This result will help improve vision in humanoids, manipulator arms, and mobile robots that use active stereo vision, and it extends the working volume of binocular vision. Equation (24) was used to calculate the image angle from the encoder-angle input; the new image angle was then used to rectify the images. This process was carried out during the online running time, as described in Section 2.4.
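In code, the online correction is just the evaluation of the fitted line. The sketch below uses the coefficients reported above; the comment about rectification describes the intended use rather than the authors' actual interface.

```python
# Coefficients of Eq. (24), taken from the offline calibration fit above.
E_OFFSET, ETA = 0.5786, 0.9641

def image_angle(encoder_deg):
    """Predict the image-space verge angle (degrees) from the encoder reading."""
    return E_OFFSET + ETA * encoder_deg

# The corrected angle would then parameterize the rotation between the two
# cameras when the rectification maps are rebuilt for the new configuration.
theta_img = image_angle(6.0)   # e.g. a 6 deg encoder reading -> ~6.36 deg
```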
To evaluate the new algorithm, the projection error was used as described in the experimental section. The results are shown in Fig. 11; they were collected at different verge angles and baselines, and the experiment was repeated 20 times. In general, the results show that the platform and the online calibration algorithm are repeatable within a margin of ±0.5 pixels, which gives us confidence in the ability of the platform to repeat tasks.

Fig. 11 Projection error at different verge angles and baselines; the error in the points is ±0.233 pixels

Figure 11 indicates that the projection error has a linear relationship with the verge angle when the baseline is small, e.g., at a baseline of 55 or 100 mm. However, the projection error increases with increasing baseline. This could result from the misalignment in the roll angle, which was set in the opposite direction, or from a y-displacement misalignment introduced during manufacturing, whose effect increases with the baseline. Moreover, the projection error increases as the divergence angle increases and drops once the platform starts to verge; the error is not constant because of the position of the target: as the images begin to overlap, the error decreases. Figure 11 shows that when the verge angle starts to increase, the projection error starts to decrease as the target approaches the horopter. At an angle of 6°, the projection error drops because of the position of the target, which leads to zero disparity; zero disparity reduces the disparity search range and the error in the depth measurement.

A set of rectified images captured at different verge angles is shown in Fig. 12. The colored lines show the epipolar lines, on which corresponding pixels of the left and right images lie. Figure 12a was captured with parallel focal axes, and the remaining images were taken in 2° increments. The figure shows that the image size decreases with increasing verge angle; the red square represents the image size after rectification.

Fig. 12 Rectified images using the online-updated geometry. The lines represent the epipolar lines, and the red square shows the size of the image after rectification: a with parallel focal axes, b at an angle of 2°, c at an angle of 4°, d at an angle of 6°, e at an angle of 8°, and f at an angle of 10°

Figure 13 shows the disparity maps of the rectified images at different verge angles; the maps show the box that was used to evaluate the process. The correspondence process was based on the SGM algorithm with a window size of 5 × 5 pixels and a disparity number of 256. The window size was selected based on the output of the projection-error analysis (Fig. 11) so as to cover the potential error in the rectified image; at the same time, windows of this size sharpen features, as discussed in [18]. As shown in Fig. 13, the disparity map becomes denser with increasing verge angle; the result in Fig. 13f, at an angle of 10°, is due to the overlap of the images. Because the disparity map only supports a visual analysis, the next section generates a point cloud for comparison with the ground truth.

Fig. 13 Disparity map of a box used to evaluate the projection error: a with parallel focal axes, b at an angle of 2°, c at an angle of 4°, d at an angle of 6°, e at an angle of 8°, and f at an angle of 10°
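For reference, the following sketch shows how a matcher with the stated parameters (5 × 5 window, 256 disparities) might be configured using OpenCV's semi-global matcher. Note that OpenCV's SGBM does not use the mutual-information cost of [24], so this is a stand-in for the implementation used here, not a reproduction of it; the penalty values follow a common OpenCV heuristic.

```python
import cv2

sgm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=256,     # disparity search range (OpenCV requires a multiple of 16)
    blockSize=5,            # the 5 x 5 matching window used above
    P1=8 * 5 * 5,           # penalty for disparity changes of exactly 1 (cf. Eq. 18)
    P2=32 * 5 * 5,          # larger penalty for changes greater than 1
)
# OpenCV returns fixed-point disparities scaled by 16:
# disparity = sgm.compute(rect_left, rect_right).astype("float32") / 16.0
```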
To demonstrate the quality of the disparity map, the disparity was converted into a point cloud using the triangulation equations described in Section 2.6.3. The ground-truth point cloud was generated from a CAD model. A sample of the data used in the comparison is shown in Fig. 14.

Fig. 14 A sample of a post-processed point cloud used in the comparison for a sphere with a diameter of 120 mm

Figures 15, 16, and 17 show the results of computing the RMS between the ground truth and the sample for the three sphere sizes (80, 120, and 150 mm); the results describe the summed differences of the points from the ground truth. Five baselines (55, 100, 150, 200, and 250 mm) were used to generate samples at verge angles from −6° to 12° in steps of 2°. The overall result has the same shape as that of the projection error (Fig. 11): the 100 mm baseline yields the lowest RMS, and increasing the baseline increases the RMS. The 55 mm baseline nevertheless has the highest RMS in all three cases, because the error in measuring depth grows as the baseline shrinks, as described in [32].

Fig. 15 RMS error for a sphere with a diameter of 80 mm at different baselines and verge angles

Fig. 16 RMS error for a sphere with a diameter of 120 mm at different baselines and verge angles

The RMS is approximately constant across the different verge angles, with a slight drop at larger verge angles; this is because the overall projection error was four pixels, and a five-pixel disparity window was used to overcome mismatching along the scan line. Such an approximately equal RMS at different verge angles could be misread as meaning that the variable verge angle plays no role in computing the disparity. However, post-processing was performed on the samples, which reduces the number of points entering the RMS computation, and, as shown in Section 3.2, the disparity range becomes smaller as the verge angle increases. Moreover, the measurable depth approaches the origin of the system: with parallel focal axes the minimum depth was 1 m, whereas with verging the depth converged to 0.5 m. The size of the sphere does not affect the result of the object reconstruction; all results had an average RMS of approximately 0.02 mm with a margin of error of ±0.0039 mm at 95% confidence.

The drawback of this algorithm is that the rectified image becomes smaller as the verge angle increases. This occurs because of the behavior of the epipolar lines at a verge angle (Fig. 18): the rectification process makes these lines parallel with the baseline, and therefore the new image becomes smaller.

Fig. 18 Epipolar lines before rectification at a verge angle of 8°: a left image and b right image

Conclusions

An active stereo vision platform with 3 degrees of freedom, providing individual camera pans with a shared variable baseline, was constructed and assessed for its depth resolution and repeatability. A study was performed using both traditional stereo disparity estimation and the camera verge angle to provide depth information. The problem of computing the epipolar geometry of an active stereo vision system was studied in order to avoid traditional methods that rely on feature-based algorithms [12]. A relationship was found between the image angle and the encoder angle, allowing the epipolar geometry of the system to be updated directly from the encoder reading. An offline calibration process was performed to obtain measurements in the image space of the platform, and these measurements were then used to find the relationship between the image space and the encoder angle. A linear correlation was found between the two, with a shift of approximately 0.58° in image space. The overall measurement of the epipolar geometry in image space was obtained from the offline calibration.

To evaluate the performance of the rectification algorithm, the projection error based on the SSD [21] was used. The maximum projection error generated by the platform at divergence is ±5 pixels; once the platform starts to verge, the error drops to ±1.24 pixels at 12°. This compares with ±2.38 pixels in the work of Hart et al. [17]. These results show that increasing the baseline increases the projection error, whereas increasing the verge angle decreases both the projection error and the effect of overlap between the two images. A drawback of the algorithm is that the size of the newly rectified images becomes smaller as the verge angle increases; the maximum verge angle at which the images remain usable is 20°.
The disparity map depends on the quality of the rectification algorithm: the better the rectification, the better the disparity map. Experiments to evaluate the disparity map were therefore conducted, and the disparity maps show clear results in the different configurations. Point-cloud comparisons with ground-truth datasets were made to evaluate the quality of the reconstructed shapes; these comparisons show that the shape quality has an average standard deviation of 0.0142 m and a margin of ±0.0039 m. Overall, the system improves the quality of the disparity map by controlling the baseline and the verge angle. One of the main advantages of the system is the capability to focus on one target and reconstruct its 3D shape using a small disparity search range; as a result, the system extends the working volume of robots. Future studies will automate the choice of the optimal baseline and verge angle based on the object position in order to reduce the error. In addition, the platform will be integrated with the GummiArm robot to harvest tomatoes.

Abbreviations

CCD: Charge-coupled device
CMOS: Complementary metal-oxide semiconductor
ICP: Iterative closest point
PCL: Point Cloud Library
PPM: Perspective projection matrix
RMS: Root mean square
SGM: Semi-global matching
SSD: Sum of squared differences

References

1. PF Luo, YJ Chao, MA Sutton, Application of stereo vision to three-dimensional deformation analyses in fracture experiments. Optical Engineering 33, 81 (1994). https://doi.org/10.1117/12.160877
2. JJ Aguilar, F Torres, MA Lope, Stereo vision for 3D measurement: accuracy analysis, calibration and industrial applications. Measurement 18, 193–200 (1996). https://doi.org/10.1016/S0263-2241(96)00065-6
3. P Li, W Chong, Y Ma, Development of 3D online contact measurement system for intelligent manufacturing based on stereo vision, in AOPC 2017: 3D Measurement Technology for Intelligent Manufacturing, ed. by W Osten, AK Asundi, H Zhao (SPIE, 2017), p. 66. https://doi.org/10.1117/12.2285542
4. E Ivorra, AJ Sánchez, JG Camarasa, MP Diago, J Tardaguila, Assessment of grape cluster yield components based on 3D descriptors using stereo vision. Food Control 50, 273–282 (2015). https://doi.org/10.1016/J.FOODCONT.2014.09.004
5. C Wang, X Zou, Y Tang, L Luo, W Feng, Localisation of litchi in an unstructured environment using binocular stereo vision. Biosystems Engineering 145, 39–51 (2016). https://doi.org/10.1016/J.BIOSYSTEMSENG.2016.02.004
6. MF Stoelen, F Bonsignorio, A Cangelosi, Co-exploring actuator antagonism and bio-inspired control in a printable robot arm, in From Animals to Animats 14, ed. by E Tuci et al. (Springer International Publishing, Cham, 2016), pp. 244–255
7. Z Zhang, A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 1330–1334 (2000). https://doi.org/10.1109/34.888718
8. E Trucco, A Verri, Introductory Techniques for 3-D Computer Vision (Prentice Hall PTR, Upper Saddle River, NJ, USA, 1998). https://dl.acm.org/citation.cfm?id=551277
9. E Krotkov, K Henriksen, R Kories, Stereo ranging with verging cameras. IEEE Transactions on Pattern Analysis and Machine Intelligence 12, 1200–1205 (1990). https://doi.org/10.1109/34.62610
10. S De Ma, A self-calibration technique for active vision systems. IEEE Trans. Robot. Autom. 12, 114–120 (1996). https://doi.org/10.1109/70.481755
11. QT Luong, OD Faugeras, Self-calibration of a moving camera from point correspondences and fundamental matrices. Int J Comput Vis 22, 261–289 (1997). https://doi.org/10.1023/A:1007982716991
12. M Bjorkman, JO Eklundh, Real-time epipolar geometry estimation of binocular stereo heads. IEEE Trans. Pattern Anal. Mach. Intell. 24, 425–432 (2002). https://doi.org/10.1109/34.990147
13. NA Thacker, JE Mayhew, Optimal combination of stereo camera calibration from arbitrary stereo images. Image Vis. Comput. 9, 27–32 (1991). https://doi.org/10.1016/0262-8856(91)90045-Q
14. A Dankers, N Barnes, A Zelinsky, Active vision – rectification and depth mapping, in Australian Conference on Robotics and Automation (2004)
15. A Fusiello, E Trucco, A Verri, A compact algorithm for rectification of stereo pairs. Mach. Vis. Appl. 12(1), 16–22 (2000). https://doi.org/10.1007/s001380050003
16. H Kwon, J Park, AC Kak, A new approach for active stereo camera calibration, in Proceedings 2007 IEEE International Conference on Robotics and Automation (IEEE, 2007), pp. 3180–3185. https://doi.org/10.1109/ROBOT.2007.363963
17. J Hart, B Scassellati, SW Zucker, Epipolar geometry for humanoid robotic heads, in Cognitive Vision, ed. by B Caputo, M Vincze (Springer, Berlin, Heidelberg, 2008), pp. 24–36. https://doi.org/10.1007/978-3-540-92781-5_3
18. R Szeliski, Computer Vision: Algorithms and Applications, 1st edn. (Springer-Verlag, Berlin, 2010). https://dl.acm.org/citation.cfm?id=1941882
19. M Sapienza, M Hansard, R Horaud, Real-time visuomotor update of an active binocular head. Autonomous Robots 34(1–2), 35–45 (2013). https://doi.org/10.1007/s10514-012-9311-2
20. GR Bradski, A Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library (O'Reilly, 2008)
21. R Hartley, A Zisserman, Multiple View Geometry in Computer Vision, 2nd edn. (Cambridge University Press, New York, 2003). https://dl.acm.org/citation.cfm?id=861369
22. N Kyriakoulis, A Gasteratos, SG Mouroutsos, Fuzzy vergence control for an active binocular vision system, in 7th IEEE International Conference on Cybernetic Intelligent Systems, CIS 2008 (IEEE, 2008), pp. 1–5. https://doi.org/10.1109/UKRICIS.2008.4798931
23. D Scharstein, R Szeliski, A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vis. 47, 7–42 (2001). https://doi.org/10.1023/A:1014573219977
24. H Hirschmuller, Accurate and efficient stereo processing by semi-global matching and mutual information, in Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition (CVPR) 2, 807–814 (2005)
25. C Banz, S Hesselbarth, H Flatt, H Blume, P Pirsch, Real-time stereo vision system using semi-global matching disparity estimation: architecture and FPGA-implementation, in 2010 International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation (IEEE, 2010), pp. 93–101. https://doi.org/10.1109/ICSAMOS.2010.5642077
26. H Hirschmuller, Stereo processing by semiglobal matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 30, 328–341 (2008). https://doi.org/10.1109/TPAMI.2007.1166
27. A Mohamed, PF Culverhouse, R De Azambuja, A Cangelosi, C Yang, Automating active stereo vision calibration process with cobots. IFAC-PapersOnLine 50, 163–168 (2017). https://doi.org/10.1016/J.IFACOL.2017.12.030
28. DA Forsyth, J Ponce, Computer Vision: A Modern Approach (Prentice Hall Professional Technical Reference, 2002)
29. PJ Besl, ND McKay, A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992)
30. S Rusinkiewicz, M Levoy, Efficient variants of the ICP algorithm, in Proceedings Third International Conference on 3-D Digital Imaging and Modeling (2001), pp. 145–152
31. RB Rusu, S Cousins, 3D is here: Point Cloud Library (PCL), in 2011 IEEE International Conference on Robotics and Automation (IEEE, 2011), pp. 1–4. https://doi.org/10.1109/ICRA.2011.5980567
32. T Dang, C Hoffman, C Stiller, Continuous stereo self-calibration by camera parameter tracking. IEEE Transactions on Image Processing 18(7), 1536–1550 (2009)

This work was supported by the GummiArm robot project. The data are available at https://github.com/ahmohamed1/activeStereoVisionPlatform.git.

Centre for Robotics and Neural Systems, Plymouth University, Plymouth, UK: Abdulla Mohamed, Phil Culverhouse, Angelo Cangelosi & Chenguang Yang
Zienkiewicz Centre for Computational Engineering, Swansea University, Swansea, UK: Chenguang Yang

The other authors are the first author's supervisors at the Centre for Robotics and Neural Systems, University of Plymouth, and the Zienkiewicz Centre for Computational Engineering, Swansea University. This work is part of a PhD program. AM wrote the paper; PC supervised the work and revised the paper at all stages; AC and CY revised the paper.

Correspondence to Abdulla Mohamed.

Mohamed, A., Culverhouse, P., Cangelosi, A. et al. Active stereo platform: online epipolar geometry update. J Image Video Proc. 2018, 54 (2018). https://doi.org/10.1186/s13640-018-0292-8

Accepted: 20 June 2018

Keywords: Active stereo vision; Epipolar geometry; Real-time update
The influence of a magnetic field on photon beam radiotherapy in a normal human TK6 lymphoblastoid cell line

B. Yudhistiara (1, 5), F. Zwicker (3), K. J. Weber (1, 5), P. E. Huber (1, 3), A. Ruehle (1, 3), S. Brons (2, 5), P. Haering (4), J. Debus (1, 2, 5, 6) and S. H. Hauswald (1, 2, 5, 6; corresponding author)

Radiation Oncology 2019, 14:11. Received: 30 August 2018

The implementation of magnetic resonance imaging (MRI) guided radiotherapy (RT) continues to increase, yet very limited in-vitro data on the interaction of ionizing radiation and magnetic fields (MF) have been published. In these experiments we focused on the radiation response in a MF of TK6 human lymphoblastoid cells, which are known to be highly radiosensitive due to efficient radiation-induced apoptosis. Clonogenicity was determined 12–14 days after irradiation with 1–4 Gy of 6 MV photons, with or without a 1.0 Tesla MF. Furthermore, alterations in cell-cycle distribution and the rates of radiation-induced apoptosis (FACS analysis of cells with sub-G1 DNA content) were analyzed. Clonogenic survival showed an exponential dose dependence, and the radiation sensitivity parameter (α = 1.57/Gy) was in accordance with earlier reports. Upon comparing the clonogenic survival between the two groups, identical results within error bars were obtained; the survival fractions at 2 Gy were 9% (without MF) and 8.5% (with MF), respectively. A 1.0 Tesla MF does not affect the clonogenicity of TK6 cells irradiated with 1–4 Gy of 6 MV photons. This supports the use of MRI-guided RT; however, ongoing research on the interaction of MFs and radiotherapy is warranted.

Keywords: MRI guided radiotherapy; MR Linac; In-vitro experiment; Normal human cells; TK6 human lymphoblastoid cells

Magnetic resonance image (MRI) guided radiation therapy (RT) is rapidly expanding owing to its superior soft-tissue contrast compared with computed tomography (CT) [1]. Despite the tremendous effort invested in developing suitable technologies, concerns about possible alterations in the biological response to a given therapeutic radiation dose in the presence of a magnetic field (MF) have received much less attention. While there is consensus that current clinical MRI technologies can be safely applied [2], some observed physiological alterations [3] could become relevant when radiation damage is concomitantly being processed in a cell; such arguments are, however, mere speculation at present. More obvious is the possibility that the secondary electrons released after photon energy absorption are subject to the Lorentz force in a MF. This would not only cause distortions of the dose distribution at air/tissue interfaces [4] but could hypothetically also lead to clustering of DNA damage from such "backcircling" electrons [5, 6]. Only a few researchers have addressed this subject experimentally, using different model systems ranging from baker's yeast [7] to mouse mammary tumor cells [8] and Chinese hamster lung cells [9]. The common finding of these studies is the lack of a statistically significant increase in radiation response due to static MFs with field strengths ranging from 0.14 to 2 Tesla. A similar observation was reported from our laboratory, where the clonogenic survival of human tumor cells (WIDR colon adenocarcinoma and A549 lung carcinoma) was assessed after 6 MV photons (up to 8 Gy) in the absence or presence of a 1 Tesla static MF [10].
However, given potential hazards from some unknown interaction phenomenon in the ionizing radiation–MF combination, it is reasonable to assume that the normal-tissue cell response, rather than the tumor-cell response, is the more relevant question. Therefore, we assessed the radiation response of TK6 human lymphoblastoid cells, which are known to be highly radiosensitive due to efficient radiation-induced apoptosis, a mechanism frequently decreased or abrogated in tumor cells [11, 12]. The TK6 cell line with wild-type p53 function was used in our experiments.

TK6 cells (human lymphoblastoid cells from spleen) were originally provided by the Tumorbank of the German Cancer Research Center (DKFZ), Heidelberg, Germany. DNA cell-line authentication was done by Eurofins Medigenomix Forensik GmbH, Ebersberg, Germany. Identically to Schäfer's work [11], the cells were cultured in suspension at 37 °C in a humidified atmosphere (6% CO2). The medium used was RPMI 1640 fortified with 10% heat-inactivated horse serum (Gibco) and 1% penicillin. The cell density was kept at 0.1 to 1.0 × 10^6/ml by subculturing at regular intervals.

The MF was generated using a pair of magnetic coils with an adjustable distance between the two poles; fields of up to 1.5 Tesla can be generated by passing an electric current through the coils. The current needed to generate the desired MF strength was determined beforehand using a probe placed between the coils, from which the MF strength can be read off. A special container made of VeroClear RGD810 (Fig. 1) was then placed between the poles, creating a 3.5 cm space between the poles in which the test tube containing the cells was placed. To dissipate the heat generated by the electric current, a water-cooling system ensured a constant water flow into and out of the phantom, such that no cell death occurred due to overheating. Using the in-room laser positioning system, the cells were placed at the isocenter of the linear accelerator. Following the setup in Ziles' work [10], the irradiation field was 10 cm in length and 3.3 cm in width, covering the whole tube containing the cells. The overall experimental setup with its relevant parameters can be seen in Figs. 1 and 2.

Fig. 1 a, b Blueprint of the apparatus used in our experiments, with dimensions in mm. © Armin Runz, DKFZ

Fig. 2 Diagrams of the experimental setup (not drawn to scale) in lateral (a) and aerial (b) views

Radiation and clonogenic survival

Photon beams were generated using a linear accelerator (model Siemens Artiste 2, 6 MV) at the German Cancer Research Center (DKFZ). The clonogenic survival of the TK6 cells was determined by plotting their survival curves with and without a MF. Raw vitality data were obtained as plating efficiencies, and these values were determined in at least three independent experiments. Since our cells are not adherent, a microtiter assay using 96-well plates was used. Cultured cells were centrifuged, the supernatant was discarded, and the cells were resuspended in 10 ml of fresh medium. A concentration of 3.0 × 10^5 cells ml^−1 was then used for each test tube. The test tubes were chilled on ice before and after irradiation to slow down metabolic processes. After irradiation, the cells were plated in the 96 wells such that each well contained a specific number of cells. Table 1 summarizes the number of cells used in our experiments, according to radiation type and dose.
This number was determined through several pilot tests, such that the number of wells without cell colonies after irradiation lies between 40 and 50 (see Eq. (1)).

Table 1 The number of cells plated within a single well after irradiation, by radiation type (photon beams) and dose [Gy]; the table values are not reproduced here

After 14 days, the number of wells that had changed in color from red to yellow was counted for further calculation. The plating efficiency (PE) is calculated as:

$$ PE = \frac{1}{N} \ln \left( \frac{96}{n} \right), $$

where N is the number of cells plated in a well and n is the number of wells without cell growth. Schäfer's work has shown that n should lie between 40 and 50 to obtain stable results [11]. With the PE value, the survival fraction (SF) is then expressed as:

$$ SF = \frac{PE(\mathrm{treatment})}{PE(\mathrm{control})}, $$

where PE(control) is the plating efficiency obtained at 0 Gy (with and without MF) and PE(treatment) is the one obtained after the cells are irradiated. From three independent experiments, the mean SF and its standard deviation were calculated for each dose (corrected to three decimal places or at least three significant figures). A two-sample t-test was then performed comparing the mean values of the control group (without MF) and the treatment group (with MF); the difference between the two means is considered statistically significant when the p-value is < 0.05. A regression analysis was then performed following the linear-quadratic model (LQ model), which defines the SF as an exponential function of the dose D:

$$ SF = e^{-\alpha D - \beta D^2}, $$

where the coefficients α and β are determined by the regression analysis.

Fluorescence-activated cell sorting (FACS)

FACS was used to determine the shift in the cell-cycle population as well as the treatment-specific apoptosis. Past work has demonstrated the reliability of this method for determining the rate of radiation-induced apoptosis [13]. Irradiated cells were cultured in a new flask containing fresh medium for 8, 10, 12, 14, 24 and 48 h, respectively, after which the cells were fixed using 1 ml of 80% alcohol and stored in the refrigerator at 10 °C. For analysis, the alcohol solution was discarded and the cells were centrifuged; they were then rinsed twice with 2 ml of phosphate-buffered saline (PBS), with centrifugation and removal of the supernatant after each rinse. For staining, we added 900 μl of PBS and 100 μl of propidium iodide, and the tubes were kept in the refrigerator at 10 °C for 12 h to maximize staining. The changes in the percentage of cells in each phase could thereby be tracked. To measure the rate of apoptosis after treatment, the percentage of cells in the sub-G1 population was determined, and the treatment-specific apoptosis (TSA) was calculated with the aid of the following formula:

$$ TSA = \frac{f_x - f_0}{1 - f_0}, $$

where f_x is the total sub-G1-phase fraction after treatment and f_0 is the respective value of the untreated control (non-irradiated cells) from the same experiment. We performed four independent repeats of this experiment and calculated the mean TSA. As for the survival assay, a two-sample unpooled t-test was performed to determine whether the differences in TSA between the two groups (with and without MF) were statistically significant; here, too, the difference between the two means is considered statistically significant when the p-value is < 0.05.
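The following is a minimal sketch of how Eqs. (1)–(3) and the unpooled t-test could be computed; the data values are purely illustrative and not the study's measurements.

```python
import numpy as np
from scipy import stats, optimize

def plating_efficiency(n_empty, cells_per_well):
    """Eq. (1): PE = (1/N) * ln(96 / n)."""
    return np.log(96.0 / n_empty) / cells_per_well

def lq_model(dose, alpha, beta):
    """Eq. (3): linear-quadratic survival model."""
    return np.exp(-alpha * dose - beta * dose ** 2)

# Illustrative values only.
dose = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
sf = np.array([1.0, 0.21, 0.045, 0.009, 0.002])   # Eq. (2): PE(treatment)/PE(control)

(alpha, beta), _ = optimize.curve_fit(lq_model, dose, sf, p0=(1.5, 0.0))

# Welch's (unpooled) two-sample t-test between the groups at one dose:
sf_no_mf = np.array([0.090, 0.093, 0.087])        # illustrative replicates
sf_mf = np.array([0.085, 0.088, 0.082])
t, p = stats.ttest_ind(sf_no_mf, sf_mf, equal_var=False)
```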
Two-factor analysis of variance (two-way ANOVA)

We included a two-way ANOVA as an additional statistical tool, which compares all variances in the control group with those in the treatment group. This analysis was generated using SPSS 23 by IBM.

Clonogenic assay

Table 2 shows the mean SF of TK6 human lymphoblastoid cells after irradiation and 14 days of incubation. Each mean SF was calculated from three independent experiments using the formulae described above; the corresponding t-test and p-values are listed. The survival curves are shown in Fig. 3.

Table 2 The mean survival fractions (SF) of three independent experiments with and without MF, with the respective t-test and p-values (columns: mean SF without MF; mean SF with MF; t-test; p-value); the table values are not reproduced here

Fig. 3 The survival curves generated after regression analysis following the LQ model. Note the proximity of the two curves (with or without magnetic field (MF)) and the overlapping error bars between the mean survival fractions

Cell cycle analysis

Figure 4 shows the results of four identical, independent FACS analyses of the cell-cycle progression (G1/G0, S and G2) after photon irradiation with 4 Gy. After 12 h, the irradiated cells showed a significantly increased number of cells in the G2 phase compared with the controls (p-value < 0.05), regardless of the presence of a MF. Furthermore, there was no significant difference in the progression of the cell cycle between the cells irradiated with a MF and those irradiated without one (p-value > 0.05 for all time intervals and measurement points). Hence, the MF has no influence on the cell-cycle progression after photon irradiation in TK6 cells.

Fig. 4 Cell-cycle analysis with the aid of FACS: a G1/G0, b S and c G2 show the relative number of cells after irradiation with 4 Gy photons (RT) with or without magnetic field (MF). Un-irradiated controls were performed in each experiment. Error bars show the standard deviation of four independent experiments

Note the falling G0/G1 population and the simultaneous increase in the G2 population over time in all four experiments; the solid and dotted curves are almost identical for their respective cell-cycle phase in each experiment.

Treatment-specific apoptosis

Figure 5 shows the calculated treatment-specific apoptosis (TSA, Eq. (4)) plotted against the incubation time after irradiation. As expected, the TSA shows an upward trend as the incubation time increases. The plotted values are shown in Table 3 (corrected to two decimal places or three significant figures).

Fig. 5 Treatment-specific apoptosis (TSA) 8, 10, 12, 14, 24 and 48 h after irradiation of TK6 cell lines with 4 Gy photons, both in the presence and absence of a magnetic field (MF) of 1 Tesla

Table 3 The calculated mean TSA of both groups with the respective t-test and p-values for each time period (columns: time [h]; mean TSA without MF; mean TSA with MF; t-test value; p-value); each mean value was derived from three independent experiments, and the table values are not reproduced here

Two-way ANOVA

The ANOVA tables for each measured parameter are summarized below (Tables 4 and 5). Abbreviations: N = number of statistical cases; df = degrees of freedom; F = F-value used in the two-way ANOVA; Sig. = significance (p-value).

Table 4 Tests of between-subjects effects; dependent variable: survival fraction (columns: type III sum of squares, df, mean square, F, Sig.; rows include corrected model, magnetic field, dose, magnetic field × dose, and corrected total). Footnote: R squared = 0.969 (adjusted R squared = 0.955)

Table 5 Tests of between-subjects effects; dependent variable: TSA (same layout, with the interaction row magnetic field × time). Footnote: R squared = 0.427
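As a sketch of how the same two-way ANOVA could be reproduced outside SPSS, the snippet below uses statsmodels with an interaction term analogous to the "Magnetic Field * Dose" row of Table 4. The data frame contents are illustrative, and type III sums of squares are requested to mirror the SPSS default.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per replicate: surviving fraction, dose, and MF status (illustrative).
df = pd.DataFrame({
    "sf":   [1.00, 0.21, 0.045, 1.00, 0.20, 0.047, 0.98, 0.22, 0.043, 1.02, 0.19, 0.046],
    "dose": [0, 2, 4] * 4,
    "mf":   ["no"] * 3 + ["yes"] * 3 + ["no"] * 3 + ["yes"] * 3,
})

model = ols("sf ~ C(mf) * dose", data=df).fit()    # main effects + interaction
anova_table = sm.stats.anova_lm(model, typ=3)      # type III sums of squares
print(anova_table)
```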
The implementation of MRI-guided RT has been increasing in recent years, with new MR-Linacs being installed in various centers around the world. However, only few data on the interaction of photon-beam radiation and MFs have been published. In this developing field of RT, it is important to analyze potential interaction phenomena of the ionizing radiation–MF combination in tumor cell lines as well as in normal-tissue cell lines. Therefore, the radiation response of TK6 human lymphoblastoid cells, which are known to be highly radiosensitive, was analyzed as an indicator for normal-tissue interactions. Since most MRI systems used in clinical settings utilize a MF of 0.5 to 1.5 T, our experiments were carried out in a MF of 1 T.

Raw vitality data were obtained as plating efficiencies, and these values were determined in at least three independent experiments. The interexperimental variability of the data originates both from stochastic errors (e.g., the precision of pipetting) and from systematic changes of biological factors. The latter is a common change of the inherent plating efficiency of a particular cell preparation (at a given day and subcultivation history), which needs to be accounted for by intraexperimental normalization before repeat experiments can be compared. Accordingly, two approaches can be distinguished: (i) all data (plating efficiencies of a particular experiment) are normalized to the respective control value, yielding the surviving fractions in the simplest manner; this procedure, however, implies that the control value was determined without error, which is not true. (ii) The mathematical function used to describe SF as a function of dose (the LQ model) is fitted to all plating efficiencies of a particular experiment (including the control); this yields a calculated ("best") value at dose zero, which is then used to normalize the plating efficiencies for each independent experiment. Subsequently, the mean values (and standard deviations) from the repeat measurements were calculated, and the fit procedure (i.e., with the LQ model) was applied again.
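A sketch of normalization approach (ii): fit the LQ model, with a free dose-zero value, to the raw plating efficiencies of a single experiment (control included) and normalize by the fitted intercept rather than by the measured control. The numbers are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def pe_model(dose, pe0, alpha, beta):
    """Plating efficiency modeled as PE0 * LQ(dose); PE0 is the fitted dose-zero value."""
    return pe0 * np.exp(-alpha * dose - beta * dose ** 2)

dose = np.array([0.0, 1.0, 2.0, 4.0])           # Gy, illustrative
pe = np.array([0.82, 0.17, 0.036, 0.0015])      # raw plating efficiencies

(pe0, alpha, beta), _ = curve_fit(pe_model, dose, pe, p0=(0.8, 1.5, 0.0))
sf = pe / pe0    # surviving fractions, normalized by the fitted control value
```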
In our experiments, the regression of the SF for the photon beams produced a value of 1.57/Gy for α, which is an indicator of the radiosensitivity of the cell type; the linearity of the curve also confirms the radiosensitivity of the TK6 cell line used. This corresponds to Schäfer's work [11] and therefore supports the reliability of the chosen experimental setup.

Given the observed lack of influence of the MF, the respective statistical tests must be considered. For the clonogenic assay, a two-sample t-test was performed comparing the mean SF between the control and experimental groups. It is assumed that the two groups being tested are independent of each other and normally distributed; the unpooled t-test was selected since it is also assumed that the variances are not equal. Since p > 0.05 in all dose groups, it cannot be concluded that there is a statistically significant difference between the two mean SFs. Similar results were observed when the TSA values of both groups were compared against each other: a p-value > 0.05 for all points in the TSA graph likewise means that no conclusion about a statistically significant difference between the mean TSA of both groups can be reached. In other words, it cannot be concluded that there is a biologically relevant effect on TK6 cells after irradiation in a MF. In addition, the two-way ANOVA test confirms this; the two independent variables are the presence of a MF and the irradiation dose. The significance values listed in the row "Magnetic_Field" of all tables are greater than 0.05, which indicates that, at the 5% significance level, the presence of a MF does not have a statistically significant effect on the dependent variable, be it SF or TSA. Moreover, the significance values in the row "Magnetic_Field * Dose" are all close to 1.00, which means that there is no interaction between these two independent variables. Hence, through these statistical tests, we can conclude that neither negative nor positive effects are observed when TK6 cells are irradiated in a MF of 1 T.

In the cell-cycle experiments, we observed the well-known G2 arrest after photon irradiation [14]; the addition of a MF had no significant influence on the cell-cycle progression after photon irradiation in TK6 cells in our setting. In another experiment from our laboratory, on the clonogenic survival of human tumor cells (WIDR colon adenocarcinoma and A549 lung carcinoma cell lines) treated with photon-beam RT in a 1 Tesla MF, results comparable to this report were found [10].

In summary, the present phenomenological experiment with the normal human TK6 lymphoblastoid cell line as a highly radiosensitive in-vitro system does not indicate any interaction of a static 1 Tesla MF with 1–4 Gy photon-beam RT. Further research on potential interactions of MFs with photon as well as particle-beam treatments is warranted.

We thank Armin Runz (DKFZ) for the creation of Fig. 1. We acknowledge financial support by the Deutsche Forschungsgemeinschaft within the funding program Open Access Publishing, by the Baden-Württemberg Ministry of Science, Research and the Arts, and by Ruprecht-Karls-Universität Heidelberg. Besides this financial support for publishing, this project was not funded by third parties. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

HH, KJW, PH, FZ and JD developed and planned the study. BY, PH, SB, AR, PH and KJW performed the irradiation and were responsible for management of the cells. BY, FZ, PH, SB, KJW, PH and HH participated in writing the manuscript and revising it. HH, KJW, JD, BY, PH and FZ performed data analysis. KJW, PH and HH reviewed all data and statistical analyses. All authors read and approved the final manuscript.

Affiliations: 1. Department of Radiation Oncology, Heidelberg University Hospital, Im Neuenheimer Feld (INF) 400, 69120 Heidelberg, Germany; 2. Heidelberg Ion-Beam Therapy Center (HIT), Im Neuenheimer Feld 450, 69120 Heidelberg, Germany; 3. Clinical Cooperation Unit Molecular Radiation Oncology E055, German Cancer Research Center (DKFZ), Heidelberg, Germany; 4. Department of Radiation Physics E040, German Cancer Research Center (DKFZ), Heidelberg, Germany; 5. National Center for Radiation Research in Oncology (NCRO), Heidelberg Institute for Radiation Oncology (HIRO), Heidelberg, Germany; 6. Clinical Cooperation Unit E050, German Cancer Research Center (DKFZ), Heidelberg, Germany

References

1. Schmidt MA, Payne GS. Radiotherapy planning using MRI. Phys Med Biol. 2015;60(22):R323–R61.
2. Hartwig V, Giovannetti G, Vanello N, Lombardi M, Landini L, Simi S. Biological effects and safety in magnetic resonance imaging: a review. Int J Environ Res Public Health. 2009;6(6):1778–98.
3. Dini L, Abbro L. Bioeffects of moderate-intensity static magnetic fields on cell cultures. Micron. 2005;36(3):195–217.
4. Menten MJ, Fast MF, Nill S, Kamerling CP, McDonald F, Oelfke U. Lung stereotactic body radiotherapy with an MR-linac: quantifying the impact of the magnetic field and real-time tumor tracking. Radiother Oncol. 2016;119(3):461–6.
5. Chen Y. The magnetic confinement of electron and photon dose profiles and the possible effect of the magnetic field on relative biological effectiveness. PhD thesis, University of Michigan; 2005. http://research.physics.lsa.umich.edu/twinsol/Publications/YUThesisPDFCmprsd.pdf
6. Shih CC. High energy electron radiotherapy in a magnetic field. Med Phys. 1975;2(1):9–13.
7. Chen Y, Lau B, Becchetti F. SU-FF-T-405: the effect of magnetic field on relative biological effectiveness in yeast cells. Med Phys. 2007;34(6Part14):2495.
8. Rockwell S. In vivo-in vitro tumor systems: new models for studying the response of tumours to therapy. Lab Anim Sci. 1977;27(5 Pt 2):831–51.
9. Nath R, Schulz RJ, Bongiorni P. Response of mammalian cells irradiated with 30 MV X-rays in the presence of a uniform 20-kilogauss magnetic field. International Journal of Radiation Biology and Related Studies in Physics, Chemistry and Medicine. 1980;38(3):285–92.
10. Ziles D. Die Auswirkung eines statischen Magnetfeldes auf das klonogene Überleben humaner Karzinomzellen nach Photonenbestrahlung in vitro [The effect of a static magnetic field on the clonogenic survival of human carcinoma cells after photon irradiation in vitro]. 2018. URN: urn:nbn:de:bsz:16-heidok-248424.
11. Schäfer JB, Engling A, Little JB, Weber KJ, Wenz F. Suppression of apoptosis and clonogenic survival in irradiated human lymphoblasts with different TP53 status. Radiat Res. 2002;158(6):699–706.
12. Geiger C, Wenz F. Radiation induced chromosome aberrations and clonogenic survival in human lymphoblastoid cell lines with different p53 status. Strahlenther Onkol. 1999;175(6):289–92.
13. Nicoletti I, Migliorati G, Pagliacci MC, Grignani F, Riccardi C. A rapid and simple method for measuring thymocyte apoptosis by propidium iodide staining and flow cytometry. J Immunol Methods. 1991;139(2):271–9.
14. Joiner M, van der Kogel A. Basic Clinical Radiobiology. Great Britain: Hodder Arnold; 2009.
Journal of the American Mathematical Society

Published by the American Mathematical Society, the Journal of the American Mathematical Society (JAMS) is devoted to research articles of the highest quality in all areas of pure and applied mathematics. The 2020 MCQ for the Journal of the American Mathematical Society is 4.79.

The units of a ring spectrum and a logarithmic cohomology operation

by Charles Rezk. J. Amer. Math. Soc. 19 (2006), 969–1014

We construct a "logarithmic" cohomology operation on Morava $E$-theory, which is a homomorphism defined on the multiplicative group of invertible elements in the ring $E^0(K)$ of a space $K$. We obtain a formula for this map in terms of the action of Hecke operators on Morava $E$-theory. Our formula is closely related to that for an Euler factor of the Hecke $L$-function of an automorphic form.
Charles Rezk
Affiliation: Department of Mathematics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61820
MR Author ID: 638495
Email: [email protected]

Received by editor(s): April 5, 2005
Published electronically: February 8, 2006
Additional Notes: This work was supported by the National Science Foundation under award DMS-0203936.
Journal: J. Amer. Math. Soc. 19 (2006), 969–1014
MSC (2000): Primary 55N22; Secondary 55P43, 55S05, 55S25, 55P47, 55P60, 55N34, 11F25
Lebesgue spectrum
From Encyclopedia of Mathematics

A term in spectral theory. Let $A$ be a self-adjoint and $U$ a unitary operator acting in a Hilbert space $H$. The operator $A$, respectively $U$, has a simple Lebesgue spectrum if it is unitarily equivalent to the operator of multiplication by $\lambda$ in a space of complex-valued functions $f(\lambda)$ that are defined on the real axis $\mathbf{R}$, respectively on the circle

$$ S^1 = \{ \lambda : \lambda \in \mathbf{C},\ |\lambda| = 1 \}, $$

and for which

$$ \| f \|^2 = \int | f(\lambda) |^2 \, d\lambda < \infty, $$

where the integration is carried out with respect to the ordinary Lebesgue measure on $\mathbf{R}$, respectively on $S^1$; hence the name Lebesgue spectrum (see Unitarily-equivalent operators). For $U$ this definition is equivalent to the following: in $H$ there is an orthonormal basis $e_j$, $j = 0, \pm 1, \pm 2, \ldots,$ such that $U e_j = e_{j+1}$.

More generally, an operator has a Lebesgue spectrum if $H$ can be decomposed into an orthogonal direct sum of invariant subspaces in each of which the operator has a simple Lebesgue spectrum. Although for a given operator there can be many such decompositions, the number of "summands" in each of them is the same (it may be an infinite cardinal number); this number is called the multiplicity of the Lebesgue spectrum.

Finally, similar concepts can be introduced for one-parameter groups of unitary operators $U(t)$ that are continuous in the weak (or strong, which is the same in the given case) operator topology. By Stone's theorem, $U(t) = e^{iAt}$, where $A$ is a self-adjoint operator (cf. Semi-group of operators; Generating operator of a semi-group). If $A$ has a Lebesgue spectrum of a certain multiplicity, one says that $U(t)$ has the same properties. For example, the group $U(t)$ has a simple Lebesgue spectrum if it is unitarily equivalent to the group $f(\lambda) \rightarrow e^{i\lambda t} f(\lambda)$ in $L_2(\mathbf{R})$, and this group, in turn, is equivalent to the group of shifts $f(\lambda) \rightarrow f(\lambda + t)$ in the same space $L_2(\mathbf{R})$.

References
[a1] H. Helson, "The spectral theorem", Springer (1986)

How to Cite This Entry: Lebesgue spectrum. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Lebesgue_spectrum&oldid=47603

This article was adapted from an original article by D.V. Anosov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
References

[a1] H. Helson, "The spectral theorem", Springer (1986)

This article was adapted from an original article by D.V. Anosov (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.
neverendingbooks

The hype cycle of an idea

Published February 13, 2022 by lievenlb

These three ideas (re)surfaced over the last two decades, claiming to have potential applications to major open problems:

(2000) $\mathbb{F}_1$-geometry tries to view $\mathbf{Spec}(\mathbb{Z})$ as a curve over the field with one element, and mimic Weil's proof of RH for curves over finite fields to prove the Riemann hypothesis.

(2012) IUTT, for Inter Universal Teichmuller Theory, the machinery behind Mochizuki's claimed proof of the ABC-conjecture.

(2014) topos theory: Connes and Consani redirected their RH-attack using arithmetic sites, while Lafforgue advocated the use of Caramello's bridges for unification, in particular the Langlands programme.

It is difficult to voice an opinion about the (presumed) current state of such projects without being accused of being either a believer or a skeptic, resorting to group-think or being overly critical. We lack the vocabulary to talk about the different phases a mathematical idea might be in.

Such a vocabulary exists in (information) technology: the five phases of the Gartner hype cycle to represent the maturity, adoption, and social application of a certain technology:

Technology Trigger
Peak of Inflated Expectations
Trough of Disillusionment
Slope of Enlightenment
Plateau of Productivity

This model can then be used to gauge in which phase several emerging technologies are, and to estimate the time it will take them to reach the stable plateau of productivity. Here's Gartner's recent Hype Cycle for emerging Artificial Intelligence technologies.

Picture from Gartner Hype Cycle for AI 2021

What might these phases be in the hype cycle of a mathematical idea?

Technology Trigger: a new idea or analogy is dreamed up, marketed to be the new approach to that problem. A small group of enthusiasts embraces the idea, and tries to supply proper definitions and the very first results.

Peak of Inflated Expectations: the idea spreads via talks, blogposts, mathoverflow and twitter, and now has enough visibility to justify the first conferences devoted to it. However, all this activity does not result in major breakthroughs and doubt creeps in.

Trough of Disillusionment: the project ran out of steam. It becomes clear that existing theories will not lead to a solution of the motivating problem. Attempts by key people to keep the idea alive (by lengthy papers, regular meetings or seminars) no longer attract new people to the field.

Slope of Enlightenment: the optimistic scenario. One abandons the original aim, ditches the myriad of theories leading nowhere, regroups and focusses on the better ideas the project delivered. A negative scenario is equally possible: apart from a few die-hards, the idea is abandoned, and on its way to the graveyard of forgotten ideas.

Plateau of Productivity: the polished surviving theory has applications in other branches and becomes a solid tool in mathematics.

It would be fun to see more knowledgeable people draw such a hype cycle graph for recent trends in mathematics. Here's my own (feeble) attempt to gauge where the three ideas mentioned at the start are in their cycles, and here's why:

IUTT: recent work of Kirti Joshi, for example this, and this, and that, draws from IUTT while using conventional language and not making exaggerated claims.

$\mathbb{F}_1$: the preliminary programme of their seminar shows little evidence the $\mathbb{F}_1$-community learned from the past 20 years.
Topos: developing more general theory is not the way ahead, but concrete examples may carry surprises, even though Gabriel's topos will remain elusive.

Clearly, you don't agree, and that's fine. We now have a common terminology, and you can point me to results or events I must have missed, forcing me to redraw my graph.

Smirnov on $\mathbb{F}_1$ and the RH

Published January 21, 2022 by lievenlb

Wednesday, Alexander Smirnov (Steklov Institute) gave the first talk in the $\mathbb{F}_1$ world seminar. Here's his title and abstract:

Title: The 10th Discriminant and Tensor Powers of $\mathbb{Z}$

"We plan to discuss very shortly certain achievements and disappointments of the $\mathbb{F}_1$-approach. In addition, we will consider a possibility to apply noncommutative tensor powers of $\mathbb{Z}$ to the Riemann Hypothesis."

Here's his talk, and part of the comments section:

Smirnov urged us to pay attention to a 1933 result by Max Deuring in Imaginäre quadratische Zahlkörper mit der Klassenzahl 1:

"If there are infinitely many imaginary quadratic fields with class number one, then the RH follows."

Of course, we now know that there are exactly nine such fields (whence there is no 'tenth discriminant' as in the title of the talk), and one can deduce anything from a false statement.

Deuring's argument, of course, was different. The zeta function $\zeta_{\mathbb{Q} \sqrt{-d}}(s)$ of a quadratic field $\mathbb{Q}\sqrt{-d}$ counts the number of ideals $\mathfrak{a}$ in the ring of integers of norm $n$, that is
\[
\sum_n \#(\mathfrak{a}:N(\mathfrak{a})=n)~n^{-s} \]
It is equal to $\zeta(s). L(s,\chi_d)$ where $\zeta(s)$ is the usual Riemann function and $L(s,\chi_d)$ the $L$-function of the character $\chi_d(n) = (\frac{-4d}{n})$.

Now, if the class number of $\mathbb{Q}\sqrt{-d}$ is one (that is, its ring of integers is a principal ideal domain), then Deuring was able to relate $\zeta_{\mathbb{Q} \sqrt{-d}}(s)$ to $\zeta(2s)$ with an error term depending on $d$, and if we let $d \rightarrow \infty$ the error term vanishes. So, if there were infinitely many imaginary quadratic fields with class number one we would have the equality
\[
\zeta(s) . \underset{\rightarrow}{lim}~L(s,\chi_d) = \zeta(2s) \]
Now, take a complex number $s \not=1$ with real part strictly greater than $\frac {1}{2}$, then $\zeta(2s) \not= 0$. But then, from the equality, it follows that $\zeta(s) \not= 0$, which is the RH.
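To make the factorisation $\zeta_{\mathbb{Q}\sqrt{-d}}(s) = \zeta(s) . L(s,\chi_d)$ a bit more tangible, here is a small numeric sanity check in the simplest case $d=1$, the Gaussian integers $\mathbb{Z}[i]$ (my own illustration, not part of Smirnov's talk). Since $\mathbb{Z}[i]$ is a PID with four units, the number of ideals of norm $n$ is one quarter of the number of lattice points on $a^2+b^2=n$, and the factorisation says this equals $\sum_{e | n} \chi_{-4}(e)$:

```python
# Compare Dirichlet coefficients of zeta_{Q(i)} with those of zeta * L(chi_{-4}).
# Ideals of norm n in Z[i]  =  #{(a,b) in Z^2 : a^2 + b^2 = n} / 4   (4 units),
# while the n-th coefficient of zeta * L(chi_{-4}) is the sum of chi_{-4}(e)
# over the divisors e of n, chi_{-4} being the non-trivial character mod 4.

def chi(n):
    return {0: 0, 1: 1, 2: 0, 3: -1}[n % 4]

def ideals_of_norm(n):
    return sum(1 for a in range(-n, n + 1) for b in range(-n, n + 1)
               if a * a + b * b == n) // 4

def coeff_zeta_times_L(n):
    return sum(chi(e) for e in range(1, n + 1) if n % e == 0)

for n in range(1, 30):
    assert ideals_of_norm(n) == coeff_zeta_times_L(n)
print("zeta_{Q(i)} and zeta * L(chi_{-4}) agree on the first 29 coefficients")
```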
It was not so clear to me what this ring is (if you know, please drop a comment), but I guess it must be a commutative ring having all these properties, and being a quotient of the ring $\mathbb{Z} \boxtimes_{\mathbb{F}_1} \mathbb{Z}$, the coordinate ring of the elusive arithmetic plane \mathbf{Spec}(\mathbb{Z}) \times_{\mathbf{Spec}(\mathbb{F}_1)} \mathbf{Spec}(\mathbb{Z}) \] Smirnov's hope is that someone can use a Deuring-type argument to prove: "If $\mathbb{Z} \boxtimes_{\mathbb{Z}^{\times}} \mathbb{Z}$ is 'sufficiently complicated', then the RH follows." If you want to attend the seminar when it happens, please register for the seminar's mailing list. The $\mathbb{F}_1$ World Seminar Published January 8, 2022 by lievenlb For some time I knew it was in the making, now they are ready to launch it: The $\mathbb{F}_1$ World Seminar, an online seminar dedicated to the "field with one element", and its many connections to areas in mathematics such as arithmetic, geometry, representation theory and combinatorics. The organisers are Jaiung Jun, Oliver Lorscheid, Yuri Manin, Matt Szczesny, Koen Thas and Matt Young. From the announcement: "While the origins of the "$\mathbb{F}_1$-story" go back to attempts to transfer Weil's proof of the Riemann Hypothesis from the function field case to that of number fields on one hand, and Tits's Dream of realizing Weyl groups as the $\mathbb{F}_1$ points of algebraic groups on the other, the "$\mathbb{F}_1$" moniker has come to encompass a wide variety of phenomena and analogies spanning algebraic geometry, algebraic topology, arithmetic, combinatorics, representation theory, non-commutative geometry etc. It is therefore impossible to compile an exhaustive list of topics that might be discussed. The following is but a small sample of topics that may be covered: Algebraic geometry in non-additive contexts – monoid schemes, lambda-schemes, blue schemes, semiring and hyperfield schemes, etc. Arithmetic – connections with motives, non-archimedean and analytic geometry Tropical geometry and geometric matroid theory Algebraic topology – K-theory of monoid and other "non-additive" schemes/categories, higher Segal spaces Representation theory – Hall algebras, degenerations of quantum groups, quivers Combinatorics – finite field and incidence geometry, and various generalizations" The seminar takes place on alternating Wednesdays from 15:00 PM – 16:00 PM European Standard Time (=GMT+1). There will be room for mathematical discussion after each lecture. The first meeting takes place Wednesday, January 19th 2022. If you want to receive abstracts of the talks and their Zoom-links, you should sign up for the mailing list. Perhaps I'll start posting about $\mathbb{F}_1$ again, either here, or on the dormant $\mathbb{F}_1$ mathematics blog. (see this post for its history). Do we need the sphere spectrum? 
Published December 31, 2021 by lievenlb

Last time I mentioned the talk "From noncommutative geometry to the tropical geometry of the scaling site" by Alain Connes, culminating in the canonical isomorphism (last slide of the talk).

Or rather, what is actually proved in his paper with Caterina Consani, BC-system, absolute cyclotomy and the quantized calculus (and which they conjectured previously to be the case in Segal's Gamma rings and universal arithmetic), is a canonical isomorphism between the $\lambda$-rings
\[
\mathbb{Z}[\mathbb{Q}/\mathbb{Z}] \simeq \mathbb{W}_0(\overline{\mathbb{S}}) \]
The left hand side is the integral groupring of the additive quotient-group $\mathbb{Q}/\mathbb{Z}$, or if you prefer, $\mathbb{Z}[\mathbf{\mu}_{\infty}]$, the integral groupring of the multiplicative group of all roots of unity $\mathbf{\mu}_{\infty}$. The power maps on $\mathbf{\mu}_{\infty}$ equip $\mathbb{Z}[\mathbf{\mu}_{\infty}]$ with a $\lambda$-ring structure, that is, a family of commuting endomorphisms $\sigma_n$ with $\sigma_n(\zeta) = \zeta^n$ for all $\zeta \in \mathbf{\mu}_{\infty}$, and a family of linear maps $\rho_n$ induced by requiring for all $\zeta \in \mathbf{\mu}_{\infty}$ that
\[
\rho_n(\zeta) = \sum_{\mu^n=\zeta} \mu \]
(note that $\sigma_n \circ \rho_n$ is multiplication by $n$). The maps $\sigma_n$ and $\rho_n$ are used to construct an integral version of the Bost-Connes algebra describing the Bost-Connes system, a quantum statistical dynamical system.

On the right hand side, $\mathbb{S}$ is the sphere spectrum (an object from stable homotopy theory) and $\overline{\mathbb{S}}$ its 'algebraic closure', that is, adding all abstract roots of unity. The ring $\mathbb{W}_0(\overline{\mathbb{S}})$ is a generalisation to the world of spectra of the Almkvist-ring $\mathbb{W}_0(R)$ defined for any commutative ring $R$, constructed from pairs $(E,f)$ where $E$ is a projective $R$-module of finite rank and $f$ an $R$-endomorphism on it. Addition and multiplication come from direct sums and tensor products of such pairs, with zero element the pair $(0,0)$ and unit element the pair $(R,1_R)$. The ring $\mathbb{W}_0(R)$ is then the quotient-ring obtained by dividing out the ideal consisting of all zero-pairs $(E,0)$.

The ring $\mathbb{W}_0(R)$ becomes a $\lambda$-ring via the Frobenius endomorphisms $F_n$, sending a pair $(E,f)$ to the pair $(E,f^n)$, and we also have a collection of linear maps on $\mathbb{W}_0(R)$, the 'Verschiebung'-maps, which send a pair $(E,f)$ to the pair $(E^{\oplus n},F)$ with
\[
F = \begin{bmatrix} 0 & 0 & \cdots & 0 & f \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \]
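A quick way to get a feel for what $V_n$ does is through the 'ghost components' $(E,f) \mapsto Tr(f^m)$: the block matrix $F$ above satisfies $Tr(F^m) = n \cdot Tr(f^{m/n})$ when $n | m$, and $Tr(F^m)=0$ otherwise. Here is a small numerical sketch of this (my own illustration, not from the paper, with a randomly chosen integer matrix standing in for $f$):

```python
# Ghost components of the Verschiebung: for the block matrix F built from f,
# Tr(F^m) = n * Tr(f^(m/n)) if n divides m, and 0 otherwise.

import numpy as np

def verschiebung(f, n):
    """Build the (n*k) x (n*k) block matrix F of V_n applied to (E, f)."""
    k = f.shape[0]
    F = np.zeros((n * k, n * k), dtype=np.int64)
    F[:k, (n - 1) * k:] = f                   # f in the top-right block
    for i in range(1, n):                     # identity blocks below the diagonal
        F[i * k:(i + 1) * k, (i - 1) * k:i * k] = np.eye(k, dtype=np.int64)
    return F

rng = np.random.default_rng(0)
f = rng.integers(-2, 3, size=(3, 3))          # a random endomorphism of E = Z^3
n = 4
F = verschiebung(f, n)

for m in range(1, 13):
    lhs = np.trace(np.linalg.matrix_power(F, m))
    rhs = n * np.trace(np.linalg.matrix_power(f, m // n)) if m % n == 0 else 0
    assert lhs == rhs
print("Tr(F^m) = n * Tr(f^(m/n)) for n | m, and 0 otherwise: checked for m < 13")
```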
In such situations it is important to investigate the periodic orbits, or the fix-points $Fix(M,f^n)$ for all $n$. If we are in a situation that the number of fixed points is finite we can package these numbers in the Artin-Mazur zeta function \zeta_{AM}(M,f) = exp(\sum_{n=1}^{\infty} \frac{\# Fix(M,f^n)}{n}t^n) \] and investigate the properties of this function. To connect this type of problem to Almkvist-like rings, Manin considers the Morse-Smale dynamical systems, a structural stable diffeomorphism $f$, having a finite number of non-wandering points on a compact manifold $M$. From Topological classification of Morse-Smale diffeomorphisms on 3-manifolds In such a situation $f_{\ast}$ acts on homology $H_k(M,\mathbb{Z})$, which are free $\mathbb{Z}$-modules of finite rank, as a matrix $M_f$ having only roots of unity as its eigenvalues. Manin argues that this action is similar to the action of the Frobenius on etale cohomology groups, in which case the eigenvalues are Weil numbers. That is, one might view roots of unity as Weil numbers in characteristic one. Clearly, all relevant data $(H_k(M,\mathbb{Z}),f_{\ast})$ belongs to the $\lambda$-subring of $\mathbb{W}_0(\mathbb{Z})$ generated by all pairs $(E,f)$ such that $M_f$ is diagonalisable and all its eigenvalues are either $0$ or roots of unity. If we denote for any ring $R$ by $\mathbb{W}_1(R)$ this $\lambda$-subring of $\mathbb{W}_0(R)$, probably one would obtain canonical isomorphisms – between $\mathbb{W}_1(\mathbb{Z})$ and the invariant part of the integral groupring $\mathbb{Z}[\mathbb{Q}/\mathbb{Z}]$ for the action of the group $Aut(\mathbb{Q}/\mathbb{Z}) = \widehat{\mathbb{Z}}^*$, and – between $\mathbb{Z}[\mathbb{Q}/\mathbb{Z}]$ and $\mathbb{W}_1(\mathbb{Z}(\mathbf{\mu}_{\infty}))$ where $\mathbb{Z}(\mathbf{\mu}_{\infty})$ is the ring obtained by adjoining to $\mathbb{Z}$ all roots of unity. Alain Connes on his RH-project In recent months, my primary focus was on teaching and family matters, so I make advantage of this Christmas break to catch up with some of the things I've missed. Peter Woit's blog alerted me to the existence of the (virtual) Lake Como-conference, end of september: Unifying themes in Geometry. In Corona times, virtual conferences seem to sprout up out of nowhere, everywhere (zero costs), giving us an inflation of YouTubeD talks. I'm always grateful to the organisers of such events to provide the slides of the talks separately, as the generic YouTubeD-talk consists merely in reading off the slides. Allow me to point you to one of the rare exceptions to this rule. When I downloaded the slides of Alain Connes' talk at the conference From noncommutative geometry to the tropical geometry of the scaling site I just saw a collage of graphics from his endless stream of papers with Katia Consani, and slides I'd seen before watching several of his YouTubeD-talks in recent years. Boy, am I glad I gave Alain 5 minutes to convince me this talk was different. For the better part of his talk, Alain didn't just read off the slides, but rather tried to explain the thought processes that led him and Katia to move on from the results on this slide to those on the next one. If you're pressed for time, perhaps you might join in at 49.34 into the talk, when he acknowledges the previous (tropical) approach ran out of steam as they were unable to define any $H^1$ properly, and how this led them to 'absolute' algebraic geometry, meaning over the sphere spectrum $\mathbb{S}$. 
Sadly, for some reason Alain didn't manage to get his final two slides on screen. So, in this case, the slides actually add value to the talk…

Closing in on Gabriel's topos?

Published June 8, 2018 by lievenlb

'Gabriel's topos' (see here) is the conjectural, but still elusive topos from which the validity of the Riemann hypothesis would follow.

It is the latest attempt in Alain Connes' 20 year long quest to tackle the RH (before, he tried the tools of noncommutative geometry and later those offered by the field with one element). For the last 5 years he hopes that topos theory might provide the missing ingredient. Together with Katia Consani he introduced and studied the geometry of the Arithmetic site, and later the geometry of the scaling site.

If you look at the points of these toposes you get horribly complicated 'non-commutative' spaces, such as the finite adele classes $\mathbb{Q}^*_+ \backslash \mathbb{A}^f_{\mathbb{Q}} / \widehat{\mathbb{Z}}^{\ast}$ (in the case of the arithmetic site) and the full adele classes $\mathbb{Q}^*_+ \backslash \mathbb{A}_{\mathbb{Q}} / \widehat{\mathbb{Z}}^{\ast}$ (for the scaling site).

In Vienna, Connes gave a nice introduction to the arithmetic site in two lectures. The first part of the talk below also gives an historic overview of his work on the RH. The second lecture can be watched here.

However, not everyone is as optimistic about the topos-approach as he seems to be. Here's an insightful answer on MathOverflow by Will Sawin to the question "What is precisely still missing in Connes' approach to RH?". Other interesting MathOverflow threads related to the RH-approach via the field with one element are Approaches to Riemann hypothesis using methods outside number theory and Riemann hypothesis via absolute geometry.

About a month ago, from May 10th till 14th, Alain Connes gave a series of lectures at Ohio State University with title "The Riemann-Roch strategy, quantizing the Scaling Site". The accompanying paper has now been arXived: The Riemann-Roch strategy, Complex lift of the Scaling Site (joint with K. Consani).

Especially interesting is section 2, "The geometry behind the zeros of $\zeta$", in which they explain how looking at the zeros locus inevitably leads to the space of adele classes, and why one has to study this space with the tools from noncommutative geometry.

Perhaps further developments will be disclosed in a few weeks time when Connes is one of the speakers at Toposes in Como.

From the Da Vinci code to Habiro

Published February 2, 2018 by lievenlb

The Fibonacci sequence reappears a bit later in Dan Brown's book 'The Da Vinci Code', where it is used to log in to the bank account of Jacques Sauniere at the fictitious Parisian branch of the Depository Bank of Zurich.

Last time we saw that the Hankel matrix of the Fibonacci series $F=(1,1,2,3,5,\dots)$ is invertible over $\mathbb{Z}$,
\[
H(F) = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \in SL_2(\mathbb{Z}) \]
and we can use the rule for the co-multiplication $\Delta$ on $\Re(\mathbb{Q})$, the algebra of rational linear recursive sequences, to determine $\Delta(F)$.

For a general integral linear recursive sequence the corresponding Hankel matrix is invertible over $\mathbb{Q}$, but rarely over $\mathbb{Z}$. So we need another approach to compute the co-multiplication on $\Re(\mathbb{Z})$.
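As a small illustration of the remark that Hankel matrices of integral linear recursive sequences are usually only invertible over $\mathbb{Q}$ (my own sketch, not from the original post): compare $H(F)$ with the Hankel matrix of the doubled sequence $2F=(2,2,4,6,10,\dots)$, still linear recursive, but with determinant $4$:

```python
# Hankel matrices of two integral linear recursive sequences: Fibonacci F,
# with det H(F) = 1 (so H(F) lies in SL_2(Z)), and the doubled sequence 2F,
# with det 4 (invertible over Q, but not over Z).

def fib(n):
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

def hankel(seq, k):
    return [[seq[i + j] for j in range(k)] for i in range(k)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

F = fib(10)
print(det2(hankel(F, 2)))                   # 1
print(det2(hankel([2 * a for a in F], 2)))  # 4
```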
Any integral sequence $a = (a_0,a_1,a_2,\dots)$ can be seen as defining a $\mathbb{Z}$-linear map $\lambda_a$ from the integral polynomial ring $\mathbb{Z}[x]$ to $\mathbb{Z}$ itself via the rule $\lambda_a(x^n) = a_n$. If $a \in \Re(\mathbb{Z})$, then there is a monic polynomial with integral coefficients of a certain degree $n$
\[
f(x) = x^n + b_1 x^{n-1} + b_2 x^{n-2} + \dots + b_{n-1} x + b_n \]
such that for every integer $m$ we have that
\[
a_{m+n} + b_1 a_{m+n-1} + b_2 a_{m+n-2} + \dots + b_{n-1} a_{m+1} + b_n a_m = 0 \]
Alternatively, we can look at $a$ as defining a $\mathbb{Z}$-linear map $\lambda_a$ from the quotient ring $\mathbb{Z}[x]/(f(x))$ to $\mathbb{Z}$. The multiplicative structure on $\mathbb{Z}[x]/(f(x))$ dualizes to a co-multiplication $\Delta_f$ on the set of all such linear maps $(\mathbb{Z}[x]/(f(x)))^{\ast}$, and we can compute $\Delta_f(a)$.

We see that the set of all integral linear recursive sequences can be identified with the direct limit
\[
\Re(\mathbb{Z}) = \underset{\underset{f|g}{\rightarrow}}{lim}~(\frac{\mathbb{Z}[x]}{(f(x))})^{\ast} \]
(where the directed system is ordered via division of monic integral polynomials) and so is equipped with a co-multiplication $\Delta = \underset{\rightarrow}{lim}~\Delta_f$.

Btw. the ring structure on $\Re(\mathbb{Z}) \subset (\mathbb{Z}[x])^{\ast}$ comes from restricting to $\Re(\mathbb{Z})$ the dual structures of the co-ring structure on $\mathbb{Z}[x]$ given by
\[
\Delta(x) = x \otimes x \quad \text{and} \quad \epsilon(x) = 1 \]

From this description it is clear that you need to know a hell of a lot of number theory to describe this co-multiplication explicitly. As most of us prefer to work with rings rather than co-rings, it is a good idea to begin to study this co-multiplication $\Delta$ by looking at the dual ring structure of
\[
\Re(\mathbb{Z})^{\ast} = \underset{\underset{ f | g}{\leftarrow}}{lim}~\frac{\mathbb{Z}[x]}{(f(x))} \]
This is the completion of $\mathbb{Z}[x]$ at the multiplicative set of all monic integral polynomials. This is a horrible ring and very little is known about it. Some general remarks were proved by Kazuo Habiro in his paper Cyclotomic completions of polynomial rings.

In fact, Habiro got interested in a certain subring of $\Re(\mathbb{Z})^{\ast}$ which we now know as the Habiro ring and which seems to be a red herring in all stuff about the field with one element, $\mathbb{F}_1$ (more on this another time). Habiro's ring is
\[
\widehat{\mathbb{Z}[q]} = \underset{\underset{n|m}{\leftarrow}}{lim}~\frac{\mathbb{Z}[q]}{(q^n-1)} \]
and its elements are all formal power series of the form
\[
a_0 + a_1 (q-1) + a_2 (q^2-1)(q-1) + \dots + a_n (q^n-1)(q^{n-1}-1) \dots (q-1) + \dots \]
with all coefficients $a_n \in \mathbb{Z}$.

Here's a funny property of such series. If you evaluate them at $q \in \mathbb{C}$ these series are likely to diverge almost everywhere, but they do converge in all roots of unity! Some people say that these functions are 'leaking out of the roots of unity'.
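The reason for this convergence is simple: at an $n$-th root of unity every term of index $k \geq n$ contains the factor $(q^n-1)=0$, so the series terminates there. A quick check of this (my own sketch) at the primitive 4th root of unity $q=i$:

```python
# At an n-th root of unity the Habiro series terminates: each term of index
# k >= n contains the factor (q^n - 1), which vanishes there.

def habiro_factor(k, q):
    """(q^k - 1)(q^{k-1} - 1) ... (q - 1), the coefficient-free part of term k."""
    prod = 1
    for j in range(1, k + 1):
        prod *= q**j - 1
    return prod

q = 1j                                   # a primitive 4th root of unity
for k in range(1, 9):
    print(k, abs(habiro_factor(k, q)))   # (numerically) zero for all k >= 4
```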
From the Da Vinci code to Galois In The Da Vinci Code, Dan Brown feels he needs to bring in a French cryptologist, Sophie Neveu, to explain the mystery behind this series of numbers: 13 – 3 – 2 – 21 – 1 – 1 – 8 – 5 The Fibonacci sequence, 1-1-2-3-5-8-13-21-34-55-89-144-… is such that any number in it is the sum of the two previous numbers. It is the most famous of all integral linear recursive sequences, that is, a sequence of integers \[ a = (a_0,a_1,a_2,a_3,\dots) \] such that there is a monic polynomial with integral coefficients of a certain degree $n$, \[ f(x) = x^n + b_1 x^{n-1} + \dots + b_{n-1} x + b_n \] with \[ a_{m+n} + b_1 a_{m+n-1} + \dots + b_{n-1} a_{m+1} + b_n a_m = 0 \] for all integers $m$. For the Fibonacci series $F=(F_0,F_1,F_2,\dots)$, this polynomial can be taken to be $x^2-x-1$ because \[ F_{m+2} = F_{m+1}+F_m \] The set of all integral linear recursive sequences, let's call it $\Re(\mathbb{Z})$, is a beautiful object of great complexity. For starters, it is a ring. That is, we can add and multiply such sequences. If \[ a=(a_0,a_1,a_2,\dots) \quad \text{and} \quad a'=(a'_0,a'_1,a'_2,\dots) \quad \in \Re(\mathbb{Z}) \] then the sequences \[ a+a' = (a_0+a'_0,a_1+a'_1,a_2+a'_2,\dots) \quad \text{and} \quad a \times a' = (a_0 \cdot a'_0, a_1 \cdot a'_1, a_2 \cdot a'_2,\dots) \] are again linear recursive. The zero and unit in this ring are the constant sequences $0=(0,0,\dots)$ and $1=(1,1,\dots)$. So far, nothing terribly difficult or exciting. It follows that $\Re(\mathbb{Z})$ has a co-unit, that is, a ring morphism \[ \epsilon~:~\Re(\mathbb{Z}) \rightarrow \mathbb{Z} \] sending a sequence $a = (a_0,a_1,\dots)$ to its first entry $a_0$. It's a bit more difficult to see that $\Re(\mathbb{Z})$ also has a co-multiplication \[ \Delta~:~\Re(\mathbb{Z}) \rightarrow \Re(\mathbb{Z}) \otimes_{\mathbb{Z}} \Re(\mathbb{Z}) \] with properties dual to those of usual multiplication. To describe this co-multiplication in general will have to await another post. For now, we will describe it on the easier ring $\Re(\mathbb{Q})$ of all rational linear recursive sequences. For such a sequence $q = (q_0,q_1,q_2,\dots) \in \Re(\mathbb{Q})$ we consider its Hankel matrix. From the sequence $q$ we can form symmetric $k \times k$ matrices such that the $(i+1)$-th anti-diagonal consists of entries all equal to $q_i$ \[ H_k(q) = \begin{bmatrix} q_0 & q_1 & q_2 & \dots & q_{k-1} \\ q_1 & q_2 & & & q_k \\ q_2 & & & & q_{k+1} \\ \vdots & & & & \vdots \\ q_{k-1} & q_k & q_{k+1} & \dots & q_{2k-2} \end{bmatrix} \] The Hankel matrix of $q$, $H(q)$, is $H_k(q)$ where $k$ is maximal such that $\det H_k(q) \neq 0$, that is, $H_k(q) \in GL_k(\mathbb{Q})$. Let $S(q)=(s_{ij})$ be the inverse of $H(q)$, then the co-multiplication map \[ \Delta~:~\Re(\mathbb{Q}) \rightarrow \Re(\mathbb{Q}) \otimes \Re(\mathbb{Q}) \] sends the sequence $q = (q_0,q_1,\dots)$ to \[ \Delta(q) = \sum_{i,j=0}^{k-1} s_{ij} (D^i q) \otimes (D^j q) \] where $D$ is the shift operator on sequences \[ D(a_0,a_1,a_2,\dots) = (a_1,a_2,\dots) \] If $a \in \Re(\mathbb{Z})$ is such that $H(a) \in GL_k(\mathbb{Z})$ then the same formula gives $\Delta(a)$ in $\Re(\mathbb{Z})$. For the Fibonacci sequence $F$ the Hankel matrix is \[ H(F) = \begin{bmatrix} 1 & 1 \\ 1& 2 \end{bmatrix} \in GL_2(\mathbb{Z}) \quad \text{with inverse} \quad S(F) = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix} \] so that \[ \Delta(F) = 2\, F \otimes F - DF \otimes F - F \otimes DF + DF \otimes DF \] There's a lot of number theoretic and Galois-information encoded into the co-multiplication on $\Re(\mathbb{Q})$. To see this we will describe the co-multiplication on $\Re(\overline{\mathbb{Q}})$ where $\overline{\mathbb{Q}}$ is the field of all algebraic numbers.
One can show that \[ \Re(\overline{\mathbb{Q}}) \simeq (\overline{\mathbb{Q}}[ \overline{\mathbb{Q}}^{\ast}_{\times}] \otimes \overline{\mathbb{Q}}[d]) \oplus \sum_{i=0}^{\infty} \overline{\mathbb{Q}} S_i \] Here, $\overline{\mathbb{Q}}[ \overline{\mathbb{Q}}^{\ast}_{\times}]$ is the group-algebra of the multiplicative group of non-zero elements $x \in \overline{\mathbb{Q}}^{\ast}_{\times}$, and each $x$, which corresponds to the geometric sequence $x=(1,x,x^2,x^3,\dots)$, is a group-like element. $\overline{\mathbb{Q}}[d]$ is the universal enveloping algebra of the $1$-dimensional Lie algebra on the primitive element $d = (0,1,2,3,\dots)$, that is \[ \Delta(d) = d \otimes 1 + 1 \otimes d \quad \text{and} \quad \epsilon(d) = 0 \] (indeed, $d_{m+n} = m + n = d_m \cdot 1 + 1 \cdot d_n$, which is exactly the primitivity condition). Finally, the co-algebra maps on the elements $S_i$ are given by \[ \Delta(S_i) = \sum_{j=0}^i S_j \otimes S_{i-j} \quad \text{and} \quad \epsilon(S_i) = \delta_{0i} \] That is, the co-multiplication on $\Re(\overline{\mathbb{Q}})$ is completely known. To deduce from it the co-multiplication on $\Re(\mathbb{Q})$ we have to consider the invariants under the action of the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$ as \[ \Re(\overline{\mathbb{Q}})^{Gal(\overline{\mathbb{Q}}/\mathbb{Q})} \simeq \Re(\mathbb{Q}) \] Unlike the Fibonacci sequence, not every integral linear recursive sequence has a Hankel matrix with determinant $\pm 1$, so determining the co-multiplication on $\Re(\mathbb{Z})$ is a lot harder still, as we will see another time. Reference: Richard G. Larson and Earl J. Taft, 'The algebraic structure of linearly recursive sequences under Hadamard product'. The latest on Mochizuki Published November 19, 2017 by lievenlb Once in every six months there's a flurry of online excitement about Mochizuki's alleged proof of the abc-conjecture. It seems to be that time of the year again. The twitter-account of the ever optimistic @math_jin is probably the best source for (positive) news about IUT/ABC. He now announces the latest version of Yamashita's 'summary' of Mochizuki's proof: "Go Yamashita's IUT survey has been updated. Go Yamashita, A proof of the abc conjecture after Mochizuki. Preprint, last updated on 18/Nov/2017. https://t.co/XtnMEO3zoQ #IUTABC" — math_jin (@math_jin) November 18, 2017 Another informed source is Ed Frenkel. He sometimes uses his twitter-account @edfrenkel to broadcast Ivan Fesenko's enthusiasm. "Big news on Mochizuki's groundbreaking IUT: Over 1000 comments on his 4 papers have been addressed & the final versions sent back to the journal for approval. Hopefully, will be published soon. Here's Ivan Fesenko's interview about IUT on the AMS website. https://t.co/6GLk3Xh0lm" — Edward Frenkel (@edfrenkel) November 17, 2017 Googling further, I stumbled upon an older (newspaper) article on the subject: das grosse ABC by Marlene Weiss, for which she got silver at the 2017 science journalism awards. In case you prefer an English translation: The big ABC. Here's her opening paragraph: "In a children's story written by the Swiss author Peter Bichsel, a lonely man decides to invent his own language. He calls the table "carpet", the chair "alarm clock", the bed "picture". At first he is enthusiastic about his idea and always thinks of new words, his sentences sound original and funny. But after a while, he begins to forget the old words." The article is less optimistic than other recent popular accounts of Mochizuki's story, including: Monumental proof to torment mathematicians for years to come in Nature by Davide Castelvecchi.
Hope Rekindled for Perplexing Proof in Quanta Magazine by Kevin Hartnett. Baffling ABC maths proof now has impenetrable 300-page 'summary' in the New Scientist by Timothy Revell. Marlene Weiss fears a sad ending: "Table is called "carpet", chair is called "alarm clock", bed is called "picture". In the story by Peter Bichsel, the lonely man ends up having so much trouble communicating with other people that he speaks only to himself. It is a very sad story." Perhaps things will turn out for the better, and we'll hear about it sometime. In six months, I'd say… How to dismantle scheme theory? Published July 24, 2017 by lievenlb In several of his talks on #IUTeich, Mochizuki argues that usual scheme theory over $\mathbb{Z}$ is not suited to tackle problems such as the ABC-conjecture. The idea appears to be that ABC involves both the additive and multiplicative nature of integers, making rings into '2-dimensional objects' (and clearly we use both 'dimensions' in the theory of schemes). So, perhaps we should try to 'dismantle' scheme theory, and replace it with something like geometry over the field with one element $\mathbb{F}_1$. The usual $\mathbb{F}_1$ mantra being: 'forget all about the additive structure and only retain the multiplicative monoid'. So perhaps there is yet another geometry out there, forgetting about the multiplicative structure, and retaining just the addition… This made me wonder. In the forgetting can't be that hard, can it?-post we have seen that the forgetful functor \[ F_{+,\times}~:~\mathbf{rings} \rightarrow \mathbf{sets} \] (that is, forgetting both multiplicative and additive information of the ring) is representable by the polynomial ring $\mathbb{Z}[x]$. So, what about our 'dismantling functors' in which we selectively forget just one of these structures: \[ F_+~:~\mathbf{rings} \rightarrow \mathbf{monoids} \quad \text{and} \quad F_{\times}~:~\mathbf{rings} \rightarrow \mathbf{abelian~groups} \] Are these functors representable too? Clearly, ring maps from $\mathbb{Z}[x]$ to our ring $R$ give us again the elements of $R$. But now, we want to encode the way two of these elements add (or multiply). This can be done by adding extra structure to the ring $\mathbb{Z}[x]$, namely a comultiplication $\Delta$ and a counit $\epsilon$ \[ \Delta~:~\mathbb{Z}[x] \rightarrow \mathbb{Z}[x] \otimes \mathbb{Z}[x] \quad \text{and} \quad \epsilon~:~\mathbb{Z}[x] \rightarrow \mathbb{Z} \] The idea of the comultiplication being that if we have two elements $r,s \in R$ with corresponding ring maps $f_r~:~\mathbb{Z}[x] \rightarrow R,~x \mapsto r$ and $f_s~:~\mathbb{Z}[x] \rightarrow R,~x \mapsto s$, composing their tensor product with the comultiplication \[ f_v~:~\mathbb{Z}[x] \xrightarrow{\Delta} \mathbb{Z}[x] \otimes \mathbb{Z}[x] \xrightarrow{f_r \otimes f_s} R \] determines another element $v \in R$, which can be either the product $v=r \cdot s$ or the sum $v=r+s$, depending on the comultiplication map $\Delta$. The role of the counit is merely sending $x$ to the identity element of the operation. Thus, if we want to represent the functor $F_+$ forgetting the addition and retaining the multiplication, we have to put on $\mathbb{Z}[x]$ the structure of a biring with $\Delta(x) = x \otimes x$ and $\epsilon(x) = 1$ (making $x$ into a 'group-like' element for Hopf-ists). The functor $F_{\times}$ forgetting the multiplication but retaining the addition is represented by the Hopf-ring $\mathbb{Z}[x]$, this time with \[ \Delta(x) = x \otimes 1 + 1 \otimes x \quad \text{and} \quad \epsilon(x) = 0 \] (that is, this time $x$ becomes a 'primitive' element).
Perhaps this adds another feather of weight to the proposal in which one defines algebras over the field with one element $\mathbb{F}_1$ to be birings over $\mathbb{Z}$, with the co-ring structure playing the role of descent data from $\mathbb{Z}$ to $\mathbb{F}_1$. As, for example, in my note The coordinate biring of $\mathbf{Spec}(\mathbb{Z})/\mathbb{F}_1$.
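As a quick sanity check of how the two comultiplications encode the two operations (my own verification, not part of the post): for ring maps $f_r, f_s~:~\mathbb{Z}[x] \rightarrow R$ with $f_r(x)=r$ and $f_s(x)=s$, we get \[ (f_r \otimes f_s)(\Delta(x)) = \begin{cases} (f_r \otimes f_s)(x \otimes x) = r \cdot s & \text{for the group-like $\Delta$, representing $F_+$} \\ (f_r \otimes f_s)(x \otimes 1 + 1 \otimes x) = r + s & \text{for the primitive $\Delta$, representing $F_{\times}$} \end{cases} \] so $f_v = (f_r \otimes f_s) \circ \Delta$ picks out exactly the multiplication, respectively the addition, of $R$, and the counit value $\epsilon(x)=1$ (resp. $\epsilon(x)=0$) sends $x$ to the identity element of the corresponding operation.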
How to find the frequency of a wave: calculator. The SI unit of frequency is the hertz (Hz). (In radio engineering, the intermediate frequency is the frequency to which a carrier wave is shifted as an intermediate step for transmission and reception.) A frequency calculator is a free online tool that displays the frequency of a wave, using simple arithmetic for these conversions. Formula 1, to calculate frequency from a count of waves: divide the number of waves produced by the time taken; this number will give us the frequency of the wave. For example, suppose that 21 waves are produced in 3 seconds: the frequency is 21/3 = 7 Hz. Frequency is also the reciprocal of the period, f = 1/T, where f is the frequency of the wave in hertz; in the example image, we have 2 cycles in 6 seconds, so 2 divided by 6 is about 0.3 hertz. For a travelling wave the formula to use is f = V / λ, where f refers to the frequency, V refers to the wave's velocity, and λ refers to the wave's wavelength; equivalently, wavelength can be calculated using the following formula: wavelength = wave velocity / frequency. Practice exercises: calculate the wavelength, frequency and wave number of a light wave whose period is 2.0 × 10⁻¹⁰ s; a wave has a frequency of 165 Hz and a wavelength of 2 m, calculate the speed of the wave; describe a method for measuring the speed of sound waves in air. In the electromagnetic spectrum there are various frequencies of light. Sound waves travel about one million times more slowly than light waves, but their frequency and wavelength formulas are somewhat similar to light-wave formulas: compressions and rarefactions travel through the air in the form of longitudinal waves, which have the same frequency as the sound source, and to find the speed of sound in air we can note that the musical note A above middle C has a frequency of 440 Hz and a wavelength of 0.773 metres. One thousand cycles per second is 1 kilohertz (kHz), and 1,000,000 cycles per second is 1 megahertz (MHz). The page also hosts related tools: a full-wave rectifier calculator (V_dc = 0.637 × V_max, where V_dc is the rectifier's DC output voltage and V_max the peak voltage; the ripple frequency of a full-wave rectifier is calculated easily with this electrical/electronics calculator); a resonant frequency calculator that employs the capacitance (C) and inductance (L) values of an LC circuit (also known as a resonant circuit, tank circuit, or tuned circuit) to determine its resonant frequency (f); a fundamental frequency calculator that uses the equations in its table; a sound pressure level tool (SPL in decibels (dB), with P the sound wave pressure and P_ref the reference pressure or hearing threshold, both in newtons/meter²); and a wavelength-to-frequency converter: input the value of the wave velocity, choose the unit of measurement from the drop-down menu, and after inputting all of the required values the calculator automatically generates the wavelength value for you. This frequency-to-wavelength calculator is especially useful for calculating the element length of dipole antennas; the video example likewise shows how to calculate the frequency of a light wave given its wavelength.
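The arithmetic in the examples above is simple enough to script. Here is a short sketch (the function names are my own, not taken from any of the calculators mentioned) of the three relations used on this page: frequency from a wave count, frequency from the period, and frequency from speed and wavelength.

```python
def frequency_from_count(num_waves: float, seconds: float) -> float:
    """f = number of waves / time taken, in Hz."""
    return num_waves / seconds

def frequency_from_period(period_s: float) -> float:
    """f = 1 / T, in Hz."""
    return 1.0 / period_s

def frequency_from_wavelength(velocity_m_s: float, wavelength_m: float) -> float:
    """f = V / lambda, in Hz."""
    return velocity_m_s / wavelength_m

print(frequency_from_count(21, 3))        # 7.0 Hz: 21 waves in 3 seconds
print(frequency_from_count(2, 6))         # ~0.33 Hz: 2 cycles in 6 seconds
print(frequency_from_period(2.0e-10))     # 5e9 Hz: light wave with T = 2.0e-10 s
print(165 * 2)                            # 330 m/s: speed = f * wavelength
```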
It is easy to calculate with a relative frequency calculator: a relative frequency is measured in relation to the total environment, and such a calculator works this type of frequency out in a splash. For waves, velocity = frequency × wavelength, v = f × λ, where V is the velocity (m/s) and f is the frequency (Hz); the corresponding wavelength–frequency formula is λ = v/f, where λ is the wavelength in metres, v the wave speed in metres per second, and f the wave frequency in hertz. In physics, the frequency is defined as the number of cycles per second: the number of cycles completed in one second is called the frequency, and the period of the wave is the time in which it completes one full cycle. The speed of light in vacuum is c = 3×10⁸ m/s. You could do these conversions manually; instead, why not use a friendly tool, our frequency converter? The Wave Equation: how to use the wave equation to calculate the speed of a wave. Practice calculating the oscillation frequency for different harmonics of a standing wave; to build a square wave, we'll set the peak amplitude to 1 volt and step through the first three harmonics by letting n = 1, 2, and then 3. Write your answer in hertz, or Hz, which is the unit for frequency. The frequency range of sound audible to a human ear is between 20 Hz and 20,000 Hz. In order to use our antenna calculator, you'll need to know the frequency on which you want your antenna to operate. Quarter-wavelength in feet: 234 / frequency in MHz; quarter-wavelength in meters: 71.5 / frequency in MHz. To use the calculator, enter the desired operating frequency in megahertz to get a starting length in both feet and meters for building a quarter-wave vertical antenna.
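The quarter-wave rules of thumb just quoted translate directly into code (a sketch using the page's 234/f and 71.5/f constants; the function name is invented):

```python
def quarter_wave_vertical(freq_mhz: float) -> tuple:
    """Starting quarter-wavelength element length: (feet, meters)."""
    return 234.0 / freq_mhz, 71.5 / freq_mhz

feet, meters = quarter_wave_vertical(7.15)           # 40 meter band
print(round(feet, 2), "ft,", round(meters, 2), "m")  # ~32.73 ft, ~10.0 m
```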
Physics is the field of natural science, the science of the simplest and at the same time the most general laws of nature, about matter, its structure, and movement. Put your frequency value in the given box. What is the unit of frequency? Every wave automatically has a frequency. The Velocity of a Progressive Wave (using frequency) calculator uses velocity = wavelength of a wave × frequency to calculate the velocity, the distance covered by the wave in one oscillation. You can also calculate how long it takes sound to travel a given distance, or how far sound will travel within a given time span. A relative frequency is the fraction of times an answer occurs. BYJU'S online frequency calculator tool makes the calculation faster, and it displays the frequency value in a fraction of seconds. (The page's summary table lists wave frequency, wave velocity, wavelength and noise pollution level, NPL.) You can also find a wavelength calculator on our website to help you find wavelength online. Step 2: now click the button "Calculate" to get the frequency. The difference is that you would do it manually, whereas the frequency converter will do it in split seconds. How to use the frequency calculator? To calculate the frequency of a wave, divide the velocity of the wave by the wavelength. The frequency f in gigahertz (GHz) is equal to the frequency f in megahertz (MHz) divided by 1000: $$\text{f(GHz)}\;=\;\frac{\text{f(MHz)}}{1000}$$ For example, $$\text{f(GHz)}\;=\;\frac{7\,\text{MHz}}{1000}\;=\;0.007\,\text{GHz}$$ Exercise: a source of frequency 500 hertz emits waves of wavelength 2 m; its speed is v = f × λ = 500 × 2 = 1000 m/s.
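Both the unit conversion and the 500 Hz exercise can be checked the same way (a sketch, function names invented):

```python
def mhz_to_ghz(f_mhz: float) -> float:
    """f(GHz) = f(MHz) / 1000."""
    return f_mhz / 1000.0

def wave_speed(frequency_hz: float, wavelength_m: float) -> float:
    """v = f * lambda, in m/s."""
    return frequency_hz * wavelength_m

print(mhz_to_ghz(7))        # 0.007 GHz, matching the worked example
print(wave_speed(500, 2))   # 1000 m/s for the 500 Hz, 2 m source
```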
… It is the number of cycles per second of any oscillating phenomenon. A frequency converter is a calculating tool that allows you to convert a frequency value in its standard unit to its bigger and smaller units. Enter your desired frequency (MHz) of operation (i.e. 7.15 for the 40 meter band). Your result will consist of: your total antenna length (L); the length of one arm of the antenna (l); and the wavelength, as well as its portions 1/2 and 1/4 (it'll also allow you to use it as a 1/2-wave dipole calculator). If a string of a violin or a harp vibrates back and forth, it is compressing and expanding the surrounding air, thus producing a sound. If an R.F. current has a frequency of 780,000 cycles per second, the wave will go through one complete cycle in 1/780,000 second; in that same period of time the wave will move 299,792,458/780,000 = 384.349 metres. Every wave has a wavelength and travels through a medium with a certain phase speed. Period (T) is the amount of time it takes for one cycle to occur in a repeating event; if you have measured the velocity and wavelength, then you can easily calculate the period. A wavelength is the distance a photon travels as it completes one full wave or frequency. Enter the input in the frequency–wavelength calculator below and click calculate to get the resultant value. How to calculate the frequency of a wave? The velocity of a wave is calculated as the multiplication of wavelength and frequency; another physical wave that carries a frequency is a light wave. A sound wave equations calculator computes the wave velocity (V) from the given frequency (f) and wavelength (λ); to add this calculator to your website, just copy and paste the embed code to the webpage where you want to display it. You can use the resonance calculator in three simple steps, inputting any two parameters for a resonant circuit. In the square-wave formula mentioned earlier, Apeak is the peak amplitude of the square wave, ƒ is the frequency in hertz, and t is the time in seconds.
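The 780,000-cycles-per-second example works out as follows (a sketch; 299,792,458 m/s is the exact SI value for the speed of light):

```python
C = 299_792_458          # speed of light, m/s

f = 780_000              # R.F. current at 780,000 cycles per second
period = 1 / f           # one complete cycle takes 1/780,000 s
distance = C * period    # distance the wave covers in that time
print(period, distance)  # ~1.28e-06 s and ~384.349 m, as stated above
```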
In the simplest terms, frequency is the number of times a thing or event occurs in a given time period; one hertz simply means once per second, while kilohertz (kHz) refers to thousands, megahertz (MHz) to millions, and gigahertz (GHz) to billions of cycles per second. As the wavelength of a wave increases, its frequency decreases; for example, a radio wave with a frequency of 2000 MHz has wavelength λ = (3 × 10⁸) / (2000 × 10⁶) = 0.15 metre, since radio waves travel at the speed of light. Wavelength is measured in metres, m, and its symbol is the Greek letter lambda, λ; a formula triangle for the wave equation relates wave speed, frequency and wavelength, and to calculate frequency we take the inverse of the period. In a sound wave, the high-pressure regions are called compressions and the low-pressure regions are called rarefactions; the speed of sound in air at 21 °C is about 344 metres per second, and at its resonant frequency an object is set vibrating more easily than at any other frequency.
Calculating Range based on Mean, Standard Deviation and Varying Sample Size I have recently begun studying statistics, my learning material being the book "Basic Statistics for the Behavioral Sciences" by Kenneth D. Hopkins and Gene V Glass (1978), and so far I have understood concepts from the measures of central tendencies (i.e. the mean, median and mode) as well as the range and standard deviations. But when trying to tackle the exercises I have come across difficulties trying to understand the solutions. One problem asks to estimate the separate ranges of three samples of 10, 100 and 1000 individuals' heights, with a mean of 63.5 inches and a standard deviation of 2.5 inches. The distribution is normal. The answers were stated as follows, using the equation range = E × (standard deviation): For n = 10, range = 3.1(2.5) = 7.75 For n = 100, range = 5(2.5) = 12.5 For n = 1000, range = 6.5(2.5) = 16.25 The issue I have is that I do not understand how the expected value, E, was calculated, or why this equation works as a method for estimating the range. I would be much obliged if someone could explain this to me. standard-deviation range Asked by The Contextual Path. Comments: 1. Please give a complete reference for the book. 2. Was a distribution stated at any point? – Glen_b -Reinstate Monica, Sep 4 '17 at 9:01 E should not be called expected value, as it is just a fudge factor. E is also lousy notation here, given its very common use in statistics to denote the expectation operator. If the author(s) are responsible for both of these, and also fail to explain this method, you are likely to be better off with other books. – Nick Cox, Sep 4 '17 at 9:05 Orthogonally to your question: "central measures of tendencies" is not a standard expression, which would be more like "measures of central tendency". (I prefer "measures of level" here.) Also, range is not a measure of central tendency, but one of dispersion or spread. – Nick Cox, Sep 4 '17 at 9:07 The distribution was stated as being normal. – The Contextual Path, Sep 4 '17 at 10:13 The normality is critical to the problem. – Glen_b -Reinstate Monica, Sep 4 '17 at 12:03 Answer: The calculations are somewhat involved, but accurate tables date back to Tippett 1925 [1]. Tippett gives values for $n$ between $2$ and $1000$. Judging from the (very) little that Google Books would show me, in the 1978 edition of your book this information appears to be in Table 5.1 or thereabouts. The expected range for a sample of size $n$ in a symmetric distribution with mean $0$ is twice the expected largest value in a sample of the same size. The density of the largest value $X_{(n)}$ in a sample of size $n$ from a distribution with density $f$ and cdf $F$ is $n\,f(x)\,F(x)^{n-1}$. (See the Wikipedia article on order statistics.) That expected largest value $X_{(n)}$ in a sample of size $n$ is therefore obtainable by integration.
This expected value is $$E(X_{(n)})=\int_{-\infty}^\infty\, n\,x\,f(x)\,F(x)^{n-1}\:dx\,.$$ For a standard normal I computed this numerically for a sample of size 10 in R:

f <- function(x) 10*x*pnorm(x)^9*dnorm(x)
integrate(f, -Inf, Inf)
# 1.538753 with absolute error < 1.3e-06

Doubling this value (to obtain the expected range) we get 3.077506, which agrees with Tippett's 3.07751 to the number of places he gives (and with your value to the number of places you give; unsurprising, since your values are Tippett's values rounded to two figures). It's easy to simulate the distribution in anything that will generate normal random values and calculate a range. You might find doing so enlightening. (I marked the means from Tippett's table with a thin blue line in the histograms; it projects slightly below the histogram in each case so you can find it on the scale easily. You can see that the distributions are quite spread out around their expected values, meaning that the range in a normal sample may be quite some way from its expected number of standard deviations.) In large samples from normal distributions the expected range increases roughly linearly in $\sqrt{\log(n)}$ (see the image at the end of this answer). So that covers the case for a standard normal. However, for a normal with any other mean $\mu$ and standard deviation $\sigma$, the expected range is $E(X_{(n)}-X_{(1)}) = E(X_{(n)})-E(X_{(1)}) = \mu+\sigma E(Z_{(n)})-(\mu+\sigma E(Z_{(1)})) = \sigma\,(E(Z_{(n)})-E(Z_{(1)}))$, i.e. just the expected range for a standard normal times $\sigma$, the population standard deviation. Note that all this only works if you know the population standard deviation; if you're computing an estimate of the expected range from the sample standard deviation, you need to know about the behavior of the ratio of the sample range to the sample standard deviation. [1] L. H. C. Tippett (1925). "On the Extreme Individuals and the Range of Samples Taken from a Normal Population". Biometrika 17 (3/4): 364–387. Answered by Glen_b -Reinstate Monica.
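Following the answer's suggestion that simulating is enlightening, here is a quick Monte Carlo sketch in Python (any environment that generates normal random values would do; the function name is mine, and the estimates only approximate Tippett's tabulated values):

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_range(n, reps=10_000):
    """Monte Carlo estimate of E(range) for samples of size n
    drawn from a standard normal distribution."""
    x = rng.standard_normal((reps, n))
    return (x.max(axis=1) - x.min(axis=1)).mean()

for n in (10, 100, 1000):
    e = expected_range(n)
    # multiply by sigma = 2.5 inches to estimate the range of the heights
    print(n, round(e, 3), round(2.5 * e, 2))
```

For n = 10 this lands near 3.08, so 2.5 times it is close to the 7.75 in the question; the n = 100 and n = 1000 rows similarly approximate 12.5 and 16.25.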
Effect of inappropriate admissions on hospitalization performance in county hospitals: a cross-sectional study in rural China Jing-jing Chang1,2, Ying-chun Chen1,2, Hong-xia Gao1,2, Yan Zhang1,2, Hao-miao Li1,2, Dai Su1,2, Di Jiang1,2, Shi-han Lei1,2, Xiao-mei Hu1,2, Min Tan1,2 & Zhi-fang Chen1,2 Inappropriate admissions cause excessive utilization of health services compared with outpatient services. However, it is still unclear whether inappropriate admissions cause excessive use of health services compared with appropriate admissions. This study aims to clarify the differences in hospitalization performance between appropriately admitted and inappropriately admitted inpatients. A total of 2575 medical records were obtained after cluster sampling in three counties. Admission appropriateness was assessed with the appropriateness evaluation protocol (AEP). Propensity score matching (PSM) was used to match patients in the treatment and control groups with similar characteristics and to examine the differences in the utilization of hospitalization services between the two groups. The samples were matched in two major steps in this study. In the first step, the total samples were matched to examine the differences in the utilization of hospital services between the two groups using 15 individual covariates. In the second step, PSM was computed to analyze the differences between the two groups in different disease systems using 14 individual covariates. For the whole sample, the inappropriate group had lower expenditure of hospitalization (EOH) (difference = − 0.12, p = 0.003) and shorter length of stay (LOS) (difference = − 0.73, p = 0.016) than the appropriate group. For the number of clinical inspections (NCI), there was no statistically significant difference between the two groups (difference = − 0.39, p = 0.082). Among the different disease systems, no significant differences were observed between the two groups in EOH, LOS or NCI, except that the EOH was lower in the inappropriate group than in the appropriate group for surgical diseases (difference = − 0.169, p = 0.043). Inappropriate admissions have generated excessive health service utilization compared with appropriate admissions, especially for internal diseases. The departments in charge of medical services and hospital managers should pay close attention to the health service utilization of inappropriately admitted inpatients. Relevant medical policies should be designed or optimized to increase the appropriateness of health care service delivery and the precision of clinical pathway management. Excessive use of health services leads to the waste of health resources and unreasonable increases in medical costs, and it is an issue of widespread concern across the globe. Inappropriate admission is one of the most severe problems in the excessive use of health services [1]. Inappropriate admission refers to a condition in which the utilization of hospitalization services is not based on clinical need [2]; physicians act as patients' agents and can therefore influence patients' choice between hospitalization and outpatient services, so patients tend to choose hospitalization services according to their physicians' advice. Admission appropriateness can be assessed with the appropriateness evaluation protocol (AEP), an objective, effective and reliable tool used to evaluate the appropriateness of admissions on the basis of inpatients' medical records [3,4,5,6].
AEP criteria can be divided into two parts: medical service intensity and disease severity. Some studies have pointed out that, compared with outpatient services, inappropriate admissions cause excessive use of health services, including human resources, beds, medicines and health care funds [7]. However, it is still unclear whether inappropriate admissions cause excessive use of health services compared with appropriate admissions. Although previous studies have indicated that patients with less severe diseases tend to have a shorter length of stay (LOS) in hospital and therefore consume fewer health resources [8, 9], this does not mean that inappropriately admitted patients necessarily consume a relatively small amount of health resources. The reasons are as follows. First, patients' actual utilization of hospitalization services is shaped by hospital policies related to inpatient service delivery, such as standardized inpatient service provision and clinical pathway management. Whether an admission is appropriate or not, the patient will receive health care services in accordance with the standard hospitalization service process. In other words, medical treatment, nursing and examination are strictly implemented in accordance with the clinical pathway form [10], in which the timing of diagnosis and treatment measures is clarified, the clinical process is programmed, and the inspections, treatments and nursing to be done every day are clearly specified. The patient would receive the corresponding clinical biochemistry inspections, for instance blood and urine tests [11], would spend a standard period in hospital, and would receive the prescribed dosage. In this way, inappropriately admitted patients may end up with similar utilization of hospital services to those who are admitted appropriately. Second, the patient's condition is in constant change, with uncertainty, and changes in the patient's condition will affect the subsequent series of health services [12]. For instance, the symptoms of appropriately admitted inpatients may be relieved rapidly after hospitalization and require fewer treatments, so the length of stay may be shortened; patients who are admitted inappropriately may get worse, in which case the utilization of medical services becomes more intense, the length of stay may be extended and the expenditure of hospitalization will increase. Third, the utilization of health services is greatly influenced by doctors' behaviors. On the one hand, according to the prospect theory proposed by Kahneman [13], most people are risk-averse when facing gains and risk-seeking when facing losses, and people are more sensitive to losses than to gains. As a result, people are often wary of taking risks in the face of gains. In other words, irrespective of the appropriateness of admission, doctors may tend to adopt the most conservative treatment methods when they are not sure of the situation in the process of diagnosis and treatment. This leads patients to undergo more clinical inspections, stay longer for observation, and so on, all of which increase the use of health services. On the other hand, doctors' behaviors are also affected by the hospital's internal management and salary systems, which can generate behaviors that induce consumption or reduce services.
Thus, the relationship between the appropriateness of admission and the utilization of health services is unclear, which calls for further exploration. A study by Bianco et al. revealed that doctors adopt a conservative management mode because of their risk aversion, which leads to inappropriate admissions and inappropriate follow-up hospitalization services [14]. This increases the length of stay, resulting in an unnecessary waste of resources, and it is more common in surgery departments. Velasco's study [15] illustrated that patients who were not properly admitted had three times the length of hospital stay of those admitted appropriately. Eriksen [16] measured the proportion and cost of inappropriate admissions in internal medicine, and the findings suggested that refusing inappropriate admissions did not bring the hospital the same percentage of cost reduction. Although some studies have explored the relationship between admission appropriateness and hospitalization service utilization, these studies have some limitations. First, some of the studies were based on comparisons within single inpatient cases: they evaluated the appropriateness of admission and the service utilization after hospitalization, but did not compare the case with other similar or contrasting cases. Second, although length of stay and hospitalization expenses were studied in the evaluation of health service utilization, there were few detailed studies on the utilization of clinical inspections. Third, and most importantly, insufficient consideration of disease severity made the comparison of service utilization lack accuracy. Differences in disease severity and changes in disease condition will affect the utilization, and therefore the performance, of hospitalization services. Both inappropriate and appropriate admissions include patients with mild and serious conditions. Whether there is a difference in the utilization of services and, if so, what the differences in the efficiency of hospitalization services are, remain unknown and await further study. In addition, matching patients with similar disease characteristics is of great practical significance for the accurate evaluation of medical quality and the utilization of health services, and for the rational allocation of health resources. Based on AEP, the appropriateness of admission is evaluated from the two aspects of medical service intensity and disease severity. Admissions rated as "inappropriate" indicate that the patients have a mild illness and that the medical service intensity required for them is not large. Thus, in theory, the consumption of health resources by inpatients admitted inappropriately should be smaller than by those admitted appropriately. This study therefore hypothesized that inappropriately admitted inpatients have a shorter length of stay (LOS), a smaller number of clinical inspections (NCI), and lower expenditure of hospitalization (EOH) than those who are admitted appropriately with similar characteristics, and set out to verify this. The main contribution of this study relative to other similar studies is the adoption of the propensity score matching (PSM) methodology. PSM was used to match appropriately and inappropriately admitted patients with similar characteristics and to examine the differences in the utilization of hospitalization services between them, which enriches the evidence relative to other methods.
This study is expected to provide reference for policy makers when adjusting and improving relevant medical policies, in order to promote the appropriate utilization of health resources and control unreasonable increases in hospitalization expenses. Three counties were selected as the sample counties (Dingyuan in Anhui province in central China; Huining in Gansu province and Yilong in Sichuan province in western China). The reimbursement and payment levels of the new rural cooperative medical scheme (NRCMS) in the three counties are similar. A cluster sampling method was applied in this study. The largest and most capable public comprehensive hospital in each county was selected as a sample hospital. Medical records were the objects of sampling. In the sample size calculation, according to existing research [2], the estimated inappropriate admission rate was P = 16%, the relative tolerance δ = 0.09, the absolute tolerance d = 0.09 × P = 1.44%, the significance level α = 0.05, and the one-sided standard normal deviate Z_α = 1.96. The sample size (N) was calculated as follows: $$N = (Z_{\alpha}/d)^2 \times P(1-P) = (1.96/1.44\%)^2 \times 16\% \times (1-16\%) = 2489.93$$ Considering the quality of medical records, 900 medical records from 2017 were selected from each hospital. First, admissions for hospital delivery in obstetrics were excluded, considering the pertinence of AEP. Then, the corresponding quantity of medical records was selected from the remaining departments according to the proportion of each department's patients among all departments' patients. Finally, a total of 2575 medical records were screened as samples after eliminating records with too many missing values or serious logic errors (Fig. 1: study design and flow chart of the medical record selection and the classification of those records into two groups). There were no missing values in the outcome variables in the final samples. All the medical records were evaluated against an adjusted AEP standard constructed in 2014 for county hospitals in China [17] (Appendix). The records were evaluated by two trained judges independently. The judges were members of the research team, and professional training was held before they evaluated admission appropriateness. Among all the records, 609 admissions were regarded as appropriate (the control group) and 1966 were classified as inappropriate (the treatment group). This study holds that, in addition to the general factors that affect the utilization of inpatient services (individual basic characteristics, external systems and policies, etc.), the constant development and change of the disease itself is an important factor that cannot be ignored. Based on the above considerations, this paper used a dynamic perspective to compare the utilization of health services after hospitalization between patients with different admission appropriateness; this is also one of the highlights of this study. Given this research perspective and the characteristics of the AEP criteria, the medical records were judged mainly according to the patients' indications at the time of admission (when some disease indications may not yet be fully manifested) rather than the final discharge results (when the disease indications are relatively comprehensive). Because disease indications are not fully manifested at admission, it may not be easy to meet AEP's criteria for an "appropriate" admission; on this basis, the inappropriate admission rate may be overestimated.
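As a quick numerical check of the sample-size formula given above (a sketch, not the authors' code):

```python
P = 0.16               # estimated inappropriate admission rate
delta = 0.09           # relative tolerance
d = delta * P          # absolute tolerance, 0.0144 (i.e. 1.44%)
z_alpha = 1.96         # standard normal deviate used by the authors

N = (z_alpha / d) ** 2 * P * (1 - P)
print(round(N, 2))     # 2489.93, matching the reported required sample size
```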
Outcome variables In this study, we use LOS, NCI and EOH as the outcome variables. These three indicators describe patients' utilization of services. LOS is a comprehensive index that directly measures hospital medical quality and management level [18]. NCI is an important index reflecting the service items inpatients receive. EOH is a critical index in health economic evaluation and the most direct reflection of health resource consumption [19]. Considering that EOH may not follow a normal distribution, the study log-transformed the EOH variable, and it conformed to the normal distribution after the logarithmic transformation. Explanatory variables Since the covariates in PSM should include, as far as possible, all relevant variables that may affect the outcome and treatment variables in order to satisfy the ignorability (unconfoundedness) assumption, this study included as many covariates from the medical records as possible. There were 15 patient-level covariates in the study: gender, age, type of medical insurance, profession, marital status, way of admission, frequency of hospitalization, department in charge of treatment, disease system, having more than one disease, status of the patient upon admission, history of disease, having chronic diseases, health condition at ordinary times, and receiving any surgery. Because disease severity and treatment considerations differ, utilization of health services differs across age groups. The type of medical insurance also affects the utilization of health services; in particular, with the development of the NRCMS, the reimbursement ratio has gradually increased, which has released patients' demand for medical services and increased the services provided [2]. Profession may affect the length of hospital stay; for instance, farmers may shorten the LOS regardless of the severity of the disease during busy farming seasons [20, 21]. Health condition at ordinary times, status of the patient upon admission, having more than one disease, and disease system are closely related to changes in patients' conditions after hospitalization; these are variables that especially need attention in this study, since changes in illness can affect LOS and the utilization of services [22]. Whether patients receive any surgery could influence their hospitalization results due to the risk of nosocomial infections and complications [23]. Propensity score matching (PSM) There were differences in individual characteristics between the treatment and control groups that would affect the comparison of service utilization results. Propensity scores were therefore used to match each inpatient across the two groups under similar conditions. PSM was used to balance observable covariates and reduce potential selection bias [24, 25]. The samples were matched in two major steps in this study. In the first step, the total samples were matched to examine the differences in the utilization of hospital services between the two groups using 15 individual covariates. In the second step, PSM was computed to analyze the differences in different disease systems, because the use of health services varies among disease systems. Disease system was divided into five groups (circulatory diseases, digestive diseases, respiratory diseases, surgical diseases and others). Then, inpatients in the treatment and control groups were matched within each disease group.
First, the propensity score was obtained by fitting a logit model on the covariates. Then, kernel matching was used to match each patient in the treatment group with comparable patients in the control group on the basis of the propensity score; kernel matching performs well in terms of accuracy, as summarized in the health-services literature [24, 26]. Finally, we calculated the average treatment effect on the treated (ATT), which reflects the average change in the outcome variable after controlling for the covariates; this computation is sketched below. Assume that each inpatient i has two potential outcomes, \(Y_{i1}\) (treated, inappropriate admission) and \(Y_{i0}\) (control, appropriate admission). The average treatment effect is \(E(Y_{i1} - Y_{i0})\). However, since \(Y_{i0}\) and \(Y_{i1}\) cannot be observed simultaneously for the same inpatient, the ATT is calculated instead: $$ATT = E(Y_{i1} | D_{i} = 1) - E(Y_{i0} | D_{i} = 1)$$ where \(D_{i}\) is the dichotomous treatment indicator, with 1 indicating that inpatient i was admitted inappropriately and 0 that the admission was appropriate. Stata 15.0 software (Stata Corp LP, College Station, TX, USA) was used for the statistical analysis in a Windows environment, and the two-sided significance level was set at 0.05. Basic characteristics of the sample As shown in Table 1, the three outcome variables differed significantly between the two groups (p < 0.01), with the mean and standard deviation of the treatment group lower than those of the control group. The covariates also differed significantly (p < 0.05), except for the gender, marital status, and frequency of hospitalization variables. Table 1 Distribution of characteristics of admission cases The matching effect and results of the PSM of the total samples As Table 2 shows, the covariates of the treatment and control groups were well balanced after matching (p = 0.928, mean bias = 1.7, median bias = 1.3). Table 2 Overall balance test results of PSM As Table 3 shows, in the whole sample, the EOH of the treatment group was lower than that of the control group after matching (difference = − 0.12, p = 0.003), and the control group had a longer LOS than the treatment group (difference = − 0.73, p = 0.016). There was no statistically significant difference in NCI between the two groups (difference = − 0.39, p = 0.082). Table 3 Matching results of the PSM The matching effect and results of the PSM of the samples in disease systems As Table 4 shows, within the disease systems, the covariates of the treatment and control groups were well balanced after matching, except for respiratory disease (p = 0.093, mean bias = 4.60, median bias = 4.7). Table 4 PSM matching effect and results of the disease systems For the EOH outcome, the treatment group was lower than the control group in surgical disease (difference = − 0.169, p = 0.043), with no significant difference in the other disease groups (p > 0.05). For the LOS and NCI outcomes, there were no significant differences between the two groups in any disease system (p > 0.05).
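Returning to the matching procedure described above, the propensity-score estimation and kernel-weighted ATT can be sketched as follows. This is illustrative only: the study used Stata's implementation, and the column-name convention, bandwidth and Gaussian kernel below are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def kernel_matching_att(df, treat, outcome, covariates, bandwidth=0.06):
    """Propensity scores from a logit model, then a kernel-weighted ATT.

    `df` is a pandas DataFrame; `treat` is a 0/1 column (1 = inappropriate
    admission), and `outcome` is e.g. LOS, NCI or log(EOH)."""
    ps = LogisticRegression(max_iter=1000).fit(
        df[covariates], df[treat]).predict_proba(df[covariates])[:, 1]
    t = df[treat].to_numpy() == 1
    y = df[outcome].to_numpy(dtype=float)
    effects = []
    for p_i, y_i in zip(ps[t], y[t]):
        w = np.exp(-0.5 * ((ps[~t] - p_i) / bandwidth) ** 2)  # Gaussian kernel
        effects.append(y_i - np.average(y[~t], weights=w))    # y_i - E[Y0 | D=1]
    return float(np.mean(effects))  # ATT = E[Y1 | D=1] - E[Y0 | D=1]
```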
Propensity score matching has been widely used in the field of health economics since the 1990s [27]. It reduces selection bias and confounding by matching individuals in the treatment group with comparable subjects in the control group [28]. In this study, resource consumption was compared between the appropriate and inappropriate admission groups while controlling for the factors that influence the utilization of services. This method mitigates the problems caused by incomplete and inaccurate pairing [29], and it can express the joint effect of multiple covariates [30], which makes the results accurate and comparable. For the whole sample, among patients with similar basic characteristics, those admitted inappropriately had a shorter LOS and lower EOH than those admitted appropriately. On the whole, the study indicates that the health service utilization of inappropriately admitted patients was lower than that of appropriately admitted patients, which is broadly in line with expectations. However, no difference in NCI was observed between the two groups, which suggests that inappropriate admission may be accompanied by overuse of clinical inspection services. In the individual disease groups, there were some specific differences. Diseases were classified into internal and surgical diseases. Among surgical diseases, the EOH of the inappropriate admission group was lower than that of the appropriate admission group. A patient who needs surgery readily satisfies the AEP criteria (A1, A4) and is evaluated as an appropriate admission, and the costs of surgery, drug administration and infusion, etc., make the EOH of appropriate admissions higher than that of inappropriate admissions. Thus, for surgical diseases, inpatients admitted inappropriately did not consume more health resources than those admitted appropriately. Circulatory, digestive and respiratory diseases are internal diseases, and for these there was no statistically significant difference in EOH, LOS or NCI between the two groups, in contrast to the results for surgical diseases. There are two possible explanations. First, as organic disorders, internal diseases have complex causes that are difficult to identify and diagnose [31]. Their features may be hidden at admission and only gradually become apparent afterwards; in this case, the use of diagnostic and therapeutic measures may increase after hospitalization among inappropriately admitted patients. Second, if the symptoms of inappropriately admitted patients did not change after admission yet their resource consumption was as high as that of appropriately admitted patients, this indirectly indicates avoidable consumption of health resources. This may be related to the salary system of public hospitals in China. For years, under market-economy conditions [32], doctors in public hospitals have been required to earn their own salaries through business income. At the same time, the pricing of technical services is seriously undervalued, and the human capital value of doctors is not fully reflected. The inappropriate salary structure and distribution factors have further distorted the incentive mechanism [33]. Most notably, the practice of "subsidizing medical services with medicine" has distorted the behavior of medical staff, leading them to write "big prescriptions" in pursuit of maximal economic benefit to protect their own interests.
They prefer to use expensive drugs, order more clinical inspections for patients, and extend patients' LOS, resulting in increased hospitalization expenditures [34, 35]. Meanwhile, the currently tense doctor–patient relationship has led some medical staff to conduct unnecessary clinical inspections and treatments in order to avoid risk [36]. In addition, this behavior is related to doctors' own treatment habits and to an insufficient grasp of disease severity [37]. It is possible that, after a patient is admitted, doctors provide the services they are accustomed to, or treat patients according to prescribed clinical pathways; owing to a lack of judgment of disease severity, some services are unnecessary, especially for patients who were not appropriately admitted. To solve these problems, it is necessary to reform the salary distribution system of public hospitals [38]. Public hospitals should improve their internal allocation incentives so that doctors' earnings depend more on the value of their labor than on the quantity of services they provide. The combination of effective motivation and supervision can prompt hospitals to improve efficiency, reduce service costs, shorten the LOS and reduce induced expenditure. In addition, clinical pathway management should be optimized. Clinical pathways aim to streamline the service process, reduce delays in disease treatment and waste of resources, and provide patients with efficient, high-quality medical and nursing services [39]. Even so, their use does not guarantee the absence of over-consumption of resources: studies have shown that the effect of clinical pathway management on reducing hospital costs is limited [40, 41], and clinical pathways can improve treatment effectiveness without reducing the length of stay or hospital costs [42]. Therefore, when implementing clinical pathways, it is necessary to account for different severities of the same disease and to make the pathways more elaborate. This study has four limitations. First, although PSM reduces selection bias and confounding by matching individuals, thereby making the two groups more comparable, it only controls for the influence of measurable variables, and "hidden bias" may remain if selection on unobservable variables exists. Second, the content of the covariate indicators is limited and cannot fully reflect patients' real situations. Third, the medical records may not be sufficiently accurate, which may affect the appropriateness evaluation. Finally, kernel matching was selected for the PSM process; although this method has good practical applicability, other matching methods might be more suitable. Inappropriate admissions generated excessive health service utilization compared with appropriate admissions, especially for internal diseases. On the one hand, the NCI of inappropriately admitted inpatients showed no significant overall difference from that of appropriately admitted inpatients. On the other hand, within disease systems, no significant differences existed between the two groups in EOH, LOS or NCI, except that EOH was lower in the inappropriate group than in the appropriate group for surgical disease. Policy makers need to pay more attention to the utilization of health resources by inappropriately admitted inpatients.
Relevant medical policies should be optimized to promote the appropriateness of health service provision by medical service providers, and clinical pathway management should be made more precise. At the same time, patients should be guided to utilize health services appropriately. Abbreviations: AEP: appropriateness evaluation protocol; PSM: propensity score matching; EOH: expenditure of hospitalization; LOS: length of stay; NCI: number of clinical inspections; NRCMS: new rural cooperative medical scheme. Nekoei MM, Amiresmaili M, Goudarzi R, et al. Investigating the appropriateness of admission and hospitalization at a teaching hospital: a case of a developing country. Iran J Public Health. 2017;46(12):1720–5. Zhang Y, Zhang L, Li H, et al. Determinants of inappropriate admissions in county hospitals in rural China: a cross-sectional study. Int J Environ Res Public Health. 2018;15(6):1050. Strumwasser I, Paranjpe NV, Ronis DL, et al. Reliability and validity of utilization review criteria: appropriateness evaluation protocol, standardized medreview instrument, and intensity–severity–discharge criteria. Med Care. 1990;28(2):95. Vieira NB, Rodríguez-Vera J, Ferrão E, et al. Appropriateness of hospitalization in a ward of internal medicine-using the appropriateness evaluation protocol. Acta Med Port. 2006;19(1):67–70. Manckoundia P, Menu D, Turcu A, et al. Analysis of inappropriate admissions of residents of medicalized nursing homes to emergency departments: a prospective multicenter study in Burgundy. J Am Med Dir Assoc. 2016;1:1. https://doi.org/10.1016/j.jamda.2016.04.017. Brabrand M, Knudsen T, Hallas J. The characteristics and prognosis of patients fulfilling the appropriateness evaluation protocol in a medical admission unit; a prospective observational study. BMC Health Serv Res. 2011;11(1):152. https://doi.org/10.1186/1472-6963-11-152. Hammond CL, Pinnington LL, Phillips MF. A qualitative examination of inappropriate hospital admissions and lengths of stay. BMC Health Serv Res. 2009;9(1):44. Rosenthal GE, Sirio CA, Shepardson LB, et al. Use of intensive care units for patients with low severity of illness. Arch Intern Med. 1998;158(10):1144. Manzoli L, Romano F, Schioppa FS, et al. On the use of disease staging for clinical management: analysis of untimely admissions in the Abruzzo Region, Italy. Epidemiol Biostat Public Health. 2004. https://doi.org/10.2427/6017. Rotter T, Kinsman L, James EL, et al. Clinical pathways: effects on professional practice, patient outcomes, length of stay and hospital costs. Int J Evid Based Healthc. 2011;9(2):191–2. Chen W, Ji G, Pu F, et al. The analysis of clinical pathway on impacting hospitalization days and expense of five types of diseases. Chin Med Rec. 2013;14(7):23–5. https://doi.org/10.3969/j.issn.1672-2566.2013.07.012. Booth BM, Ludke RL, Wakefield DS, et al. Relationship between inappropriate admissions and days of care: implications for utilization management. Hosp Health Serv Adm. 1991;36(3):421. Kahneman D, Tversky A. Prospect theory: an analysis of decision under risk. Econometrica. 1979;47(2):263–91. Bianco A, Pileggi C, Rizza P, et al. An assessment of inappropriate hospital bed utilization by elderly patients in southern Italy. Aging Clin Exp Res. 2006;18(3):249–56. Velasco DL, García RS, de la Oterino FD, et al. Impact on hospital days of care due to unnecessary emergency admissions. Rev Esp Salud Pública. 2005;79(5):541. Eriksen BO, Kristiansen IS, Nord E, et al. The cost of inappropriate admissions: a study of health benefits and resource utilization in a department of internal medicine.
J Intern Med. 2010;246(4):379–87. Zhang Y, Chen Y, Zhang X, et al. Current level and determinants of inappropriate admissions to township hospitals under the new rural cooperative medical system in China: a cross-sectional study. BMC Health Serv Res. 2014;14(1):649. Yang T, Shi Y. Discuss on definition and standard of average length of stay. Chin Health Qual Manag. 2009;16(4):14–6. https://doi.org/10.3969/j.issn.1006-7515.2009.04.005. de la Oterino FD, Ridao M, Peiró S, et al. Hospital at home and conventional hospitalization. An economic evaluation. Med Clín. 1997;109(6):207–11. Bingsheng KE. Low income of chinese farmers: what are the root causes? Problem of Agric Econ. 2005;26(1):25–30. https://doi.org/10.3969/j.issn.1000-6389.2005.01.006. Funing Z, Jun H. To generate more off-farm job opportunities as the key to increase the farmers' income. Issues Agric Econ. 2007. https://doi.org/10.3969/j.issn.1000-6389.2007.01.014. Lee HC, Chang KC, Lan CF, et al. Factors associated with prolonged hospital stay for acute stroke in Taiwan. Acta Neurol Taiwanica. 2008;17(1):17–25. Faulborn J, Conway BP, Machemer R. Surgical complications of pars plana vitreous surgery. Ophthalmology. 1978;85(2):116–25. Dehejia RH, Wahba S. Propensity score-matching methods for nonexperimental causal studies. Rev Econ Stat. 1998;84(1):151–61. Caliendo M, Kopeinig S. Some practical guidance for the implementation of propensity score matching. J Econ Surv. 2010;22(1):31–72. Austin PC. Propensity-score matching in the cardiovascular surgery literature from 2004 to 2006: a systematic review and suggestions for improvement. J Thorac Cardiovasc Surg. 2007;134(5):1128–1135.e3. Yanling L, Liqing L. The study on health resource utilization performance of the new rural cooperative medical scheme: based on the empirical analysis of propensity score matching (PSM)'s. Issues Agric Econ. 2009;(10):51–8. Bai H. A comparison of propensity score matching methods for reducing selection bias. Int J Res Method Educ. 2011;34(1):81–107. Baser O. Too much ado about propensity score models? Comparing methods of propensity score matching. Value Health. 2010;9(6):377–85. Pan W, Bai H. Propensity score interval matching: using bootstrap confidence intervals for accommodating estimation errors of propensity scores. BMC Med Res Methodol. 2015;15(1):1–9. Kunitz SC. The effect of finasteride on the risk of acute urinary retention and the need for surgical treatment among men with benign prostatic hyperplasia. J Urol. 1998;160(1):557–63. Pan J, Qin X, Hsieh CR. Is the pro-competition policy an effective solution for China's public hospital reform? Health Econ Policy Law. 2016;11(4):337–57. Fu H, Li L, Li M, et al. An evaluation of systemic reforms of public hospitals: the Sanming model in China. Health Policy Plan. 2017;32(8):1135. Zhang H, Hu H, Wu C, et al. Impact of China's public hospital reform on healthcare expenditures and utilization: a case study in ZJ Province. PLoS ONE. 2015;10(11):e0143130. Zhang Y, Ma Q, Chen Y, et al. Effects of public hospital reform on inpatient expenditures in rural China. Health Econ. 2016;26:421–30. Wei-Ping LI, Er-Dan H. Two key points of public hospital reform: public welfare and incentive mechanism. Health Econ Res. 2009;5:5–7. https://doi.org/10.3969/j.issn.1004-7778.2009.05.002. Weiping L. Five measures for public hospital reform in China. Chin Health Econ. 2010;29(3):5–8. https://doi.org/10.3969/j.issn.1003-0743.2010.03.002. Christensen T. 
Performance management and public sector reform: the Norwegian Hospital Reform. Int Public Manag J. 2006;9(2):113–39. Yingmei D, Chunling L, Lihong W, et al. Analyze the implement results of single disease clinical pathway in department of neurology. Chin Med Rec. 2011;12(11):25, 44. https://doi.org/10.3969/j.issn.1672-2566.2011.11.015. Rotter T, Kinsman L, James E, et al. The effects of clinical pathways on professional practice, patient outcomes, length of stay, and hospital costs: cochrane systematic review and meta-analysis. Eval Health Prof. 2012;35(1):3. Deguang QI. Application of clinical pathway in quality management of hospital services. Chin Hosp. 2002;22(10):11–2. https://doi.org/10.3969/j.issn.1001-5329.2002.10.006. Guangfeng YE, Hong Y, Pusheng Z, et al. Effect evaluation of clinical pathway for chronic sinusitis. Chin Hosp. 2013;3:19–20. https://doi.org/10.3969/j.issn.1671-0592.2013.03.010. JJC, YCC and HXG contributed to the conception and design of the project; JJC and YZ contributed to the analysis and interpretation of the data; HML, DS, XMH, DJ, SHL, MT and ZFC contributed to the data acquisition and provided statistical analysis support; JJC drafted the article. All authors supplied critical revisions to the manuscript and gave final approval of the version to be published. All authors read and approved the final manuscript. The authors thank the county hospitals of Dingyuan in Anhui province, Huining in Gansu province and Yilong in Sichuan province for their willingness to provide us the data. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Contact information: [email protected]. The data used in this study were obtained from the hospitals with their permission, and no information related to inpatient privacy was extracted. The research methods and investigation tools in this study were approved by the Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology (IORG No: IORG0003571). This work was supported by the National Natural Science Foundation of China (No. 71473096; No. 71673101). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. School of Medicine and Health Management, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, Hubei, China: Jing-jing Chang, Ying-chun Chen, Hong-xia Gao, Yan Zhang, Hao-miao Li, Dai Su, Di Jiang, Shi-han Lei, Xiao-mei Hu, Min Tan & Zhi-fang Chen. Research Center for Rural Health Services, Hubei Province Key Research Institute of Humanities and Social Sciences, Wuhan, 430030, Hubei, China. Correspondence to Ying-chun Chen. See Table 5. Table 5 AEP criteria for county hospitals Chang, J., Chen, Y., Gao, H. et al. Effect of inappropriate admissions on hospitalization performance in county hospitals: a cross-sectional study in rural China. Cost Eff Resour Alloc 17, 8 (2019). doi:10.1186/s12962-019-0176-5 Keywords: Excessive utilization of health services; Inappropriate admissions
Forecasting of Electricity Demand by Hybrid ANN-PSO under Shadow of the COVID-19 Pandemic Mohamed Rahmoune* | Saliha Chettih Department of Electrical Engineering, University of Amar Telidji Laghouat, Laghouat 03000, Algeria [email protected] In this paper we do not use intelligent methods merely to predict future demand; rather, a hybrid PSO-ANN algorithm is used to quantify the impact of COVID-19 on electricity consumption. Two basic steps are followed. The first step establishes that the hybrid method is effective for prediction: we show that the forecast for 2019, before the onset of COVID-19, was accurate. In the second step, we apply the same hybrid algorithm after the emergence of COVID-19, i.e., to 2020, and interpret the difference between the forecast and the actual load as the impact of the epidemic. The forecasting considered here is short term, and short-term forecasting plays an important role in power system operation, helping to achieve economical generation and to avoid losses or outages. We collected four consecutive years of data sampled every quarter-hour of the day; electricity consumption in Algeria is used as the input to the PSO-ANN learning algorithm. The load forecasts of the PSO-ANN algorithm are more accurate than those of a plain ANN. With the emergence of the pandemic, however, a clear gap appears between forecast and actual load, representing economic losses in the electricity sector; an epidemic should therefore be treated as a short-term variable in order to reduce energy losses and generation cost. Keywords: particle swarm optimization, artificial neural network, short term load forecasting, COVID-19. 1. Introduction Since December 2019, an outbreak of pneumonia of unknown cause appeared for the first time in Wuhan, Hubei Province, China [1]. Following the appearance of these symptoms, the World Health Organization (WHO) designated the new disease, caused by SARS-CoV-2, as COVID-19. On 12 February 2020 there were 43,103 confirmed cases, more than 99% of them in China. Several governments around the world imposed severe restrictions to face this new situation [2, 3]. In February 2020, the Algerian government declared the country's first case of the pandemic, in the city of Blida. As a first measure, strict restrictions were imposed in order to limit the spread of the pandemic; nevertheless, it spread further day after day throughout the country. On 22 March 2020, the Algerian government imposed a second plan based on the closure of its borders (sea, air and land), with only essential traffic exempted. Under this new situation, airports (both national and international flights), roads (both principal and secondary) and seaports were practically empty, while universities, colleges, cafés, restaurants, shops and industrial activities were mostly closed. As a result of these severe lockdown rules, the Algerian power company SONELGAZ recorded a drop in energy consumption. Against this background, several technical reports and research papers have studied the impact of COVID-19 on load demand.
The International Energy Agency (IEA) confirmed a diminution of load consumption due to decreasing demand from both the service and industrial sectors when the lockdown was applied [4]. On the other hand, the Earth Institute (EI) of Columbia University demonstrated that load consumption increased in the domestic household sector [3, 5]. Aruga et al. [3] presented a practical study of the decrease in load demand in the Indian power system, applying the Autoregressive Distributed Lag (ARDL) model to estimate the difference between real and estimated consumption. Chhetri [6] studied the effect of COVID-19 on household load consumption in the staff residences of the College of Science and Technology (CST), Phuentsholing, Bhutan, by comparing the results with those of the previous two years. Forecasting is a prerequisite of power system operation for planning and optimal operation; when the forecast horizon ranges from a few hours to a few weeks, it is called short-term load forecasting (STLF), and it is a necessary condition for controlling and scheduling power system operation, maintaining load flow, and performing emergency analysis. Forecasting refers to the use of historical data, through analysis and research, to explore the evolution of a process and to make high-level assessments and judgments about its future development. Power system load forecasting refers to the use of a set of mathematical methods to systematically process past and future loads, taking into account important system operating characteristics, expansion decisions, natural conditions and social influences, so as to estimate the load value at a specified future time while meeting certain accuracy requirements. Improving the level of load forecasting helps to manage planned power, arrange network operation modes and unit maintenance plans logically, reduce power generation costs, formulate power construction plans, and improve the economic and social benefits of the power system. For electricity retailers, forecasting is useful for deciding medium- and long-term contract-signing strategies, trading strategies, and user economic accounts. Load forecasting has therefore become an important part of achieving real-time sales transformation and modern power system management at the lowest cost. Short-term load forecasting plays a major role in energy sales on the spot market: it feeds basic generation plans, system security analysis, and the day-ahead and intraday markets, so that the power company can reduce the deviation between its reported power and real energy consumption. For this purpose, an attempt is made in this paper to study and examine the effect of COVID-19 during the lockdown period on the total load demand of the whole Algerian power system by introducing an STLF technique based on an ANN-PSO model. We mainly compare the obtained results with the same periods of the previous year; for a more realistic study, the same ANN-PSO model is used throughout. 2. Problem Formulation 2.1 Particle swarm optimization Particle swarm optimization algorithms were introduced in 1995 [7]. The idea of the particle swarm algorithm arose from studying the foraging behavior of bird, fish and bee communities; it mimics the behavior of a flock of birds searching for food.
The collective cooperation of the birds enables the group to achieve the optimal goal; PSO is thus an optimization method based on swarm intelligence. Unlike the genetic algorithm, it has no "mutation" operator: each particle finds the optimum by following the best values found so far. Compared with other modern optimization methods, the obvious advantages of particle swarm optimization are that few parameters need tuning, it is simple and easy to implement, and it converges quickly. It is an evolutionary algorithm developed in recent years and an iterative optimization tool: during execution there is no crossover or mutation process, and the best particle is sought in the solution space. The advantages of PSO are simple operation, easy implementation and simple algorithm parameters, with no need for complex adjustment. The particle swarm optimization (PSO) algorithm is a stochastic, swarm-based optimization technique [8]. PSO simulates the social behavior of animals, including insects, herds, birds and fishes: these swarms cooperate to find food, and each member keeps changing its search pattern according to the learning experiences of itself and the other members [9]. At each iteration, each particle remembers its own best position (Pbest); the global optimal value (Gbest) is identified from a search over the Pbest values. The velocity and position update equations of the particle, augmented with an inertia weight (ω), are as follows [10]: $$v(t+1) = w(t)\,v(t) + c_{1}\,\mathrm{rand}_{1}(t)\,\big(p_{best}(t) - x(t)\big) + c_{2}\,\mathrm{rand}_{2}(t)\,\big(G_{best}(t) - x(t)\big) \quad (1)$$ $$x(t+1) = x(t) + v(t+1) \quad (2)$$ The variable that controls the speed of the particle is the inertia weight (ω); the algorithm is said to perform well when this parameter lies in the range 0.4 to 0.9. The parameters c1 and c2 are the cognitive and social coefficients, and c1 can be scheduled by the equation: $$c_{1}(t) = (c_{1max} - c_{1min})\,\frac{Iter_{max} - Iter(t)}{Iter_{max}} + c_{1min} \quad (3)$$ Figure 1 shows the PSO algorithm; note that its optimization logic searches for minima and that all position vectors are assessed by the objective function (for illustration, the swarm can be likened to a school of fish). A minimal implementation sketch of these update rules is given below. Figure 1. PSO algorithm concept
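The update rules in Eqs. (1) and (2) can be illustrated with a minimal Python sketch. This is not the authors' code: the inertia weight is held fixed rather than scheduled, and the test function, bounds and parameter values are assumptions for demonstration:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal PSO: velocity update per Eq. (1), position update per Eq. (2)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))              # velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()      # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
        x = np.clip(x + v, lo, hi)                                 # Eq. (2)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Example: minimize the sphere function in 3 dimensions
best_x, best_val = pso_minimize(lambda z: float(np.sum(z ** 2)), dim=3)
print(best_x, best_val)
```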
2.2 Backpropagation algorithm Biological neurons are cell bodies with dendrites, axons and synapses. The dendrites can be thought of as input terminals that receive the electrical signals sent by other cells; the axons can be considered the output terminals that transmit charges to other cells; and a single neuron can be linked to thousands of other neurons. There is a membrane potential in the cell body, and incoming current causes it to change and accumulate; when the transmembrane voltage rises above a threshold, the neuron is activated, generates an impulse, and passes it on to the next neuron. To understand this signaling process more clearly, a neuron can be compared to a water tank. Several pipes (holes) are attached to the bottom of the tank: a pipe can either drain water from the tank or inject water from another tank; the thickness of a pipe corresponds to the degree of its influence (the weight); and the pipes change the water level of the tank (the membrane potential). When the water in the tank reaches a certain height, it can be drained through another tube (the axon). The artificial neural network (ANN) builds on this abstraction of the neuron [10, 11]. A typical layered neural network has the following basic structure: layer L1 is the input layer, L2 is the hidden layer, and L3 is the output layer. Given a set of input data x1, x2, x3, ..., xn and a corresponding set of outputs y1, y2, y3, ..., yn, the network is required to perform a transformation in the hidden layer so that the expected output is obtained when the data are fed in. If the output differs from the input, this is an ordinary artificial neural network, equivalent to passing the original data through a mapping to obtain the desired output. Early researchers used only single-layer network architectures; the backpropagation (BP) algorithm made multi-layer training practical, with Rumelhart extending BP to the hidden layers in 1986 [12]. The algorithm has a parameter called the learning rate, which controls convergence towards a locally optimal solution [13]. BP training is divided into three stages: (1) feed-forward, in which predefined activation functions propagate the input patterns from the input layer to the output layer; (2) backpropagation, in which the error, i.e., the difference between the network output and the desired target, is computed and transmitted back from the output layer; and (3) weight updating, which reduces the error. Figure 2 shows the feed-forward and backpropagation structure used throughout the study. Figure 2. Backpropagation architecture 2.3 Load forecasting methods The input data are the daily electricity loads, sampled every quarter-hour, from 2017 to 2020, provided by the Algerian power company (Sonelgaz). The data are grouped, and the ANN algorithm used is the feed-forward backpropagation (BP) algorithm. The calculation begins with the following steps: 1. Group and normalize the data. 2. Initialize the particle swarm. 3. Create the feed-forward ANN. 4. Produce an initial forecast. 5. Evaluate the fitness values (Gbest and Pbest). 6. Create the backpropagation ANN. 7. Produce the final forecast. The PSO algorithm is used to optimize the weights as follows: 1. Determine the initial positions and velocities of the particles. 2. Set the number of particles. 3. Set the upper and lower limits of the search space and the number of iterations. 4. The position encodes the weights. The weights found by PSO are used as the optimal weights and biases of the ANN. The learning rate parameter was 0.8, and the maximum number of epochs was 10000. 3. Model Verification Evaluation The performance of the trained models employed in this study was evaluated using the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE). The mathematical expressions of MAPE and RMSE are as follows: $$\mathrm{MAPE} = \frac{100}{n} \sum_{t=1}^{n} \frac{\left|\widehat{y}_{t} - y_{t}\right|}{y_{t}}$$ $$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{t=1}^{n} \left(\widehat{y}_{t} - y_{t}\right)^{2}}$$ In the formulas above, n is the total number of forecast points, $\widehat{y}_{t}$ is the forecast load, and $y_{t}$ is the actual load. A short implementation of these two metrics is given below.
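A compact sketch of the two metrics (illustrative, not the authors' code):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(100.0 * np.mean(np.abs(y_pred - y_true) / y_true))

def rmse(y_true, y_pred):
    """Root mean squared error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```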
4. Problem Solution Algeria is considered a developing country with considerable growth in electricity load demand and production (Figure 3). This growth results from several conditions, such as the expansion of the industrial and business sectors and the increase of the Algerian population. Figure 3. The Algerian quarter-hourly load values for every day (Jan 2017-Dec 2019) The performance of the developed method for short-term load forecasting was tested using the MWh actual load demand of the Algerian electricity operator (OSE); the morning and evening peak magnitudes of the daily observations of electricity demand in Algeria, from January 2017 to December 2019, were loaded. In this study, we tried to verify and check the impact of the COVID-19 pandemic on the load consumption of the Algerian power system. To reach this goal, we applied the ANN-PSO based method, and we tested and compared two main cases. In the first one, we predicted the load by applying the STLF model during one day in February and one in April of 2019 in order to validate our ANN-PSO. Then, to make a fair comparison, we applied the same ANN-PSO on the same days of 2020 to demonstrate the impact of COVID-19 on load consumption. After that, we compared the obtained results in both cases, before and after the appearance of COVID-19, with the real consumed load shown in the figures. For the ANN model, we used feed-forward backpropagation and radial basis functions to train the clustered data; the real load data served as input and the future consumed load as output. In order to better validate the suggested model, we separated the data into three subsets, as follows (a sketch of such a chronological split is shown below): 1. Training set (60%), to create the model. 2. Validation set (20%), to optimize the hyper-parameters and avoid overfitting. 3. Testing set (20%), to test the model. All simulations were performed in MATLAB, on a PC with a 64-bit operating system and an Intel dual-core 2.16 GHz processor.
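For a time series, the 60/20/20 split above should respect chronological order. A minimal sketch follows; the file name is hypothetical, and the original work used MATLAB rather than Python:

```python
import numpy as np

def chronological_split(series, train=0.6, val=0.2):
    """Split a time-ordered array into train / validation / test subsets."""
    n = len(series)
    i, j = int(n * train), int(n * (train + val))
    return series[:i], series[i:j], series[j:]

load = np.loadtxt("quarter_hourly_load.txt")  # hypothetical input file
train_set, val_set, test_set = chronological_split(load)
```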
Figures 4 (a-c) show the simulation results of load forecasting for a day randomly selected from the year, contrasting the PSO-ANN forecast with the actual load at every quarter-hour. The learning input is the electricity load, and the ANN architecture has 10-5-1 layers and a learning rate of 0.8. Figure 4. Simulation results of load forecasting for a day randomly selected from the year, contrasting the PSO-ANN forecast with the actual load The PSO-ANN algorithm outperforms the plain forecasting algorithm in terms of load-prediction accuracy: the absolute MAPE value obtained for PSO-ANN is 2.01. 4.1 Expectations with the emergence of COVID-19 Figure 5. Simulation results of load forecasting for days randomly selected from the year, contrasting the PSO-ANN forecast with the actual load during the pandemic Figures 5 (a-f) show the simulation results of load forecasting for days randomly selected from the year, comparing the PSO-ANN forecast and the actual load at every quarter-hour under the pandemic. The learning input is again the electricity load, with the same 10-5-1 ANN architecture and a learning rate of 0.8. The load forecasting results show that the PSO-ANN algorithm is less accurate than before, which we interpret as a pandemic effect: the absolute MAPE value obtained for PSO-ANN is 6.78. The results observed from the load forecasts are very interesting. The forecasts show a very substantial error of 6.78 percent, implying operational losses, particularly in energy and electricity generation costs; the simulation findings thus indicate COVID-19's impact on electrical energy consumption. Inaccurate forecasts also have a significant environmental impact: Algerian power generation typically relies on thermal units that use gas and coal, so greater pollution is projected as a result. Therefore, any pandemic should be considered a short-term impact factor on future electricity consumption. From Table 1 (see Appendix), some conclusions can be drawn. The January models are evidently more accurate than the February ones, and the January models are more accurate than those of July, with maximum MAPE errors of 2.48%, 2.93%, 3.23%, 4.02%, 3.78%, 10.13%, and 7.38%, respectively; the overall performance of the PSO-ANN is not good, with a mean error of 4.85%. This indicates that the neural network predicts the output unsatisfactorily for the first four months of 2020, because the wider spread of the pandemic increases the level of uncertainty. As COVID-19 propagated worldwide, the Algerian government imposed severe stay-at-home restrictions by the end of March 2020; energy consumption decreased significantly under these regulations and recovered when relaxed rules were applied. This article treats this problem and checks the impact of the lockdown procedure on Algerian power consumption during the appearance of the COVID-19 pandemic by employing a hybrid ANN and PSO model. For this purpose, two cases were tested and compared. In the first case, we validated our proposed STLF approach by applying it to the Algerian network during one day in February and one in April of 2019. After that, the same ANN model was used on the same days of 2020 in order to illustrate the impact of the appearance of COVID-19: we first demonstrated the accuracy of the neural-network prediction before the pandemic, for 2019, and then applied the same network structure to 2020, where we observed a difference between prediction and consumption, from which we deduced the extent of the pandemic's impact. Finally, the obtained results clearly demonstrate the impact of the COVID-19 pandemic on the load consumption of the Algerian power system, due to the lockdown of both the industrial and domestic sectors. Table 1. Comparison of the MAPE of forecasts based on the proposed ANN-PSO model for the months of January and February 2020 under the shadow of the COVID-19 pandemic (columns: MAPE %, daily load peak forecast, daily actual peak load, for the months of January and February) [1] Zhang, Y.F., Ma, Z.F. (2020). Impact of the COVID-19 pandemic on mental health and quality of life among local residents in Liaoning Province, China: A cross-sectional study. Environmental Research and Public Health, 17(7): 2381. https://doi.org/10.3390/ijerph17072381 [2] Jain, A.K., Duin, R.P.W., Mao, J. (2000). Statistical pattern recognition: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1): 4-37. https://doi.org/10.1109/34.824819 [3] Aruga, K., Islam, M., Jannat, A. (2020). Effects of COVID-19 on Indian energy consumption. Sustainability, 12(14): 5616. https://doi.org/10.3390/su12145616 [4] COVID-19 Impact on Electricity. Available online: https://www.iea.org/reports/covid-19-impact-on-electricity, accessed on 16 June 2020. [5] Meinrenken, C.J., Modi, V., Mckeown, K.R., Culligan, P.J. (2020).
New data suggest COVID-19 is shifting the burden of energy costs to households. Columbia University. [6] Chhetri, R. (2020). Effects of COVID-19 pandemic on household energy consumption at college of science and technology. International Journal of Scientific Research and Engineering Development, 3(4): 1383-1387. [7] Mekhamer, S.F., Soliman, S.A., Moustafa, M.A., El-Hawary, M.E. (2003). Application of fuzzy logic for reactive-power compensation of radial distribution feeders. IEEE Transactions on Power Systems, 18(1): 206-213. https://doi.org/10.1109/MPER.2002.4311830 [8] Esmin, A.A., Coelho, R.A., Matwin, S. (2015). A review on particle swarm optimization algorithm and its variants to clustering high-dimensional data. Artificial Intelligence Review, 44(1): 23-45. https://doi.org/10.1007/s10462-013-9400-4 [9] Wang, D., Tan, D., Liu, L. (2018). Particle swarm optimization algorithm: An overview. Soft Computing, 22(2): 387-408. https://doi.org/10.1007/s00500-016-2474-6 [10] McCulloch, W.S., Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5: 115-133. https://doi.org/10.1007/BF02478259 [11] Abdullah, A.G., Suranegara, G.M., Hakim, D.L. (2014). Hybrid PSO-ANN application for improved accuracy of short-term load forecasting. WSEAS Transactions on Power Systems, 9(1): 446-451 [12] Leonard, J., Kramer, M.A. (1990). Improvement of the backpropagation algorithm for training neural networks. Computers & Chemical Engineering, 14(3): 337-341. https://doi.org/10.1016/0098-1354(90)87070-6 [13] Zhang, J.R., Zhang, J., Lok, T.M., Lyu, M.R. (2007). A hybrid particle swarm optimization–back-propagation algorithm for feedforward neural network training. Applied Mathematics And Computation, 185(2): 1026-1037. https://doi.org/10.1016/j.amc.2006.07.025
Brave Knight's Journey in N Dimensions Queen Shayra does not have the most impressive following, but she intends to expand her influence. If she could settle down at the center of an N Dimensional 3-Cube and build a palace there, she would attract all sorts of interesting folk to her realm. Of course, before she may build a palace, she needs to make sure that the N Dimensional 3-Cube is fit for colonization (no rats, etc). For this purpose she will send her Brave Knight on a journey over all 1-Cubes of a realm, except for the center, which is easy enough to deal with during the building process as it is only one cube. Brave Knight moves from cube to cube in chess knight jumps (two over in one direction, one in another), and he may not pass through a cube that he has been to already because Shayra is an optimization enthusiast. For which $N$ is there a $3\times\dots\times3$ cube with such a path? Whenever for some $N$ such a path exists, construct it. Tags: chess, combinatorics, construction. Asked by Ben Frankel. Does the knight have to return to his starting point? – Tryth, Apr 7 '15 at 12:02 @Tryth, He does not. – Ben Frankel, Apr 7 '15 at 12:05 What is considered the "center" if $N$ is even? – mdc32, Apr 7 '15 at 12:14 @mdc32, The center. Consider $N=2$, ie a 3x3 square, it is easy to pinpoint the center. Or, if we create a coordinate system where one corner is $(0, \dots, 0)$, then the center is $(1, \dots, 1)$. – Ben Frankel, Apr 7 '15 at 12:17 @IvoBeckers, Thing is, it isn't possible to reach the center anyways. If it's possible to jump from the frame into the center, then it's also possible to jump from the center to the frame. This is clearly impossible, if we consider the direction where we move 2 over: we break out of the region. – Ben Frankel, Apr 7 '15 at 12:43 The knight cannot succeed when $N$ is odd. Color the squares in checkerboard fashion, with a corner black. When $N$ is odd, the removed square is white, so there will be two more black squares altogether. Since tours must alternate colors, none can exist. We prove by induction that, for even $N$, an $N$-palace (i.e. an $N$-cube with its center removed) can be cycled (toured, with start and end joined). This clearly holds when $N=2$. Given an $N$-palace, its cells can be described with coordinates $(x_1,\dots,x_n)$, where the $x_i$ are all either $0,1$ or $2$. Take any cell where $x_1=x_2=1$, and color it red. The red cells themselves form an $(N-2)$-palace. Furthermore, surrounding each red cell are 8 cells with coordinates $$ (*,*,x_3,\dots,x_{n}) $$ where the $*$'s can be anything (except both $1$, since that would be the central red cell), while the $x_i$ are fixed. These $8$ cells form a $2$-palace: I'll refer to these as the "ring" around the red cell $(1,1,x_3,\dots,x_n)$. Color such squares black. Almost every cell is now colored. The remaining cells, the ones whose last $N-2$ coordinates are $1$, form a ring around the removed central cell: color these blue. Here is the current situation, illustrated when $N=4$. The coordinates $(x,y,w,z)$ mean the following: $w,z$ tell you which slanted square you are in, and $x,y$ tell you your location in that square. $w$ is horizontal, $z$ vertical. $x$ is back/front, $y$ is vertical. We first show how to cycle each color; then we combine these 3 disjoint cycles into one big one. The blue cells themselves can be toured (this is just the $N=2$ case).
By induction, the red cells can be cyclically toured. The black cells can be toured as well: to do this, hop from ring to ring, following the tour of the red cells. When you arrive at each ring, perform a tour of it. There are two ways to cycle around a ring: alternate these each time. This will ensure that when you return to the ring around your first red cell, you will be in the correct "phase" (since you switch phase each time, and there are an even number of red cells). Now, we have partitioned the $N$-palace into three disjoint cycles, and we must combine these into one cycle. Any two disjoint cycles can be combined as long as there are consecutive points on one which are joined to two consecutive points on the other. Here is, for example, how the black cycle can be joined to the red. The $\dots$'s mean the rest of the coordinates are all 1. $$ \begin{array}{ccc} (0,1,1,0,\dots) &\to \stackrel{\text{black tour}}{\cdots}\to &(0,1,0,2,\dots)\\ \uparrow && \downarrow\\ \color{red}{(1,1,1,2,\dots)}&\color{red}{\leftarrow \stackrel{\text{red tour}}{\cdots}\leftarrow} &\color{red}{(1,1,0,0,\dots)} \end{array} $$ The blue cycle can be joined to the now black/red cycle via $$ \begin{array}{ccc} (0,2,2,1,\dots) &\to \stackrel{\text{black tour}}{\cdots}\to &(1,0,2,1,\dots)\\ \uparrow && \downarrow\\ \color{blue}{(0,0,1,1,\dots)}&\color{blue}{\leftarrow \stackrel{\text{blue tour}}{\cdots}\leftarrow} &\color{blue}{(1,2,1,1,\dots)} \end{array} $$ – answered by Mike Earnest. I agree with your first two spoilers, but your third is a bit lacking (also, irrelevant, I meant for the center cube to be the location for Shayra's palace while the rest is land for the peasants). You have left to explain how you reach all of the inductively tourable cubes in the right sequence, and maybe you jump from one tourable cube to another at a place where you simply cannot tour the whole thing (tourable may mean from corner to corner or something). – Ben Frankel, Apr 7 '15 at 17:22 @BenFrankel I'll get back to you...great puzzle! – Mike Earnest, Apr 7 '15 at 17:55 For a 3x3 square, imagine numbering it 1-9 (with the center being 5). Then the knight can traverse 1,6,7,2,9,4,3,8 And since 8 can traverse to 1, then ANY position on the outer ring can be completed with jumps. – Jiminion, Apr 7 '15 at 18:52 @BenFrankel My construction has been filled in, though it is complicated. It's hard to paint an $N$-dimensional picture. I could certainly write a program to output the Hamiltonian cycle. – Mike Earnest, Apr 8 '15 at 0:38 @Jiminion From the question: "he may not pass through a cube that he has been to already". By the time you get to 3 and want to go to 8, you need to pass through 2 or 6, both of which have been visited already. Is there a solution for $N=2$? I suspect not: having visited all but 1 squares, the final jump has to go through at least one other non-centre square, and there aren't any such unvisited squares left. – Lawrence, Apr 8 '15 at 0:48
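As a quick computational check of the $N=2$ base case used in the answer above (an illustrative sketch, not part of the original thread), a brute-force search over the eight non-center cells of the $3\times3$ board confirms that they form a single knight's cycle:

```python
from itertools import permutations

# Cells of the 3x3 board, excluding the center (1, 1)
ring = [(r, c) for r in range(3) for c in range(3) if (r, c) != (1, 1)]

def knight_move(a, b):
    dr, dc = abs(a[0] - b[0]), abs(a[1] - b[1])
    return {dr, dc} == {1, 2}

# Count directed Hamiltonian cycles, fixing the first cell to remove rotations
start, cycles = ring[0], 0
for perm in permutations(ring[1:]):
    path = (start,) + perm
    if all(knight_move(path[i], path[i + 1]) for i in range(7)) \
            and knight_move(path[-1], start):
        cycles += 1
print(cycles)  # 2: one undirected cycle, traversed in either direction
```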
We will show that we can jump through 3-cube starting on one red and ending at the other red for $N \geq 2$. Proof will proceed by induction, with the base case $N = 2$ being pretty straighforward. For the inductive case, start on the red (0, 1, 1, 1, ..., 1). Now jump to the green (2, 0, 1, 1, 1, ..., 1). You are now on the N-1 dimensional 3-cube with coordinates (2, *, *, *, .., *), where * means that coordinate can be 0, 1, or 2. In this subcube, the greens play the same role as the reds, and the red plays the role of the center. Hence, by induction, we can now jump all the way through this subcube avoiding only the red, starting on a green and ending on a green. The green we end on has coordinates, (2, 2, 1, 1, 1, ..., 1). We now jump to the subcube (1, *, *, *, ..., *). In particular, we jump from (2, 2, 1, 1, 1, ..., 1) to (1, 0, 1, 1, 1, ..., 1). By induction, we can now jump through this subcube and end on the green (1, 2, 1, 1, 1, ..., 1). Finally, we jump to the subcube (0, *, *, *, ..., *). We go from (1, 2, 1, 1, 1, ..., 1) to the green (0, 0, 1, 1, 1, ..., 1). By induction we can eventually, end on green (0, 2, 1, 1, 1, ..., 1). We can then jump to the red (2, 1, 1, 1, ..., 1), which completes the path. Tyler SeacrestTyler Seacrest $\begingroup$ Your answer lacks a value for N. I'll assume you mean N = infinite, but it's not stated in the answer. $\endgroup$ – Tim Couwelier Apr 7 '15 at 16:04 $\begingroup$ Your base case doesn't work: you can tour the $3\times 3$ square, but your starting square and ending square cannot be $(0,1)$ and $(2,1)$. These two squares must differ by a knight's move. $\endgroup$ – Mike Earnest Apr 7 '15 at 16:05 $\begingroup$ @MikeEarnest: Good point ... looks like I rushed that case a bit and now the proof doesn't work! $\endgroup$ – Tyler Seacrest Apr 7 '15 at 16:46 $\begingroup$ @TimCouwelier: I say at the start of the second paragraph that the proof works for all $N \geq 2$. Except as Mike points out the proof doesn't work ... $\endgroup$ – Tyler Seacrest Apr 7 '15 at 16:47 Not the answer you're looking for? Browse other questions tagged chess combinatorics construction or ask your own question. Rotationpuzzle in hex - The journey beyond the tomb The knight's game Knight's Tour Question
Analysis of networks of host proteins in the early time points following HIV transduction Éva Csősz, Ferenc Tóth, Mohamed Mahdi, George Tsaprailis, Miklós Emri & József Tőzsér BMC Bioinformatics volume 20, Article number: 398 (2019) Utilization of quantitative proteomics data on the network level is still a challenge in proteomics data analysis. Currently existing models use sophisticated, sometimes hard to implement analysis techniques. Our aim was to generate a relatively simple strategy for quantitative proteomics data analysis in order to utilize as much of the data generated in a proteomics experiment as possible. In this study, we applied label-free proteomics and generated a network model utilizing both qualitative and quantitative data, in order to examine the early host response to Human Immunodeficiency Virus type 1 (HIV-1). A weighted network model was generated based on the amounts of proteins measured by mass spectrometry, and the analysis of weighted networks and functional sub-networks revealed upregulation of proteins involved in translation, transcription, and DNA condensation in the early phase of the viral life-cycle. A relatively simple strategy for network analysis was created and applied to examine the effect of HIV-1 on the host cellular proteome. We believe that our model may prove beneficial in creating algorithms allowing for both quantitative and qualitative studies of proteome changes in various biological and pathological processes by quantitative mass spectrometry. Utilization of state-of-the-art proteomics methods can generate thousands of data points, and extensive information on the proteins present in the sample can be obtained. High-resolution shotgun proteomics can provide both qualitative and quantitative information about proteins, and can be applied in an unbiased way to study the complete proteome [1, 2]. Despite the high amount of data available, it is sometimes difficult to extract relevant biological information, in which case sophisticated analytical methods and capable software are needed [3]. Network analysis is widely used in biological data analysis for the examination of transcriptomic, proteomic or metabolomic datasets [4,5,6], and for analyzing interactions between various molecules [7, 8]. In the cellular environment, most proteins exert their biological function as part of a complex, or through interactions with other proteins; therefore, the application of protein-protein interaction (PPI) analysis methods is advantageous [9]. PPI networks can provide a new layer of information, allowing for the utilization of currently available data, in addition to possibly unravelling hidden biological phenomena [10, 11]. New concepts in network analysis are emerging, helping to understand biological complexity [12]. The replication cycle of human immunodeficiency virus-1 (HIV-1) is a complex, multi-step, and highly regulated process. The cycle typically begins with viral attachment to cell surface receptors and ends with the production of infectious virions. Due to the multiple processes involved, the replication cycle has been classically divided into two distinct phases: the early and the late phase.
The early phase encompasses cell binding, fusion, internalization, uncoating, reverse transcription, as well as the integration of the viral cDNA into the host genome. On the other hand, transcription of the viral genome, export of viral RNA, assembly of virions at the plasma membrane, as well as budding and maturation of the released virions are parts of the late phase of the replication cycle [13, 14]. While late phase events are relatively well characterized, the precise mechanism and regulation of the early phase steps remain poorly understood. Genomics and proteomics studies have been carried out to investigate how HIV-1 hijacks the host cellular machinery while avoiding being sensed by host immune responses. siRNA screens were implemented to study the cellular genes and proteins required for HIV-1 infection [9, 15, 16], HIV-1 protein–host protein protein-protein interaction networks were generated, and the data were deposited in the HIV-1 Human Interaction Database [17]. In the case of HIV infection, network-based examinations have identified perturbed host cellular systems, such as the proteasome and transcriptional regulation, and have revealed that HIV-1 preferentially interacts with highly connected and central cellular proteins [18,19,20]. In this study, we generated the protein expression profiles of cells during early HIV-1 infection using protein mass spectrometry, and integrated the acquired data with a knowledge-based protein-protein interaction network to understand how the cellular network is perturbed by HIV. Our aim was to analyze the proteomic landscape of the early stage of HIV-1 based lentiviral vector transduction. 293T cells were infected with a VSV-G pseudotyped HIV-1 vector, and 0, 4 and 12 h post-infection, cell lysates were harvested. Label-free proteomics was applied to examine protein-level changes. Duplicate samples were collected at three time points (0, 4, and 12 h post-transduction) for both the virus-transduced samples and the control, mock-transduced samples. The six virus-treated and six control samples collected were analyzed in duplicate, allowing for the measurement of two technical and two biological replicates for each time point. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium [21] via the PRIDE partner repository with the dataset identifier PXD010436 and https://doi.org/10.6019/PXD010436. Identified proteins (Additional file 1) were manually curated, and in the case of non-human or non-viral identifications, the sequences were verified. In many instances, proteins were mistakenly designated as non-human, in which case the annotation was corrected. In a few instances, the non-human proteins could not be matched to any human or viral protein, and consequently these sequences were omitted from further analyses. The data for Rhodobacter capsulatus cytochrome c, bovine pancreatic trypsin inhibitor, bovine serum albumin and pig trypsin were kept to serve as references for the quantitative analyses, but were not used for further computations. The relative amount of each protein was computed based on spectral counting, and for each protein the mean of the results of the four analyses corresponding to each condition was calculated (Additional file 2). First, a qualitative analysis was carried out to detect newly expressed or down-regulated proteins in the first 4 or 12 h after HIV-1 pseudovirion transduction. Only those proteins were considered for statistical analysis that could be quantified in at least 2 out of 4 replicates of a given condition and were not quantified in the other conditions. HIST1H1E, HNRNPL, PRRC2A and TRIM28 were quantified only at H04, and there were no proteins quantified solely at the H12 time point (Additional files 1, 2). HIST1H1E interacts with linker DNA between nucleosomes and functions in DNA condensation, HNRNPL and TRIM28 play a role in translation and transcription, while PRRC2A plays a role in inflammatory processes. Some of the proteins were quantified at all time points except H12. These include ALYREF, CCDC86, CSDA, COX5A, HN1, MYL6, PPIF, SEPT2, SRSF6, TCOF1, and TPM3 (Additional file 1). These proteins participate in RNA binding (ALYREF, CCDC86, SRSF6, TCOF1), DNA binding (CSDA), protein folding (PPIF), energy generation (COX5A), signalization (HN1) and cytoskeleton assembly (MYL6, SEPT2, TPM3). In order to examine changes in the amounts of proteins, statistical analysis was carried out (Additional file 3). The amounts of CSDA, EEF1A1, EEF1D, HN1, NPM1, PGAM1 and SRSF6 increased significantly, while those of HIST1H1D and HSPA5 decreased significantly in H04 (Fig. 1). It is interesting to note that after peaking in H04, CSDA, HN1 and SRSF6 were not quantified in H12. In H12, compared to C12, the amounts of COX6B1 and PDIA3 increased, while those of EEF2 and GAPDH decreased significantly (Fig. 1). When the functions of the proteins showing statistically significant changes were examined, we observed an increase in the amount of proteins implicated in RNA binding in H04, and an overall decrease in their amount in H12. Fig. 1 Relative protein amounts showing statistically significant changes in HIV-1 treated samples compared to controls. The x axis shows the time points of sample collection in hours, and the y axis shows the relative protein amounts. Blue color refers to the control (C), and yellow color to the HIV-1 treated sample (H). To broaden our insight, and to better understand the possible functional associations of protein changes upon HIV-1 pseudovirion transduction, we searched for the available protein-protein interactions of the proteins quantified in our datasets. For the evaluation of the interactions, the STRING database was used, which contains information on known and predicted, direct physical and indirect functional protein-protein interactions [22]. Only interactions of high confidence (interaction score in the STRING database > 0.95) were used. Initially, five binary interaction networks were generated: NW0 combined proteins from mock- and virion-treated cell lysates collected at the 0 time point, the C04 and C12 networks contained proteins from the mock-treated cells collected 4 h and 12 h post-infection, respectively, and the H04 and H12 networks contained proteins from the HIV-1 transduced cells collected at the 4 and 12 h time points, respectively (Fig. 2). The number of nodes and the number of edges of the networks show a decreasing trend over time, with a marked shrinkage in H12. Fig. 2 Protein-protein interaction networks of the proteins quantified in each condition. The PPI networks were generated by STRING (confidence 0.95) using the list of quantified proteins presented in Additional file 2 for each condition. The number of nodes (N) and the number of edges (E) according to STRING is indicated for each network. a. PPI network of proteins at the 0 h time point (NW), b. PPI network of proteins at the 4 h time point in control, mock-transduced cells (C04).
Only those proteins were considered for statistical analysis that could be quantified in at least 2 out of 4 replicates and were not quantified in the other conditions. HIST1H1E, HNRNPL, PRRC2A and TRIM28 were quantified only at H04, and there were no proteins quantified solely at the H12 time point (Additional files 1, 2). HIST1H1E interacts with linker DNA between nucleosomes and functions in DNA condensation, HNRNPL and TRIM28 play a role in translation and transcription, while PRRC2A plays a role in inflammatory processes. Some of the proteins were quantified at all time points except H12. These include ALYREF, CCDC86, CSDA, COX5A, HN1, MYL6, PPIF, SEPT2, SRSF6, TCOF1, and TPM3 (Additional file 1). These proteins participate in RNA binding (ALYREF, CCDC86, SRSF6, TCOF1), DNA binding (CSDA), protein folding (PPIF), energy generation (COX5A), signaling (HN1) and cytoskeleton assembly (MYL6, SEPT2, TPM3). In order to examine changes in the amount of proteins, statistical analysis was carried out (Additional file 3). The amount of CSDA, EEF1A1, EEF1D, HN1, NPM1, PGAM1 and SRSF6 increased significantly, while that of HIST1H1D and HSPA5 significantly decreased in H04 (Fig. 1). It is interesting to note that after peaking in H04, CSDA, HN1 and SRSF6 were not quantified in H12. In H12, compared to C12, the amount of COX6B1 and PDIA3 increased, while that of EEF2 and GAPDH decreased significantly (Fig. 1). When the function of proteins showing statistically significant changes was examined, we observed an increase in the amount of proteins implicated in RNA binding in H04, and an overall decrease in their amount in H12. Fig. 1: Relative protein amounts showing statistically significant changes in HIV-1 treated samples compared to controls. The x axis shows the time points of sample collection in hours, and the y axis shows the relative protein amounts. Blue color refers to control (C), and yellow color to the HIV-1 treated sample (H). To broaden our insight, and to better understand the possible functional associations of protein changes upon HIV-1 pseudovirion transduction, we searched for the available protein-protein interactions of the quantified proteins in our datasets. For the evaluation of the interactions, the STRING database was used, which contains information on known and predicted, direct physical, and indirect functional protein-protein interactions [22]. Only interactions of high confidence (interaction score in the STRING database > 0.95) were used. Initially, five binary interaction networks were generated: NW0 combined proteins from mock- and virion-treated cell lysates collected at the 0 time-point, the C04 and C12 networks contained proteins from the mock-treated cells collected 4 h and 12 h post-infection, respectively, and the H04 and H12 networks contained proteins from the HIV-1 transduced cells collected at the 4 and 12 h time-points, respectively (Fig. 2). The number of nodes and the number of edges of the networks show a decreasing trend over time, with a marked shrinkage in H12. Fig. 2: Protein-protein interaction network of the proteins quantified in each condition. The PPI networks were generated by STRING (confidence 0.95) using the list of quantified proteins presented in Additional file 2 for each condition. The number of nodes (N) and the number of edges (E) according to STRING is indicated for each network. a. PPI network of proteins at the 0 h time point (NW), b. PPI network of proteins at the 4 h time point in control, mock-transduced cells (C04).
c. PPI network of proteins at the 12 h time point in control, mock-transduced cells (C12). d. PPI network of proteins at the 4 h time point in HIV vector-transduced cells (H04). e. PPI network of proteins at the 12 h time point in HIV vector-transduced cells (H12). Red dots represent proteins belonging to the transport GO term, blue dots indicate proteins having a role in translation, while green dots indicate proteins with a role in RNA splicing, according to the functional enrichment analysis provided by STRING. These binary networks provide information solely on the possibility of interaction between two proteins (Fig. 2, Fig. 3a, b); hence, in order to gain more realistic information, protein amounts measured by spectral counting were incorporated into the network using a simple statistical model. In this way, binary edges were transformed into estimates of each protein pair's interaction intensity in the sample, which is proportional to the amounts of the proteins participating in the interaction, and inversely proportional to the number of interactions (Fig. 3c). The weighted networks were examined, and the number of nodes (N), edges (E), network strength (S), edge density (D) and the functional and non-functional edge ratio (R) were calculated (Fig. 4). Fig. 3: Network generation pipeline. Representative network drawn by circlize, showing data for sample 1 and interactions generated by STRING. a. Binary network containing all the identified proteins arranged in alphabetical order on the external ring of the circular plot. Thin black curves show the possible interactions generated by STRING. b. Binary network containing only the proteins with interactions. The isolated proteins (i.e. without any connection) were eliminated. Thin black curves show the possible interactions generated by STRING. c. Weighted network containing the interacting proteins. Orange lines represent interactions; the higher the intensity of the color and the thickness of the line, the higher the interaction strength. d. Weighted network with functional feature. A randomly selected GO function (GO:0044765) is used to illustrate the functional network. Red proteins are part of the functional sub-network, while black proteins are not, being considered as non-functional proteins. The weighted interactions are color-coded according to the protein-pair classification: functional – functional interactions are orange, non-functional – non-functional interactions are gray, and functional – non-functional interactions are green. The interaction strength is represented by the intensity of the color and thickness of the line: the higher the intensity of the color and the thickness of the line, the higher the interaction strength. Fig. 4: Network parameters. a. Number of nodes (N), b. number of edges (E), c. network strength (S), d. strength density or edge density (D) for the networks observed in the examined conditions. The y axis shows the mean value characteristic for each parameter, and the x axis indicates the time points. Blue color refers to control, while yellow color to the HIV-1 treated conditions. The number of nodes decreased significantly in H12, indicating the network shrinkage in H12 observed in the binary network (Fig. 4a). The number of edges and network strength did not change in a statistically significant manner (Fig. 4b, c); however, edge density decreased significantly in H04 while increasing significantly in H12 (Fig. 4d).
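For reference, the construction of these binary networks (Fig. 3a, b) can be sketched in a few lines of Python with pandas and networkx; this is a hedged outline, where the file names and the columns gene, protein1, protein2, and combined_score (taken to be rescaled to 0-1, matching the 0.95 cut-off in the text) are placeholder assumptions standing in for the per-condition protein list and a STRING interaction export.

```python
import pandas as pd
import networkx as nx

# Placeholder inputs: per-condition quantified protein list and a STRING
# interaction export; all file and column names are assumptions.
proteins = pd.read_csv("quantified_proteins.csv")["gene"]
edges = pd.read_csv("string_edges.tsv", sep="\t")
edges = edges[edges["combined_score"] > 0.95]        # high-confidence only

B = nx.Graph()
B.add_nodes_from(proteins)                           # Fig. 3a: all proteins
B.add_edges_from(edges[["protein1", "protein2"]]
                 .itertuples(index=False, name=None))
B.remove_nodes_from(list(nx.isolates(B)))            # Fig. 3b: drop isolated proteins
print(B.number_of_nodes(), B.number_of_edges())      # N and E of the binary network
```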
These changes indicate the presence of a less interactive network in H04, and a smaller, yet more active, PPI network in H12 (Fig. 4a, d). Next, we analyzed the functionality of the networks and generated functional sub-networks of proteins belonging to GO terms. All the Molecular Function, Biological Process and Cellular Component GO terms listed as enriched by STRING in C04, H04, C12 and H12, where at least 10 proteins per GO term were present in any of the networks, were considered. To visualize network changes, the GO:0044765 term was chosen randomly (Fig. 3d), and the change of this sub-network was visualized at all time points (Fig. 5). Proteins present in a given GO term listed as enriched by STRING were considered to be part of the functional sub-network (f), whereas proteins not part of the specific GO term were considered non-functional (n) proteins. Three types of interactions were analyzed: i) interactions between proteins belonging to the functional sub-network (f), ii) interactions between proteins not belonging to the functional sub-network (n), and iii) interactions between functional and non-functional proteins (c – cross) (Fig. 3d). In order to better understand the changes, a statistical approach was applied and the following network parameters were calculated: for each functional (f) network of proteins belonging to a specific GO term, the Nf, Ef, Sf, Df, and Rf parameters; for the non-functional (n) proteins, the Nn, En, Sn, Dn, and Rn parameters; and for the interactions between the functional and non-functional proteins (c), the Ec, Sc, Dc, and Rc parameters (Additional file 4). Fig. 5: Network changes visualized for a representative functional sub-network. The representative figure shows the changes in the weighted networks for proteins belonging to the randomly selected GO:0044765 GO term. NW represents the weighted PPI network of interacting proteins at the 0 h time point; C04 and C12 correspond to the PPI networks of interacting proteins at the 4 h and 12 h time points, respectively, in control, mock-transduced cells. H04 and H12 represent the weighted PPI networks of interacting proteins at the 4 h and 12 h time points, respectively, in HIV vector-transduced cells. Red proteins are part of the functional sub-network, while black proteins are non-functional proteins. The weighted interactions are color-coded according to the protein-pair classification: functional – functional interactions are orange, non-functional – non-functional interactions are gray, and functional – non-functional interactions are green. The interaction strength is represented by the intensity of the color and thickness of the line: the higher the intensity of the color and the thickness of the line, the higher the interaction strength. According to our hypothesis, those GO functions or functional sub-networks might be responsible for the changes induced by HIV-1 where the parameters in the functional network change significantly, whereas the non-functional network shows no statistically significant changes. At the same time, those GO functions where the parameters in the functional network do not change in a statistically significant manner, yet do so in the non-functional sub-network, are thought not to explain the changes related to HIV-1 transduction. After statistical analysis and FDR correction of the results (Additional file 5), statistically significant differences were observed for some GO terms.
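Before turning to the parameter-level results, note that the f/n/c edge classification underlying them can be outlined as follows; this is a minimal Python sketch, assuming the weighted network is held as a networkx graph and the GO-term membership as a set of protein names (both assumed inputs, not the authors' R implementation).

```python
import networkx as nx

def classify_edges(W: nx.Graph, functional: set):
    """Split the edges of a weighted network into functional (f),
    non-functional (n) and cross (c) classes, according to whether
    their endpoints belong to a given GO term."""
    f_edges, n_edges, c_edges = [], [], []
    for u, v, data in W.edges(data=True):
        in_u, in_v = u in functional, v in functional
        if in_u and in_v:
            f_edges.append((u, v, data["weight"]))
        elif not in_u and not in_v:
            n_edges.append((u, v, data["weight"]))
        else:
            c_edges.append((u, v, data["weight"]))
    return f_edges, n_edges, c_edges
```

Summing the edge weights within each class then yields the Sf, Sn and Sc strengths from which the density- and ratio-type parameters (Df, Rf, etc.) are derived.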
No significant differences in edge and strength values were observed in any of the functional sub-networks (Ef and Sf), and the number of nodes was significantly reduced in H12 only in the case of 5 functional sub-networks (Additional file 6). Considering edge density (D) and ratio (R), only those GO terms were further considered where (i) the significant difference was present only in the functional sub-network (Df and Rf, respectively), or (ii) the significant difference was present both in the functional sub-network (Df and Rf) and in the cross network (Dc and Rc) (Additional file 6). According to our hypothesis, proteins belonging to the GO terms listed in Table 1 and Table 2 are responsible for the changes of the cellular proteome observed in the H04 and H12 networks in response to HIV-1 transduction. In the H04 sample, an increase in the node number (proteins present in the network) was observed; however, this increase was not significant. At the same time, a global decrease in interactivity, represented by the number of edges, was noticed. Proteins which might be responsible for this reduced interactivity belong to RNA processing-related functions (splicing, RNA synthesis, RNA catabolism, translation, transcription), regulation of cell death, regulation of cellular response to stress, the viral life cycle (viral gene expression, viral transcription, viral life cycle) and protein localization, as well as some very general GO terms, such as protein binding, cellular macromolecular biosynthetic process, purine nucleotide binding, organic substance transport, etc. (Table 1). In spite of the reduced global interactivity, some functional sub-networks, such as viral process, protein kinase binding, multi-organism process, de novo protein folding and protein complex subunit organization, show significantly increased interactivity (Table 1). Table 1 List of GO terms with significantly different changes in the functional sub-network in H04 Table 2 List of GO terms with significantly different changes in the functional sub-networks in H12 In H12, a statistically significant reduction of the node numbers and shrinkage of the network, along with a significant increase in interactivity, was observed (Fig. 4). The proteins responsible for the increased interactivity (increased Df and Rf values) belong to RNA binding, RNA catabolic process, viral life cycle, viral process, negative regulation of cell death, de novo posttranslational protein folding, protein complex subunit organization, cellular metabolic process, etc. (Table 2). The cell junction and the myelin sheath GO terms also appear in H12; however, when the proteins belonging to these GO terms were examined, it was found that they are part of more general GO terms from the list, such as intracellular non-membrane-bounded organelle or nucleus, extracellular space, etc. In the case of the biosynthetic process functional sub-network (GO:0009058), a decrease in Rf was observed. Genome-wide RNA interference-based screens were carried out to evaluate more than 20,000 human gene products and to determine their alteration in HIV infection [23, 24]. A previous study showed an overall downregulation of cellular genes encoding nuclear proteins, and of genes involved in DNA replication and protein synthesis, in the early stages of the early phase of viral infection [25], a pattern that was confirmed by our analysis (Table 1).
Upregulation of cellular genes was only found to occur at a later time point, peaking at 22 h post-infection; additionally, analysis of T cells showed that the most profound changes in the cellular proteome appear 24 h after infection, at time points related to the late phase of infection [26]. It was found that up to 300 host cellular genes are involved in the life cycle of HIV-1, and while the identity of the genes was divergent among different studies, they were found largely to belong to similar pathways [27, 28]. Network analysis is widely used in the examination of protein-protein interactions, providing information on protein changes at a different level and giving a broader view of the alterations and perturbations of biological systems as a result of a particular treatment. During the analysis of PPIs, the presence or absence of a protein is evaluated, and the interactions, in light of existing evidence (e.g., experimental data, literature search, computational methods), are displayed [29, 30]. STRING is a widely used, constantly updated and expanding database of PPIs [22], used for the examination of verified or potential interactions among proteins of interest. These networks are rich in information on protein clusters and functions based on Gene Ontology (GO); however, the enrichment of GO terms does not handle protein amounts, therefore reflecting theoretical, rather than actual, parameters. Meanwhile, the use of highly accurate mass spectrometry techniques provides analytical data that are rich in quantity as well as quality. Few attempts have been made to introduce quantitative data into network analysis [31, 32]. In order to implement quantitative data into the PPI networks, instead of the widely used binary networks, a weighted network of the kind often utilized in information science [33] was used in this study. Taking into account the protein amount reflected by the normalized total spectra, instead of a probabilistic assumption [32], we chose a simple statistical model. In our model, a protein pair's interaction is proportional to their amounts in the sample, and inversely proportional to the number of possible interactions listed in the PPI network generated by STRING for the proteins present in the sample. After including the interaction density values, calculated by our method, as network edge weights, we could determine a set of weighted network parameters for the statistical investigation of network alterations. In our study, we aimed at characterizing the cellular proteome changes in the early stage of HIV-1 infection, within the 0–12 h time interval. Generation of weighted networks and analysis of functional sub-networks revealed that the dynamics of protein-level changes in sub-networks is different in HIV-1 transduced samples 12 h post-infection. Expectedly, in the very early stages of infection, proteins involved in translation, transcription and DNA condensation were upregulated, notably HIST1H1E, HNRNPL, PRRC2A and TRIM28. Some other proteins, such as ALYREF, CCDC86, CSDA, COX5A, HN1, MYL6, PPIF, SEPT2, SRSF6, TCOF1, and TPM3, prominently associated with RNA binding, cytoskeleton assembly, and signaling, were quantified at all time points except H12. Examining the binary networks, two protein clusters could be observed: one comprising proteins having a role in translation and ribosome biogenesis, and the other containing proteins from the hnRNP family with a role in RNA splicing (Fig. 2).
The functional sub-network containing the ribosome component proteins did not show a statistically significant change, and with this we can demonstrate at the protein level the same findings observed at the gene level by Kleinman et al., who could not observe a statistically significant difference for genes having a role in ribosome biogenesis at the 12 h time point [34]. Regarding the other cluster, containing mainly hnRNP proteins, we could not observe a statistically significant change in network parameters among the different time points. However, literature data show that host RNA splicing is altered upon HIV-1 infection, and that the level of class A/B and H hnRNP proteins changes, initially decreasing 6–12 days post-infection and thereafter increasing [35]. At the same time, it was shown that some proteins of this cluster, such as HNRNPH1, HNRNPU and SRSF6, are so-called HIV-1 dependency factors [36] and are required by HIV-1. These data are derived from later time points, as most experiments do not examine such early events at 4 h or 12 h post-infection. Considering the results of the analyses based on the weighted networks, we could identify increased cellular metabolic processes comprising increased RNA binding and catabolism and cellular component assembly, along with an increased viral process and inhibition of apoptosis (increased negative regulation of apoptotic process). RNA binding was shown to be increased upon RNA virus infection; Garcia-Moreno et al. observed an increased activity of RNA-binding proteins upon Sindbis virus (SINV) infection at the 18 h time point [37]. At the same time, they observed an increased binding of RNA-binding proteins to viral RNAs. This implies a massive downregulation of the host mRNAs 18 h post-infection, involving mainly the housekeeping genes [37]. In the case of HIV-1 infection, global siRNA studies indicate that a statistically significant portion of the host factors participate in mRNA transport [18]. Cells infected with HIV-1 usually die by apoptosis; hence, prevention of apoptosis might help maintain the viral reservoir in the host [18, 38]. It was shown that a fraction of infected immune cells survive, highlighting the importance of escaping from apoptosis in the development of viral reservoirs [38]. A mixed pattern of upregulation and downregulation of genes involved in antiviral defense and cell death signaling was observed by Mohammadi et al. at early time points [24]. Inhibition of apoptosis increases virus production in HIV-1 infected cells [39], and modulation of this system might be a good possibility for therapeutic intervention [40]. Based on our data on the weighted networks, HSPA8 shows an increased interactivity in the H12 datasets (Fig. 5a). HSPA8 and other members of the Hsp70 family play a key role during viral infection, either as receptors for the virus, as chaperones aiding protein folding, or as transporters between organelles [18, 41, 42]. Hijacking of the host system by HIV-1 is a complex phenomenon with early and late events. In the early phases of the viral infection, the virus utilizes the cellular RNA and protein production machinery for its replication. It was observed that by 15 h post-infection, all viral transcripts were produced by the cells, and 18 h after infection, virus budding commences [24]. Chang et al., using next-generation sequencing, observed a considerable viral mRNA level in infected cells 12 h post-infection [43].
In this sense, examining the host response 48 h [15, 44] or 6 days post-infection [45] cannot provide us with information on the very early events. Observations made by Kleinman et al., analyzing the dataset generated by Chang et al., show that at 12 h post-infection the gene expression profiles are similar to the mock samples, and clear distinctions could only be made after 24 h, highlighting the necessity of more sensitive methods for the examination of the early events of HIV-1 infection. It is challenging to properly compare our results to those presented in the scientific literature, since the commonly used starting time point examined in the case of HIV-1 is 48 h post-infection. However, considering the findings presented by different groups, either on HIV-1 or on other RNA virus infections, our findings are in good agreement with previous studies analyzing transcriptomic and proteomic changes upon virus infection at these very early time points. The use of non-primary HIV-1 cell targets, such as HEK cells, and of pseudotyped virions, as well as the application of data-dependent sampling [46], may indeed limit the interpretation of the results. The utilization of other cell types and of data acquisition methods with higher reproducibility, such as parallel reaction monitoring [2] or data-independent acquisition [47], might give more accurate input data. In spite of the above limitations, we believe that this model of proteomic data evaluation serves as a good starting point for the further development of algorithms implementing not only the qualitative but also the quantitative data generated in a given proteomic experiment, and that such a combination will undoubtedly aid in the understanding and deciphering of complex biological phenomena. A weighted network model facilitating the use of both qualitative and quantitative data acquired in a label-free proteomics experiment was generated and applied to examine the early host response to HIV-1. Upregulation of proteins involved in translation, transcription and DNA condensation in the early phase of the viral life-cycle could be observed, highlighting the utility of our weighted PPI network data analysis approach. More studies are required to further demonstrate the utility of this new data-driven weighted network based analysis, and it should be noted that the current model has a serious limitation: due to the lack of information, the strength of individual protein-protein interactions is not yet included in the edge-weight calculation. However, the applied weight model can easily be extended to use this type of information as soon as any public database becomes available. We hope that this approach can open new ways for creating algorithms, allowing for both quantitative and qualitative studies of proteome change in various biological and pathological processes by quantitative mass spectrometry. Production of viral particles Viral particles were produced with some modifications of a previously utilized protocol [48]. Briefly, recombinant viruses were produced by transient transfection of 293 T cells (ATCC® CRL-3216™) using pWOX-CMV-GFP (transfer vector plasmid), pMDLg/pRRE (packaging plasmid), pRSV.rev (Rev-coding plasmid), and pMD.G (VSV-G envelope protein-coding plasmid). Vectors were a kind gift from D. Trono (University of Geneva Medical School, Geneva, Switzerland) [49], and were subsequently modified by our research group [48]. Salmon sperm DNA (Sigma-Aldrich) was also added.
Media containing virus particles were concentrated with an Ultracel-100K Amicon Ultra Centrifugal Filter (Millipore) and stored at − 70 °C. The quantity of pseudovirions produced was assessed by measurement of reverse transcriptase (RT) activity using a colorimetric kit (Sigma-Aldrich, Roche). Transduction and sample collection 293 T cells in T-25 cell culture flasks were either mock-treated or transduced at 50% confluency with 5 ng RT equivalent of the HIV-based pseudovirions, in the presence of 4 μg/ml polybrene (Sigma-Aldrich), in 1 ml total volume, and incubated at 37 °C. After 0, 4, and 12 h, cells were trypsinized for 10 min, then washed three times with ice-cold PBS to remove non-fused pseudovirion particles. The final pellet was suspended in 4 ml lysis buffer (150 mM sodium chloride, 1.0% Triton X-100, 0.5% sodium deoxycholate, 0.1% sodium dodecyl sulfate (SDS), and 50 mM Tris), pH 8.0, supplemented with cOmplete protease inhibitor cocktail (Sigma-Aldrich), incubated for 30 min at room temperature and centrifuged, and the supernatant was mixed with 24 ml cold (− 20 °C) acetone and stored at − 20 °C overnight. Mass spectrometry analysis The cleared cell lysates were acetone-precipitated with six volumes of cold acetone overnight. The precipitates were re-dissolved in 25 mM ammonium bicarbonate (Sigma-Aldrich) and digested in-solution with trypsin [50]. The tryptic fragments were used for replicate LC-MS/MS analyses at the University of Arizona in Tucson, AZ, USA. 500 ng per 5 μL of injected protein lysate, spiked with 300 fmol of Rhodobacter capsulatus cytochrome c T33V mutant, was analyzed using an LTQ Orbitrap Velos mass spectrometer (Thermo Fisher Scientific) equipped with an Advion NanoMate ESI source (Advion), after Omix (Agilent Technologies) C18 sample clean-up according to the manufacturer's instructions. Peptides were eluted from a C18 precolumn (100-μm × 2 cm, Thermo Fisher Scientific) onto an analytical column (75-μm × 10 cm, C18, Thermo Fisher Scientific) using a 165 min gradient of solvent A (water, 0.1% formic acid) and solvent B (acetonitrile, 0.1% formic acid). The flow rate was 500 nl/min. Data-dependent analysis (DDA) was performed by the Xcalibur v 2.1.0 software [51] using a survey mass scan at 60,000 resolution in the Orbitrap analyzer scanning mass/charge 350–1600, followed by collision-induced dissociation tandem mass spectrometry (MS/MS) at a normalized collision energy of 35 of the 14 most intense ions in the linear ion trap analyzer. Precursor ions were selected by the monoisotopic precursor selection setting, with selection or rejection of ions held to a +/− 10 ppm window. Singly charged ions were excluded from MS/MS. Dynamic exclusion was set to place any selected m/z on an exclusion list for 45 s after a single MS/MS. Tandem mass spectra were searched against the UniProtKB/Swiss-Prot release available on December 12, 2014 without species restriction. At the time of the search, this database contained 459,734 entries. All MS/MS spectra were searched using Thermo Proteome Discoverer 1.3 (Thermo Fisher Scientific) considering fully tryptic peptides with up to 2 missed cleavage sites. Variable modifications considered during the search included methionine oxidation (15.995 Da) and cysteine carbamidomethylation (57.021 Da). The parent ion mass tolerance was 10 ppm, while the fragment tolerance was 0.8 Da. Proteins were identified at 99% confidence with XCorr score cut-offs [52] as determined by a reversed database search.
The protein and peptide identification results were validated with Scaffold v4.4.6 (Proteome Software Inc.) [1]. Peptide identifications were accepted if they had greater than 89% probability to achieve an FDR less than 0.1% by the Scaffold Local FDR algorithm. Protein identifications were accepted if they had greater than 99% probability and contained at least 2 identified peptides. Protein probabilities were assigned by the ProteinProphet algorithm [53]. Proteins that contained similar peptides and could not be differentiated based on MS/MS analysis alone were grouped to satisfy the principles of parsimony. Proteins sharing significant peptide evidence were grouped into clusters. Protein quantification was done based on spectral counting; the quantitative values were generated by the Scaffold program based on the normalized total spectra. In the case of protein clusters, each peptide was used only once for quantification, for the first human protein in the cluster as listed by Scaffold. All quantitative data were used for statistical analyses; none of the data points were removed. Statistical analysis of proteomics data For both statistical and network analysis, we used in-house developed R software based on the STRING [54,55,56,57], circlize (https://jokergoo.github.io/circlize_book/book/), MASS [55], lsmeans [56], matrixStats [57], reshape2 [58] and ggplot2 [59] packages. Assuming that data from technical repetitions are often characterized by a Poisson distribution [60], and that the large variances of biological replicates can be modelled by a negative binomial distribution [61], we used modified generalized linear models to describe group-level differences in the measured protein data at the 4 and 12 h time points. For each protein, after fitting a negative binomial generalized linear model [55], we performed a post-hoc analysis [62] to characterize time-dependent mean differences by z score, and corrected the p values for multiple comparisons. Gene names of the identified human proteins were submitted to the STRING database [22] and five PPI networks were generated. The NW0 network combined proteins from mock- and HIV-1 pseudovirion-treated cell lysates collected at the 0 time-point, the C04 and C12 networks contained proteins from the mock cells collected 4 and 12 h post-infection, respectively, while the H04 and H12 networks contained proteins from the HIV-1 treated cells collected at the 4 and 12 h time-points, respectively. Very high confidence interactions (interaction score > 0.95) between the query proteins were used for the generation of each binary network. In these networks, the nodes were the proteins and the edges indicated the interactions between proteins as they are present in STRING. For network generation, the STRING R package and the STRING database were applied, with the 0.95 combined score value, to generate the binary networks $B_{t,s}$ ($B_0$, $B_{4h,C}$, $B_{4h,H}$, $B_{12h,C}$, $B_{12h,H}$) corresponding to the protein sets. In these networks, the binary edges indicated only the possibility of the interactions, taking no notice of quantity. To estimate the real interaction density, the binary networks ($B_{t,s}$) generated by STRING were further modified, and the amount of proteins measured by spectral counting was used to add $w_{ij}$ weights to the edges.
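A minimal sketch of this weighting step is given below, assuming the binary network is held as a networkx graph and the protein amounts as a mapping from protein name to normalized total spectra; the function names are our own illustrative choices, and the edge-weight formula is the one defined in the next paragraph.

```python
import networkx as nx

def weight_edges(B: nx.Graph, amount: dict) -> nx.Graph:
    """Convert a binary STRING network B into a weighted network using
    w_ij = (n_i / k_i) * (n_j / k_j), where n is a protein's normalized
    total spectra and k its degree (number of edges) in B."""
    W = B.copy()
    for i, j in W.edges():
        W[i][j]["weight"] = (amount[i] / B.degree(i)) * (amount[j] / B.degree(j))
    return W

def network_strength(W: nx.Graph) -> float:
    # S = (1/2) * sum_ij w_ij over the symmetric connectivity matrix,
    # i.e. the plain sum over the unique undirected edges.
    return sum(d["weight"] for _, _, d in W.edges(data=True))
```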
In this way, the existence of an edge provides information on the existence of an interaction, and the strength of a protein pair's interaction was estimated by this edge-weight model: $$ {w}_{ij}=\frac{n_i}{k_i}\frac{n_j}{k_j} $$ where $w_{ij}$ represents the interaction density between proteins $P_i$ and $P_j$; $n_i$ and $n_j$ denote the quantities, while $k_i$ and $k_j$ denote the degrees (the numbers of edges) of $P_i$ and $P_j$ in the given $B_{t,s}$ binary network. In this calculation, we used the measured data ($n_i$, $n_j$), which enabled us to turn the theoretical binary PPI network into a realistic, sample-related interaction network, in which the weights of the edges are in direct proportion to the quantities and in inverse proportion to all interaction possibilities of the connected proteins in the given sample. Because we can consider $n_i$ as the number of molecules of the protein $P_i$, the $n_i/k_i$ ratio represents the number of $P_i$ molecules involved in one interaction of $P_i$, and thus the interaction density between $P_i$ and $P_j$ can be described by the product of $n_i/k_i$ and $n_j/k_j$. It should be mentioned that, in the absence of a strong interactor protein, this edge-weight model may overestimate the effect of other, weak interactor proteins; moreover, interaction strength data cannot be obtained in a classical quantitative proteomics experiment and are currently unavailable in publicly accessible databases. Functional subnetwork construction In order to investigate the PPI networks of the proteins belonging to GO (geneontology.org) terms, we marked the nodes in each $W_{t,s}$ by a function flag, which indicated whether or not the protein belongs to a given f-function, in our case to a GO term. The functional enrichment according to GO terms was done by STRING, using default settings, and the Molecular Function, Biological Process and Cellular Component GO terms listed as enriched by STRING in C04, H04, C12 and H12, where at least 10 proteins per GO term were present in any of the networks, were considered. This procedure defined the functional networks $W^f_{t,s}$ and divided each of them into two disjoint sub-networks ($F^f_{t,s}$, functional, belonging to the GO term, and $NF^f_{t,s}$, non-functional, not being part of the respective GO term), containing the functional and the non-functional nodes, respectively. Because of this separation, the edges (i.e. the interactions) were also classified into three classes, depending on the f-markers of the connected proteins: functional edges between functional nodes, non-functional edges between non-functional nodes, and cross-edges between functional and non-functional nodes. Examination of the global characteristics of the evaluated PPI networks Any undirected weighted PPI network W(N,E) consists of two sets: N nodes and E edges. Each of the links (interactions) is defined by a pair of nodes (proteins) $P_i$ and $P_j$, and its value is $w_{ij}$. Since the interactions have no direction, the connectivity matrix is symmetric: $w_{ij} = w_{ji}$. Number of nodes (N) and edges (E) N, Nf and Nn denote the number of nodes (i.e. proteins) in the whole network and in the functional and non-functional sub-networks, respectively, with the following relation: $$ N = N_f + N_n $$ E denotes the number of edges (i.e. interactions) in the whole network. Ef and En are the numbers of edges within the functional and the non-functional sub-networks, respectively. The number of cross-edges (Ec) counts the connections between the functional and the non-functional sub-networks.
The edge numbers follow the relation: $$ E = E_f + E_n + E_c $$ Network strength and averaged node strength (S) We defined the network strength S as the total sum of the weights of the edges: $$ S=\frac{1}{2}\sum_{i,j=1}^{N} w_{ij} $$ In the functional networks we can calculate the strength of the whole network (S), as well as of the functional (Sf) and non-functional (Sn) sub-networks. The sum of the cross-connection edges can be calculated as follows: $$ S_c = S - S_f - S_n $$ Edge-weight density or strength density (D) The edge-weight density measures how saturated the weighted network is by strong edges: $$ D=\frac{S}{w_{max}\frac{N\left(N-1\right)}{2}} $$ In the functional networks we can measure the edge-weight density of the whole network (D), as well as of the functional (Df) and non-functional (Dn) sub-networks. Edge-weight ratio (R) Using the network strength we can define the edge-weight ratio parameter for the two sub-networks: $$ R_f=\frac{S_f}{S} $$ and the non-functional relative edge-weight density: $$ R_n=\frac{S_n}{S} $$ Since the distribution of the network parameters was neither Gaussian nor negative binomial, we used Wilcoxon tests [63] to characterize the group-related differences at the 4 and 12 h time points. The evaluated p values were corrected for multiple comparisons by false discovery rate methods [64]. The mass spectrometry datasets generated during the current study were deposited to the ProteomeXchange database and are available via the PRIDE repository with the dataset identifier PXD010436 and https://doi.org/10.6019/PXD010436. All data analyzed during this study are included in this published article [and its supplementary information files]. Abbreviations: cDNA: Complementary DNA; D: Saturation of the PPI interaction density in the sample; Dc: Saturation of the PPI interaction density between the functional and non-functional proteins in the sample; Df: Saturation of the PPI interaction density of the functional proteins in the sample; Dn: Saturation of the PPI interaction density of the non-functional proteins in the sample; DNA: Deoxyribonucleic acid; E: Number of interactions in the network generated from STRING with a combined score of 0.95; Ec: Number of interactions between the functional and non-functional proteins; Ef: Number of interactions between the functional proteins; En: Number of interactions between the non-functional proteins; HEK: Human embryonic kidney cells; HIV: Human immunodeficiency virus; MIPS: Monoisotopic precursor selection; N: Number of proteins in the network; Nf: Number of proteins in the functional sub-network; Nn: Number of proteins in the non-functional sub-network; PBS: Phosphate buffered saline; ppm: Parts per million; R: Relative PPI density, edge-weight ratio; Rc: Relative cross-functional PPI density, edge-weight ratio between the functional and non-functional PPI networks; Rf: Relative functional PPI density, edge-weight ratio in the functional PPI network; Rn: Relative non-functional PPI density, edge-weight ratio in the non-functional PPI network; RNA: Ribonucleic acid; S: PPI density of the sample; Sc: PPI density of the cross-functional proteins in the sample; Sf: PPI density of the functional proteins in the sample; Sn: PPI density of the non-functional proteins in the sample; VSV: Vesicular stomatitis virus. Keller A, Nesvizhskii AI, Kolker E, Aebersold R. Empirical statistical model to estimate the accuracy of peptide identifications made by MS/MS and database search. Anal Chem. 2002;74(20):5383–92. Domon B, Aebersold R.
Options and considerations when selecting a quantitative proteomics strategy. Nat Biotechnol. 2010;28(7):710–21. Codrea MC, Nahnsen S. Platforms and pipelines for proteomics data analysis and management. Modern Proteomics - Sample Preparation, Analysis and Practical Applications. 2016;919:203–15. Kawata K, Hatano A, Yugi K, Kubota H, Sano T, Fujii M, Tomizawa Y, Kokaji T, Tanaka KY, Uda S, Suzuki Y, Matsumoto M, Nakayama KI, Saitoh K, Kato K, Ueno A, Ohishi M, Hirayama A, Kuroda S. Trans-omic analysis reveals selective responses to induced and basal insulin across signaling, transcriptional, and metabolic networks. iScience. 2018;7:1–18. Koberlin MS, Snijder B, Heinz LX, Baumann CL, Fauster A, Vladimer GI, Gavin AC, Superti-Furga G. A conserved circular network of coregulated lipids modulates innate immune responses. Cell. 2015;162(1):170–83. Oldham MC, Konopka G, Iwamoto K, Langfelder P, Kato T, Horvath S, Geschwind DH. Functional organization of the transcriptome in human brain. Nat Neurosci. 2008;11(11):1271–82. Li D, Li YP, Li YX, Zhu XH, Du XG, Zhou M, Li WB, Deng HY. Effect of regulatory network of exosomes and microRNAs on neurodegenerative diseases. Chin Med J. 2018;131(18):2216–25. Szilagyi A, Nussinov R, Csermely P. Allo-network drugs: extension of the allosteric drug concept to protein-protein interaction and signaling networks. Curr Top Med Chem. 2013;13(1):64–77. Jager S, Cimermancic P, Gulbahce N, Johnson JR, McGovern KE, Clarke SC, Shales M, Mercenne G, Pache L, Li K, et al. Global landscape of HIV-human protein complexes. Nature. 2011;481(7381):365–70. Csermely P, Sandhu KS, Hazai E, Hoksza Z, Kiss HJ, Miozzo F, Veres DV, Piazza F, Nussinov R. Disordered proteins and network disorder in network descriptions of protein structure, dynamics and function: hypotheses and a comprehensive review. Curr Protein Pept Sci. 2012;13(1):19–33. Dai LY, Zhao TY, Bisteau X, Sun WD, Prabhu N, Lim YT, Sobota RM, Kaldis P, Nordlund P. Modulation of protein-interaction states through the cell cycle. Cell. 2018;173(6):1481. Weiss RA. The discovery of endogenous retroviruses. Retrovirology. 2006;3:67. Kirchhoff F. HIV life cycle: overview. In: Hope TJ, Stevenson M, Richman D, editors. Encyclopedia of AIDS. New York, NY: Springer New York; 2013. p. 1–9. Lehmann-Che J, Saib A. Early stages of HIV replication: how to hijack cellular functions for a successful infection. AIDS Rev. 2004;6(4):199–207. Brass AL, Dykxhoorn DM, Benita Y, Yan N, Engelman A, Xavier RJ, Lieberman J, Elledge SJ. Identification of host proteins required for HIV infection through a functional genomic screen. Science. 2008;319(5865):921–6. Konig R, Zhou Y, Elleder D, Diamond TL, Bonamy GM, Irelan JT, Chiang CY, Tu BP, De Jesus PD, Lilley CE, et al. Global analysis of host-pathogen interactions that regulate early-stage HIV-1 replication. Cell. 2008;135(1):49–60. Fu W, Sanders-Beer BE, Katz KS, Maglott DR, Pruitt KD, Ptak RG. Human immunodeficiency virus type 1, human protein interaction database at NCBI. Nucleic Acids Res. 2009;37:D417–22. MacPherson JI, Dickerson JE, Pinney JW, Robertson DL. Patterns of HIV-1 protein interaction identify perturbed host-cellular subsystems. PLoS Comput Biol. 2010;6(7):e1000863. Dickerson JE, Pinney JW, Robertson DL. The biological context of HIV-1 host interactions reveals subtle insights into a system hijack. BMC Syst Biol. 2010;4:80. Pinney JW, Dickerson JE, Fu W, Sanders-Beer BE, Ptak RG, Robertson DL. HIV-host interactions: a map of viral perturbation of the host system. AIDS.
2009;23(5):549–54. Vizcaino JA, Deutsch EW, Wang R, Csordas A, Reisinger F, Rios D, Dianes JA, Sun Z, Farrah T, Bandeira N, et al. ProteomeXchange provides globally coordinated proteomics data submission and dissemination. Nat Biotechnol. 2014;32(3):223–6. Szklarczyk D, Morris JH, Cook H, Kuhn M, Wyder S, Simonovic M, Santos A, Doncheva NT, Roth A, Bork P, et al. The STRING database in 2017: quality-controlled protein-protein association networks, made broadly accessible. Nucleic Acids Res. 2017;45(D1):D362–8. Zhou HL, Xu M, Huang Q, Gates AT, Zhang XHD, Castle JC, Stec E, Ferrer M, Strulovici B, Hazuda DJ, et al. Genome-scale RNAi screen for host factors required for HIV replication. Cell Host Microbe. 2008;4(5):495–504. Arhel N, Kirchhoff F. Host proteins involved in HIV infection: new therapeutic targets. Bba-Mol Basis Dis. 2010;1802(3):313–21. Mohammadi P, Desfarges S, Bartha I, Joos B, Zangger N, Munoz M, Gunthard HF, Beerenwinkel N, Telenti A, Ciuffi A. 24 hours in the life of HIV-1 in a T cell line. PLoS Pathog. 2013;9(1):e1003161. Nemeth J, Vongrad V, Metzner KJ, Strouvelle VP, Weber R, Pedrioli P, Aebersold R, Gunthard HF, Collins B. In vivo and in vitro proteome analysis of human immunodeficiency virus (HIV)-1-infected, human CD4(+) T cells. Mol Cell Proteomics. 2017;16(4):S108–23. Goff SP. Knockdown screens to knockout HIV-1. Cell. 2008;135(3):417–20. Yeung ML, Houzet L, Yedavalli VSRK, Jeang KT. A genome-wide short hairpin RNA screening of Jurkat T-cells for human proteins contributing to productive HIV-1 replication. J Biol Chem. 2009;284(29):19463–73. de Lichtenberg U, Jensen LJ, Brunak S, Bork P. Dynamic complex formation during the yeast cell cycle. Science. 2005;307(5710):724–7. Greene CS, Krishnan A, Wong AK, Ricciotti E, Zelaya RA, Himmelstein DS, Zhang R, Hartmann BM, Zaslavsky E, Sealfon SC, et al. Understanding multicellular function and disease with human tissue-specific networks. Nat Genet. 2015;47(6):569–76. Celaj A, Schlecht U, Smith JD, Xu W, Suresh S, Miranda M, Aparicio AM, Proctor M, Davis RW, Roth FP, et al. Quantitative analysis of protein interaction network dynamics in yeast. Mol Syst Biol. 2017;13(7):934. Sardiu ME, Cai Y, Jin J, Swanson SK, Conaway RC, Conaway JW, Florens L, Washburn MP. Probabilistic assembly of human protein interaction networks from label-free quantitative proteomics. Proc Natl Acad Sci U S A. 2008;105(5):1454–9. Boccaletti S, Latora V, Moreno Y, Chavez M, Hwang DU. Complex networks: structure and dynamics. Phys Rep. 2006;424(4–5):175–308. Kleinman CL, Doria M, Orecchini E, Giuliani E, Galardi S, De Jay N, Michienzi A. HIV-1 infection causes a down-regulation of genes involved in ribosome biogenesis. PLoS One. 2014;9(12):e113908. Dowling D, Nasr-Esfahani S, Tan CH, O'Brien K, Howard JL, Jans DA, Purcell DF, Stoltzfus CM, Sonza S. HIV-1 infection induces changes in expression of cellular splicing factors that regulate alternative viral splicing and virus production in macrophages. Retrovirology. 2008;5:18. Sertznig H, Hillebrand F, Erkelenz S, Schaal H, Widera M. Behind the scenes of HIV-1 replication: alternative splicing as the dependency factor on the quiet. Virology. 2018;516:176–88. Garcia-Moreno M, Noerenberg M, Ni S, Jarvelin AI, Gonzalez-Almela E, Lenz CE, Bach-Pages M, Cox V, Avolio R, Davis T, et al. System-wide profiling of RNA-binding proteins uncovers key regulators of virus infection. Mol Cell. 2019;74(1):196–211, e111.
Lum JJ, Badley AD. Resistance to apoptosis: mechanism for the development of HIV reservoirs. Curr HIV Res. 2003;1(3):261–74. Antoni BA, Sabbatini P, Rabson AB, White E. Inhibition of apoptosis in human immunodeficiency virus-infected cells enhances virus production and facilitates persistent infection. J Virol. 1995;69(4):2384–92. Badley AD, Sainski A, Wightman F, Lewin SR. Altering cell death pathways as an approach to cure HIV infection. Cell Death Dis. 2013;4:e718. Stricher F, Macri C, Ruff M, Muller S. HSPA8/HSC70 chaperone protein: structure, function, and chemical targeting. Autophagy. 2013;9(12):1937–54. Sherman MP, Greene WC. Slipping through the door: HIV entry into the nucleus. Microbes Infect. 2002;4(1):67–73. Chang ST, Sova P, Peng X, Weiss J, Law GL, Palermo RE, Katze MG. Next-generation sequencing reveals HIV-1-mediated suppression of T cell activation and RNA processing and regulation of noncoding RNA expression in a CD4+ T cell line. mBio. 2011;2(5). Rato S, Rausell A, Munoz M, Telenti A, Ciuffi A. Single-cell analysis identifies cellular markers of the HIV permissive cell. PLoS Pathog. 2017;13(10):e1006678. Rao S, Amorim R, Niu M, Breton Y, Tremblay MJ, Mouland AJ. Host mRNA decay proteins influence HIV-1 replication and viral gene expression in primary monocyte-derived macrophages. Retrovirology. 2019;16(1):3. Tabb DL, Vega-Montoto L, Rudnick PA, Variyath AM, Ham AJ, Bunk DM, Kilpatrick LE, Billheimer DD, Blackman RK, Cardasis HL, et al. Repeatability and reproducibility in proteomic identifications by liquid chromatography-tandem mass spectrometry. J Proteome Res. 2010;9(2):761–76. Heaven MR, Funk AJ, Cobbs AL, Haffey WD, Norris JL, McCullumsmith RE, Greis KD. Systematic evaluation of data-independent acquisition for sensitive and reproducible proteomics - a prototype design for a single injection assay. J Mass Spectrom. 2016;51(1):1–11. Miklossy G, Tozser J, Kadas J, Ishima R, Louis JM, Bagossi P. Novel macromolecular inhibitors of human immunodeficiency virus-1 protease. Protein Eng Des Sel. 2008;21(7):453–61. Dull T, Zufferey R, Kelly M, Mandel RJ, Nguyen M, Trono D, Naldini L. A third-generation lentivirus vector with a conditional packaging system. J Virol. 1998;72(11):8463–71. Csosz E, Markus B, Darula Z, Medzihradszky KF, Nemes J, Szabo E, Tozser J, Kiss C, Marton I. Salivary proteome profiling of oral squamous cell carcinoma in a Hungarian population. FEBS Open Bio. 2018;8(4):556–69. Andon NL, Hollingworth S, Koller A, Greenland AJ, Yates JR 3rd, Haynes PA. Proteomic characterization of wheat amyloplasts using identification of proteins by tandem mass spectrometry. Proteomics. 2002;2(9):1156–68. Qian WJ, Liu T, Monroe ME, Strittmatter EF, Jacobs JM, Kangas LJ, Petritis K, Camp DG 2nd, Smith RD. Probability-based evaluation of peptide and protein identifications from tandem mass spectrometry and SEQUEST analysis: the human proteome. J Proteome Res. 2005;4(1):53–62. Nesvizhskii AI, Keller A, Kolker E, Aebersold R. A statistical model for identifying proteins by tandem mass spectrometry. Anal Chem. 2003;75(17):4646–58. Franceschini A, Szklarczyk D, Frankild S, Kuhn M, Simonovic M, Roth A, Lin J, Minguez P, Bork P, von Mering C, et al. STRING v9.1: protein-protein interaction networks, with increased coverage and integration. Nucleic Acids Res. 2013;41(Database issue):D808–15. Venables WN, Ripley BD. Modern applied statistics with S. New York: Springer-Verlag; 2002. Lenth RV.
Least-squares means: the R package lsmeans. J Stat Softw. 2016;69(1):1–33. matrixStats: functions that apply to rows and columns of matrices (and to vectors). R package version 0.52.2 [https://github.com/HenrikBengtsson/matrixStats]. Wickham H. Reshaping data with the reshape package. J Stat Softw. 2007;21(12):1–20. Ginestet C. ggplot2: elegant graphics for data analysis. J R Stat Soc A Stat. 2011;174:245. Marioni JC, Mason CE, Mane SM, Stephens M, Gilad Y. RNA-seq: an assessment of technical reproducibility and comparison with gene expression arrays. Genome Res. 2008;18(9):1509–17. Robinson MD, Oshlack A. A scaling normalization method for differential expression analysis of RNA-seq data. Genome Biol. 2010;11:3. Searle SR, Speed FM, Milliken GA. Population marginal means in the linear model - an alternative to least-squares means. Am Stat. 1980;34(4):216–21. Hollander M, Wolfe DA. Nonparametric statistical methods. New York: John Wiley & Sons; 1999. Benjamini Y, Hochberg Y. Controlling the false discovery rate - a practical and powerful approach to multiple testing. J Roy Stat Soc B Met. 1995;57(1):289–300. This work was supported by the Hungarian Scientific Research Fund (NKFI-6, 125238) to JT, by the Higher Education Institutional Excellence Programme of the Ministry of Human Capacities in Hungary, within the framework of the Biotechnology thematic programme of the University of Debrecen, GINOP-2.3.3-15-2016-00020, and partially by the Janos Bolyai Research Scholarship of the Hungarian Academy of Sciences. Mass spectrometry data were acquired by the Arizona Proteomics Consortium, supported by NIEHS grant ES06694 to the SWEHSC, NIH/NCI grant CA023074 to the UA Cancer Center and by the BIO5 Institute of the University of Arizona. The Thermo Fisher LTQ Orbitrap Velos mass spectrometer was provided by grant 1S10 RR028868-01 from NIH/NCRR. Funding bodies did not play any role in the design of the study, the collection, analysis, and interpretation of data, or in writing the manuscript. Miklós Emri and József Tőzsér contributed equally to this work. Proteomics Core Facility, Department of Biochemistry and Molecular Biology, Faculty of Medicine, University of Debrecen, Egyetem ter 1., Debrecen, 4032, Hungary Éva Csősz & József Tőzsér Laboratory of Retroviral Biochemistry, Department of Biochemistry and Molecular Biology, Faculty of Medicine, University of Debrecen, Egyetem ter 1., Debrecen, 4032, Hungary Ferenc Tóth, Mohamed Mahdi & József Tőzsér Arizona Research Labs, University of Arizona, PO Box 210066, Administration Building, Room 601, Tucson, AZ, 85721-0066, USA George Tsaprailis The Scripps Research Institute, 132 Scripps Way, Jupiter, FL, 33458, USA Department of Medical Imaging, Division of Nuclear Medicine and Translational Imaging, Faculty of Medicine, University of Debrecen, Nagyerdei krt. 98., Debrecen, 4032, Hungary Miklós Emri JT conceptualized the study, designed the HIV-1 transduction experiments, reviewed the manuscript and provided the resources for the study. FT produced the viruses, carried out the cell transductions, prepared the samples for mass spectrometry and curated the mass spectrometry data. EC designed the data analysis strategy, analyzed the mass spectrometry, statistical and network analysis data, prepared the tables and wrote the manuscript. GT performed the mass spectrometry experiment and helped with the manuscript preparation.
ME designed and performed the statistical analysis and network analysis, assisted in manuscript writing and prepared the figures. MM participated in manuscript preparation, critically reviewed the manuscript, performed the English editing and helped with the evaluation of HIV-1 transduction results. All authors read and approved the final manuscript. Correspondence to Éva Csősz or József Tőzsér. Not applicable. Additional file 1: List of identified proteins. (XLSX 1147 kb) Additional file 2: List of quantified proteins. The gene name according to UniProt is given for each quantified protein, and for each protein the mean amount of four replicates is presented for each time point except NW, where the mean amount of 8 replicates is given. (XLSX 23 kb) Additional file 3: Statistical analysis of protein quantities. The gene name according to UniProt, the p value and z score for the 4 h and 12 h time points are listed for each protein. The lists are presented in ascending order of the p values. (XLSX 24 kb) Additional file 4: Network parameters calculated for functional sub-networks. The y axis shows the mean value characteristic for each parameter, and the x axis indicates the time points. Blue color refers to the control, while yellow color to the HIV-1 treated conditions. N refers to the number of nodes, E to the number of edges, S shows network strength, D represents the edge density and R the edge ratio. The f refers to the functional sub-network, the n to the non-functional sub-network containing the proteins not present in the functional sub-network, while the c refers to the interactions between the functional and the non-functional sub-networks. (PDF 9676 kb) Additional file 5: Statistical analysis of network parameters. The FDR-corrected p value and z score for the 4 h and 12 h time points, respectively, for each network parameter calculated (N, Nf, Nn, E, Ef, En, Ec, S, Sf, Sn, Sc, D, Df, Dn, Dc, Rf, Rn, Rc) for each GO function presented in Additional file 4. (XLSX 287 kb) Additional file 6: List of GO terms with network parameters that were significantly changed in the functional sub-networks. (XLSX 16 kb) Csősz, É., Tóth, F., Mahdi, M. et al. Analysis of networks of host proteins in the early time points following HIV transduction. BMC Bioinformatics 20, 398 (2019). https://doi.org/10.1186/s12859-019-2990-3 Keywords: Weighted network; Quantitative proteomics; Host response
Simple solution to the optimal deployment of cooperative nodes in two-dimensional TOA-based and AOA-based localization system Weiguang Shi1, Xiaoli Qi1, Jianxiong Li1, Shuxia Yan1, Liying Chen1, Yang Yu1 & Xin Feng1 Cellular-based cooperative communication is a promising technique that allows cooperation among mobile devices not only to increase data throughput but also to improve localization services. For a certain number of cooperative nodes, the geometry plays a significant role in enhancing the accuracy of the target estimate. In this paper, a simple solution to the deployment of cooperative nodes aiming at the lowest geometric dilution of precision (GDOP) is proposed, suitable for both time-of-arrival-based and angle-of-arrival-based cooperative localization systems. An inertia dependence factor is suggested to reveal the relationship among the optimal positions of the cooperative nodes, capturing the inertia and the recursiveness of the deployment. It is shown in simulations that the proposed solution achieves almost the same GDOP as the global exhaustive method but with less complexity. With the proliferation of mobile communication technology, wireless positioning has become an increasingly important issue and has drawn plenty of attention [1]. Over the past several decades, a variety of location-based services have emerged in fields such as emergency medical systems, asset monitoring and tracking, location-sensitive billing, fraud protection, fleet management, and intelligent transportation systems [2, 3]. However, the global positioning system (GPS) and cellular positioning systems (CPS), two representative positioning technologies applied in many scenarios of daily life, still have some drawbacks [4–7]. On the one hand, in outdoor environments, GPS and CPS signals are sensitive to interference from space weather, transformer substations, geomagnetic radiation, etc., which degrades the accessibility and continuity of GPS and CPS [8–11]. On the other hand, in indoor environments, GPS and CPS face a challenging radio propagation environment, including mainly multi-path effects and non-line-of-sight (NLOS) effects [12–15]. Indoor wireless communication (IWC) technology, represented by ultrasonic (US), wireless sensor networks (WSN), Bluetooth (BT), radio frequency identification (RFID) and ultra wide band (UWB), provides a novel kind of approach to data interaction and has burgeoned rapidly over the last decade. Owing to their low cost, an ocean of IWC devices has been placed in indoor environments to supply context-awareness services for the target. Various positioning systems have also been proposed and have unfolded satisfactory performance [16–20]. Meanwhile, thanks to the continual miniaturization of microelectronic technology, cutting-edge cell phones equipped with IWC modules have been applied to remote control, intelligent entrance guard, instant payment, and tag identification via connections with surrounding IWC devices. In this situation, if we regard all or part of the surrounding IWC devices that could provide spatial characteristics about the cell phone as cooperative partners, a cooperative localization system on the basis of traditional CPS or GPS can be established, from which it is reasonable to expect higher location precision and better stability [21–23]. As shown in Fig.
1, a typical cellular-based cooperative localization system consists of five fundamental components: general nodes (GNs), landmark nodes (LNs), cooperative nodes (CNs), positioning targets (PTs), and an info-station (IS). The GNs are the base stations (BSs) that capture the external spatial characteristics and provide the general CPS service to the PTs. The LNs represent the infrastructures equipped with IWC modules, such as RFID readers, infrared receivers, Bluetooth receivers, and WiFi access points, which can exchange data with the PTs and afford the internal spatial characteristics about the PTs. Specifically, we term the LNs that participate in the localization of the PTs and serve as the partners of the GNs as CNs. The PTs are not limited to cell phones but also include other intelligent mobile devices equipped with IWC modules as well as CPS. The IS is a server connecting to both the CNs and the GNs to converge the external and internal spatial characteristics about the PTs. Usually, the spatial characteristic is expressed in the form of the time stamp change, phase variation, and amplitude attenuation of the carrier signal. Triangulation, involving measuring principles such as time of arrival (TOA), angle of arrival (AOA), time difference of arrival (TDOA), and phase of arrival (POA), is one of the positioning algorithms that extract the geometric properties of triangles from the spatial characteristics so as to estimate the target location [24]. Targeting different applications or services, these principles have unique advantages and disadvantages. Hence, using more than one type of principle at the same time could yield better performance. In this paper, our attention focuses on TOA and AOA because of their extensive applications and convenient implementations. The components of a typical cellular-based cooperative localization system The process of tracking a single PT in cooperative localization can be briefly described as follows. First, after moving into an indoor environment from the outside, the PT begins to broadcast an interrogation signal and monitors the replies from the LNs while maintaining the CPS service. Second, taking no account of accidental interference, the interrogation signal will be heard as long as the PT enters the surveillance region of any LN. Each LN appraises its computational resources, such as power dissipation capacity, arithmetic speed, and dynamic link stability. If the computational resources meet the scheduled requirements of cooperative localization, the LN transmits an acknowledgement signal to the PT and prepares for the cooperation. Third, after verifying all the acknowledgement signals, the PT chooses a subset of the LNs, labels them as CNs, and sends a ranging signal to them. The IS discriminates the ranging signal's difference between the transmitter and the receiver and extracts the effective indoor spatial characteristics by leveraging measuring principles such as TOA or AOA. Finally, the location of the PT is estimated by converging the indoor spatial characteristics from the CNs and the outdoor spatial characteristics from the GNs. Evidently, a larger number of CNs ensures a higher precision of location, but at the cost of more computational complexity and more power consumption [25]. For a certain number of CNs, the geometry plays a vital part in the performance of the cooperative positioning system; CNs sited in different positions may result in different positioning accuracies.
Geometric dilution of precision (GDOP), a relatively convenient indicator calculated on the basis of the Cramer-Rao bound (CRB), is commonly used to describe the influence of geometry on the relation between the position determination error and the measurement error [26–31]. A low GDOP is a prerequisite for any highly accurate wireless location system. D. J. Torrieri first defined GDOP and analysed the performance of the two most important passive location systems for stationary transmitters [26]. N. Levanon examined position determination in two-dimensional TOA scenarios and proved that the lowest possible GDOP attainable from range or pseudo-range measurements to N optimally located points is \( 2/\sqrt{N} \) [27]. A. G. Dempster obtained the expression of GDOP for two-dimensional AOA scenarios [28]. In [29], P. Deng et al. studied the relationship between GDOP and the number of GNs at the centre of a polygon, with and without the correlation between TDOA measurements considered; a GDOP method for computing TDOA and TDOA/AOA under non-line-of-sight environments was also presented. Quan investigated GDOP in absolute-range-based wireless location systems with emphasis on its lower bounds and introduced the angle of coverage as a parameter to describe the geometry between a target and the measuring points [30]. In addition, an efficient deployment scheme for CNs in TOA positioning systems, which aims to minimize the GDOP, was proposed on the basis of the generalized eigenvalue decomposition of the Fisher information matrix (FIM) [31]; in this paper, we term it the eigenvalue-based approach for ease of illustration. The eigenvalue-based approach, which obtains optimal performance for one CN and suffers only a small loss of performance for more CNs, can dramatically reduce the computational complexity compared with the exhaustive method. Developing a practical arrangement out of the eigenvalue-based approach, however, entails some challenges. First, the eigenvalue-based approach only describes the procedure of the scheme rather than theoretically deriving the expressions of the eigenvalues and eigenvectors; any variation of the GNs' deployment will modify the FIM, which may require recalculating the eigenvalues and eigenvectors. Second, the relationship among the optimal positions of the CNs has not been thoroughly investigated and summarized, so unnecessary calculation cannot be avoided when the number of CNs is large. Third, the eigenvalue-based approach cannot be directly applied to AOA systems, since the FIM of AOA is more complicated than that of TOA. To the best of our knowledge, few approaches have been reported to tackle the issues mentioned above. In this paper, we propose another solution to the deployment of CNs suitable for both TOA-based and AOA-based cooperative localization systems. An optimal placement mechanism based on partial differentiation, termed OPMPD, is suggested to expose the optimal position of each CN and achieves a more explicit solution. An inertia dependence factor is also introduced to reveal the relationship among the optimal positions of the CNs, which captures the inertia and the recursiveness of the deployment. The rest of the paper is organized as follows. The CRB based on the FIM is reviewed in "Section 2." In "Section 3," we deduce the GDOP expressions of two-dimensional TOA-based and AOA-based cooperative localization systems. Our new solution aiming at the lowest GDOP is detailed in "Section 4."
Simulation results are illustrated in "Section 5," which prove the effectiveness and the efficiency of the proposed approach. Finally, the conclusion of our work is presented in "Section 6."
Cramer-Rao bound based on FIM
The CRB, which can be derived from the inverse of the FIM, provides the lowest limit on the estimation accuracy of any unbiased estimator. Suppose the parameter vector to be estimated is \( \boldsymbol{\theta}=[\theta_1,\theta_2,\cdots,\theta_P]^{\mathrm{T}} \), where \( \theta_p \) stands for the pth unknown parameter, p ∈ [1, P], and P denotes the number of parameters to be estimated. The FIM is then
$$ \mathbf{J}(\boldsymbol{\theta})=-E\left\{\left(\frac{\partial \ln f(\boldsymbol{\Lambda}|\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\right)\left(\frac{\partial \ln f(\boldsymbol{\Lambda}|\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\right)^{\mathrm{T}}\right\} $$
where J(θ) is a P × P matrix, f(Λ|θ) is the likelihood function of the observation vector Λ, and E represents the expectation operation. Assume that f(Λ|θ) follows an N-dimensional Gaussian distribution denoted as
$$ f(\boldsymbol{\Lambda}|\boldsymbol{\theta})=\frac{1}{\sqrt{(2\pi)^N|\mathbf{Q}|}}\,e^{-\frac{1}{2}(\boldsymbol{\Lambda}-\boldsymbol{\mu}(\boldsymbol{\theta}))^{\mathrm{T}}\mathbf{Q}^{-1}(\boldsymbol{\Lambda}-\boldsymbol{\mu}(\boldsymbol{\theta}))} $$
where N is the scale of Λ, Q is the covariance matrix of Λ given θ, μ(θ) is the expectation of Λ, and \( (\cdot)^{\mathrm{T}} \) and \( (\cdot)^{-1} \) respectively represent the transpose and the inverse of a matrix. Consider \( \mathbf{H}=\partial \boldsymbol{\mu}(\boldsymbol{\theta})/\partial \boldsymbol{\theta}^{\mathrm{T}} \) as the Jacobian matrix and substitute (2) into (1); then J(θ) becomes \( \mathbf{J}(\boldsymbol{\theta})=\mathbf{H}^{\mathrm{T}}\mathbf{Q}^{-1}\mathbf{H} \), and the CRB of θ can be expressed as
$$ \operatorname{var}(\widehat{\boldsymbol{\theta}})\ge \operatorname{diag}\left(\mathbf{J}^{-1}(\boldsymbol{\theta})\right)=\operatorname{diag}\left(\left(\mathbf{H}^{\mathrm{T}}\mathbf{Q}^{-1}\mathbf{H}\right)^{-1}\right) $$
where var is the variance matrix of the unbiased estimator \( \widehat{\boldsymbol{\theta}}=[\widehat{\theta}_1,\widehat{\theta}_2,\cdots,\widehat{\theta}_P]^{\mathrm{T}} \) and diag denotes the diagonal elements of a matrix.
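As a small numerical illustration of Eqs. (1)–(3), the CRB can be evaluated directly from the Jacobian H and the covariance Q. The sketch below is not from the paper (whose simulations were written in MATLAB); it is a minimal NumPy translation in which the function name is our own.

```python
import numpy as np

def crb_from_jacobian(H, Q):
    """CRB of Eq. (3): diag((H^T Q^{-1} H)^{-1}).

    H : (N, P) Jacobian of the measurement mean w.r.t. the parameters.
    Q : (N, N) covariance matrix of the observation vector.
    """
    J = H.T @ np.linalg.inv(Q) @ H       # Fisher information matrix, J = H^T Q^-1 H
    return np.diag(np.linalg.inv(J))     # lower bound on the variance of each estimate
```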
GDOP of two-dimensional localization systems
GDOP of a typical localization system
Consider a two-dimensional typical localization system with a single PT and N GNs. Let \( \boldsymbol{\theta}=[x,y]^{\mathrm{T}} \) denote the unknown location of the PT and \( (x_n,y_n) \) denote the known location of the nth GN, n ∈ [1, N]. Accordingly, the observation vector Λ in TOA measurement is \( [d_1,d_2,\cdots,d_N]^{\mathrm{T}} \), in which \( d_n \) denotes the measured distance between the PT and the nth GN, given by
$$ d_n=c\,t_n=\sqrt{(x-x_n)^2+(y-y_n)^2}=\overline{d_n}+e_n $$
Here, c denotes the speed of light and \( t_n \) denotes the absolute arrival time. \( \overline{d_n} \) stands for the theoretical value, and \( e_n \) stands for the measurement error, which follows a Gaussian distribution with zero mean and root-mean-square error \( \sigma_n \). Given the assumption that the measured values in Λ are mutually independent, the cross-covariance matrix can be rewritten as
$$ \operatorname{var}(\widehat{\boldsymbol{\theta}})=\begin{bmatrix}\sigma_x^2 & 0\\ 0 & \sigma_y^2\end{bmatrix} $$
where \( \sigma_x^2 \) and \( \sigma_y^2 \) represent the variances of x and y, respectively. According to the definition in [26], the GDOP of a TOA system can be expressed as
$$ GDOP=\frac{\sqrt{\sigma_x^2+\sigma_y^2}}{\sigma_D^2} $$
with \( \sigma_D=\frac{1}{N}\sum_{n=1}^{N}\sigma_n \), where \( \sigma_D \) is the root-mean-square ranging error of the system. Supposing \( \sigma_1=\sigma_2=\sigma_3=\dots=\sigma_N=\sigma_r \), (6) can be simplified to \( GDOP=\sqrt{G_{11}+G_{22}} \), where \( G_{11} \) and \( G_{22} \) are the diagonal elements of G given by
$$ \mathbf{G}=\frac{1}{\sigma_D}\left(\mathbf{H}^{\mathrm{T}}\mathbf{Q}^{-1}\mathbf{H}\right)^{-1} $$
Considering that all the measurement errors are independent, we have \( \mathbf{Q}=\sigma_r^2\mathbf{I} \) with an N × N identity matrix I. Then G becomes
$$ \mathbf{G}=\left(\mathbf{H}^{\mathrm{T}}\mathbf{H}\right)^{-1} $$
Correspondingly, the Jacobian matrix H of TOA is
$$ \mathbf{H}_{\mathrm{TOA}}=\begin{bmatrix}\frac{x-x_1}{d_1} & \frac{y-y_1}{d_1}\\ \frac{x-x_2}{d_2} & \frac{y-y_2}{d_2}\\ \vdots & \vdots\\ \frac{x-x_N}{d_N} & \frac{y-y_N}{d_N}\end{bmatrix}=\begin{bmatrix}-\cos\alpha_1 & -\sin\alpha_1\\ -\cos\alpha_2 & -\sin\alpha_2\\ \vdots & \vdots\\ -\cos\alpha_N & -\sin\alpha_N\end{bmatrix} $$
where \( \alpha_n \) is the angle of the ray from the PT to the nth GN relative to the positive X-axis.
Substituting (9) into (8), we have
$$ \mathbf{G}_{\mathrm{TOA}}=\frac{\begin{bmatrix}\sum_{n=1}^{N}\sin^2\alpha_n & -\sum_{n=1}^{N}\cos\alpha_n\sin\alpha_n\\ -\sum_{n=1}^{N}\cos\alpha_n\sin\alpha_n & \sum_{n=1}^{N}\cos^2\alpha_n\end{bmatrix}}{\sum_{n=1}^{N-1}\sum_{m=n+1}^{N}\sin^2(\alpha_m-\alpha_n)} $$
Therefore, the GDOP of TOA can be derived as
$$ GDOP_{\mathrm{TOA}}=\sqrt{\frac{N}{\sum_{n=1}^{N-1}\sum_{m=n+1}^{N}\sin^2(\alpha_m-\alpha_n)}} $$
Similarly, the Jacobian matrix and the GDOP of AOA can be obtained with the observation vector Λ replaced by \( [\alpha_1,\alpha_2,\cdots,\alpha_N]^{\mathrm{T}} \):
$$ \mathbf{H}_{\mathrm{AOA}}=\begin{bmatrix}-\frac{y-y_1}{d_1^2} & \frac{x-x_1}{d_1^2}\\ -\frac{y-y_2}{d_2^2} & \frac{x-x_2}{d_2^2}\\ \vdots & \vdots\\ -\frac{y-y_N}{d_N^2} & \frac{x-x_N}{d_N^2}\end{bmatrix}=\begin{bmatrix}\frac{\sin\alpha_1}{d_1} & -\frac{\cos\alpha_1}{d_1}\\ \frac{\sin\alpha_2}{d_2} & -\frac{\cos\alpha_2}{d_2}\\ \vdots & \vdots\\ \frac{\sin\alpha_N}{d_N} & -\frac{\cos\alpha_N}{d_N}\end{bmatrix} $$
$$ GDOP_{\mathrm{AOA}}=\sqrt{\frac{\sum_{n=1}^{N}d_n^{-2}}{\sum_{n=1}^{N-1}\sum_{m=n+1}^{N}d_m^{-2}d_n^{-2}\sin^2(\alpha_m-\alpha_n)}} $$
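The closed forms (11) and (13) are easy to evaluate numerically. The following is a hypothetical NumPy sketch of both; gdop_toa and gdop_aoa are our own helper names, with alpha the GN azimuths in radians and d the PT-GN distances. As a check, for three GNs spaced 120° apart, gdop_toa returns 2/√3 ≈ 1.155, matching the bound \( 2/\sqrt{N} \) quoted from [27].

```python
import numpy as np

def gdop_toa(alpha):
    """GDOP of Eq. (11) from the GN azimuths alpha (radians)."""
    a = np.asarray(alpha, dtype=float)
    n = len(a)
    denom = sum(np.sin(a[m] - a[k]) ** 2
                for k in range(n - 1) for m in range(k + 1, n))
    return np.sqrt(n / denom)

def gdop_aoa(alpha, d):
    """GDOP of Eq. (13) from GN azimuths alpha and PT-GN distances d."""
    a = np.asarray(alpha, dtype=float)
    d = np.asarray(d, dtype=float)
    num = np.sum(d ** -2.0)
    denom = sum((d[m] * d[k]) ** -2.0 * np.sin(a[m] - a[k]) ** 2
                for k in range(len(a) - 1) for m in range(k + 1, len(a)))
    return np.sqrt(num / denom)

print(gdop_toa([0, 2 * np.pi / 3, 4 * np.pi / 3]))   # 1.1547... = 2/sqrt(3)
```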
GDOP of cooperative localization systems
The key idea of cooperative localization systems is to improve the accuracy of the PTs by introducing more LNs and filtering out CNs from them. Theoretically, the more CNs employed, the lower the GDOP obtained. In this section, assume that in total M CNs are to be allocated to a single PT to reduce the GDOP. After the implementation of the assignment, the Jacobian matrices of the observation vectors of the TOA-based and AOA-based cooperative localization systems are respectively refreshed to
$$ \mathbf{H}_{\mathrm{C\text{-}TOA}}=\left[\mathbf{H}_{\mathrm{TOA}}^{\mathrm{T}}\quad \mathbf{H}_{\mathrm{c\text{-}TOA}}^{\mathrm{T}}\right]^{\mathrm{T}} $$
$$ \mathbf{H}_{\mathrm{C\text{-}AOA}}=\left[\mathbf{H}_{\mathrm{AOA}}^{\mathrm{T}}\quad \mathbf{H}_{\mathrm{c\text{-}AOA}}^{\mathrm{T}}\right]^{\mathrm{T}} $$
with
$$ \mathbf{H}_{\mathrm{c\text{-}TOA}}=\begin{bmatrix}\frac{x-x_{c1}}{r_1} & \frac{y-y_{c1}}{r_1}\\ \vdots & \vdots\\ \frac{x-x_{cM}}{r_M} & \frac{y-y_{cM}}{r_M}\end{bmatrix}=\begin{bmatrix}-\cos\beta_1 & -\sin\beta_1\\ \vdots & \vdots\\ -\cos\beta_M & -\sin\beta_M\end{bmatrix} $$
$$ \mathbf{H}_{\mathrm{c\text{-}AOA}}=\begin{bmatrix}-\frac{y-y_{c1}}{r_1^2} & \frac{x-x_{c1}}{r_1^2}\\ \vdots & \vdots\\ -\frac{y-y_{cM}}{r_M^2} & \frac{x-x_{cM}}{r_M^2}\end{bmatrix}=\begin{bmatrix}\frac{\sin\beta_1}{r_1} & -\frac{\cos\beta_1}{r_1}\\ \vdots & \vdots\\ \frac{\sin\beta_M}{r_M} & -\frac{\cos\beta_M}{r_M}\end{bmatrix} $$
where \( \beta_m \) is the angle of the ray from the PT to the mth CN relative to the positive X-axis, \( (x_{cm},y_{cm}) \) denotes the coordinate of the mth CN, and \( r_m \) denotes the distance between the PT and the mth CN. The geometry of the nodes in the two-dimensional cooperative localization scenario is illustrated in Fig. 2.
Geometry of the nodes in two-dimensional cooperative localization scenario
Hence, the GDOP with respect to M CNs can be calculated as
$$ GDOP_{\mathrm{C\text{-}TOA}}^{M}=\sqrt{\frac{N+M}{\sum_{n=1}^{N}\sum_{m=1}^{M}\sin^2(\beta_m-\alpha_n)+\sum_{n=1}^{N-1}\sum_{m=n+1}^{N}\sin^2(\alpha_m-\alpha_n)+\sum_{n=1}^{M-1}\sum_{m=n+1}^{M}\sin^2(\beta_m-\beta_n)}} $$
$$ GDOP_{\mathrm{C\text{-}AOA}}^{M}=\sqrt{\frac{\sum_{n=1}^{N}d_n^{-2}+\sum_{m=1}^{M}r_m^{-2}}{\varSigma_1+\varSigma_2+\varSigma_3}} $$
where \( \varSigma_1=\sum_{n=1}^{N}\sum_{m=1}^{M}(d_n r_m)^{-2}\sin^2(\beta_m-\alpha_n) \), \( \varSigma_2=\sum_{n=1}^{M-1}\sum_{m=n+1}^{M}(r_n r_m)^{-2}\sin^2(\beta_m-\beta_n) \), and \( \varSigma_3=\sum_{n=1}^{N-1}\sum_{m=n+1}^{N}(d_n d_m)^{-2}\sin^2(\alpha_m-\alpha_n) \).
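Note that the denominators of (18) and (19) sum the sin² terms over every pair drawn from the pooled set of GNs and CNs, so the cooperative GDOP reduces to the single-system formulas applied to the combined node set. Assuming the gdop_toa and gdop_aoa helpers sketched earlier are in scope, the cooperative versions are then one-liners:

```python
import numpy as np

def gdop_c_toa(alpha, beta):
    """Cooperative TOA GDOP of Eq. (18): GN azimuths alpha, CN azimuths beta."""
    return gdop_toa(np.concatenate([alpha, beta]))

def gdop_c_aoa(alpha, d, beta, r):
    """Cooperative AOA GDOP of Eq. (19): adds CN azimuths beta at ranges r."""
    return gdop_aoa(np.concatenate([alpha, beta]), np.concatenate([d, r]))
```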
OPMPD approach
From (18) and (19), we find that the GDOP of a two-dimensional TOA-based cooperative localization system is affected by the azimuths of the CNs, whereas the GDOP of the AOA-based system depends on both the azimuth of each CN and its distance to the PT. Theoretically, the deployment of cooperative nodes is a multiple-parameter optimization problem of high complexity, especially for large-scale CN instances. In order to simplify this problem, the OPMPD approach employs a stepwise strategy to obtain the optimal position of each CN step by step, just as the eigenvalue-based approach does. However, the OPMPD approach has three major advantages. First, it is deduced on the basis of partial differentiation, which provides a more explicit scheme; since there is no need to calculate the generalized eigenvalues and eigenvectors of the FIM, unnecessary computation is avoided. Second, the inertia dependence factor is introduced to indicate the relationship among the optimal positions of the CNs, which suggests a discipline for the deployment. Third, the OPMPD approach is suitable for both two-dimensional TOA-based and AOA-based cooperative localization systems.
Objective function based on the stepwise strategy
In the OPMPD approach, the optimal position of the CN to be assigned depends not only on the geometry of the PT and the GNs but also on the locations of the CNs already deployed, since the CNs are placed in sequence. To begin with, assume the first CN is to be placed; the GDOP of the TOA-based cooperative system is found to be
$$ GDOP_{\mathrm{C\text{-}TOA}}^{1}=\sqrt{\frac{N+1}{\sum_{n=1}^{N}\sin^2(\beta_1-\alpha_n)+\sum_{n=1}^{N-1}\sum_{m=n+1}^{N}\sin^2(\alpha_n-\alpha_m)}} $$
Obviously, the current GDOP is only relevant to \( \beta_1 \), since \( \alpha_1,\alpha_2,\dots,\alpha_N \) are known. Considering that the minimization of the GDOP is equivalent to the maximization of \( \sum_{n=1}^{N}\sin^2(\beta_1-\alpha_n) \), we extract the component relevant to \( \beta_1 \) and define the initial observation function as
$$ Q(\beta_1)=\sum_{n=1}^{N}\sin^2(\beta_1-\alpha_n) $$
Accordingly, if the former (m − 1) CNs have been placed, the observation function with respect to the mth CN is updated to
$$ Q(\beta_m)=\sum_{n=1}^{N}\sin^2(\beta_m-\alpha_n)+\sum_{j=1}^{m-1}\sin^2(\beta_m-\beta_{\mathrm{O},j,\mathrm{T}}) $$
where \( \beta_{\mathrm{O},j,\mathrm{T}} \) denotes the best azimuth of the jth CN that has already been acquired.
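A direct transcription of the observation function (22) is shown below; q_toa is a hypothetical helper name of ours, with beta_opt holding the azimuths already fixed for the earlier CNs (empty for the first CN, which recovers Eq. (21)).

```python
import numpy as np

def q_toa(beta_m, alpha, beta_opt=()):
    """Observation function Q(beta_m) of Eq. (22) for the m-th CN."""
    alpha = np.asarray(alpha, dtype=float)
    beta_opt = np.asarray(beta_opt, dtype=float)   # azimuths of already-placed CNs
    return (np.sum(np.sin(beta_m - alpha) ** 2)
            + np.sum(np.sin(beta_m - beta_opt) ** 2))
```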
Similarly, the GDOP of the AOA cooperative system can be expressed as
$$ GDOP_{\mathrm{C\text{-}AOA}}^{1}=\sqrt{\frac{r_1^{-2}+\sum_{n=1}^{N}d_n^{-2}}{r_1^{-2}\sum_{n=1}^{N}d_n^{-2}\sin^2(\beta_1-\alpha_n)+\sum_{n=1}^{N-1}\sum_{m=n+1}^{N}d_n^{-2}d_m^{-2}\sin^2(\alpha_n-\alpha_m)}} $$
Supposing each \( r_m \) is a fixed value, we then have
$$ Q(\beta_1)=\sum_{n=1}^{N}d_n^{-2}\sin^2(\beta_1-\alpha_n) $$
$$ Q(\beta_m)=\sum_{n=1}^{N}d_n^{-2}\sin^2(\beta_m-\alpha_n)+\sum_{j=1}^{m-1}r_j^{-2}\sin^2(\beta_m-\beta_{\mathrm{O},j,\mathrm{T}}) $$
In addition, we particularly discuss the effect of \( r_m \) in Section 4.3 to fully analyze the GDOP performance of the AOA-based cooperative localization system.
OPMPD approach in the TOA-based cooperative localization system
The core concept of the OPMPD approach is to obtain the extremum of \( Q(\beta_m) \) by calculating the partial derivative with respect to \( \beta_m \). First of all, we build the first-order partial differential equations to search for the arrest points. For TOA-based cooperative localization systems, let \( \partial Q(\beta_1)/\partial\beta_1=0 \) and define \( \beta_{1,T}^1 \) and \( \beta_{1,T}^2 \) as the arrest points of \( Q(\beta_1) \); then we have
$$ \begin{cases}\sin 2\beta_{1,T}^1\sum_{n=1}^{N}\cos 2\alpha_n=\cos 2\beta_{1,T}^1\sum_{n=1}^{N}\sin 2\alpha_n\\ \sin 2\beta_{1,T}^2\sum_{n=1}^{N}\cos 2\alpha_n=\cos 2\beta_{1,T}^2\sum_{n=1}^{N}\sin 2\alpha_n\end{cases} $$
Supposing \( \sum_{n=1}^{N}\cos 2\alpha_n\ne 0 \), \( \cos 2\beta_{1,T}^1\ne 0 \), and \( \cos 2\beta_{1,T}^2\ne 0 \), (26) can be rewritten according to the equal-ratios theorem as
$$ \frac{\sin 2\beta_{1,T}^1}{\cos 2\beta_{1,T}^1}=\frac{\sin 2\beta_{1,T}^2}{\cos 2\beta_{1,T}^2}=\frac{\sum_{n=1}^{N}\sin 2\alpha_n}{\sum_{n=1}^{N}\cos 2\alpha_n} $$
Assuming \( \beta_{1,T}^2>\beta_{1,T}^1 \), then
$$ \beta_{1,T}^1=\frac{1}{2}\arctan\left(\frac{\sum_{n=1}^{N}\sin 2\alpha_n}{\sum_{n=1}^{N}\cos 2\alpha_n}\right)\in\left[-\frac{\pi}{4},\frac{\pi}{4}\right],\qquad \beta_{1,T}^2=\beta_{1,T}^1+\frac{\pi}{2}\in\left[\frac{\pi}{4},\frac{3\pi}{4}\right] $$
For ease of depiction and distinction, we term \( \beta_{1,T}^1 \) and \( \beta_{1,T}^2 \) in (28) the positive azimuth factor and the negative azimuth factor of the first CN, respectively.
Substituting \( \beta_{1,T}^1 \) and \( \beta_{1,T}^2 \) into the second-order partial derivative Q″ to find which azimuth factor maximizes the observation function, we have
$$ Q''(\beta_{1,T}^1)=2\left|\overset{\rightharpoonup}{A_{\mathrm{T}}}\right|\left|\overset{\rightharpoonup}{B_{1,\mathrm{T}}^1}\right|\cos\gamma_1,\qquad Q''(\beta_{1,T}^2)=2\left|\overset{\rightharpoonup}{A_{\mathrm{T}}}\right|\left|\overset{\rightharpoonup}{B_{1,\mathrm{T}}^2}\right|\cos\gamma_2 $$
with
$$ \overset{\rightharpoonup}{A_{\mathrm{T}}}=\Re+j\kappa=\sum_{n=1}^{N}\cos 2\alpha_n+j\sum_{n=1}^{N}\sin 2\alpha_n,\quad \overset{\rightharpoonup}{B_{1,\mathrm{T}}^1}=\cos 2\beta_{1,T}^1+j\sin 2\beta_{1,T}^1,\quad \overset{\rightharpoonup}{B_{1,\mathrm{T}}^2}=\cos 2\beta_{1,T}^2+j\sin 2\beta_{1,T}^2 $$
where \( \overset{\rightharpoonup}{A_{\mathrm{T}}} \) is the azimuth vector of the system and ℜ is the cosine azimuth coefficient. \( \overset{\rightharpoonup}{B_{1,\mathrm{T}}^1} \) and \( \overset{\rightharpoonup}{B_{1,\mathrm{T}}^2} \) are defined as the positive azimuth vector and the negative azimuth vector of the first CN, j denotes the imaginary unit, \( \gamma_1 \) denotes the angle between \( \overset{\rightharpoonup}{A_{\mathrm{T}}} \) and \( \overset{\rightharpoonup}{B_{1,\mathrm{T}}^1} \), and \( \gamma_2 \) denotes the angle between \( \overset{\rightharpoonup}{A_{\mathrm{T}}} \) and \( \overset{\rightharpoonup}{B_{1,\mathrm{T}}^2} \). From (27), (28), (29), and (30), we can draw three useful conclusions. First, the value of \( \cos\gamma_1 \) is either 1 or −1, and so is \( \cos\gamma_2 \). Second, the direction of \( \overset{\rightharpoonup}{B_{1,\mathrm{T}}^1} \) is opposite to that of \( \overset{\rightharpoonup}{B_{1,\mathrm{T}}^2} \). Third, if ℜ < 0, the azimuth of \( \overset{\rightharpoonup}{A_{\mathrm{T}}} \) is confined between π/2 and 3π/2, so \( GDOP_{\mathrm{TOA}}(\beta_{1,T}^1) \) achieves the minimum value with \( \cos\gamma_1=-1 \) and \( Q''(\beta_{1,T}^1)<0 \). Conversely, if ℜ > 0, the azimuth of \( \overset{\rightharpoonup}{A_{\mathrm{T}}} \) is confined between −π/2 and π/2, which means that \( GDOP_{\mathrm{TOA}}(\beta_{1,\mathrm{T}}^2) \) attains the minimum value with \( \cos\gamma_2=-1 \) and \( Q''(\beta_{1,\mathrm{T}}^2)<0 \). Consequently, a deployment mechanism for the first CN can be summarized as
$$ \beta_{\mathrm{O},1,\mathrm{T}}=\begin{cases}random & if\ \Re=0\\ \beta_{1,\mathrm{T}}^1 & if\ \Re<0\\ \beta_{1,\mathrm{T}}^2 & if\ \Re>0\end{cases} $$
where \( \beta_{\mathrm{O},1,\mathrm{T}} \) represents the optimal azimuth corresponding to the least GDOP and \( \overset{\rightharpoonup}{B_{\mathrm{O},1,\mathrm{T}}}=\cos 2\beta_{\mathrm{O},1,\mathrm{T}}+j\sin 2\beta_{\mathrm{O},1,\mathrm{T}} \) represents the optimal azimuth vector of the first CN.
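Putting (28) and (31) together, the first CN's optimal azimuth follows from the signs of the two trigonometric sums. The sketch below is our own transcription: it assumes the non-degenerate case \( \sum\cos 2\alpha_n\ne 0 \) of Eq. (27) and treats a vanishing azimuth vector as the "random" branch of (31); the function name is hypothetical.

```python
import numpy as np

def first_cn_azimuth_toa(alpha, rng=None):
    """Optimal azimuth of the first CN per Eqs. (28) and (31)."""
    a = np.asarray(alpha, dtype=float)
    kappa = np.sum(np.sin(2 * a))            # imaginary part of the azimuth vector
    R = np.sum(np.cos(2 * a))                # cosine azimuth coefficient
    if np.isclose(R, 0.0) and np.isclose(kappa, 0.0):
        rng = rng or np.random.default_rng() # Eq. (31): any azimuth is optimal
        return rng.uniform(-np.pi / 4, 3 * np.pi / 4)
    b1 = 0.5 * np.arctan(kappa / R)          # positive azimuth factor, in [-pi/4, pi/4]
    b2 = b1 + np.pi / 2                      # negative azimuth factor
    return b1 if R < 0 else b2               # choose by the polarity of R
```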
After the first CN is arranged at its optimum position, both the azimuth vector of the system and the objective function are refreshed to
$$ \overset{\rightharpoonup}{A_{\mathrm{T}}}=\Re+j\kappa=\left(\cos 2\beta_{\mathrm{O},1,\mathrm{T}}+\sum_{n=1}^{N}\cos 2\alpha_n\right)+j\left(\sin 2\beta_{\mathrm{O},1,\mathrm{T}}+\sum_{n=1}^{N}\sin 2\alpha_n\right) $$
$$ Q(\beta_2)=\sum_{n=1}^{N}\sin^2(\beta_2-\alpha_n)+\sin^2(\beta_2-\beta_{\mathrm{O},1,\mathrm{T}}) $$
Define \( \beta_{2,\mathrm{T}}^1 \) and \( \beta_{2,\mathrm{T}}^2 \) as the arrest points of \( Q(\beta_2) \); then we have
$$ \beta_{2,\mathrm{T}}^1=\beta_{1,\mathrm{T}}^1,\qquad \beta_{2,\mathrm{T}}^2=\beta_{1,\mathrm{T}}^2 $$
since
$$ \frac{\sin 2\beta_{2,\mathrm{T}}^1}{\cos 2\beta_{2,\mathrm{T}}^1}=\frac{\sin 2\beta_{2,\mathrm{T}}^2}{\cos 2\beta_{2,\mathrm{T}}^2}=\frac{\sum_{n=1}^{N}\sin 2\alpha_n+\sin 2\beta_{\mathrm{O},1,\mathrm{T}}}{\sum_{n=1}^{N}\cos 2\alpha_n+\cos 2\beta_{\mathrm{O},1,\mathrm{T}}}=\frac{\sum_{n=1}^{N}\sin 2\alpha_n}{\sum_{n=1}^{N}\cos 2\alpha_n} $$
The optimal azimuth of the second CN can then be determined with ℜ updated to \( \sum_{n=1}^{N}\cos 2\alpha_n+\cos 2\beta_{\mathrm{O},1,\mathrm{T}} \). From (35), it is clear that both the positive and the negative azimuth factors of each successive CN equal those of the first CN. Hence, we define \( \beta_{\mathrm{T}}^1 \) and \( \beta_{\mathrm{T}}^2 \) as the positive azimuth factor and the negative azimuth factor of the system, where \( \beta_{\mathrm{T}}^1=\beta_{1,\mathrm{T}}^1 \) and \( \beta_{\mathrm{T}}^2=\beta_{1,\mathrm{T}}^2 \). Suppose the absolute value of the initial cosine azimuth coefficient is larger than 2; the absolute value of the refreshed cosine azimuth coefficient will then decrease as CNs are introduced, since the directions of \( \overset{\rightharpoonup}{B_{\mathrm{O},1,\mathrm{T}}} \) and \( \overset{\rightharpoonup}{B_{\mathrm{O},2,\mathrm{T}}} \) are opposite to that of the initial \( \overset{\rightharpoonup}{A_{\mathrm{T}}} \). That is,
$$ \left|\cos 2\beta_{\mathrm{O},1,\mathrm{T}}+\cos 2\beta_{\mathrm{O},2,\mathrm{T}}+\sum_{n=1}^{N}\cos 2\alpha_n\right|<\left|\cos 2\beta_{\mathrm{O},1,\mathrm{T}}+\sum_{n=1}^{N}\cos 2\alpha_n\right|<\left|\sum_{n=1}^{N}\cos 2\alpha_n\right| $$
Moreover, if the initial cosine azimuth coefficient is even greater, there exists a theoretical quantity of CNs, termed \( Z_{\mathrm{T}} \), satisfying
$$ \beta_{\mathrm{O},Z_{\mathrm{T}}+1,\mathrm{T}}\ne\beta_{\mathrm{O},Z_{\mathrm{T}},\mathrm{T}}=\dots=\beta_{\mathrm{O},2,\mathrm{T}}=\beta_{\mathrm{O},1,\mathrm{T}} $$
which means that the optimal azimuths of the former \( Z_{\mathrm{T}} \) CNs equal \( \beta_{\mathrm{O},1,\mathrm{T}} \) rather than \( \beta_{\mathrm{O},Z_{\mathrm{T}}+1,\mathrm{T}} \). For simplicity, we define \( Z_{\mathrm{T}} \) as the inertia dependence factor and estimate it by
$$ \begin{cases}\left((Z_{\mathrm{T}}-1)\cos 2\beta_{\mathrm{O},1,\mathrm{T}}+\sum_{n=1}^{N}\cos 2\alpha_n\right)\sum_{n=1}^{N}\cos 2\alpha_n>0\\ \left(Z_{\mathrm{T}}\cos 2\beta_{\mathrm{O},1,\mathrm{T}}+\sum_{n=1}^{N}\cos 2\alpha_n\right)\sum_{n=1}^{N}\cos 2\alpha_n<0\end{cases} $$
From (38), we have
$$ \begin{cases}1+(Z_{\mathrm{T}}-1)V>0\\ Z_{\mathrm{T}}V+1<0\end{cases} $$
with
$$ V=\frac{\sin 2\beta_{\mathrm{O},1,T}}{\sum_{n=1}^{N}\sin 2\alpha_n}=\frac{\cos 2\beta_{\mathrm{O},1,T}}{\sum_{n=1}^{N}\cos 2\alpha_n} $$
Hence,
$$ Z_{\mathrm{T}}=\left\lceil\sqrt{\left(\sum_{n=1}^{N}\sin 2\alpha_n\right)^2+\left(\sum_{n=1}^{N}\cos 2\alpha_n\right)^2}\right\rceil $$
where ⌈ ⌉ denotes the operation of rounding up. The effect of the magnitude of M on the deployment is also taken into consideration in our research. Assume the initial value of ℜ is less than zero. If \( Z_{\mathrm{T}}\ge M \), the relationship among the optimal azimuths of the CNs is found to be
$$ \beta_{\mathrm{O},M,\mathrm{T}}=\dots=\beta_{\mathrm{O},2,\mathrm{T}}=\beta_{\mathrm{O},1,\mathrm{T}}=\beta_{\mathrm{T}}^1 $$
On the contrary, if \( Z_{\mathrm{T}}<M \), it is necessary to explore the optimal azimuths of the CNs whose indices are larger than \( Z_{\mathrm{T}} \). For the \( (Z_{\mathrm{T}}+1) \)th CN to be placed, ℜ is updated to
$$ \Re=Z_{\mathrm{T}}\cos 2\beta_{\mathrm{T}}^1+\sum_{n=1}^{N}\cos 2\alpha_n $$
Since \( \left(Z_{\mathrm{T}}\cos 2\beta_{\mathrm{T}}^1+\sum_{n=1}^{N}\cos 2\alpha_n\right)\sum_{n=1}^{N}\cos 2\alpha_n<0 \) by (38), we have
$$ \beta_{\mathrm{O},Z_{\mathrm{T}}+1,\mathrm{T}}=\beta_{\mathrm{T}}^2 $$
Then, for the \( (Z_{\mathrm{T}}+2) \)th CN to be placed, we have
$$ \Re=Z_{\mathrm{T}}\cos 2\beta_{\mathrm{T}}^1+\sum_{n=1}^{N}\cos 2\alpha_n+\cos 2\beta_{\mathrm{O},Z_{\mathrm{T}}+1,\mathrm{T}}=(Z_{\mathrm{T}}-1)\cos 2\beta_{\mathrm{T}}^1+\sum_{n=1}^{N}\cos 2\alpha_n $$
Consequently, the relationship among the optimal azimuths of the CNs is found to be
$$ \beta_{\mathrm{O},Z_{\mathrm{T}}+2K,\mathrm{T}}=\dots=\beta_{\mathrm{O},Z_{\mathrm{T}}+2,\mathrm{T}}=\beta_{\mathrm{O},Z_{\mathrm{T}},\mathrm{T}}=\dots=\beta_{\mathrm{O},1,\mathrm{T}}=\beta_{\mathrm{T}}^1,\qquad \beta_{\mathrm{O},Z_{\mathrm{T}}+2K-1,\mathrm{T}}=\dots=\beta_{\mathrm{O},Z_{\mathrm{T}}+3,\mathrm{T}}=\beta_{\mathrm{O},Z_{\mathrm{T}}+1,\mathrm{T}}=\beta_{\mathrm{T}}^2 $$
with
$$ K=\left\lfloor 0.5\,(M+1-Z_{\mathrm{T}})\right\rfloor $$
where ⌊ ⌋ denotes the operation of rounding down. A similar conclusion can also be derived in the case where the initial value of ℜ is equal to or greater than zero. Hence, the solution to the optimal deployment of cooperative nodes in a two-dimensional TOA-based localization system aiming at the lowest GDOP can be summarized as follows. Step 1: According to the spatial relationship between the GNs and the PT, establish the initial observation function and calculate \( \beta_{\mathrm{T}}^1 \), \( \beta_{\mathrm{T}}^2 \), \( Z_{\mathrm{T}} \), and the initial ℜ. Step 2: Determine the optimal azimuth of the first CN from \( \beta_{\mathrm{T}}^1 \) and \( \beta_{\mathrm{T}}^2 \), according to the polarity of ℜ. Step 3: If \( Z_{\mathrm{T}}\ge M \), deploy all the CNs in the same azimuth as the first CN. Step 4: If \( Z_{\mathrm{T}}<M \), deploy the former \( Z_{\mathrm{T}} \) CNs in the same azimuth as the first CN and place the \( (Z_{\mathrm{T}}+1) \)th CN in the azimuth perpendicular to that of the first CN. Then, \( \beta_{\mathrm{T}}^1 \) and \( \beta_{\mathrm{T}}^2 \) are alternately chosen as the optimal azimuths of the remaining CNs.
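Steps 1–4 condense into a few lines once \( \beta_{\mathrm{T}}^1 \), \( \beta_{\mathrm{T}}^2 \), and \( Z_{\mathrm{T}} \) are available. The following sketch is our own transcription (not the authors' MATLAB code) and assumes the non-degenerate case ℜ ≠ 0: the first \( Z_{\mathrm{T}} \) CNs inherit the first CN's azimuth, after which the two azimuth factors alternate as in Eq. (46).

```python
import numpy as np

def opmpd_toa(alpha, M):
    """OPMPD deployment of M CN azimuths for a TOA system (Steps 1-4)."""
    a = np.asarray(alpha, dtype=float)
    kappa = np.sum(np.sin(2 * a))
    R = np.sum(np.cos(2 * a))                       # initial cosine azimuth coefficient
    b1 = 0.5 * np.arctan(kappa / R)                 # beta_T^1, Eq. (28)
    b2 = b1 + np.pi / 2                             # beta_T^2
    first, other = (b1, b2) if R < 0 else (b2, b1)  # first CN's azimuth by Eq. (31)
    Z = int(np.ceil(np.hypot(kappa, R)))            # inertia dependence factor, Eq. (41)
    azimuths = []
    for m in range(M):
        if m < Z:
            azimuths.append(first)                  # inertia phase: repeat the first azimuth
        else:
            # beyond Z_T, the two azimuth factors alternate, Eq. (46)
            azimuths.append(other if (m - Z) % 2 == 0 else first)
    return azimuths
```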
OPMPD approach in the AOA-based cooperative localization system
As mentioned at the beginning of "Section 4," in a two-dimensional AOA-based localization system, both the azimuth and the distance of each CN exert an influence on the GDOP performance. First of all, we analyze the influence of \( \beta_1 \) on the value of \( GDOP_{\mathrm{C\text{-}AOA}}^1 \) depicted in (23), assuming each \( r_m \) is a fixed value. With the help of a method similar to that used for the TOA-based system, the positive azimuth factor and the negative azimuth factor of the system can be derived as
$$ \beta_{\mathrm{A}}^1=\beta_{1,\mathrm{A}}^1=\frac{1}{2}\arctan\left(\frac{\sum_{n=1}^{N}d_n^{-2}\sin 2\alpha_n}{\sum_{n=1}^{N}d_n^{-2}\cos 2\alpha_n}\right),\qquad \beta_{\mathrm{A}}^2=\beta_{1,\mathrm{A}}^2=\beta_{1,\mathrm{A}}^1+\frac{\pi}{2} $$
where \( \beta_{1,\mathrm{A}}^1 \) and \( \beta_{1,\mathrm{A}}^2 \) denote the arrest points of the observation function depicted in (24). The azimuth vector of the system can also be obtained by
$$ \overset{\rightharpoonup}{A_{\mathrm{A}}}=\Re+j\kappa=\sum_{n=1}^{N}d_n^{-2}\cos 2\alpha_n+j\sum_{n=1}^{N}d_n^{-2}\sin 2\alpha_n $$
Accordingly, the optimal azimuth of the first CN is updated to
$$ \beta_{\mathrm{O},1,\mathrm{A}}=\begin{cases}random & if\ \Re=0\\ \beta_{\mathrm{A}}^1 & if\ \Re<0\\ \beta_{\mathrm{A}}^2 & if\ \Re>0\end{cases} $$
where \( \beta_{\mathrm{O},1,\mathrm{A}} \) represents the optimal azimuth corresponding to the least GDOP. Secondly, we further discuss the influence of \( r_1 \) on \( GDOP_{\mathrm{AOA}}^1 \). Suppose the scope of \( r_m \) follows
$$ r_{1,\min}\le r_m\le r_{1,\max} $$
where \( r_{1,\min} \) and \( r_{1,\max} \) denote the minimum and the maximum. For simplicity, let
$$ C=r_1^{-2},\quad D=\sum_{n=1}^{N}d_n^{-2},\quad E=\sum_{n=1}^{N}d_n^{-2}\sin^2(\beta_1-\alpha_n),\quad F=\sum_{n=1}^{N-1}\sum_{m=n+1}^{N}d_n^{-2}d_m^{-2}\sin^2(\alpha_n-\alpha_m) $$
The first-order derivative of \( GDOP_{\mathrm{AOA}}^1 \) with respect to \( r_1 \) is then expressed as
$$ \frac{\partial GDOP_{\mathrm{AOA}}^1(r_1)}{\partial r_1}=\frac{1}{2\,GDOP_{\mathrm{AOA}}^1}\cdot\frac{-2(F-DE)}{r_1^3\,(F+CE)^2} $$
Here, we pay attention to the polarity of (F − DE) to investigate the monotonicity of \( GDOP_{\mathrm{AOA}}^1 \).
Based on (53), we have
$$ 2E-D=-\left(\cos 2\beta_1\sum_{n=1}^{N}d_n^{-2}\cos 2\alpha_n+\sin 2\beta_1\sum_{n=1}^{N}d_n^{-2}\sin 2\alpha_n\right)=\sqrt{\sum_{n=1}^{N}d_n^{-4}+2\sum_{n=1}^{N-1}\sum_{m=n+1}^{N}d_n^{-2}d_m^{-2}\left(1-2\sin^2(\alpha_n-\alpha_m)\right)} $$
$$ D^2-4F=\left(\sum_{n=1}^{N}d_n^{-2}\right)^2-4\sum_{n=1}^{N-1}\sum_{m=n+1}^{N}d_n^{-2}d_m^{-2}\sin^2(\alpha_n-\alpha_m)=\sum_{n=1}^{N}d_n^{-4}+2\sum_{n=1}^{N-1}\sum_{m=n+1}^{N}d_n^{-2}d_m^{-2}\left(1-2\sin^2(\alpha_n-\alpha_m)\right) $$
$$ 2F-D^2=-\sum_{n=1}^{N}d_n^{-4}+2\sum_{n=1}^{N-1}\sum_{m=n+1}^{N}d_n^{-2}d_m^{-2}\left(\sin^2(\alpha_m-\alpha_n)-1\right)<0 $$
From (55), (56), and (57), we have
$$ \sqrt{D^2-4F}=2E-D\ge 0\ge\frac{2F-D^2}{D} $$
Furthermore, we have
$$ F-DE\le 0 $$
Obviously, \( GDOP_{\mathrm{AOA}}^1 \) is an increasing function with respect to \( r_1 \), which means that \( r_{1,\min} \) is the optimal distance of the first CN. A similar conclusion is available for the other CNs. After the former m CNs have been placed, we have
$$ \Re=\sum_{j=1}^{m}r_{j,\min}^{-2}\cos 2\beta_{\mathrm{O},j,\mathrm{A}}+\sum_{n=1}^{N}d_n^{-2}\cos 2\alpha_n $$
Then the optimal azimuth of the (m + 1)th CN to be placed can be determined by
$$ \beta_{\mathrm{O},m+1,\mathrm{A}}=\begin{cases}random & if\ \Re=0\\ \beta_{\mathrm{A}}^1 & if\ \Re<0\\ \beta_{\mathrm{A}}^2 & if\ \Re>0\end{cases} $$
Particularly, if \( r_{1,\min}=r_{2,\min}=\dots=r_{M,\min}=r_{\min} \), the inertia dependence factor can also be calculated by
$$ \begin{cases}\left((Z_{\mathrm{A}}-1)\,r_{\min}^{-2}\cos 2\beta_{\mathrm{O},1,\mathrm{A}}+\sum_{n=1}^{N}d_n^{-2}\cos 2\alpha_n\right)\sum_{n=1}^{N}d_n^{-2}\cos 2\alpha_n>0\\ \left(Z_{\mathrm{A}}\,r_{\min}^{-2}\cos 2\beta_{\mathrm{O},1,\mathrm{A}}+\sum_{n=1}^{N}d_n^{-2}\cos 2\alpha_n\right)\sum_{n=1}^{N}d_n^{-2}\cos 2\alpha_n<0\end{cases} $$
which gives
$$ Z_{\mathrm{A}}=\left\lceil\sqrt{\left(\sum_{n=1}^{N}r_{\min}^2 d_n^{-2}\sin 2\alpha_n\right)^2+\left(\sum_{n=1}^{N}r_{\min}^2 d_n^{-2}\cos 2\alpha_n\right)^2}\right\rceil $$
Finally, the solution to the optimal deployment of cooperative nodes in a two-dimensional AOA-based localization system can be generalized as follows. Step 1: According to the spatial relationship between the GNs and the PT, establish the initial observation function and calculate \( \beta_{\mathrm{A}}^1 \), \( \beta_{\mathrm{A}}^2 \), and the initial ℜ. Step 2: Choose the lower bound of each \( r_m \) as the optimal distance of each CN. Step 3: Determine the optimal azimuth of the first CN from \( \beta_{\mathrm{A}}^1 \) and \( \beta_{\mathrm{A}}^2 \), according to the polarity of ℜ.
Step 4: If \( r_{1,\min}=r_{2,\min}=\dots=r_{M,\min}=r_{\min} \) is not satisfied, determine the optimal azimuth of each remaining CN according to the polarity of the updated ℜ. Step 5: If \( r_{1,\min}=r_{2,\min}=\dots=r_{M,\min}=r_{\min} \) is satisfied, calculate \( Z_{\mathrm{A}} \). If \( Z_{\mathrm{A}}\ge M \), deploy all the CNs in the same azimuth as the first CN. If \( Z_{\mathrm{A}}<M \), deploy the former \( Z_{\mathrm{A}} \) CNs in the same azimuth as the first CN and place the \( (Z_{\mathrm{A}}+1) \)th CN in the azimuth perpendicular to that of the first CN. Then, \( \beta_{\mathrm{A}}^1 \) and \( \beta_{\mathrm{A}}^2 \) are alternately chosen as the optimal azimuths of the remaining CNs. In this section, we establish three quite different simulation environments to evaluate the performance of the OPMPD approach and compare it with the partial exhaustive method, the global exhaustive method, and the eigenvalue-based approach. The results indicate that our approach provides a rapid and accurate deployment of CNs for TOA-based and AOA-based localization systems and is superior to the other approaches. Simulation setup The simulations are implemented in the MATLAB language and tested on a PC with an Intel Core i5-3470 CPU of 3.20 GHz and DDR3 SDRAM of 8 GB. Suppose there are in total 6 CNs to be placed to improve the accuracy of a PT that is served by 6 GNs. Simulation data are collected from three environments whose details are depicted in Fig. 3 and Table 1. As reported in (31) and (49), if ℜ = 0, any azimuth can be regarded as the best choice. In our test, for the case of ℜ = 0, we respectively choose π/5, π/6, and π/4 as the optimal azimuth of the CN in environment 1, environment 2, and environment 3 for ease of analysis. In addition, we set the domain of \( \beta_{\mathrm{O},m,\mathrm{T}} \) to \( \left[-\frac{\pi}{4},\frac{3\pi}{4}\right] \), considering the periodicity of \( Q(\beta_m) \) and the scope configured in (28). The layout of the three different simulation environments. a Environment 1. b Environment 2. c Environment 3 Table 2 Optimal performance obtained by OPMPD approach Deployment accuracy Because the OPMPD approach estimates the optimal position of each CN based on partial differentiation and the stepwise strategy, both the partial exhaustive method and the global exhaustive method are chosen to evaluate its accuracy. The significant difference between them is that the optimum of the CNs in the partial exhaustive method is decided step by step, just like in the eigenvalue-based approach and our approach, while the optimum of the CNs in the global exhaustive method is determined all at once. For the two kinds of exhaustive method, the granularity of azimuth is set to π/3600 rad and the granularity of distance is set to 0.5 m. Table 2 depicts the optimal performance obtained by the OPMPD approach. As revealed in Table 2(a), 0.6283, 2.9044, and 2.0051 are chosen as the optimal azimuths of the first CN in the three environments. Compared to the performance depicted in Fig. 4, the optimum of each CN achieved by OPMPD nearly equals the result afforded by the partial exhaustive method, which proves the correctness of the theoretical derivation in the OPMPD approach. A similar conclusion about the azimuth for the AOA-based localization system can be derived from Table 2(b) and Figs. 5, 6, and 7. Besides, it is shown that the GDOP of the AOA-based cooperative localization system decreases as the distance of the CN shrinks, which agrees with the conclusion in (59). Performance of partial exhaustive method towards TOA-based system. a Environment 1. b Environment 2. c Environment 3 Performance of partial exhaustive method towards AOA-based system in environment 1.
a Introduction of the first CN. b Introduction of the second CN. c Introduction of the third CN. d Introduction of the fourth CN. e Introduction of the fifth CN. f Introduction of the sixth CN Performance of partial exhaustive method towards AOA-based system in environment 2. a Introduction of the first CN. b Introduction of the second CN. c Introduction of the third CN. d Introduction of the fourth CN. e Introduction of the fifth CN. f Introduction of the sixth CN The accuracy comparison with the global exhaustive method is illustrated in Figs. 8 and 9. It can be seen that the GDOP of OPMPD approximately equals that of the global exhaustive method, being occasionally slightly larger. However, it is important to note that the OPMPD approach is much less complex. Accuracy comparison of OPMPD approach and global exhaustive method towards TOA-based system. a Environment 1. b Environment 2. c Environment 3 Accuracy comparison of OPMPD approach and global exhaustive method towards AOA-based system. a Environment 1. b Environment 2. c Environment 3 Deployment efficiency Running time is chosen to evaluate the efficiency of the OPMPD approach and the eigenvalue-based approach. For the TOA-based cooperative system, the procedure of the eigenvalue-based approach consists of three steps. First, the symmetric matrix G_TOA is calculated. Second, the eigenvalues and eigenvectors of G_TOA are determined. Third, the optimal position of the CN to be placed is decided between the two eigenvectors by contrasting the fore-and-aft eigenvalues. On the other hand, the procedure of the OPMPD approach mainly comprises two steps. First, the positive azimuth factor, the negative azimuth factor, the initial cosine azimuth coefficient, and the inertia dependence factor are calculated. Second, the optimal position of each CN is obtained according to the polarity of the initial cosine azimuth coefficient and the value of the inertia dependence factor. Figures 10 and 11, respectively, show the pseudocode of the procedures of the eigenvalue-based approach and the OPMPD approach for the TOA-based localization system. The running-time data are collected on the basis of 10,000 Monte Carlo simulations. As illustrated in Table 3, the efficiency of the OPMPD approach is nearly two times higher than that of the eigenvalue-based approach. One reason is that the complex eigen-decomposition of the matrix has been replaced by straightforward arithmetic operations. Pseudocode for the deployment procedure in eigenvalue-based approach towards TOA-based localization system Pseudocode for the deployment procedure in OPMPD approach towards TOA-based localization system Table 3 Average running time comparison of OPMPD approach and eigenvalue-based approach towards TOA-based localization system Requirement of the inertia dependence factor The inertia dependence factor, which reveals the inertia and the recursiveness of the deployment, also accounts for the efficiency improvement. For the TOA-based localization system, the requirement of the inertia dependence factor is that the CN can be placed in any azimuth. In environment 2, since \( Z_{\mathrm{T}}=6 \), assigning all the CNs to the azimuth 2.9044 rad achieves the minimum value of \( GDOP_{\mathrm{C,TOA}}^6=0.5775 \). For the AOA-based localization system, besides the azimuth, the requirement of the inertia dependence factor includes that the lower bound of the distance to the PT of each CN should be identical to the others'.
In environment 1, since \( r_{1,\min}=r_{2,\min}=\dots=r_{6,\min}=20 \) and \( Z_{\mathrm{A}}=2 \), assigning the former two CNs to the azimuth 2.1057 rad and alternately choosing 0.5349 rad and 2.1057 rad as the optimal azimuth of each remaining CN achieves the minimum value of \( GDOP_{\mathrm{C,AOA}}^6=14.8611 \). But in environment 3, as the lower bounds of the distances of the CNs differ from each other, the inertia dependence factor is unavailable, so the optimal azimuth of each CN is decided by the polarity of the updated cosine azimuth coefficient. In this paper, we have addressed the optimal deployment of cooperative nodes in two-dimensional localization systems. To achieve the deployment with the least GDOP, the GDOP expressions of TOA-based and AOA-based cooperative localization systems are derived on the basis of the Cramer-Rao bound. By examining the partial differentiation of the GDOP and leveraging a stepwise strategy, an approach termed OPMPD is suggested to expose the optimal position of each CN. The inertia dependence factor, which reveals the relationship among the optimal positions of the CNs, is also deduced. Simulation results prove that our solution achieves almost the same GDOP as that of the global exhaustive method but with lower time complexity. It is noted that the solution proposed in this paper is only suitable for TOA and AOA systems, not for TDOA systems. The reason is that the idea of TDOA is to determine the relative position of the mobile transmitter by examining the difference in time at which the signal arrives at multiple measuring units, rather than the absolute arrival time used in TOA; the Jacobian matrix of TDOA is thus quite different from those of TOA and AOA. Hence, our future work is to study the optimal deployment of cooperative nodes in other, more complex localization systems such as the TDOA-based, the TDOA/TOA-based, and the TDOA/AOA-based ones. A Fehske, G Fettweis, J Malmodin et al., The global footprint of mobile communications: the ecological and economic perspective. Communications Magazine, IEEE 49(8), 55–62 (2011) AH Sayed, A Tarighat, N Khajehnouri, Network-based wireless location: challenges faced in developing techniques for accurate wireless location information. Signal Processing Magazine, IEEE 22(4), 24–40 (2005) D Liu, B Sheng, F Hou et al., From wireless positioning to mobile positioning: an overview of recent advances. Systems Journal, IEEE 8(4), 1249–1259 (2014) PK Enge, The global positioning system: signals, measurements, and performance. Int. J. Wireless Inf. Networks 1(2), 83–105 (1994) Modeling and analysis for the GPS pseudo-range observable (1995) X Zhiyang, S Pengfei, Wireless location determination for mobile objects based on GSM in intelligent transportation systems. Systems Engineering and Electronics, Journal of 14(2), 8–13 (2003) D Peral-Rosado, J Lopez-Salcedo, G Seco-Granados et al., Achievable localization accuracy of the positioning reference signal of 3GPP LTE[C]//Localization and GNSS (ICL-GNSS), 2012 International Conference on (IEEE, 2012), pp. 1–6 A Bours, E Cetin, AG Dempster, Enhanced GPS interference detection and localisation. Electron. Lett. 50(19), 1391–1393 (2014) DKP Tan, H Sun, Y Lu et al., Passive radar using global system for mobile communication signal: theory, implementation and measurements. Radar, Sonar and Navigation, IEE Proceedings. IET 152(3), 116–123 (2005) S Wing, Mobile and wireless communication: space weather threats, forecasts, and risk management.
IT Professional 5, 40–46 (2012) YL Yeh, KC Cheng, WH Wang et al., Very short-term earthquake precursors from GPS signal interference based on the 2013 Nantou and Rueisuei earthquakes, Taiwan[J]. J. Asian Earth Sci. 114, 312–320 (2015) MH MacAlester, W Murtagh, Extreme space weather impact: an emergency management perspective. Space Weather 12(8), 530–537 (2014) J Meguro, T Murata, J Takiguchi et al., GPS multipath mitigation for urban area using omnidirectional infrared camera. Intelligent Transportation Systems, IEEE Transactions on 10(1), 22–30 (2009) Y Qi, H Kobayashi, H Suda, On time-of-arrival positioning in a multipath environment. Vehicular Technology, IEEE Transactions on 55(5), 1516–1526 (2006) I Suberviola, I Mayordomo, J Mendizabal, Experimental results of air target detection with a GPS forward-scattering radar. Geoscience and Remote Sensing Letters, IEEE 9(1), 47–51 (2012) AJ Fehske, F Richter, GP Fettweis, Energy efficiency improvements through micro sites in cellular mobile radio networks[C]//GLOBECOM Workshops, 2009 IEEE (IEEE, 2009), pp. 1–5 S McAleavey, Ultrasonic backscatter imaging by shear-wave-induced echo phase encoding of target locations. Ultrasonics, Ferroelectrics, and Frequency Control, IEEE Transactions on 58(1), 102–111 (2011) WY Chiu, BS Chen, CY Yang, Robust relative location estimation in wireless sensor networks with inexact position problems. Mobile Computing, IEEE Transactions on 11(6), 935–946 (2012) R Faragher, R Harle, Location fingerprinting with bluetooth low energy beacons. IEEE Journal on Selected Areas in Communications 33(11), 2418–2428 (2015) L Yang, Y Chen, XY Li et al., Tagoram: Real-time tracking of mobile RFID tags to high precision using COTS devices[C]//Proceedings of the 20th annual international conference on Mobile computing and networking (ACM, 2014), pp. 237–248 K Yu, J Montillet, A Rabbachin et al., UWB location and tracking for wireless embedded networks. Signal Processing 86(9), 2153–2171 (2006) Z He, Y Ma, R Tafazolli, Accuracy limits and mobile terminal selection scheme for cooperative localization in cellular networks[C]//Vehicular Technology Conference (VTC Spring), 2011 IEEE 73rd (IEEE, 2011), pp. 1–5 RM Vaghefi, RM Buehrer, Cooperative localization in NLOS environments using semidefinite programming. Communications Letters, IEEE 19(8), 1382–1385 (2015) W Li, Y Hu, X Fu et al., Cooperative positioning and tracking in disruption tolerant networks. Parallel and Distributed Systems, IEEE Transactions on 26(2), 382–391 (2015) H Liu, H Darabi, P Banerjee et al., Survey of wireless indoor positioning techniques and systems. Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on 37(6), 1067–1080 (2007) FL Piccolo, A new cooperative localization method for UMTS cellular networks[C]//Global Telecommunications Conference, 2008. IEEE GLOBECOM 2008. IEEE. IEEE, 2008, pp. 1–5 DJ Torrieri, Statistical theory of passive location systems. IEEE Transactions on Aerospace and Electronic Systems 20(2), 183–198 (1984) N Levanon, Lowest GDOP in 2-D scenarios. IEE Proceedings-radar, sonar and navigation 147(3), 149–155 (2000) AG Dempster, Dilution of precision in angle-of-arrival positioning systems. Electronics Letters 42(5), 291–292 (2006) P Deng, J Yu Li, GDOP performance analysis of cellular location system. Journal of southwest jiaotong university 40(2), 184–188 (2005) Q Quan, Low bounds of the GDOP in absolute-range based 2-D wireless location systems.
Information Science and Digital Content Technology (ICIDT), 2012 8th International Conference on. IEEE 1, 135–138 (2012) XW Lv, KH Liu, P Hu, Efficient solution of additional base stations in time-of-arrival positioning systems. Electronics Letters 46(12), 861–863 (2010) This work was funded by the National Science Foundation of China (No. 61372011), Application Foundation and Advanced Technology Project of Tianjin (No. 15JCYBJC16300), and Natural Science Foundation of Tianjin (No. 16JCTPJC46900). WS and XQ conceived and designed the study. JL, SY and LC performed the simulation experiments. YY and XF wrote the paper. WS and XQ reviewed and edited the manuscript. All authors read and approved the manuscript. Tianjin Key Laboratory of Optoelectronic Detection Technology and Systems, School of Electronics and Information Engineering, Tianjin Polytechnic University, Tianjin, 300387, China Weiguang Shi, Xiaoli Qi, Jianxiong Li, Shuxia Yan, Liying Chen, Yang Yu & Xin Feng Correspondence to Weiguang Shi. Shi, W., Qi, X., Li, J. et al. Simple solution to the optimal deployment of cooperative nodes in two-dimensional TOA-based and AOA-based localization system. J Wireless Com Network 2017, 79 (2017). https://doi.org/10.1186/s13638-017-0859-6 GDOP Optimal placement Cooperative node Radar and Sonar Networks
Mechanics of Advanced Materials and Modern Processes Taguchi-Grey relation analysis for assessing the optimal set of control factors of thermal barrier coatings for high-temperature applications Mohammed Yunus1, Mohammad S. Alsoufi1 and Shadi M. Munshi1 Mechanics of Advanced Materials and Modern Processes 2016, 2:4 Accepted: 4 October 2016 In the aerospace industry, the efficient use of thermally sprayed coatings for high-temperature applications is achieved by improving thermal characteristics (TC) such as the thermal drop/barrier (TD) and the thermal fatigue cycles (TFC). The characterization of ceramic coatings demands a better understanding of the TC and their performance. In this paper, an attempt has been made to use the hybrid Taguchi design method based grey relational analysis (GRA) for optimizing control factors such as the thickness of the coating, the type of coating, the bond coating, and the exposed temperature. The necessary experiments were carried out using a Taguchi L16 factorial design of experiments, with the analysis based on the larger-the-better signal-to-noise (S/N) ratio. The multi-response/output optimization and the grading of the control factors were successfully carried out by GRA. The significance of each factor as regards the TD and the TFC was investigated. The ANOVA results identified the most important parameters at a 95 % confidence level, and these were validated with a confirmation test using the optimum process factors, which showed an improvement in the TC of the thermal barrier coatings. This work revealed that the hybrid GRA with the Taguchi technique improved the durability and the efficient usage of TBCs for high-temperature applications. Thermal characteristics Thermal drop Hybrid Taguchi-Grey The preparation of an engineering surface to withstand severe working conditions, such as high temperature, corrosive environments, and excessive wear, demands suitable surface-modification techniques for existing materials and alloys by coating them with thermally sprayed industrial ceramic coatings (Gibson et al. 2013; Javadi et al. 2014; Koilraj et al. 2012; Lakshminarayanan and Balasubramanian 2007; Pan et al. 2007; Yunus and Alsoufi 2016). Thermal barrier coatings (TBC) are ceramic oxide coatings applied to a metal substrate and are widely used in aerospace, automobiles, gas turbines, power generators, and so on in order to increase the life of engineering materials. TBCs, owing to their low thermal conductivity, act as insulators and protect the metal substrate from high temperature as well as from corrosive environments. Generally, TBCs consist of a pair of layers: the first layer is a metallic bond coat that protects the metal substrate from corrosion and oxidation and increases the adhesive strength with the coating layer, whereas the second layer is a ceramic oxide coating that protects the metal from high temperature as well as from excessive wear and corrosion. The desirable properties of TBCs include high thermal expansion, low thermal conductivity, and excellent thermal cycling resistance. In complex and multivariate performance systems, such as engineering design and material characterization systems, many parameters simultaneously influence the system and determine its state of development. Typically, we want to know which parameters affect the system more and which affect it less. However, the relationship between the various parameters is usually grey, owing to unclear, incomplete, and uncertain information.
Moreover, practical and experimental data are often difficult to analyze because they are too scattered. Two conventional statistical techniques, factor analysis and regression analysis, are frequently applied to data describing the relationship between dependent and independent parameters. However, these analyses presuppose mutual influence between variables, require a large volume of data, and assume that the data conform to a typical distribution, whereas in practice data on the relationships between internal factors are usually hard to obtain. Traditional multivariate statistical methods therefore have a hard time producing a convincing explanation, and a different analysis method is needed to overcome the disadvantages of regression and factor analysis in multi-response problems. Grey relational analysis (GRA) is recommended to solve this problem: it is an efficient tool for system analysis and lays a framework for the modelling, forecasting and clustering of grey systems. GRA has many advantages: it works on small samples with no typical distribution, it does not require independent variables, and it involves only a small amount of calculation. Additionally, GRA has already proved to be a straightforward and accurate method for selecting parameters in problems with these characteristics (Gibson et al. 2013; Javadi et al. 2014; Koilraj et al. 2012; Lakshminarayanan and Balasubramanian 2007; Mishra and Ma 2005; Pan et al. 2007; Yunus and Alsoufi 2015). The present study therefore uses GRA to establish the grey relational grade (GRG) together with a ranking scheme that orders the grey relationships between the independent and dependent parameters; GRA is a judgment model for identifying the influential parameters in multi-output systems by rearranging the GRGs in order of magnitude. Table 1 ("Process parameters and their levels for thermal characteristics of coatings") lists the control parameters adopted for this study of industrial ceramic TBCs, each at four levels: coating type (e.g., Super-Z), coating-side temperature (°C), coating thickness (μm) and bond coating thickness (μm). Tests were conducted using a Taguchi L16 orthogonal array.

Grey relational analysis (GRA)
Deng Julong presented the GRA approach (see Cavaliere et al. 2008) for estimating the magnitude of the relationship between arrays of experiments using the GRG. GRA can be employed to optimize control factors with multiple performances/responses/outputs through the GRG (Cavaliere et al. 2008), and it is mostly used successfully in processes with limited or inadequate information. The stages involved in the GRA technique are summarized in Fig. 1 ("Steps involved in hybrid GRA based on the Taguchi method").
Step No. 1: Normalization (data preprocessing). The initial step normalizes the test results onto a scale from zero to one (transforming measured units into dimensionless quantities), since the ranges of the process factors differ (Cavaliere et al. 2008; Lin 2002). In this grey relational generating step, the initial series are changed into a collection of comparable series; several methods are available to preprocess the grey data. In this work, the Taguchi-based GRA is used to optimize the TC of TBCs under high-temperature application parameters with respect to the multiple response quality features TD and TFC, which quantify the amount of barrier offered (the temperature drop occurring across the coating) and the lifespan of the coating on the metal substrate. Let z_j^0(h) denote the original series and z_j^*(h) the preprocessed (comparability) series, with j = 1, 2, 3, ..., u indexing the trials and h = 1, 2, 3, ..., v the observed responses, where u and v are the numbers of trials and of data observations, respectively; in the present analysis u = 16 and v = 2. Depending on the quality feature of the original data, the series are normalized using either Eq. (1) for "smaller-the-better" or Eq. (2) for "larger-the-better" responses (Sharma et al. 2005):

$$ z_j^*(h) = \frac{\max z_j^0(h) - z_j^0(h)}{\max z_j^0(h) - \min z_j^0(h)} \qquad (1) $$

$$ z_j^*(h) = \frac{z_j^0(h) - \min z_j^0(h)}{\max z_j^0(h) - \min z_j^0(h)} \qquad (2) $$

where max z_j^0(h) and min z_j^0(h) are the greatest and least values of z_j^0(h) over the trials.

Step No. 2: Calculate the grey relational coefficient (GRC) from the deviation sequence of Eq. (3), defined as the absolute value of the difference between the reference (ideal) series z_0^*(h) and the comparability series z_j^*(h):

$$ \Delta_j(h) = \left| z_0^*(h) - z_j^*(h) \right| \qquad (3) $$

The GRC, which lies between 0 and 1, is then determined using Eq. (4):

$$ \gamma\big(z_0^*(h), z_j^*(h)\big) = \frac{\Delta_{\min} + \varsigma\,\Delta_{\max}}{\Delta_j(h) + \varsigma\,\Delta_{\max}} \qquad (4) $$

where Δ_min and Δ_max are the smallest and largest deviations over all trials and responses, and ς ∈ [0, 1] is the distinguishing coefficient.

Step No. 3: Calculate the GRG. The grade γ(z_0^*, z_j^*), which represents the level of relationship between the ideal and comparability series, is evaluated as the weighted average of the GRCs over the responses; with equal weights this reduces to Eq. (5):

$$ \gamma\big(z_0^*, z_j^*\big) = \frac{1}{v} \sum_{h=1}^{v} \gamma\big(z_0^*(h), z_j^*(h)\big) \qquad (5) $$

Step No. 4: Rank the GRGs. The GRGs are sequenced in decreasing order; a higher GRG represents a more substantial degree of correlation between the ideal and the comparability series, so the maximum GRG identifies the optimal blend of process parameters for the desired outputs.
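As a concrete illustration of Steps 1-4, the minimal Python sketch below runs the pipeline on a small set of hypothetical TD/TFC values (illustrative numbers only, not the measured data of this study): it applies the larger-the-better normalization of Eq. (2), forms the deviations of Eq. (3), computes the GRCs of Eq. (4) with distinguishing coefficient 0.5, and ranks the trials by the equal-weight GRG of Eq. (5).

```python
import numpy as np

# Hypothetical responses for 4 trials (rows) and v = 2 quality features
# (columns: TD in deg C, TFC in cycles); NOT the paper's measured data.
z0 = np.array([[180.0, 300.0],
               [210.0, 350.0],
               [226.0, 431.0],
               [195.0, 410.0]])

# Step 1, Eq. (2): larger-the-better normalization, column by column.
z_star = (z0 - z0.min(axis=0)) / (z0.max(axis=0) - z0.min(axis=0))

# Step 2, Eq. (3): deviation from the ideal (reference) series, which is
# all ones after larger-the-better normalization.
delta = np.abs(1.0 - z_star)

# Step 2, Eq. (4): grey relational coefficient with distinguishing
# coefficient 0.5 (the value used in this study).
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3, Eq. (5): grey relational grade as the equal-weight mean over
# the v responses.
grg = grc.mean(axis=1)

# Step 4: rank the trials; the largest GRG marks the best trial.
order = np.argsort(grg)[::-1]
for rank, j in enumerate(order, start=1):
    print(f"rank {rank}: trial {j + 1}, GRG = {grg[j]:.4f}")
```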
Experimental methodology
In this work, all experiments were carried out on mild steel flat specimens coated by atmospheric plasma spraying with commercial industrial ceramic coating powders at the optimum spray parameters. Alumina (Al2O3), alumina-titania (Al2O3 + TiO2), Super-Z alloy (20 % Al2O3 + 80 % PSZ) and partially stabilized zirconia (PSZ), at coating thicknesses varying from 100 to 300 μm, were used to prepare the different coating surfaces. Using a 40 kW Sulzer Metco plasma spray system with a 7MB gun, the coating powders were sprayed onto substrates that had been grit blasted, degreased and given a NiCrAl bond coat 50 to 100 μm thick, according to the spray parameters.

The thermal characteristics were measured by exposing the specimens to a high-temperature flame to estimate the temperature drop across the coating thickness, called the thermal drop (TD), and the number of cycles withstood, known as thermal fatigue cycles (TFC) (Lakshminarayanan and Balasubramanian 2007). Four levels were selected for every process parameter, without taking parametric interaction effects into consideration. The control factors considered were the type of coating powder (A), the temperature on the coating side (T), the coating thickness (C) and the bond coat thickness (B); their numerical values at the various levels are given in Table 1. In multi-output optimization, some loss in the individual quality characteristics is always expected compared to single-output optimization, in exchange for improving the overall quality; a proper choice of the process factors and levels that drive these quality characteristics will enhance the TC. The graded characteristics examined were TD and TFC. The Taguchi design methodology for four factors at four levels was implemented as a system of orthogonal array (OA) trials; an L16 OA (a 16 x 4 array) is utilized in the current work. The trials were conducted according to the pattern of the OA, and the measured outputs are given in Table 2 ("Experimental output results and data processing of TC of TBCs", listing for each run the thermal control factors, the measured responses and the normalized data).

Thermal drop/barrier test (TD)
Thermal drop tests were carried out by measuring the temperature of the metal substrate before and after heating the coating-side surface in the range of 700 to 1000 °C for about 20 to 30 min, until a steady state was reached. The temperature drop across the substrate and ceramic coating was recorded. A pair of chromel-alumel (K-type) thermocouples was used to measure the temperatures of the metal surface and the ceramic-coated surface for a given heat input. The temperature-difference values measured for the L16 OA are shown in Table 2 (Koilraj et al. 2012).

Thermal fatigue cycling (TFC) test
The thermal fatigue cycling test investigates the resistance, or strength, of the coating against sudden changes in temperature (heating and cooling) (Javadi et al. 2014; Koilraj et al. 2012; Lakshminarayanan and Balasubramanian 2007; Mishra and Ma 2005) and examines how the sprayed coatings withstand the severity of thermal cycling. The TBCs were subjected to thermal fatigue cycles by heating the coated surface with an oxyacetylene flame until it reached approximately 1000 °C for about one minute, and subsequently cooling it with an air blow until the temperature dropped to around 30 °C under atmospheric conditions, again for about one minute. The TFC process was repeated until the coated surface failed and peeled off from the substrate, which determines the number of cycles withstood, as detailed in Table 3 ("Deviation and grey relational coefficients of thermal characteristics of TBCs").

Multi-response optimization of control factors using GRA
The performance of the thermal characteristics is judged from the maximum values of TD and TFC of the TBCs, so TD and TFC are both treated as "higher-the-better" responses for normalization in the GRA approach.
We used Taguchi's L16 OA to optimize the multiple responses of the thermal characteristics of the TBCs with a view to their use in high-temperature applications. The data series thus have "larger-the-better" characteristics, and the larger-the-better rule of Eq. (2) was employed for data preprocessing (Das et al. 2014). The normalized values of TD and TFC are given in Table 3. The GRCs of TD and TFC (shown in Table 4) were computed from Eq. (4) using the deviation sequences obtained from Eq. (3); the responses were weighted equally and the distinguishing coefficient was set at 0.5. The weights of both responses were then used to compute the GRG of each trial from Eq. (5), as given in Table 4 (El-Gallab and Sklad 1998). Inspection of Table 4 and Fig. 2 indicates that trial number 13 has the best performance for both responses, TD and TFC, among all the experiments, since a larger GRG means that the comparability sequence exhibits a stronger correlation with the ideal series (Singh and Kumar 2004). GRA was then employed to find the most significant parameters, based on the hypothesis that the combination of levels providing the greatest average response is the optimal parameter combination for increasing the lifespan of TBCs in high-temperature applications. The best single run (trial 13) corresponds to the setting A4-T1-C4-B2, i.e., Super-Z coating material, heating of the coating at 700 °C, 300 μm coating thickness and 75 μm bond coat thickness. Response tables were generated using the Taguchi design method to evaluate the average GRG at each level of each process parameter, as illustrated in Table 5 ("GRG response tables and their order", reporting the mean grey relational grade at each level, the Max.-Min. range of each factor, and the total mean GRG of 0.5266). Table 5 shows that the greatest level-average values of the GRG are obtained for the combination A4-T3-C4-B2, which is therefore the optimal combination of TBC control factors for the multiple TC outputs when estimating lifespan for high-temperature applications: a Super-Z alloy coating, a coating-side temperature of 1000 °C, a coating thickness of 300 μm and a bond coat thickness of 75 μm. The main effect plots for the means of the GRG are shown in Fig. 2, where the dashed lines mark the total average of the GRG. Figure 3 ("Residual plots of GRG") shows the residual plots of the GRG, which are evaluated to assess the effectiveness of the optimization method used (Mohammed and Rahman 2011). (A short numerical sketch of the response-table step follows.)
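The response-table step can be sketched in the same spirit. The snippet below uses an illustrative balanced 16-run, four-level design matrix and random placeholder GRGs (not the paper's L16 array or measured grades); it averages the GRG over the runs at each level of each factor, reports the Max.-Min. range used to rank the factors, and reads off the optimal level combination in the A-T-C-B notation used above.

```python
import numpy as np

# Illustrative balanced design: levels (1..4) of factors A, T, C, B for
# each of 16 runs, plus placeholder GRGs. NOT the paper's array or data.
levels = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3], [1, 4, 4, 4],
                   [2, 1, 2, 3], [2, 2, 1, 4], [2, 3, 4, 1], [2, 4, 3, 2],
                   [3, 1, 3, 4], [3, 2, 4, 3], [3, 3, 1, 2], [3, 4, 2, 1],
                   [4, 1, 4, 2], [4, 2, 3, 1], [4, 3, 2, 4], [4, 4, 1, 3]])
grg = np.random.default_rng(0).uniform(0.3, 0.9, size=16)

factors = ["A", "T", "C", "B"]
best = {}
for f, name in enumerate(factors):
    # Mean GRG over the runs at each of the four levels of this factor.
    means = [grg[levels[:, f] == lv].mean() for lv in (1, 2, 3, 4)]
    best[name] = int(np.argmax(means)) + 1
    print(name, ["%.3f" % m for m in means],
          "Max-Min = %.3f" % (max(means) - min(means)))

# Optimal level combination, e.g. A4-T3-C4-B2 in the paper's notation.
print("optimal setting:", "-".join(f"{n}{best[n]}" for n in factors))
print("total mean GRG = %.4f" % grg.mean())
```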
Analysis of variance (ANOVA) for the GRG
ANOVA was employed on the GRGs at the 95 % confidence level to investigate the significance of the control factors for the multiple thermal characteristics of TBC coatings in high-temperature applications, using Minitab 17 statistical software (Sato and Kokawa 2003). The total sum of squared variances (SS_T) is estimated from Eq. (6):

$$ \mathrm{SS}_T = \sum_{j=1}^{N} \left( y_j - y_m \right)^2 \qquad (6) $$

where N is the number of trials in the OA, y_j is the GRG of the j-th trial and y_m is the total average of the GRG. The mean square (MS) is obtained by dividing the sum of squares SS of each factor by its degrees of freedom (DF), Eq. (7):

$$ MS = \frac{SS}{DF} \qquad (7) $$

The most significant process parameter (the one with the strongest influence on the output attributes) is the one with the highest F value (or, equivalently, the smallest P value). The percentage contribution of a parameter is evaluated by dividing its sum of squares SS by the total SS_T, as in Eq. (8) (Khalid 2005):

$$ \%\ \mathrm{contribution} = \frac{SS}{SS_T} \times 100 \qquad (8) $$

The ANOVA results for the GRG of the TBCs are given in Table 6 ("Results of ANOVA on GRG", listing the DF, Adj MS, F value and contribution ratio for each factor). Only the coating type is a significant thermal control factor when TD and TFC are considered simultaneously for high-temperature applications of TB coatings; its percentage contribution to the multi-response thermal characteristics of the TBCs is 72.98 %.

Verification of results using a confirmation test
The optimal combination of factors for achieving maximum TD and TFC obtained from the Taguchi-based GRA is A4-T3-C4-B2, and a confirmation test was carried out at this setting. The verification test gave a TD of 265 °C (temperature fall from the coating side to the substrate side) and a TFC of 466 cycles withstood, better than any of the experiments reported in Table 3. Once the optimal combination of TC process parameters and the most significant parameter for TD and TFC of the TBCs have been found, the final step is to verify the practicability of the proposed Taguchi-based GRA by carrying out validation tests. The predicted optimum GRG, γ_opt, is computed as (Khalid 2005), Eq. (9):

$$ \gamma_{opt} = \gamma_m + \sum_{i=1}^{c} \left( \gamma_i - \gamma_m \right) \qquad (9) $$

where γ_m is the total average of the GRG, γ_i is the average GRG of the i-th factor at its optimum level, and c is the number of most significant TBC control factors (Khalid 2005). The values of TD and TFC for the optimal combination of control factors were appreciably larger than those of the initial set of process parameters, whose GRG equals 0.8168. Additional experiments were performed at the optimum parameter levels, and the average of these additional results was taken as the verification result. Table 7 ("TBC performance results at high temperature using initial (A3-T3-C3-B3) and optimal (A4-T3-C4-B2) process parameters") gives the predicted GRG and the corresponding validation test results: the TD is elevated from 226 to 265 °C and the TFC is improved from 431 to 466 cycles withstood. (A sketch of the ANOVA decomposition and the prediction of Eq. (9) follows.)
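Under the same illustrative setup as the previous snippet, the sketch below carries out the ANOVA decomposition of Eqs. (6)-(8) for a balanced four-level design and evaluates the predicted optimum GRG of Eq. (9). For illustration the prediction sums over all four factors (c = 4), whereas the paper restricts the sum to the most significant ones.

```python
import numpy as np

# Same hypothetical 16-run design and placeholder GRGs as in the
# previous sketch; these are NOT the paper's data.
levels = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3], [1, 4, 4, 4],
                   [2, 1, 2, 3], [2, 2, 1, 4], [2, 3, 4, 1], [2, 4, 3, 2],
                   [3, 1, 3, 4], [3, 2, 4, 3], [3, 3, 1, 2], [3, 4, 2, 1],
                   [4, 1, 4, 2], [4, 2, 3, 1], [4, 3, 2, 4], [4, 4, 1, 3]])
grg = np.random.default_rng(0).uniform(0.3, 0.9, size=16)

y_m = grg.mean()
ss_t = ((grg - y_m) ** 2).sum()           # Eq. (6): total sum of squares

gamma_opt = y_m                           # running total for Eq. (9)
for f, name in enumerate(["A", "T", "C", "B"]):
    means = np.array([grg[levels[:, f] == lv].mean() for lv in (1, 2, 3, 4)])
    ss = 4 * ((means - y_m) ** 2).sum()   # factor SS (4 runs per level)
    ms = ss / 3                           # Eq. (7): MS = SS / DF, DF = 4 - 1
    pct = 100 * ss / ss_t                 # Eq. (8): percentage contribution
    print(f"{name}: SS = {ss:.4f}, MS = {ms:.4f}, contribution = {pct:.2f} %")
    # Eq. (9) term: here every factor is included for illustration; the
    # paper sums only over the c most significant factors.
    gamma_opt += means.max() - y_m

print(f"predicted optimum GRG (Eq. 9) = {gamma_opt:.4f}")
```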
Conclusions
Thermal characteristics experiments were carried out on TBC coating materials, accumulating the thermal barrier (TD) and the number of thermal fatigue cycles (TFC) under various conditions and combinations of TBC parameters. The Taguchi-based GRG approach was applied in the present investigation to optimize the multi-output control parameters of the TC for high-temperature applications of TBC coatings. The results are summarized in the following conclusions:
- The evaluation of the GRG quantifies the overall performance of the control factors of the thermal characteristics governing TBC coating lifespan in high-temperature applications. The Taguchi-based GRA is a very helpful technique for multi-output optimization to forecast the TD and TFC of TBCs at high temperatures.
- From the response table, the largest value of the GRG is achieved for a Super-Z alloy coating, a coating thickness of 300 μm, a bond coating thickness of 75 μm and a coating-side temperature of 1000 °C. These can be recommended as the levels of the TBC control parameters for high-temperature applications in the aerospace industry that maximize the thermal barrier/drop (TD) and the thermal fatigue cycles withstood (TFC).
- The ANOVA on the grades of the GRA revealed that factor A (coating type) is the principal significant control factor affecting the multi-output thermal characteristics of TBCs. The contribution of the Super-Z alloy coating type, 72.98 %, is quite large in comparison with the other control factors (coating thickness, bond coat thickness and coating-side temperature) of TBCs used in the aerospace industry. The best performance characteristics were obtained with this set of parameters.
- The confirmation experiments disclosed that the GRG improves from 0.8168 at the initial parameter setting to 0.9628 at the optimal combination, so the optimum levels of the TBC factors can bring several significant enhancements. The Taguchi-based GRA can therefore be applied to the optimization of control factors for multi-output performance, and with the optimal combination of TC process parameters the necessary improvement in the production of efficient TBCs for high-temperature applications can be fulfilled.

Authors' contributions: the authors made substantial contributions to the conception and design of the study, to the acquisition of data, and to the analysis and interpretation of the data; they participated in drafting the article or revising it critically for important intellectual content; and they approved the final version to be submitted.
Affiliation: Department of Mechanical Engineering, College of Engineering and Islamic Architecture, Umm Al-Qura University, Makkah, Kingdom of Saudi Arabia.

References
Cavaliere P, Squillace A, Panella F (2008) Effect of welding parameters on mechanical and micro structural properties of AA6082 joints produced by friction stir welding. J Mater Process Technol 200:364-372
Das DK et al (2014) Properties of ceramic-reinforced aluminium matrix composites: a review. Int J Mech Mater Eng 9(1):1-16
El-Gallab M, Sklad M (1998) Machining of Al/SiC particulate metal matrix composites: Part II: Workpiece surface integrity. J Mater Process Technol 83(1-3):277-285
Gibson BT et al (2004) Friction stir welding: process, automation, and control. J Manuf Proc 16(1):56-73
Javadi Y, Sadeghi S, Najafabadi MA (2014) Taguchi optimization and ultrasonic measurement of residual stresses in the friction stir welding. Mater Des 55:27-34
Khalid, Terry (2005) An outsider looks at friction stir welding. Report #ANM-112N-05
Koilraj M et al (2012) Friction stir welding of dissimilar aluminium alloys AA2219 to AA5083: optimization of process parameters using Taguchi technique. Mater Des 42:1-7
Lakshminarayanan AK, Balasubramanian V (2007) Process parameters optimization for friction stir welding of RDE-40 aluminium alloy using Taguchi technique.
T Nonferr Metal Soc 18:548-554
Lin TR (2002) Optimization technique for face milling stainless steel with multiple performance characteristics. Int J Adv Manuf Technol 19(5):330-335
Mishra RS, Ma ZY (2005) Friction stir welding and processing. Mater Sci Eng R 50:1-78
Mohammed Y, Rahman JF (2011) Optimization of usage parameters of ceramic coatings in high temperature applications using Taguchi design. Int J Eng Sci Tech (IJEST) 3(8):6364-6371
Pan LK et al (2007) Optimizing multiple quality characteristics via Taguchi method-based Grey analysis. J Mater Process Technol 182(1-3):107-116
Sato YD, Kokawa H (2003) Friction stir welding (FSW) process. Weld Int 17(11):852-853
Sharma P et al (2005) Process parameter selection for strontium ferrite sintered magnets using Taguchi L9 orthogonal design. J Mater Process Technol 168:147-151
Singh H, Kumar P (2004) Tool wear optimization in turning operation by Taguchi method. Indian J Eng Mater Sci 11:19-24
Yunus M, Alsoufi MS (2015) A statistical analysis of joint strength of dissimilar aluminium alloys formed by friction stir welding using Taguchi design approach, ANOVA for the optimization of process parameters. IMPACT: Int J Res Eng Tech (IMPACT: IJRET) 3(7):63-70
Yunus M, Alsoufi MS (2016) Multi-output optimization of tribological characteristics control factors of thermally sprayed industrial ceramic coatings using hybrid Taguchi-grey relation analysis. Friction 3(4):208-216
Activities by year
Each year, ICTP organizes more than 60 international conferences, workshops, and numerous seminars and colloquia. Interested in attending an activity? Complete an online application form: events in the scientific calendar that have an "smr" number require participants to apply online; click on the activity and complete the online application form. Have a question about an "smr" activity? Please contact the organizers at smrXXXX@ictp.it, where XXXX is the 4-digit number of the activity. Wondering about the status of your application to an "smr" activity? Access your profile in the ICTP portal to check. CALL FOR 2021 ACTIVITIES: want to propose a conference, school or workshop? See our guidelines. External organizations can pay for and organize their own high-level scientific and cultural events at ICTP; details for these "Hosted Activities" are available in the Logistic Guidelines for Hosted Activities (pdf download). Travel fellowships for ICTP conferences and workshops are available. Download a pdf of the 2020 Scientific Calendar. Showing 310 records.

Colloquium: International Scientific Collaboration - benefits, challenges and opportunities
Speaker(s): Professor Chris LLEWELLYN-SMITH (Director of Energy Research, Oxford University and President SESAME Council)
Venue: ICTP, Leonardo da Vinci Building Main Lecture Hall, Strada Costiera 11, I-34151 Trieste (Italy). Start time: 16:30
Abstract: The global scientific landscape is changing rapidly. New scientific powers are emerging, and international collaboration is growing dramatically. In the first part of this talk I will review these changes as a prelude to discussing attempts to harness global science to address global problems (such as climate change, the spread of diseases, and the loss of bio-diversity). In the second half of the talk I will review the status of SESAME (Synchrotron-light for Experimental Science and Applications in the Middle East) as an example of a project with the dual aims of building scientific capacity and building bridges between people in diverse countries through scientific collaboration. SESAME, which is modelled on CERN, is a third-generation light source under construction in Jordan; the members are currently Bahrain, Cyprus, Egypt, Iran, Israel, Jordan, Pakistan, the Palestinian Authority, and Turkey.

* ACTIVITY CANCELLED * ICTP Latin American School of Phenomenology (LASP) | (smr 2379), Bariloche, Argentina. Research area: High Energy Cosmology and Astroparticle Physics. Organizer(s): E. Roulet and G. Senjanovic. Secretary: N. van Buuren.

Colloquium: Future World Energy Demand and Supply
Abstract: The biggest challenge of the 21st century is to provide sufficient food, water, and energy to allow everyone on the planet to live decent lives, in the face of rising population, the threat of climate change, and (sooner or later) declining fossil fuels. I will review the nature of the challenge and the portfolio of measures that must be adopted if it is to be met.
These include lowering energy demand through better planning and design and changes of behaviour, using energy more efficiently, deploying carbon capture and storage (if feasible and safe), and expanding the use of renewable energy sources to the maximum extent reasonably possible. Major expansion of nuclear energy will be necessary. In the second half of the century thorium and/or fast breeder reactors and/or large-scale solar power and/or fusion will be essential: all must be developed as a matter of urgency. The technical challenge is enormous, but the political and economic challenges are even greater.

Colloquium: Active soft matter: from molecular swimmers to colloidal supernova
Speaker(s): Professor Ramin GOLESTANIAN (Rudolf Peierls Centre for Theoretical Physics)
Abstract: The living cell and many of its components, as well as artificially energized mimics of motility, are examples of active soft matter. An understanding of the individual and collective behavior of active particles is one of the grand challenges of nonequilibrium statistical physics, and holds the key to a physical grasp of the mechanics and statistics of living matter. In this colloquium, I will examine how one can use a number of simple ideas to construct nonequilibrium model systems that can self-propel at the micro- and nano-scale, and study their collective properties.

ICTP/SAAO/PI Cape Town International Cosmology School | (smr 2290), Stellenbosch, South Africa. Research areas: High Energy Cosmology and Astroparticle Physics, Cosmology. Directors: B. Bassett, P. Creminelli, R. Maartens, R. Sheth. Secretary: R. Sain. Cosponsor(s): AIMS South Africa, National Research Foundation, Perimeter Institute, Royal Society, SKA Africa. Detailed information is available on the activity web page.

Haiti School on Earthquake Science | (smr 2213), Port-au-Prince, Haiti. The school will be held in French and is aimed at Haitian students and young scientists. It will cover the following fields: Earthquake Mechanics (Abdelkrim Aoudia, ICTP); Earthquake Geology (Yann Klinger, IPGP, France); Earthquake Geodesy (Eric Calais, Purdue University, USA); Earthquake Seismology (Françoise Courboulex, CNRS, France); Engineering Seismology and Earthquake Risk (Etienne Bertrand, LRPC, France). Director(s): Abdelkrim Aoudia (ICTP). Local organizer(s): Jean Vernet Henry (Rector of UEH, Haiti), Jean Raoul Momplaisir & Janin Jadotte (FDS, UEH), Eric Calais (UNDP Haiti). Secretary: G. De Meo.

School on Strongly Coupled Physics Beyond the Standard Model | (smr 2326), Trieste, Italy. Organizer(s): K. Agashe (U. of Maryland), C. Grojean (CERN & CEA/Saclay), R. Rattazzi (EPFL), M. Serone (SISSA & ICTP). Collaborations: SISSA (International School for Advanced Studies) & INFN (Italian Institute for Nuclear Physics).

Seminar on Disorder and strong electron correlations: "Three studies of correlated electrons on honeycomb lattice". Room: Leonardo da Vinci Building Luigi Stasi Seminar Room. Start time: 11:30.
Abstract: The effect of electron correlation on the honeycomb lattice has attracted a lot of interest recently. In this talk, I will present the results of three analytical studies of correlated electrons on the honeycomb lattice. In the first study, we provide an analytical argument for the absence of topological degeneracy in the "Mott insulating" state found recently in the Hubbard model on the honeycomb lattice.
In the second study, we propose an SU(2) generalization of the staggered flux phase on the honeycomb lattice, which reveals the close relation between the RVB physics on the square lattice and on the honeycomb lattice. In the third study, we predict that the quarter-doped Hubbard model on the honeycomb lattice realizes a magnetically ordered ground state with non-zero spin chirality and quantized Hall conductivity. It is argued that such an exotic state should be observable in quarter-doped graphene.

Seminar: On the uniqueness of extremal Kaehler metrics. Speaker(s): Dr. Alberto Della Vedova (Princeton University, USA).

Joint ICTP-IAEA Workshop on Fusion Plasma Modelling using Atomic and Molecular Data | (smr 2327). Organizer(s): IAEA: B.J. Braams and Hyun K. Chung; local organiser: J. Niemela. Laboratories: LB LAB. Secretary: D. Sauleek.

Hosted Activity, 23 Jan 2012 - 3 Feb 2012: School on Quantum Monte Carlo Methods at Work for Novel Phases of Matter | (smr H293). Room: Adriatico Guest House Kastler Lecture Hall. Organizer(s): Federico Becca (CNR and SISSA), Saverio Moroni (CNR and SISSA), Markus Mueller (ICTP) and Sandro Sorella (SISSA). Laboratories: AGH (Infolab.).

Workshop on Infectious Diseases | (smr 2279), Arusha, United Republic of Tanzania. Directors: Andy P. Dobson, Graciela A. Canziani, Giulio A. De Leo, Mercedes Pascual. Laboratories: yes.

Joint ICTP/SISSA Statistical Physics Seminar: "Fractional Langevin Equation and its application to linear stochastic models". Speaker(s): A. Taloni (IENI, Milano).
Abstract: The Generalized Elastic Model accounts for the dynamics of several physical systems, such as polymers, fluctuating interfaces, growing surfaces, membranes, proteins and single-file systems, among others. We derive the fractional stochastic equation governing the motion of a probe particle (tracer) in such systems. This Langevin equation involves a fractional derivative in time and satisfies the fluctuation-dissipation relation; it goes under the name of fractional Langevin equation. Within this framework the spatial correlations appearing in the Generalized Elastic Model are translated into time correlations described by the fractional derivative, together with the space-time correlations of the fractional Gaussian noise. We derive the exact analytical scaling form of several physical observables such as structure factors, roughness and mean square displacement. Special attention is devoted to the dependence on initial conditions and to linear-response relations in the case of an applied potential.

Workshop on Strongly Coupled Physics Beyond the Standard Model | (smr 2400).

Joint ICTP/SISSA Colloquium on Condensed Matter Physics: "Interpretation and prediction of nanostructural evolution based on first-principles studies". Speaker(s): Enge Wang (School of Physics, Peking University).
Abstract: Pattern formation and decay in the early stage of growth is fundamental to much of materials physics and chemistry. Understanding the complex interplay between the factors that influence the evolution of surface-based nanostructures can be challenging, and computer simulation will play an important role in providing insight. In this talk, I will first introduce the one-, two-, and three-dimensional Ehrlich-Schwoebel (ES) barrier in kinetics-driven growth.
Within this framework, I will show how to control the island shape, the island instability, and the film roughness efficiently. Furthermore, I will discuss a novel concept, true upward adatom diffusion on a metal surface, which is beyond the traditional Ehrlich-Schwoebel (ES) barrier model. This process offers new indications of how ab initio kinetic Monte Carlo simulation can uncover some of the building rules of the evolution mechanism down to the atomic scale.

Seminar on Disorder and strong electron correlations: "Quantum Fluctuations and Dynamic Clustering of Fluctuating Cooper Pairs". Speaker(s): Andrey Varlamov (Universita' di Roma II, CNR-SPIN, Roma).
Abstract: We derive the exact expressions for the fluctuation conductivity in two-dimensional superconductors as a function of temperature and magnetic field in the whole phase diagram above the upper critical field line Hc2(T). Focusing on the vicinity of the quantum phase transition near zero temperature, we arrive at the conclusion that as the magnetic field approaches the critical field Hc2(0) from above, a peculiar dynamic state consisting of clusters of coherently rotating fluctuation Cooper pairs forms. We estimate the characteristic size and lifetime of such clusters and indicate the corresponding domain of the phase diagram where this phenomenon can be observed. The derived size and lifetime allow us to reproduce qualitatively the available results for the quantum fluctuation contributions to the in-plane conductivity, magnetization, and the Nernst coefficient. Finally, we predict the existence of a peak in the zero-temperature transverse magneto-conductivity of layered superconductors at fields above Hc2(0).

GEOMETRIC ANALYSIS SEMINAR - Applications of singularity exponents in Kaehler geometry. Speaker(s): Dr. Shi Yalong (Nanjing University/ICTP).

Seminar: Fermion Masses in SO(10) Models. Speaker(s): Ketan PATEL (Physical Research Laboratory, Ahmedabad, India). Research areas: High Energy Cosmology and Astroparticle Physics, Phenomenology of Particle Physics.
Abstract: SO(10) grand unified theory, which unifies the strong and electroweak interactions, also provides a constrained and unified description of the fermion masses and mixing angles. In this talk, I will review many SO(10) models and discuss their viability in explaining all the fermion masses and mixing angles. The supersymmetric and non-supersymmetric versions of minimal, non-minimal and flavor-symmetric SO(10) models and their consequences for the fermion mass spectrum will be discussed. Several interesting observations, such as the possible origin of the differences in the flavor mixing patterns of quarks and leptons and predictions for the reactor angle and leptonic CP violating phases, will be outlined in each case.

Preparatory School to the Winter College on Optics: Advances in Nano-Optics and Plasmonics | (smr 2397). Room: Leonardo da Vinci Building Euler Lecture Hall. Directors: M. Bertolotti, N. Zheludev, Z. Ben Lakhdar. Local organizers: J. Niemela, M. Danailov. Secretary: F. Delconte.
Joint ICTP/SISSA Statistical Physics Seminar: "Higher level Integrals of Motion from ODEs in the complex plane". Speaker(s): Paulo Eduardo Conçalves de Assis (SISSA). Venue: SISSA, Santorio Building, Room 128.
Abstract: In this talk I will review the basic ideas underlying a powerful correspondence between integrable models and ordinary differential equations (the IM/ODE correspondence, for short). This spectral equivalence played an important role in the study of PT-symmetry, providing rigorous proofs about the spectra of an important class of non-Hermitian Hamiltonians based on known integrable results. My intention is to make progress in the opposite direction, namely to extract information about integrable systems by using differential equations. The comparison between the eigenvalues of the ODEs and the spectral parameters of certain quantum integrable models is considerably well understood for the vacuum state of the latter. For excited states, however, results are only known for the problem with su(2) symmetry. Here I will show that this can be extended to the su(2|1) supersymmetric problem in order to establish the exact correspondence also for excited states.

Seminar: Scattering Amplitudes and the Positive Grassmannian. Speaker(s): Nima ARKANI-HAMED (Institute for Advanced Study, Princeton University, USA). Research areas: High Energy Cosmology and Astroparticle Physics, Strings and Higher Dimensional Theories.

Seminar: The Kneser-Tits problem for a form of E8. Speaker(s): Professor Maneesh Thakur (Indian Statistical Institute, New Delhi).

Seminar on Disorder and strong electron correlations: "Competing spin-disordered phases of the spin-1/2 Heisenberg antiferromagnet on the Kagome lattice". Speaker(s): Yasir IQBAL (Universite de Toulouse).
Abstract: Despite years of intense theoretical attack from different directions, the ground state of the S = 1/2 kagome Heisenberg antiferromagnet has remained elusive. I will revisit this question within the framework of Gutzwiller-projected fermionic wave functions studied using the variational quantum Monte Carlo technique. We found the so-called U(1) Dirac state, an exotic algebraic spin liquid, to have the best variational energy. While there were doubts concerning its stability, experiments have hinted at gapless, algebraic spin-liquid behavior. Indeed, we show that the U(1) Dirac spin liquid is remarkably stable with respect to dimerizing towards a large class of valence-bond-crystal (VBC) perturbations. This stability is also preserved upon addition of a weak second-nearest-neighbor exchange coupling. However, we find that upon addition of a weak second-nearest-neighbor ferromagnetic coupling, a non-trivial valence bond crystal is stabilized and has the lowest energy. This VBC possesses a non-trivial flux pattern and is a strong dimerization of another competing U(1) gapless spin liquid with a large spinon Fermi surface, the so-called uniform RVB state. The U(1) Dirac state and the uniform RVB state are shown to be remarkably stable with respect to destabilizing into the class of Z2 spin liquids. I will also briefly touch upon my ongoing work on a complete group-theoretical classification of time-reversal-invariant valence bond crystals on the kagome lattice, and present some results concerning the properties of the ground state on small clusters, extracted by applying the Lanczos method to a given variational wave function.
Special Seminar in Statistical Physics: "On the exponents of the critical slowing down in Mode Coupling Theory". Speaker(s): F. Caltagirone.
Abstract: An important prediction of Mode-Coupling Theory (MCT) is the relationship between the decay exponents in the beta regime: Γ²(1-a)/Γ(1-2a) = Γ²(1+b)/Γ(1+2b) = λ. In the original structural-glass context this relationship follows from the MCT equations, which are obtained by making rather uncontrolled approximations. As a consequence, it is usually assumed that the relationship between the exponents is correct, while λ has to be treated as a tunable parameter. On the other hand, it is known that in some mean-field spin-glass models the dynamics is precisely described by MCT. In the so-called "schematic" case λ can be computed exactly but, again, its computation becomes difficult when we consider more complex models, including finite-dimensional ones. We have been able to unveil the physical meaning of λ in complete generality, relating this dynamical parameter to the static Gibbs free energy and thus giving a "recipe" for its computation. In this new framework we compute the exponents a and b for some mean-field models and compare our results with numerical simulations or, when available, with exact results obtained via purely dynamical equations.
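As an illustrative aside, given λ the two exponents follow from the relation quoted in the abstract by solving two transcendental equations. The short sketch below does this numerically with SciPy; the value λ = 0.735, often quoted for hard-sphere-like MCT, is used purely as an example.

```python
from scipy.optimize import brentq
from scipy.special import gamma

lam = 0.735  # exponent parameter; an illustrative, commonly quoted MCT value

# Critical decay exponent a: solve Gamma(1-a)^2 / Gamma(1-2a) = lambda
# on 0 < a < 1/2, where the left-hand side decreases from 1 to 0.
a = brentq(lambda x: gamma(1 - x) ** 2 / gamma(1 - 2 * x) - lam,
           1e-9, 0.5 - 1e-9)

# von Schweidler exponent b: solve Gamma(1+b)^2 / Gamma(1+2b) = lambda, b > 0.
b = brentq(lambda x: gamma(1 + x) ** 2 / gamma(1 + 2 * x) - lam,
           1e-9, 20.0)

print(f"lambda = {lam}: a = {a:.3f}, b = {b:.3f}")  # roughly a = 0.312, b = 0.583
```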
CIMPA-ICTP Research School on Local Analytic Geometry | (smr 2393), 4 Feb 2012 - 13 Feb 2012, Lahore, Pakistan. Organizer(s): to be advised. Secretary: A. Bergamo. Cosponsor(s): UNESCO-MESR-MICINN.

Winter College on Optics: Advances in Nano-Optics and Plasmonics | (smr 2328). Cosponsor(s): International Commission for Optics (ICO), Optical Society of America (OSA), International Society for Optics and Photonics (SPIE), European Optical Society (EOS), Società Italiana di Ottica e Fotonica (SIOF), US National Academy of Sciences (NAS), Photonics Society (IEEE), International Society on Optics Within Life Sciences (OWLS).

ICTP-ITU/BDT School on Sustainable Wireless ICT Solutions for Environmental Monitoring | (smr 2329). Room: Marconi Laboratory. Organizer(s): Sandro M. Radicella (ICTP), Mario Maniewicz and Ryszard Struzak (ITU/BDT). Laboratories: GGH (Marconi Lab.). Secretary: S. Radosic.

Special Seminar in Statistical Physics: "Ensemble Renormalization Group for Disordered Systems". Speaker(s): Maria Chiara Angelini.
Abstract: We study a renormalization-group transformation that can be used also for models with quenched disorder, like spin glasses. The method is based on a mapping between disorder distributions, chosen so as to keep some physical properties (e.g., the ratio of correlations) invariant under the transformation. We validate this Ensemble Renormalization Group by applying it to the hierarchical model (both the diluted ferromagnetic version and the spin-glass version), finding results in agreement with Monte Carlo simulations.

Special Seminar in Condensed Matter Physics: "Ab-initio electronic structure calculations of Transparent Conducting Oxide (TCO) materials". Speaker(s): Hemant Dixit.
Abstract: Transparent conducting oxides (TCOs) constitute a unique class of materials that combine two physical properties: transparency in the visible spectrum and high electrical conductivity. TCOs are widely used in a variety of technological applications and are the subject of active experimental and theoretical research. The development of an electronic-structure method capable of accurately describing the key physical properties of TCOs, namely the band gap and the electron effective mass, has been a subject of enduring interest. In this talk, I will first discuss the electronic band structure of basic TCOs calculated using density functional theory (DFT). Although the structural parameters come out correctly, the DFT band gaps are severely underestimated compared to experiment. Since DFT is a ground-state theory and in principle the Kohn-Sham gaps cannot be compared with experiment, we use the state-of-the-art GW approximation to calculate the excited states. The implementation of the GW approximation within the ABINIT code will be presented in brief. The modifications of the pseudopotential necessary for an accurate calculation of the self-energy within the GW approximation are discussed with the help of a systematic study of pseudopotentials for ZnO. GW results are presented for ZnO, SnO2 and the ZnX2O4 (X = Al, Ga and In) spinel structures. Finally, I will also discuss the electronic band structure calculated using the computationally inexpensive Tran-Blaha modified Becke-Johnson (TB-mBJ) potential for prototype TCOs, and the formation energies of native point defects in the ZnAl2O4 spinel structure.

Seminar on disorder and strong correlations: "Statistical mechanics of harmonic spheres: Glass and jamming transition". Speaker(s): Hugo JACQUIN.
Abstract: When a fluid is cooled sufficiently rapidly to avoid crystallization, its dynamics slow down very rapidly, and an amorphous solid is formed. This phenomenon is called the glass transition. When one randomly piles more and more spheres into a box, a density is eventually attained at which all spheres come into contact. At this density and upon further compression, the system acquires rigidity: this is the jamming transition. Jamming and the glass transition are very old statistical-mechanics problems that are often associated with one another, because intuition suggests that both phenomena arise from the same physical effect, the caging of each particle by its shell of neighbours. During my PhD thesis, I studied analytically a model system of harmonic spheres (soft spheres that repel each other with finite amplitude and finite range) that allows one to study the jamming and the glass transitions simultaneously.
ICTP ICTP [email protected] Special seminar: "Quantum current noise in a superconducting single electron transistor" Speaker(s): Peter KIRTON A sharp lower bound for the log canonical threshold Speaker(s): Dr. Pham Hoang Hiep (Institut Fourier, Grenoble, France) Europe/Rome After the discovery of the C60 fullerene some 25 years ago, many more hollow and endohedrally doped structures made out of various elementshave been proposed theoretically. However, since no other fullerenes have been synthesized up to date , the question arises whether experimentalists have just not yet found a way to synthesize these theoretically predicted fullerenes, or whether they do not exist at all in nature. Following the theoretical discovery of the B_80 fullerene by Szwacki et al, various other fullerene and stuffed fullerene structures were proposed but none of them could be synthesized in the laboratory yet. Using the minima hopping global geometry optimization method on the density functional potential energy surface we show that the energy landscape of boron clusters is glass like. Medium size boron clusters exhibit many structures which are lower in energy than the cages. This is in contrast to carbon and boron nitride systems which can be clearly identified as structure seekers. The differences in the potential energy landscape explain why carbon and boron nitride systems are found in nature whereas pure boron fullerenes have not been found. We thus present a methodology which can make predictions on the feasibility of the synthesis of new nano structures. ICTP ICTP [email protected] Special seminar: "Energy landscape of fullerene materials: A comparison of boron to boron nitride and carbon" Speaker(s): Sandip DE VIDEO String Theory Journal Club - Strongly correlated electron systems as string theories?!? Speaker(s): Shamit KACHRU Europe/Rome We shall discuss Scott-Swarup intersection number for the splittings of a free group of rank n and the geometric intersection number for the embedded spheres in a 3-manifold with fundamental group a free group of rank n. We shall show that these intersection numbers coincide. ICTP ICTP [email protected] Algebraic and geometric intersection numbers for free groups Speaker(s): Dr. Suhas Jaykumar Pandit Europe/Rome Lévy flights determine how animals search for food, how earthquakes are distributed, and how the stock market goes up and down daily. In this talk, I will explain how one can realize optical materials in which light waves follow Lévy flights, and which new possibilities this offers in photonics. Disordered photonic materials have surprisingly interesting physical properties, and allow to study the fundamental physics of transport processes. On the other hand, their fascinating optical response leads to unexpected effects, like the efficient trapping of light in thin films. The latter property turns out to be very valuable for improving thin film photo-voltaic solar cells and creating new light sources. SISSA, Santorio Bldg., Room 128 1st floor ICTP [email protected] @ SISSA, Santorio Bldg., Room 128 1st floor Joint ICTP/SISSA Colloquium on Condensed Matter Physics: " Photons, dust and honey bees " Speaker(s): Diederik S. WIERSMA Advanced School on Scientific Software Development: Concepts and Tools | (smr 2330) Organizer(s): S. Cozzini (CNR/IOM/Uos/SISSA Trieste), A. Balaz (SCL, Institute of Physics Belgrade), G. Giuliani (ICTP Trieste/University of L'Aquila) Secretary: N. 
The Third Cordex-Africa Analysis Workshop | (smr H302). Room: Adriatico Guest House Denardo Lecture Hall. Organizer(s): The Climate Systems Analysis Group, University of Cape Town, South Africa, and START.

Special seminar: "Transport properties of thin superconducting films". Speaker(s): Aleksandra PETKOVIC.
Abstract: Transport properties of 2D superconducting systems can be very different from those of bulk superconductors, because thermal and quantum fluctuations of the superconducting order parameter are more pronounced and play a crucial role. We first focus on the influence of superconducting fluctuations on dynamics while the system is in the normal state but close to the superconducting transition. In the fluctuational regime, we derive a Ginzburg-Landau-type action under far-from-equilibrium conditions. Utilizing it, we then calculate the fluctuation-induced density of states and the Maki-Thompson- and Aslamazov-Larkin-type contributions to the in-plane electrical conductivity [1,2]. We propose an experimental setup where our results can be tested: a thin superconducting film sandwiched between a gate and a substrate which have different temperatures and different electrochemical potentials. We then concentrate on transport at lower temperatures under close-to-equilibrium conditions, investigating the influence of quantum fluctuations on the unbinding of vortex-antivortex pairs. We determine the temperature below which quantum fluctuations dominate over thermal fluctuations and describe the transport in this quantum regime. The crossover from the quantum to the classical regime is discussed, and the quantum correction to the classical current-voltage relation is determined [3]. [1] Phys. Rev. Lett. 105, 187003 (2010); [2] Phys. Rev. B 84, 064510 (2011); [3] Phys. Rev. B 80, 212504 (2009).

Joint ICTP/SISSA Statistical Physics seminar: "A statistical mechanics perspective on immune networks". Speaker(s): E. AGLIARI.
Abstract: We present some recent investigations of systemic features of the immune system, based on a statistical mechanics approach. We first introduce a mean-field spin-glass model for the interaction between helper cells and the effector branches (B and K cells) able to reproduce, as emerging properties, several collective phenomena shown by real immune networks (e.g., the connection between autoimmunity and lymphoproliferative disorders, or the breakdown of immunosurveillance by lymphocyte unbalance). We also show that this system can be mapped onto an associative neural network, where helper cells interact directly with each other and are able to orchestrate an immune response by retrieving "strategies" learnt previously. We then go beyond the fully connected approximation by introducing dilution in the interactions between helpers and effectors, and show that this makes the former able to retrieve parallel strategies to fight several pathogens simultaneously. That is to say, dilution, which is a biological requisite, results in multitasking capabilities.

Seminar: Existence of hypersurfaces with large constant mean curvature and free boundaries. Speaker(s): Dr. Fethi Mahmoudi.

Applied Physics seminar: "A model of the Mars ionosphere". Speaker(s): Beatriz Sánchez-Cano. Room: Leonardo da Vinci Building Oppenheimer Meeting Room.
This is the continuation of the seminars of the Applied Physics Scientific Section. You will be especially welcome if you are visiting the Centre, or another Trieste institute, as an Associate Member or TRIL Fellow, or if you are from an Affiliated Institute in one of the various areas of Applied Physics. The summary of the seminar is available at: http://www.ictp.it/~chelaf/ss298.html
The summary of the seminar is available at: http://www.ictp.it/~chelaf/ss298.html ICTP ICTP [email protected] A model of the Mars ionosphere Room: Leonardo da Vinci Building Oppenheimer Meeting Room Speaker(s): Beatriz Sánchez–Cano Support E-Mail: [email protected] Europe/Rome Motivated by various disordered propagation problems with competing channels, I study the representative problem of Anderson localization on an asymmetric two-leg ladder. The problem is solved by the Fokker-Planck approach, which is exact in the weak disorder limit. The localization radius of various one dimensional systems, such as a polaritons or other hybrid particles, can be investigated by this model. These applications correspond to parametrically different intra-chain hopping integrals and/or different disorder amplitudes on the two legs, situations in which it is non-trivial to predict what dominates the transport in the joint system. An extended Dorokhov-Mello-Pereya-Kumar (DMPK) equation is obtained and solved analytically. Two localization lengths are obtained as functions of the parameters of the model. We find that: 1) Near the resonance energy (where the dispersion curves of the two decoupled and disorder-free chains intersect) the "slow'' chain dominates the localization properties of the ladder. 2) Away from the resonance the "fast'' chain dominates the transmission probability. ICTP ICTP [email protected] Seminar on on Disorder and strong electron correlations: "Localization of hybrid particles" Speaker(s): Hongyi XIE ICGEB Theoretical Course "RNA Structure and Function" | (smr H294) Room: Adriatico Guest House Giambiagi Lecture Hall Organizer(s): Prof. G. Tocchini-Valentini (CNR, Rome, Italy) Secretary: E. Lippolis Type IIB LARGE Volume Scenario Speaker(s): Fernando QUEVEDO (ICTP) Europe/Rome Motivated by the study of quantum quenches in integrable field theories (IFTs) and in particular the structure of the initial state after such a quench, we are led to the investigation of transformations of the Zamolodchikov-Faddeev operators that respect their algebra. After showing that, unlike in free theories, linear transformations (Bogoliubov transformations) do not satisfy this condition, we explore simple cases of infinitesimal transformations that do so and study their properties. Examples from typical IFTs like the Sinh-Gordon and Lieb-Liniger models are presented, emphasizing the possibility to study quantum quenches. SISSA, Santorio Bldg., room 128 1st floor ICTP [email protected] Joint ICTP/SISSA Statistical Physics seminar: "Zamolodchikov-Faddeev algebra and quantum quenches in integrable field theories" Speaker(s): Spyros SOTIRIADIS The 4-site model at LHC Speaker(s): Daniele DOMINICI (University and INFN, Florence) Europe/Rome Quantum point contacts and quantum dots, two elementary building blocks of semiconducting nanodevices, both exhibit famously anomalous conductance features at low energy scales: the 0.7-anomaly in the former case, the Kondo effect in the latter. For both effects the conductance shows a remarkably similar low-energy dependence on temperature (T), magnetic field (B) and source-drain voltage. This has led to the suggestion that the 0.7-anomaly and the Kondo effect have the same microscopic origin. Here we explore this notion theoretically and experimentally by studying the geometric crossover between a quantum dot and a quantum point contact. 
We introduce a one-dimensional model that reproduces the essential features of the geometry and B-dependence of the conductance at T = 0 for both the Kondo effect and the 0.7-anomaly. Though their properties differ markedly at high energies, at low energies there are striking similarities. We attribute the latter to similar interaction-enhanced spin fluctuations in regions of low charge density and conjecture that these can be described using similar Fermi-liquid theories. Our predictions are consistent with our experimental results, which confirm, in particular, that the 0.7-anomaly exhibits Fermi-liquid behavior at low B and T. We also explain in detail how the 0.7-structure at T = B = 0 arises from a combination of geometry and interaction effects.

Iranian National Observatory (INO) Conceptual Design Review Meeting | (smr H295), 4 Mar 2012 - 6 Mar 2012. Organizer(s): Dr. Reza Mansouri and Dr. S. Arbabi.

IAEA Workshop on "Environmental Degradation of Components in Nuclear Power Reactors" | (smr H299). Organizer(s): IAEA, Vienna (Mr. B.M. Tyobeka). Laboratories: AGH (Infolab). Secretary: C. Czipin (IAEA).

Seminar: From trace-anomaly matching to the a-theorem. Speaker(s): Stefan THEISEN (Albert-Einstein Institute, Potsdam).

Joint ICTP/SISSA Statistical Physics seminar: "Arrhenius law out of equilibrium". Speaker(s): Giulio BIROLI (Service de Physique Theorique, CEA Saclay).
Abstract: The Arrhenius law appears in many different physical contexts. It is one of the most important laws governing the behavior of systems characterized by energy scales much smaller than the temperature. We shall derive and discuss its extension out of equilibrium, focusing on classical systems which are out of equilibrium either because they are subjected to non-potential forces or because they are coupled to an out-of-equilibrium thermal bath. A generalization of time reversal, which already emerged in the context of out-of-equilibrium fluctuation theorems, plays a key role in our derivation and will be discussed thoroughly. We shall conclude by discussing applications and further extensions of our results. Work in collaboration with Serena Bradde.

Seminar: Braid groups in complex projective spaces. Speaker(s): Dr. Saima Parveen (G.C. University, Faisalabad, Pakistan).
Abstract: We consider ordered or unordered k-tuples of distinct points in CP^n which generate subspaces of fixed dimension i, and we compute the fundamental groups of the configuration spaces formed by these k-tuples. We apply these results to study the connectivity of more complicated configurations of points.

Joint ICTP/SISSA Condensed Matter seminar: "Ions in water: The amorphous silica interface and the recombination of H3O+ and OH-". Speaker(s): Ali A. HASSANALI (ETH Zurich).
Abstract: In this talk I will begin by sharing results on the structure and dynamics of ions at the water-amorphous silica interface. For 80 years, scientists have employed models in which ions and water near the silica surface form a stagnant layer called the Stern layer. To account for all experimental features, these models invoke puzzling properties such as the transport of ions through immobile water. In this talk I will present a realistic theoretical description of the water-amorphous silica interface.
Braid groups in complex projective spaces
Speaker(s): Dr. Saima Parveen (G.C. University, Faisalabad, Pakistan)
Abstract: We consider ordered or unordered k-tuples of distinct points in CP^n which generate subspaces of fixed dimension i, and we compute the fundamental groups of the configuration spaces formed by these k-tuples. We apply these to study the connectivity of more complicated configurations of points.

Joint ICTP/SISSA Condensed Matter seminar: "Ions in water: The amorphous silica interface and the recombination of H3O+ and OH-"
Speaker(s): Ali A. HASSANALI (ETH Zurich)
Abstract: In this talk I will begin by sharing results on the structure and dynamics of ions at the water-amorphous silica interface. For 80 years, scientists have employed models in which ions and water near the silica surface form a stagnant layer called the Stern layer. To account for all experimental features, these models invoke puzzling properties such as the transport of ions through immobile water. In this talk I will present a realistic theoretical description of the water-amorphous silica interface. We have successfully constructed and validated a model for the water-amorphous silica interface and have begun to examine the fate of biomolecules near this important interface. Our simulations challenge the classical textbook Stern layer model: both ions and water exhibit a substantial degree of mobility, yet the phenomena the Stern layer was originally invoked to explain are reproduced by our calculations. In the second part of my talk I will discuss some very recent results on the mechanism of the recombination of hydronium and hydroxide ions in water. This process, which follows water ionization, is one of the most fundamental processes determining the pH of water. The neutralization step once the solvated ions are in close proximity is phenomenologically understood to be fast, but the molecular mechanism has not been directly probed by experiments. We elucidate the mechanism of recombination in liquid water with ab initio molecular dynamics simulations, and it emerges as quite different from the conventional view of the Grotthuss mechanism. The neutralization event involves a collective compression of the water wire bridging the ions, which occurs in 0.5 ps and triggers a concerted triple jump of the protons. This process leaves the neutralized hydroxide in a hypercoordinated state, with the implication that enhanced collective compressions of several water molecules around similarly hypercoordinated states are likely to serve as nucleation events for the autoionization of liquid water.

Seminar on Disorder and strong electron correlations: "Engineering Dirac points with ultracold fermions in optical lattices"
Speaker(s): Leticia TARRUELL (Institute of Quantum Electronics, ETH Zurich & Laboratoire Photonique, Numerique et Nanosciences, Bordeaux)
Abstract: Dirac points lie at the heart of many fascinating phenomena in condensed matter physics, from massless electrons in graphene to the emergence of conducting edge states in topological insulators. At a Dirac point, two energy bands intersect linearly and the particles behave as relativistic Dirac fermions. In solids, the rigid structure of the material sets the mass and velocity of the particles, as well as their interactions. A different, highly flexible approach is to create model systems using fermionic atoms trapped in an optical lattice, a method which so far has only been applied to explore simple lattice structures. In my talk I will report on the creation of Dirac points with adjustable properties in a tunable honeycomb optical lattice. Using momentum-resolved interband transitions, we observe a minimum band gap inside the Brillouin zone at the position of the Dirac points. We exploit the unique tunability of our lattice potential to adjust the effective mass of the Dirac fermions by breaking the inversion symmetry of the lattice. Moreover, changing the lattice anisotropy allows us to move the position of the Dirac points inside the Brillouin zone. When increasing the anisotropy beyond a critical limit, the two Dirac points merge and annihilate each other. We map out this topological transition in lattice parameter space and find excellent agreement with ab initio calculations. Our results not only pave the way to model materials where the topology of the band structure plays a crucial role, but also provide the possibility to explore many-body phases resulting from the interplay of complex lattice geometries with interactions.
Study Day of EATGA | (smr H300)
Organizer(s): Dr. Silvia Amati; Secretary: n.a.

Global Embeddings for Branes at Toric Singularities
Speaker(s): Per BERGLUND (University of New Hampshire, USA)

HP Italia 2012: Italian Workshop on High-Pressure Science | (smr H301)
Organizer(s): Roberto Bini (Firenze), Paolo Postorino (Roma), Andrea Lausi (Elettra), Andrea Perucchi (Elettra), Sandro Scandolo (ICTP). Organized jointly by ICTP and Elettra Synchrotron Facility, Trieste.

Physics Potential of Future e+e- Colliders
Speaker(s): James WELLS (CERN)

Joint ICTP/SISSA Statistical Physics seminar: "Entanglement in free-particle models"
Speaker(s): Ingo PESCHEL (Freie Universitaet Berlin)
Room: SISSA, Santorio Bldg., Room 128
Abstract: Free many-particle systems have relatively simple eigenstates. Nevertheless, these states possess non-trivial entanglement properties which serve as points of orientation for the whole field. In the talk, it will be shown how one can obtain them from the correlations in the state, how this leads to a statistical physics problem, and what the nature of this problem is. The resulting spectra, which contain the basic entanglement information, will be discussed for a number of cases, and an outline of further topics will be given.

Spinning the Higgs
Speaker(s): Adam FALKOWSKI (LPT, Orsay, France)
Abstract: I will present an analysis of the 2011 LHC Higgs data in the context of an effective low-energy theory of the Higgs boson. The focus will be on the 4-lepton and diphoton channels at the LHC and the b-bbar channel at the Tevatron. Combining the available data in these channels, one can derive constraints on the effective theory parameters. These constraints can then be mapped onto several simplified scenarios of natural new physics capturing the Higgs physics of simple supersymmetric and composite Higgs models. I will also discuss the phenomenological consequences of the assumption that the Higgs couplings to the electroweak gauge bosons are larger than in the Standard Model. The latter may be hinted at by the CMS diphoton channel excess in the dijet category, and by the Tevatron excess in the W/Z+H -> b-bbar channel.

Training course in science communication in the framework of the project UNILHC | (smr H310)
Organizer(s): SISSA Medialab

Hyperkahler Geometry, Quaternionic isometries and the Legendre transform
Speaker(s): Dr. Joseph Malkoun (Notre Dame University, Zouk Mosbeh, Lebanon/ICTP)

Meeting Lufthansa Systems | (smr H314)

Seminar on Disorder and strong electron correlations: "Coulomb drag in graphene"
Speaker(s): Boris NAROZHNY (Karlsruhe Institute of Technology)
Abstract: We study the effect of Coulomb drag between two closely positioned graphene monolayers. In the limit of weak electron-electron interaction and small inter-layer spacing, the drag is described by a universal function of the chemical potentials of the layers measured in units of temperature. When both layers are tuned close to the Dirac point (i.e. when both chemical potentials are much smaller than the temperature), the drag coefficient is proportional to the product of the chemical potentials. In the opposite limit of low temperature, the drag is inversely proportional to the product of the chemical potentials. In the mixed case, where the chemical potentials of the two layers belong to the opposite limits, we find that the drag coefficient is proportional to the smaller chemical potential and inversely proportional to the larger. For stronger interaction and larger values of the inter-layer spacing, the drag coefficient acquires logarithmic corrections and can no longer be described by a power law. Further logarithmic corrections are due to the energy dependence of the impurity scattering time in graphene (if both chemical potentials are much larger than the temperature, these are small and may be neglected). In the case of strongly doped (or gated) graphene (i.e. when the chemical potentials are much larger than the inverse inter-layer spacing), the drag coefficient acquires additional dependence on the inter-layer spacing, and we recover the usual Fermi-liquid result if the screening length is smaller than the spacing.
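In symbols, the limits quoted in the abstract read (with $\rho_D$ the drag coefficient, $\mu_1, \mu_2$ the chemical potentials, $T$ the temperature, and dimensionless prefactors omitted)

$$\rho_D \propto \mu_1\mu_2 \quad (\mu_{1,2}\ll T), \qquad \rho_D \propto \frac{1}{\mu_1\mu_2} \quad (\mu_{1,2}\gg T), \qquad \rho_D \propto \frac{\mu_<}{\mu_>} \quad (\mu_< \ll T \ll \mu_>),$$

where $\mu_<$ and $\mu_>$ denote the smaller and larger of the two chemical potentials.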
Joint ICTP-SISSA String Seminars: The Higgs mass, SUSY and Strings
Speaker(s): Luis IBANEZ (UAM, Madrid, Spain)

Joint ICTP-SISSA String Seminars: Three-dimensional superconformal field theories and Fermi gases
Speaker(s): Marcos MARINO (University of Geneva, Switzerland)
Abstract: The partition function on the three-sphere of many superconformal field theories (SCFTs), including the ABJM theory, can be reduced to a matrix integral by using localization techniques. This makes it possible to study this quantity in the strong-coupling, large-N regime, by studying the associated matrix models. In this talk I will review these developments, which lead to precision tests of the AdS/CFT correspondence, and explain a new method to study a large class of matrix models appearing in this context. The method is based on reformulating the model as a one-dimensional ideal Fermi gas with a non-trivial one-particle quantum Hamiltonian, and leads to a simple picture of the strongly coupled dynamics of these theories (including the famous 3/2 scaling of the number of degrees of freedom). Moreover, simple semiclassical calculations in the Fermi gas make it possible to determine the full 1/N asymptotics of the partition function.

Photodynamic therapy for skin cancer treatment
Speaker(s): Humberto Cabrera

Type IIB LARGE Volume Scenario part II: Cosmology and SUSY breaking

Spring School on Superstring Theory and Related Topics | (smr 2331)
Organizer(s): E. Gava (INFN), S. Minwalla (Tata Institute), K.S. Narain (ICTP), S. Randjbar-Daemi (ICTP), E. Silverstein (Stanford University)
Collaborations: the Asia Pacific Center for Theoretical Physics (APCTP), the International School for Advanced Studies (SISSA) and the Italian Institute for Nuclear Physics (INFN)

School on Synchrotron and FEL Based Methods and their Multi-Disciplinary Applications | (smr 2332)
Organizer(s): Directors: Nadia Binggeli (ICTP), Maya Kiskinova (Elettra) and Francoise Mulhauser (IAEA); Secretary: E. Brancaccio
Informal seminar on Statistical Physics: "Some results on the stripe-glass phase in 2-d and on ergodicity properties of the Anderson model"
Speaker(s): Ana Carolina Ribeiro-Texeira (Paris Jussieu)
Abstract: I will present results on two of my topics of interest. In the first part we investigate the existence of a stripe-glass phase in a two-dimensional system with no quenched disorder, which is however frustrated by the competition of interactions on different length scales. The configurational entropy is computed through a method applied to frustrated systems with no quenched disorder, and through a 1/N expansion of the glass free energy. The stripe-glass phase is connected to the appearance of a finite off-diagonal replica correlation function, below a crossover temperature, related to the mobility of defects in the sample. The off-diagonal correlations in replica space are connected to the asymptotic limit of the two-time dynamic correlation function. Within this approach we find no finite contribution for the correlation between distinct replicas, which results in a vanishing configurational entropy. Therefore, we conclude that glassiness does not emerge at any temperature in the aforementioned model. In the second part I will present some results on the ergodicity properties of the Anderson model defined on the Bethe lattice. Our study is motivated by the conjectured existence of a phase, at intermediate disorder strengths, whose ergodicity properties are distinct from those of the fully ergodic extended phase, as well as from the completely ergodicity-broken localized one. This kind of phase is seen to occur in other models, such as the directed polymer with random complex weights. For an ensemble of realizations of the system, we have studied eigenvalue and eigenstate statistics through exact diagonalization. In particular, we analysed the neighboring-gap ratio statistics and the statistics of inverse participation ratios, including multifractality analysis. We find evidence of the presence of an intermediate-disorder phase whose corresponding statistics express neither extended nor localized states.
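The neighboring-gap ratio statistic named in the abstract is easy to reproduce numerically. A minimal sketch (illustrative, not the Bethe-lattice computation of the talk) contrasting the two reference statistics:

    import numpy as np

    def mean_gap_ratio(levels):
        """Mean adjacent-gap ratio r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1});
        ~0.53 for GOE (ergodic) and ~0.39 for Poisson (localized) statistics."""
        s = np.diff(np.sort(levels))
        return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

    rng = np.random.default_rng(1)
    A = rng.normal(size=(1000, 1000))
    goe_levels = np.linalg.eigvalsh((A + A.T) / 2)           # ergodic reference
    poisson_levels = np.cumsum(rng.exponential(size=1000))   # uncorrelated reference
    print(mean_gap_ratio(goe_levels), mean_gap_ratio(poisson_levels))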
ICTP/SISSA Statistical physics seminar: "Quenching the Lieb-Liniger model"
Speaker(s): Jorn MOSSEL (University of Amsterdam)
Abstract: Quantum quenches for the 1D Bose gas (Lieb-Liniger model) are discussed. The first part of the talk discusses how the Algebraic Bethe Ansatz can be used for quantum quenches. As an explicit example, we quench the Lieb-Liniger model by instantaneously switching off the interactions. The subsequent time evolution is studied for various correlation functions. The long-time average is compared with the predictions of several statistical ensembles. The second part concerns the discussion of the generalized Gibbs ensemble for the Lieb-Liniger model, obtained by generalizing the well-known TBA equations. Possible applications and strategies are discussed.

Bounds on factors in Z[X]
Speaker(s): Gerard Kientega (Universite' de Ouagadougou, Burkina Faso)

Special seminar: "Full counting statistics of the interacting resonant level model"
Speaker(s): Sam CARR (Karlsruhe Institute of Technology)
Abstract: We discuss the full counting statistics of charge transport through a nano-device and introduce a general way to calculate it numerically at zero temperature. We apply this to a strongly correlated model, the interacting resonant level model, where it turns out we can also calculate the full counting statistics exactly, in good agreement with the numerics. From the analytic properties of the full solution, we show that the system undergoes charge fractionalization when the bias voltage exceeds a certain threshold.

Seminar on Disorder and strong electron correlations: "Artificial magnetic field for photons in a coupled cavity array"
Speaker(s): Onur UMUCALILAR (Universita' di Trento)
Abstract: In this talk, I will present some of our recent theoretical results concerning photons in a coupled cavity array under a synthetic magnetic field. A suitable coupling of the propagation and polarization degrees of freedom introduces a geometrical phase for photons tunneling between adjacent sites of a coupled cavity array, which gives rise to the artificial field. I will discuss the feasibility of observing strong magnetic field effects in the optical spectra of realistic systems, including the Hofstadter butterfly spectrum. I will also touch on the observability of a fractional quantum Hall effect for photons in these systems, which is expected to appear when effective interactions are induced between photons.

Informal Geometric Analysis & Complex Differential Geometry Seminar - Overview of symplectic mean curvature flow.
Speaker(s): Dr. J. Sun (ICTP)

Informal seminar on Statistical physics: "Avalanches in disordered elastic systems"
Speaker(s): Alexander DOBRINEVSKI (Ecole Normale Superieure, Paris)
Abstract: Disordered systems typically respond non-smoothly to external driving. This manifests itself in Barkhausen noise, earthquakes, and avalanches in granular media. A simple classical model which describes such phenomena quite accurately is a particle driven by a spring on a Brownian random-force landscape (also known as the ABBM model). I will present some analytical results on the statistics of avalanches in this model. I will then show how the ABBM model arises as the mean-field limit of a spatially extended elastic interface in a disordered environment. In the end I will discuss the behavior below the upper critical dimension using renormalization-group methods.
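The ABBM model named in the abstract is simple enough to simulate directly. Below is a minimal quasi-static sketch (my own discretization, not the analytical treatment of the talk): a particle pulled by a spring of stiffness m2 through a Brownian force landscape, with avalanche sizes collected per drive increment. All parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    L, m2 = 2_000_000, 0.01                  # landscape length, spring stiffness
    F = np.cumsum(rng.normal(size=L))        # Brownian random-force landscape F(u)
    u, sizes = 0, []
    for w in np.arange(0.0, 10_000.0, 1.0):  # slowly advance the drive w
        start = u
        # the particle moves forward while the total force m2*(w-u) + F(u) is positive
        while u + 1 < L and m2 * (w - u) + F[u] > 0:
            u += 1
        if u > start:
            sizes.append(u - start)          # avalanche triggered by this drive step
    sizes = np.array(sizes)
    print(len(sizes), sizes.mean(), sizes.max())  # broad, power-law-like size statistics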
Dark matter searches with the Fermi-LAT data
Speaker(s): Gabrijela ZAHARIJAS (ICTP)

EURASNET Meeting 2012 | (smr H297)
Organizer(s): ICGEB, Trieste, Italy et al.; Secretary: FASI srl, Rome

Joint ICTP/SISSA Statistical Physics seminar: "Form factors in finite volume: Recent developments"
Speaker(s): Gabor TAKACS (Eotvos Lorand University, Budapest)
Room: SISSA, Santorio Bldg., room 128
Abstract: A few years ago, we initiated a research program to investigate the finite-volume behaviour of matrix elements of local operators (form factors) in quantum field theory. The testing ground of these ideas is provided by integrable field theories such as the Lee-Yang, Ising and sine-Gordon models. The framework we developed can be used to provide a non-perturbative numerical method to determine form factors, or to verify bootstrap form factor solutions against the field theory dynamics. It also serves as a theoretical tool to construct spectral expansions of correlation functions in a thermal bath or in a boundary setting, relevant to condensed matter problems. In addition, it gives us a way to address questions of potential interest to lattice QCD. I plan to review the framework and its applications in a pedagogical manner, and to discuss briefly the latest developments.

With the server in the clouds: shared infrastructures, platforms and services in the new management models of library services | (smr H304)
Organizer(s): ITALE (Associazione Italiana Utenti ExLibris); Local Organizer: L. Visintin, ICTP Library Head; Secretary: ITALE Personnel

Seminar on Disorder and strong electron correlations: "Majorana fermions and subgap states in spinless p-wave superconducting wires"
Speaker(s): Piet BROUWER (Freie Universitaet Berlin)
Abstract: Spinless p-wave superconducting wires can be in a topological phase in which they harbor Majorana bound states at their ends. Although there are no known spinless p-wave superconductors in nature, several routes to the artificial creation of such systems have been proposed. In this talk, I will discuss how non-idealities in some of the proposed routes, such as potential disorder and deviations from a strict one-dimensional limit, affect the topological phase. In particular, I'll discuss how the topological phase can persist at weak disorder or for multichannel wires, although some of the signatures of the presence of Majorana fermions are obscured.

Non-linear elliptic problems with dependence on the gradient and singular on the boundary
Speaker(s): Dr. Boumediene Abdellaoui (Universite' Aboubekr Belkaid, Tlemcen, Algeria)

A Framework for R-parity and the LHC
Speaker(s): Sogee SPINNER (SISSA, Trieste)

Olimpiadi delle Neuroscienze (Neuroscience Olympiad) | (smr H306)
Organizer(s): Universita' di Trieste; Secretary: Aura Bernardi

2 Apr 2012 - 5 Apr 2012
Joint SISSA-ICTP Workshop on Interacting Galaxies and Binary Quasars | (smr H292)
Organizer(s): Dr. J. Moreno (SISSA) and R. Sheth (ICTP)

More on Closed String Induced Higher Derivative Interactions on D-branes
Speaker(s): Ehsan HATEFI (ICTP)

Joint ICTP/SISSA Statistical physics seminar: "Local equilibration and thermalization after a quantum quench"
Speaker(s): Marcus CRAMER (Ulm University)
Abstract: In this talk I will review some of our recent work on relaxation after a quantum quench. For dynamics generated by bosonic quadratic many-body Hamiltonians, I will show under which conditions the state is locally described by a Gaussian state in the long-time limit. That is, subsystems converge to Gaussian states in trace norm for large times. These Gaussian states can be given explicitly, and there is no need for time averaging. In the second part of the talk, I will introduce a setting in general spin systems for which thermalization and the thermalization time can be addressed: fixing the spectrum of the Hamiltonian and picking its basis from the Haar measure, the time-dependent state may locally be described by the maximally mixed state for almost all times in [0,T]. Here, T is algebraically small in the system size.

The next round of double beta decay experiments
Speaker(s): Mauro MEZZETTO (INFN, Padova)

Informal Geometric Analysis and Complex Differential Geometry Seminar - Overview of Lagrangian mean curvature flow.
Seminar on Disorder and strong electron correlations: "Quantum phases of lattice bosons"
Speaker(s): Gunes SOYLER (ICTP)
Abstract: In this talk I will discuss quantum states of bosonic lattice systems and present benchmark results, based on quantum Monte Carlo simulations, for phase diagrams. These systems provide unique opportunities to explore interesting problems in strongly correlated physics, providing a wide variety of novel ground-state phases (e.g. supersolids, paired phases, Bose glass) and phase transitions, with the possibility of comparison with experiments. I will be focusing on the following two systems: (i) bosons in a disordered square lattice potential; (ii) bosons interacting via dipolar interactions in a bilayer geometry. I will present the ground-state phase diagram of both systems as well as finite-temperature results.

Informal seminar on Statistical Physics: "A framework for coevolution dynamics"
Speaker(s): Mario COSENZA (Universidad de Los Andes, Merida)
Abstract: We present a framework with minimal ingredients for the study of coevolution in dynamical networks. This phenomenon is observed in many complex systems in nature and consists of the coexistence of two processes on networks of interacting elements: node state change and rewiring of connections between nodes. We consider that the process by which a node changes its connections, called rewiring, and the process by which a node changes its state can each have their own dynamics, characterized by probabilities Pr and Pc that express the time scales of the two processes, respectively. We describe the process of rewiring in terms of two basic actions, disconnection and reconnection between nodes, both based on a mechanism of comparison of their states, which in turn occur with different probabilities d and r. Then, for a given rewiring process, a coevolution model corresponds to a specific coupling relation Pc = f(Pr). The collective behavior of a coevolutionary system can be studied on any subspace of the parameters Pr, Pc, d, and r. As an application, for a voter-like node dynamics we find that reconnections between nodes with similar states lead to network fragmentation. The critical boundaries for the onset of fragmentation in networks with different properties are calculated in the space (Pr, Pc). The occurrence of a homogeneous connected phase and a fragmented phase in a network is predicted for diverse models, and agreement is found with some earlier results. We also find regions of parameters where modular structures with nodes in different states coexist for very long times on one large, connected network. Thus, coevolution under some conditions can be seen as a mechanism for the emergence of the communities observed in many real networks.
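A minimal simulation sketch of the kind of coevolving voter-like dynamics described above. The parameter names (Pr, Pc) follow the abstract, while the concrete disconnect/reconnect rule is one simple choice among those the framework allows, so this should be read as an illustration rather than as the authors' model.

    import random
    import networkx as nx

    def coevolving_voter(N=200, k=4, q=3, Pr=0.1, Pc=0.9, steps=50_000, seed=3):
        random.seed(seed)
        G = nx.random_regular_graph(k, N, seed=seed)
        state = {i: random.randrange(q) for i in G}
        for _ in range(steps):
            i = random.randrange(N)
            if not G[i]:
                continue
            j = random.choice(list(G[i]))
            # rewiring: drop a state-discordant link, reconnect to a like-minded node
            if random.random() < Pr and state[i] != state[j]:
                same = [m for m in G if m != i and state[m] == state[i]
                        and not G.has_edge(i, m)]
                if same:
                    G.remove_edge(i, j)
                    G.add_edge(i, random.choice(same))
            # voter-like state change: copy the state of a random neighbor
            if random.random() < Pc and G[i]:
                state[i] = state[random.choice(list(G[i]))]
        return G, state

    G, state = coevolving_voter()
    print(nx.number_connected_components(G))  # > 1 signals fragmentation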
Workshop on Science Applications of GNSS in Developing Countries (11-27 April), followed by the Seminar on Development and Use of the Ionospheric NeQuick Model (30 April-1 May) | (smr 2333)
Organizer(s): S.M. Radicella (ICTP), Patricia H. Doherty (USA); Secretary: P. Wardell

Informal Geometric Analysis and Complex Differential Geometry Seminar - Multiplier ideal sheaves in Kahler geometry, I.
Speaker(s): Dr. Y. Shi (ICTP)

Seminar on Disorder and strong electron correlations: "Bose statistics, crystallization and quantum jamming"
Speaker(s): Massimo BONINSEGNI (University of Alberta, Edmonton)
Abstract: Indistinguishability of particles plays a major role in destabilizing crystalline order in Bose systems. In this talk, this effect will be semi-quantitatively described in terms of damped quasi-particle modes, and illustrated in the dual language of Feynman paths. A robust confirmation of this prediction is provided by means of first-principles simulations of dipolar bosons and bulk condensed helium-four. Two major implications are that (a) contrary to conventional wisdom, zero-point motion alone cannot prevent helium-four crystallization at low temperature at saturated vapour pressure, and (b) Bose statistics leads to quantum jamming at finite temperature, dramatically enhancing the metastability of superfluid glasses. Only theoretical studies in which quantum statistics is fully included can reliably address these issues.

Testing the Initial Conditions of the Universe with the Large-Scale Structure
Speaker(s): Emiliano SEFUSATTI (ICTP)
Abstract: The possible detection of a non-Gaussian component in the initial conditions of the Universe could provide crucial information on the physics of inflation. While current constraints on non-Gaussian parameters are still consistent with the Gaussian hypothesis, the Planck satellite will soon significantly improve these limits. In case of a detection, a confirmation from different observables, for instance the galaxy distribution, will be required. I will review the basic effects of non-Gaussian initial conditions on the evolution of the large-scale structure of the Universe, paying specific attention to how they affect the nonlinear growth of both the matter power spectrum and higher-order correlation functions, as well as the corrections induced in linear and nonlinear galaxy bias. In addition, I will present in detail the case of the Quasi-Single Field model of inflation, characterized by a quite rich phenomenology in terms of its observational consequences.

AIP Industrial Physics Forum 2012: Capacity Building for Industrial Physics in Developing and Emerging Economies | (smr 2334)
Organizer(s): AIP: Philip W. Hammer; ICTP: Joseph Niemela (Local Organizer); Secretary: S. Fairclough
Cosponsor(s): Co-sponsored by ICTP, American Institute of Physics and TWAS

7th International Conference on Mathematical Methods in Physics (Joint Conference: CBPF-IMPA-ICTP-SISSA-TWAS) | (smr 2383)
Venue: Rio de Janeiro, Brazil
Organizer(s): E. ABDALLA, L. BONORA, H. BURSZTYN, A. BYTSENKO, B. DUBROVIN, M.E.X. GUIMARAES and J.A. HELAYEL-NETO; Secretary: M. de Comelli
Collaborations: CBPF-IMPA-ICTP-TWAS-SISSA
Cosponsor(s): CBPF, IMPA, TWAS, SISSA, CLAF, CAPES, CNPq, FAPERJ and FAPESP.
Joint ICTP/SISSA Statistical Physics seminar: "On the quantum annealing of quantum mean-field models"
Speaker(s): Victor BAPST (ENS, Paris)
Abstract: In this talk I will present results obtained on fully-connected mean-field models of quantum spins with p-body ferromagnetic interactions and a transverse field, as a toy model for studying quantum annealing of disordered systems. For p=2 this corresponds to the quantum Curie-Weiss model, which exhibits a second-order phase transition, while for p>2 the transition is first order. We provide a refined analytical description of both the static and dynamic properties of these models, allowing us to study the slow annealing from the pure transverse field to the pure ferromagnet (and vice versa), and discuss the effect of the first-order phase transition and of the spinodals on the residual excitation energy, on both finite and exponentially divergent time scales. We expect the general features that we found to be relevant for real disordered optimization problems.
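For concreteness (the abstract does not write it out), the standard Hamiltonian of the fully connected p-spin ferromagnet in a transverse field $\Gamma$ is

$$H = -N \left( \frac{1}{N} \sum_{i=1}^{N} \sigma_i^z \right)^{\!p} - \Gamma \sum_{i=1}^{N} \sigma_i^x ,$$

which for p=2 reduces to the quantum Curie-Weiss model; annealing means slowly varying $\Gamma$ between the pure-transverse-field and pure-ferromagnet limits.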
Localization technique and exact results for three dimensional theory
Speaker(s): Futoshi YAGI (SISSA)
Abstract: The localization technique is explained, which plays an essential role in obtaining exact results for three-dimensional supersymmetric gauge theory. We then discuss several applications and/or extensions of the exact results.

Black Hole N portrait
Speaker(s): Cesar GOMEZ (Instituto de Fisica Teorica IFT, Madrid, Spain)

Seminar on Disorder and strong electron correlations: "Ultracold atoms in optical lattices: Beyond the Hubbard model"
Speaker(s): Sebastiano PILATI (ICTP)
Abstract: We investigate the properties of ultra-cold atomic gases in optical lattices, which we use as a toy system to explore the intriguing properties of strongly correlated quantum many-body systems in periodic potentials. In this work, we address the regime of weak and intermediate optical lattices, where the conventional description in terms of the single-band Hubbard model is not reliable. In the case of bosonic atoms, we employ a novel hybrid Monte Carlo technique which allows us to simulate the superfluid-to-insulator transition in continuous space, thus going beyond the single-band approximation. For fermions, we apply Kohn-Sham Density Functional Theory (DFT), which is the most powerful computational tool routinely used in materials science to perform electronic-structure simulations. In this work, we use a new energy-density functional for dilute atomic Fermi gases with short-range interactions, as opposed to the Coulomb interaction in electronic systems. The first results, based on a local spin-density approximation, show evidence of a ferromagnetic phase due to the strong repulsive inter-species interactions close to a Feshbach resonance, and of anti-ferromagnetic order at half filling. We will discuss how the development of DFT for ultracold Fermi gases can form a strong link between materials science and atomic physics.

First ICO-ICTP-TWAS Central American Workshop in Lasers, Laser Applications and Laser Safety Regulations | (smr 2381)
Venue: San Jose, Costa Rica
Organizer(s): Directors: Maria L. Calvo and Angela M. Guzman (ICO), Joe Niemela (ICTP); Local Organiser: Luis Diego Marin Naranjo (Costa Rica); Secretary: R. del Rio

5th Alpe-Adria Medical Physics Meeting and EFOMP Meeting | (smr H288)
Organizer(s): Dr. Mario de Denaro, Chief of Medical Physics Department, A.O.U. Ospedali Riuniti di Trieste

Seminar on Disorder and strong electron correlations: "Fundamental aspects of transport in nanoscale systems and ultra-cold atoms"
Speaker(s): Massimiliano Di Ventra (University of California, San Diego)
Abstract: I will discuss some of the most basic questions in fermionic and bosonic transport, such as the conditions for the existence of a steady-state current, its uniqueness, the role of interactions and spin statistics, its entanglement entropy, etc. This will lead me to introduce an alternative viewpoint on conduction - the micro-canonical formalism of transport - which is ideal for studying the above issues [1]. I will point out the similarities and differences with the widely used Landauer formalism, and advance a series of predictions that can be verified by loading ultra-cold atoms into artificial optical lattices.
[1] M. Di Ventra, Electrical Transport in Nanoscale Systems (Cambridge University Press, 2008).
Informal Geometric Analysis and Complex Differential Geometry Seminar - Symmetry results for hypersurfaces in Complex Spaces, I.
Speaker(s): Dr. Martino Vittorio (SISSA)
Abstract: We will define the Levi curvatures on real hypersurfaces in Kaehler manifolds, and then specialize to the Levi mean curvature. We will present some symmetry results of Alexandrov type. We will also see some results on the Hamiltonian curvature.

7 May 2012 - 18 May 2012
International School on Nuclear Security | (smr 2338)
Organizer(s): IAEA: A. Braunegger-Guelich; ICTP: C. Tuniz
Cosponsor(s): the Italian Ministry of Foreign Affairs; the International Atomic Energy Agency (IAEA)

Sixth ICTP Workshop on the Theory and Use of Regional Climate Models | (smr 2337)
Room: AGH (Kastler Lecture Hall)
Organizer(s): F. Giorgi, R. Anyah, R. Porfirio Da Rocha, P. Ruti; Secretary: L. Iannitti

Joint ICTP-TWAS Workshop on Climate Change in Mediterranean and Caribbean Seas: Research Experiences and New Scientific Challenges | (smr 2382)
Venue: Guayaquil, Ecuador
Organizer(s): R. Martinez (CIIFEN, Ecuador), A. Crise and S. Salon (OGS, Italy); Organiser at ICTP: R. Farneti
Activity Secretariat: The Abdus Salam International Centre for Theoretical Physics (ICTP), Ms. S. Radosic, Strada Costiera 11, I-34151 Trieste, Italy; Telephone: +39-040-2240226; Telefax: +39-040-22407226

CIMPA-ICTP Research School on Structures Geometriques et Theorie du Controle | (smr 2394)
Venue: Dakar, Senegal
Secretary: K. Mabilo

Congress "In silico enzyme design and screening" (EU Project IRENE) | (smr H303)
Organizer(s): Dipartimento di Scienze Chimiche e Farmaceutiche dell'Università di Trieste

Joint ICTP/SISSA Statistical physics seminar: "Product of free identically distributed R-diagonal random matrices"
Speaker(s): Z. BURDA (Jagiellonian University, Krakow)
Abstract: We show that the eigenvalue density of the product of n identically distributed R-diagonal random matrices from a given matrix ensemble is equal to the eigenvalue density of the n-th power of a single matrix from this ensemble, in the limit of infinite matrix size. Using this observation one can derive the limiting eigenvalue density of the product of n independent identically distributed matrices for non-Hermitian matrix ensembles with invariant measures. We discuss two examples: the product of n Girko-Ginibre matrices and the product of n truncated unitary matrices.
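The equality stated in the abstract can be checked numerically in its simplest R-diagonal instance, the complex Ginibre ensemble. The sketch below (illustrative only) compares a low radial moment of the two spectra, which should converge to the same value as the matrix size grows.

    import numpy as np

    rng = np.random.default_rng(4)

    def ginibre(N):
        """Complex Ginibre matrix, scaled so the spectrum fills the unit disk."""
        return (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)

    N, n = 800, 3
    P = ginibre(N)
    for _ in range(n - 1):
        P = P @ ginibre(N)                     # product of n independent matrices
    Q = np.linalg.matrix_power(ginibre(N), n)  # n-th power of a single matrix
    # mean |lambda|^2 of both spectra should approach 1/(n+1) as N -> infinity
    print(np.mean(np.abs(np.linalg.eigvals(P)) ** 2),
          np.mean(np.abs(np.linalg.eigvals(Q)) ** 2))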
Interactive Learning: Using Research to Promote Student Understanding in the Science Classroom
Speaker(s): Alex L. RUDOLPH (California State Polytechnic University, Pomona and Universite de Pierre et Marie Curie, Paris VI)

Seminar on Disorder and strong electron correlations: "Current rectification in ac-driven Hamiltonian systems: From classical to quantum ratchets"
Speaker(s): Sergey DENISOV (University of Augsburg)
Abstract: I will start with a short introduction to the ratchet problem using a simple model of a particle in an ac-driven periodic potential, give the basics of the symmetry analysis, and discuss the mechanisms of current rectification in the Hamiltonian limit. Theoretical findings are illustrated with experimental results obtained with cold atoms and optical potentials. Next I will turn to the quantum limit, specify quantum features of ac-driven Hamiltonian ratchets, and review the recent experimental realization of the quantum flashing/tilting ratchet with a BEC of rubidium atoms. Finally, the performance of quantum ratchets in the presence of decoherence will be addressed.

Informal Geometric Analysis and Complex Differential Geometry Seminar - Symmetry results for hypersurfaces in Complex Spaces, II

Synchrotron Radiation X-ray Imaging for Life Sciences and Cultural Heritage | (smr H307)
Organizer(s): Sincrotrone Trieste; Secretary: Kevin Prince

Joint ICTP/SISSA Statistical Physics seminar: "Anisotropic Ginzburg-Landau and Lawrence-Doniach models for ultracold Fermi gases"
Speaker(s): Mauro IAZZI (SISSA)
Room: SISSA, Santorio Bldg., 1st floor, Room 128
Abstract: I will introduce the anisotropic Ginzburg-Landau and Lawrence-Doniach models that describe layered superfluids and superconductors, including the large class of high-Tc superconductors. I will show how these equations apply to ultracold Fermi gases in optical lattices. This allows one to find the temperature of the 3D-2D crossover, where the bulk superfluid decouples into uncorrelated 2D systems. I will show how this behaviour can be explored in ultracold gases, and how to reproduce the physics of layered superconductors. I will show how the optical lattice affects the gas differently across the BEC-BCS crossover and that the most interesting behaviour is found near the unitary limit.

Joint ICTP-SISSA String Seminars: Emergent geometry from string amplitudes
Speaker(s): Rodolfo RUSSO (Queen Mary, University of London)
Abstract: I will discuss how "mixed" amplitudes (i.e. amplitudes with both closed and open strings) can be used to derive the geometrical backreaction of different D-brane bound states. As an example, I will mainly focus on a particular class of 1/8-BPS configurations in type IIB and derive from string theory several non-trivial features of the corresponding supergravity solution. I will also discuss a simple dynamical process in the case of 1/2-BPS configurations.

Joint ICTP-SISSA String Seminars: Holographic metals
Speaker(s): Larus THORLACIUS (NORDITA, Stockholm and University of Iceland)

Seminar on Disorder and strong electron correlations: "Dilute Bose gas in quasi-2D correlated random potentials"
Speaker(s): Grigori ASTRAKHARCHIK (Universitat Politecnica de Catalunya, Barcelona)
Abstract: Laser speckles provide a unique possibility to create fully controlled two-dimensional disorder potentials for neutral atoms. This type of disorder has an exponential probability distribution and a finite spatial correlation length. In this talk, I shall present our recent results on the superfluidity and Bose-Einstein condensation of interacting bosons in the presence of laser speckles at zero temperature, obtained using the diffusion Monte Carlo method in the continuum as well as the Gross-Pitaevskii equation and Bogoliubov theory.

The obstacle problem associated with nonlinear elliptic equations in generalized Sobolev spaces
Speaker(s): Professor Stanislas Ouaro (Universite' de Ouagadougou, Burkina Faso)

Multi-groups Epidemic Models
Speaker(s): Professor Abderrahman Iggidr (INRIA Nancy and Universite' de Metz, France)
Gauge Mediation and Holography
Speaker(s): Lorenzo Di PIETRO and Flavio PORRI (SISSA)
Abstract: This is a series of two JC's devoted to the holographic implementation of the (General) Gauge Mediation setup. We will briefly review the basics of Gauge Mediation models and present, as an instance of a holographic realization, the work by Benini et al. (arXiv:0903.0619). Then we will introduce the framework of General Gauge Mediation (arXiv:0801.3278) and outline the computation of GGM two-point functions in a 5D background by means of holographic renormalization.

First Workshop on Linear Mirror Systems and their Integrations | (smr H315)
Organizer(s): Prof. H. Grassmann, University of Udine

OLIFIS "A un passo dalla IPHO" ("One step away from the IPhO") | (smr H308)
Organizer(s): Prof. Cavaggioni

ICTP-ESF School and Conference in Dynamical Systems | (smr 2340)
Organizer(s): S. Luzzatto (ICTP), M. Viana, J-C Yoccoz
Cosponsor(s): European Science Foundation, PRONEX - Dynamical Systems/CNPq, DynEurBraz, Scuola Normale Superiore di Pisa

Workshop on Atmospheric Deposition: Processes and Environmental Impacts | (smr 2339)
Organizer(s): F. Dentener, C. Galy-Lacaux; Local Organiser: F. Solmon
Cosponsor(s): Co-sponsored by the International Union of Geodesy and Geophysics (IUGG)

Interdisciplinary Science in Astronomy: A review
Speaker(s): Pavlos PROTOPAPAS (Harvard Smithsonian Center for Astrophysics, USA)
Abstract: Are we ready to let machines do the work so we can concentrate on the science questions? Have we reached the point where statistical and machine learning methods have become useful tools rather than a nuisance and the practice of the few? I will present four examples where significant progress has been made and interdisciplinary practices have brought fruitful results. These include automatic classification, Bayesian parameter inference, event detection and anomaly detection, and the design of surveys and follow-ups.

Joint ICTP/SISSA Statistical Physics seminar: "Impact of meta-order in the Minority Game"
Speaker(s): Marco BARDOSCIA (ICTP)
Abstract: We study the market impact of a meta-order in the framework of the Minority Game. This amounts to studying the response of the market when introducing a trader who buys or sells a fixed amount h for a finite time T. This perturbation introduces statistical arbitrages that traders exploit by adapting their trading strategies. The market impact depends on the nature of the stationary state: we find that the permanent impact is zero in the unpredictable (information-efficient) phase, while in the predictable phase it is non-zero and grows linearly with the size of the meta-order. This establishes a quantitative link between information efficiency and trading efficiency (i.e. market impact). By using statistical mechanics methods for disordered systems, we are able to fully characterize the response in the predictable phase, to relate execution cost to response functions, and to obtain exact results for the permanent impact.
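A minimal sketch of the standard Minority Game, the framework of the talk above. The meta-order itself is the talk's addition: a trader contributing a fixed +h or -h to the aggregate action for T rounds, which could simply be added to A below. All parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    N, M, S, T = 301, 5, 2, 5000                   # agents, memory, strategies per agent, rounds
    P = 2 ** M                                     # number of possible histories
    strat = rng.choice([-1, 1], size=(N, S, P))    # fixed random strategy tables
    scores = np.zeros((N, S))
    hist = int(rng.integers(P))                    # current history, encoded as an integer
    attendance = []
    for _ in range(T):
        best = scores.argmax(axis=1)               # each agent plays its best strategy
        A = strat[np.arange(N), best, hist].sum()  # aggregate action (N odd => A != 0)
        attendance.append(A)
        scores -= strat[:, :, hist] * np.sign(A)   # minority side is rewarded
        hist = ((hist << 1) | int(A > 0)) % P      # append the latest outcome bit
    print(np.var(attendance) / N)                  # volatility per agent; compare alpha = P / N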
Abelian surfaces, Kummer surfaces and the non-Archimedean Hodge-D-conjecture
Speaker(s): Professor Ramesh Sreekantan (Indian Statistical Institute, Bangalore)
Abstract: In this talk we use generalizations of beautiful classical geometric constructions of Kummer and Humbert to construct new higher Chow cycles on Abelian surfaces, generalizing some work of Collino. These cycles can be used to resolve a non-Archimedean version of the Hodge-D-conjecture of Beilinson for Abelian surfaces.

A tale of two cascades: string duals to tumbling and cascading gauge theories
Speaker(s): Eduardo CONDE (Universidade de Santiago de Compostela, Spain)

Informal seminar on Statistical Physics: "Superfluidity and localization in bosonic glasses"
Speaker(s): Xiaoquan YU (SISSA)
Abstract: Bosons can exhibit superfluid, density-ordered, and even glassy phases. It is interesting to ask whether superfluidity and density order can coexist, which requires the simultaneous breaking of two symmetries with competing order parameters. We have analyzed a fully connected model of frustrated bosons which indeed exhibits such an intermediate phase. Its hallmarks are anticorrelations between the local order parameters and a non-monotonic superfluid order parameter as a function of T. To study transport, we extend this fully connected model to a Bethe lattice with finite but large connectivity. While thermodynamic properties are insensitive to quantum fluctuations, the latter play an important role for the dynamics in the glassy insulator. The glassy order is shown to affect significantly the superfluid-insulator transition, which is exactly solvable. In the glassy insulator we find a finite mobility edge for bosonic excitations, which does not close upon approaching the superfluid transition.

Seminar on Disorder and strong electron correlations: "Freezing transition: From 1/f landscapes to characteristic polynomials of random matrices and the Riemann zeta-function"
Speaker(s): Yan V. FYODOROV (Queen Mary, University of London)
Abstract: I will argue that the freezing-transition scenario, previously conjectured to take place in the statistical mechanics of the 1/f-type Random Energy Model, governs, after reinterpretation, the value distribution of the maximum of the modulus of the characteristic polynomials of large random unitary (CUE) matrices. I then conjecture that the results extend to the large values taken by the Riemann zeta-function over stretches of the critical line s = 1/2 + it of constant length, and present the results of numerical computations of the large values of ζ(1/2+it). The main purpose is to draw attention to possible connections between the statistical mechanics of random energy landscapes, random matrix theory, and the theory of the Riemann zeta-function. The presentation is based on joint work with G. Hiary and J.P. Keating: Phys. Rev. Lett. 108, 170601 (2012).
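The "large values of ζ(1/2+it) over stretches of the critical line" lend themselves to a quick numerical illustration; the toy scan below (a coarse grid, nothing like the high-precision computations referenced in the abstract) records the maximum of |ζ| over a short stretch.

    import numpy as np
    import mpmath

    t0, length, step = 1.0e4, 50.0, 0.05   # a short stretch of the critical line
    ts = np.arange(t0, t0 + length, step)
    vals = [float(abs(mpmath.zeta(mpmath.mpc(0.5, t)))) for t in ts]
    i = int(np.argmax(vals))
    print(f"max |zeta(1/2+it)| ~ {vals[i]:.3f} at t ~ {ts[i]:.2f}")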
Informal Geometric Analysis & Complex Differential Geometry Seminar - Donaldson's theorem on cscK metrics and Chow stability (I)

On new forms of half-integral weight
Speaker(s): Professor Balakrishnan Ramakrishnan (Harish-Chandra Research Institute, Allahabad, India)
Abstract: In this talk, we review the development of the theory of new forms of half-integral weight.

Natural and flavorful SUSY at the LHC
Speaker(s): Andreas WEILER (DESY, Hamburg, Germany)

Convegno "mare NORDEST" | (smr H313)
Organizer(s): R. Bolelli

2nd African School on 'Electronic Structure Methods and Applications' (ASESMA 2012) | (smr 2385)
Venue: Eldoret, Kenya
Organizer(s): G. Amolo, N. Makau, N. Chetty, R. Martin, S. Scandolo; Secretary: M. Poropat
Description: The African School series on Electronic Structure Methods and Applications (ASESMA) is planned on a biennial basis from 2010 to 2020. The schools emphasize the theory and computational methods for predicting and understanding properties of materials through calculations at the fundamental level of electronic structure. The 2010 school was held in Cape Town and focused on the basic aspects of electronic structure. The 2012 school will be held at Chepkoilel University College, Moi University, Eldoret, Kenya, and will include both basic introductory topics and more advanced aspects, including applications of the methods to the study of materials for energy applications. The school is organized by the Abdus Salam International Centre for Theoretical Physics (ICTP) in collaboration with the IUPAP Commissions on Physics for Development (C13) and on Computational Physics (C20), the Italian National Simulation Centre CNR/INFM "Democritos", and the South African National Institute for Theoretical Physics (NITheP).
PROGRAMME: The workshop will cover methodological aspects such as density functional theory, pseudopotentials, plane waves, iterative diagonalization methods, etc. It will also cover the application of electronic structure methods to the mechanical, dynamical, electronic, and magnetic properties of materials. The workshop will consist of lectures and hands-on computational laboratories based on the Quantum ESPRESSO codes.
INTERNATIONAL ADVISORY PANEL: R.M. Martin (USA, Chair), O.O. Adewoye (Nigeria), M. Alouani (France), G. Amolo (Kenya), S. Baroni (Italy), P. Borcherds (UK), R. Car (USA), R. Catlow (UK), N. Chetty (South Africa), XinGao Gong (China), J. Gubernatis (USA), W. Kohn (USA), A. Leggett (USA), Yu Lu (China), N. Marzari (Switzerland), S.Y. Mensah (Ghana), B. M'Passi Mabiala (Congo-Brazzaville), S. Narasimhan (India), C.M.I. Okoye (Nigeria), D. Pettifor (UK), K. Reed (USA), S. Scandolo (Italy), W. Soboyejo (USA), N. Spaldin (Switzerland), P. Woafo (Cameroon).
PARTICIPATION: Participation in the activity is open to young scientists from all countries that are members of the United Nations, UNESCO or IAEA, and is aimed preferably at candidates from the African continent. As the school will be conducted in English, participants should have an adequate working knowledge of that language. Participants should also have an adequate knowledge of Linux and of basic programming. Limited travel funds are available for some participants who are nationals of, and working in, a developing country and who are not more than 45 years old. Such support is available only for those who attend the entire School. There is no registration fee. Local accommodation and meals will be provided at no charge to all those selected for participation. The application form should be completed on-line (link below) before 17 February 2012.
Cosponsor(s): American Physical Society, Division of Computational Physics; Beijing Computational Science Research Center; Chepkoilel University College; CNR-IOM-DEMOCRITOS National Simulation Center, Trieste, Italy; International Center for Materials Research (ICMR), Santa Barbara, USA; International Union for Pure and Applied Physics (IUPAP); National Council for Science and Technology, Kenya; National Institute for Theoretical Physics (NITheP), South Africa; US Liaison Committee for IUPAP; International Council for Science (ICSU)
Arithmetic of Calabi-Yau Manifolds
Speaker(s): Professor Xenia de la Ossa (University of Oxford, U.K.)
Abstract: In this seminar I will give an introduction to the arithmetic of Calabi-Yau 3-folds. The main quantities of interest are the numbers of points of the manifold considered as a manifold over finite fields. I will be concerned with the computation of these numbers and their dependence on the complex structure parameters. As we will see, these numbers are given by expressions which involve the periods of the manifold. I will discuss the form of the zeta function of the Dwork pencil of quintic 3-folds and, time permitting, we will see what happens at singularities and speculate about what mirror symmetry has to say about the form of the zeta function.

Informal Geometric Analysis & Complex Differential Geometry Seminar - Donaldson's theorem on cscK metric and Chow stability (II)

Knots and Mirrors
Speaker(s): Cumrun VAFA (Harvard University, Cambridge, USA)

4 Jun 2012 - 7 Jun 2012
Scientific m-Learning | (smr 2342)
Venue: Trieste, Italy
Organizer(s): E. Canessa, C. Fonda, M. Zennaro (SDU, ICTP)
Laboratories: SDU Lab.
Description: The proposed Workshop aims to guide the scientific community in developing countries into the potentialities of the newest mobile scientific educational possibilities and the creation of educational scientific contents for mobile devices. The Workshop also aims to give a balanced mix of technical detail, general overview, societal impact and a sense of the possible with "Scientific m-Learning".

Joint ICTP-IAEA Nuclear Safety Assessment Institute Workshop | (smr 2343)
Organizer(s): IAEA: M. Mellinger-Deroy and P. Wells; Local Organizer: C. Tuniz
Collaborations: with the International Atomic Energy Agency (IAEA), Vienna, Austria

Joint ICTP/SISSA Statistical physics seminar: "Entanglement entropy of one-dimensional quantum systems"
Speaker(s): Olalla CASTRO ALVAREDO (City University London)
Room: SISSA, Santorio Bldg., Room 128
Abstract: In this talk I will review some of the work on entanglement entropy that I have carried out in collaboration with various authors since 2007. For general 1d quantum field theories, the central objects are twist fields, whose correlation functions are directly related to the entropy. In the context of quantum spin chains, twist operators play a similar role, and products thereof can be related to the twist fields in the continuum limit. Once the entropy has been expressed in terms of correlation functions of these fields (or operators on the chain), one may exploit the powerful tools of integrable quantum models to establish new results. We may study both the short- and long-distance behaviours of the entropy in various kinds of theories, study the matrix elements of twist fields (or twist operators) in their own right, or investigate the features of other correlation functions involving the twist field. In my talk I will try to touch on the main results we have obtained when addressing some of these problems.
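The standard critical-point results that anchor this twist-field approach (the Calabrese-Cardy formulas, quoted here for orientation, not from the abstract): for an interval of length $\ell$ in an infinite 1d critical system with central charge $c$,

$$\operatorname{Tr}\rho_A^{\,n} \;\sim\; \langle \mathcal{T}(0)\,\tilde{\mathcal{T}}(\ell)\rangle \;\sim\; \ell^{-\frac{c}{6}\left(n-\frac{1}{n}\right)}, \qquad S_A = \frac{c}{3}\,\log\frac{\ell}{\epsilon} + \text{const},$$

where the twist-field two-point function computes the Rényi entropies and $\epsilon$ is a UV cutoff.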
Joint ICTP/SISSA Condensed Matter seminar: "Geometry and incompressibility in the fractional quantum Hall effect"
Speaker(s): F.D.M. HALDANE (Princeton University)
Abstract: Incompressibility in the fractional QHE is identified as a variant of Mott-Hubbard incompressibility, where additional electrons are excluded from a region occupied by a "composite boson" of p electrons in q orbitals (or p electrons with q "attached flux quanta") at Landau-level filling ν = p/q. The new ingredient is that the shape of this "exclusion zone" is described by an intrinsic "unimodular" (determinant = 1) spatial metric tensor field g_ab(x,t) that is a true dynamical variable, fluctuating about the shape that minimizes the correlation energy of the FQH state. It emerges that the electron density is pinned to the combination p·eB − ħs·K_g rather than to the magnetic flux density B alone, where K_g is the Gaussian curvature of the metric and s is a new characteristic topologically-quantized "guiding-center spin" related to the so-called "shift" of the FQH state. This metric can be seen in the Laughlin state as an explicit variational parameter that has remained hidden for the past thirty years because of a fundamental misinterpretation of the Laughlin state as a "lowest-Landau-level Schrödinger wavefunction" (in fact it is a guiding-center coherent state that does not depend on which Landau level is partially filled).

Joint ICTP/SISSA Statistical physics seminar: "Star-figures in conformal loop ensembles: The stress tensor and some of its descendants"
Speaker(s): Benjamin DOYON (King's College London)
Abstract: Consider the clusters of same-sign spins in the Ising model. In the scaling limit at criticality, their boundaries become (countably) infinitely many loops that don't intersect, and the Boltzmann weight becomes a probability measure for these loop configurations. The conformal loop ensemble (CLE) is a one-parameter family of measures on such loop configurations with properties of locality and conformal invariance. It includes the measure obtained from the scaling limit of the critical Ising model, but also measures corresponding to all central charges between 0 and 1. It is believed that this loop description contains all the information of the usual local-operator CFT, in particular about minimal models. In this talk I will tell you how to represent the stress tensor and some of its descendants as random variables in CLE. The random variables essentially measure how likely it is that loops be separated by some specific closed curves that look like stars (with a bit of imagination); these stars are made to "rotate" at various angular velocities. It is interesting that the star-figure random variables in principle give definitions for the associated stress-tensor descendants even beyond CFT. This talk will be very elementary: I'll introduce all necessary notions, and if time permits I will give you an idea of how the proof works.

Seminar on Disorder and strong electron correlations: "Holographic insights into non-equilibrium quantum fields"
Speaker(s): Mukund RANGAMANI (University of Durham)
Abstract: Understanding dynamical phenomena in strongly coupled systems is an interesting challenge. While the quantum dynamics has been tackled on its own by a variety of techniques developed over the years, an important window into such physics is provided by the holographic AdS/CFT correspondence, which effectively converts the quantum problem into a classical master-field dynamics in a suitable limit. I will review progress made over the last few years using AdS/CFT in shedding light on out-of-equilibrium dynamics. The focus will be on elucidating the hydrodynamic regime of interacting field theories and issues regarding thermalization, etc., which will holographically be related to the behaviour of black holes in the dual description.

Informal Geometric Analysis and Complex Differential Geometry Seminar - Donaldson's theorem on cscK metrics and Chow stability, III
Speaker(s): Dr. Yalong SHI (ICTP)
eJournals Delivery Service (eJDS) for Scientists in Developing Countries: 10th Anniversary | (smr 2344) Organizer(s): H. Cerdeira, M. Blume, E. Canessa and SDU

The diffuse gamma-ray background is expected to exhibit small-scale anisotropies which can carry information on the underlying sources contributing to it. Astrophysical sources, as well as more exotic processes like galactic or extragalactic dark matter annihilation, are expected to leave their imprint in the pattern of the gamma-ray anisotropies. I will present the results of an angular power spectrum analysis of the high-latitude diffuse emission measured by the Fermi-LAT, and discuss the implications of the measured angular power spectrum for gamma-ray source populations that may contribute to the diffuse background.
Dissecting the Diffuse Gamma-ray Background using anisotropies Speaker(s): Alessandro CUOCO (Albanova University Centre, Stockholm)

ICTP-ESF School and Conference on Geometric Analysis | (smr 2345) Organizer(s): C. Arezzo (ICTP), F. Pacard, R. Schoen, Gang Tian Cosponsor(s): European Science Foundation (ESF)

Is synchrotron radiation doing better than colliders to constrain Dark Matter? Speaker(s): Bryan ZALDIVAR MONTERO (IFT, Madrid, Spain)

The Mott metal-insulator transition is 60 years old. Despite major recent progress with the help of DMFT, it retains mysteries. Ancient experiments on electron-hole droplets in germanium and silicon display a first-order transition, similar to that found in the half-filled fermion Hubbard model. What is the corresponding order parameter? How does one evolve from that first-order regime to the simple 1949 Mott picture? The issue remains open. The recent discovery of supersolids extends the Mott transition to Bose liquids. A Bose-Hubbard model with integer filling has a choice between superfluidity and localization at zero temperature, with a second-order transition: as usual, bosons are simpler than fermions! In practice we want a real crystal where weak crystallization is due to the freezing of soft phonon modes: the challenge is to introduce localization into such a spontaneous solid. The seminar will try to put these questions in perspective. If time allows, the issue of supersolidity in He4 will be discussed in some detail. It seems to occur only at defects, but it opens a whole new field.
SISSA, Santorio Bldg., Room 128 (first floor). Joint ICTP/SISSA Colloquium on Condensed Matter Physics: "Mott localization transition: From semiconductors to supersolids" Speaker(s): Philippe NOZIERES (Institut Max von Laue-Paul Langevin)

Many examples of Calabi-Yau manifolds have been constructed. These are partially classified by the Hodge numbers $(h^{1,1}, h^{2,1})$. A plot of these numbers reveals interesting structure for the case that the Hodge numbers are large. I will 'explain' some of this structure in terms of the fibration structure of the manifolds. At the other end of the distribution, finding manifolds with small Hodge numbers seems to be synonymous with finding manifolds with large groups of freely acting symmetries. These seem to be rare. I will discuss some constructions of what may be the simplest Calabi-Yau manifolds.
MATHEMATICS SPECIAL LECTURE - Calabi-Yau Manifolds with Hodge Numbers that are small and large Speaker(s): Professor Philip Candelas (University of Oxford, U.K.)
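As background to the lecture above (standard facts about Calabi-Yau threefolds, not part of the announcement): the two Hodge numbers fix the Euler characteristic, and mirror symmetry exchanges them,

\[
\chi = 2\left(h^{1,1} - h^{2,1}\right), \qquad (h^{1,1}, h^{2,1}) \longmapsto (h^{2,1}, h^{1,1}) \quad \text{under mirror symmetry},
\]

which is why the plot of known Hodge numbers mentioned in the abstract is, to good approximation, symmetric about the axis $h^{1,1} = h^{2,1}$.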
Fourth Hands-On Research in Complex Systems School | (smr 2387), Shanghai - People's Republic of China. Organizer(s): D. Cai, R. Roy, K. Showalter, H.L. Swinney. Local Organiser in Trieste: J. Niemela

School and Workshop on Computational Algebra and Number Theory | (smr 2346) Organizer(s): H. Cohen, G. Tornaria, W. Stein, F. Rodriguez Villegas; Local Organiser: R. Ramakrishnan

First Glimpses at Higgs' face Speaker(s): Jose Ramon ESPINOSA SEDANO (Univ. Autonoma de Barcelona, Spain)

The CP-violating version of the Minimal Supersymmetric Standard Model (MSSM) is an example of a model where experimental data do not preclude the presence of light Higgs bosons in the range around 10 -- 110 GeV. Such light Higgs bosons, decaying almost wholly to $b\bar b$ pairs, may be copiously produced at the LHC, but would remain inaccessible to conventional Higgs searches because of intractable QCD backgrounds. We demonstrate that a significant number of these light Higgs bosons would be boosted strongly enough for the daughter pair of $b$-jets to appear as a single 'fat' jet with substructure. Tagging such jets could extend the discovery potential at the LHC into the hitherto inaccessible region for light Higgs bosons.
Using Jet Substructure at the LHC to Search for the Light Higgs Bosons of the CP-Violating MSSM Speaker(s): Dilip K. GHOSH (IACS, Kolkata, India)

Special limits of theories with higher time derivatives in Anti-de Sitter space exhibit interesting features: they realize logarithmic representations of the conformal algebra and they possess a structure akin to gauge theory. These results can be obtained by a simple study of the scalar product of free fields. This study shows that "critical" gravity in dimension larger than 3 is non-unitary, but that the spectrum of certain other higher-derivative theories, such as the Flato-Fronsdal singleton dipole theory, contains a unitary subsector. Unitarity of a high-spin generalization of the Randall-Sundrum construction will also be discussed.
On the Unitarity of Critical Gravity, Other Higher-Derivative Theories, and High Spin Randall-Sundrum Theories Speaker(s): Massimo PORRATI (New York University, USA)

I present analytical results on the distribution of the largest eigenvalue for random matrices of the Cauchy type. Matrices belonging to this ensemble (i) have a rotationally invariant weight, and (ii) display a spectral density with support on the full real axis, which decays as a power law for $x \to \pm\infty$. The distribution exhibits a central regime governed by a scaling function (analogous to the Tracy-Widom distribution), flanked on both sides by large-deviation tails. The corresponding rate functions are computed analytically using a mapping to a Coulomb gas system with constraints. The analytical results are corroborated by numerical simulations with excellent agreement.
Joint ICTP/SISSA Statistical Physics seminar: "Top eigenvalue of Cauchy random matrices" Speaker(s): Pierpaolo VIVO (LPTMS, Univ. Paris-Sud)
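A quick numerical taste of heavy-tailed extreme-eigenvalue statistics related to the seminar above. Caveat: for simplicity the sketch diagonalizes symmetrized matrices with iid Cauchy entries, a crude stand-in rather than the rotationally invariant Cauchy ensemble treated in the talk; matrix size and sample count are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

def top_eigenvalues(n=100, n_samples=200):
    """Largest eigenvalues of (A + A^T)/2 with iid standard-Cauchy entries."""
    tops = np.empty(n_samples)
    for k in range(n_samples):
        a = rng.standard_cauchy((n, n))
        m = 0.5 * (a + a.T)               # real symmetric, heavy-tailed entries
        tops[k] = np.linalg.eigvalsh(m)[-1]
    return tops

tops = top_eigenvalues()
# Heavy tails: the sample mean is dominated by rare huge values, so the
# median is the more meaningful location summary here.
print("median:", np.median(tops), " max:", tops.max())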
A colored weak-singlet scalar state with hypercharge 4/3 is one of the possible candidates for the explanation of the unexpectedly large forward-backward asymmetry in $t\bar t$ production as measured by the CDF and D0 experiments. We investigate the role of this state in a plethora of flavor-changing neutral current processes and precision observables of down-type quarks and charged leptons. Our analysis includes tree- and loop-level mediated observables in the K and B systems and the charged lepton sector, as well as the $Z \to b\bar b$ decay width. We perform a global fit of the relevant scalar couplings. This approach can explain the $(g-2)_\mu$ anomaly, while tensions among the CP-violating observables in the quark sector, most notably the nonstandard CP phase (and width difference) in the $B_s$ system, cannot be fully relaxed. The results are interpreted in a class of grand unified models which allow for a light colored scalar with a mass below 1 TeV. We identify contributions of this leptoquark to dimension-six operators, mediated through a box diagram, and to tree-level dimension-nine operators, that would destabilize the proton if sizable leptoquark and diquark couplings were simultaneously present.
Light Color Scalars from Low Energy to Proton Decay Speaker(s): Svjetlana FAJFER (Jozef Stefan Institute, Ljubljana)

Modular forms, the Bloch group, and Nahm's conjecture Speaker(s): Don Zagier (Max Planck Institute, Bonn and College de France)

Summer School on Quantum Many-Body Physics of Ultra-Cold Atoms and Molecules | (smr 2348) Organizer(s): A. Perali, C. Sa de Melo, C. Salomon, S. Yip. Local Organiser: Mikhail Kiselev

Unbalanced Holographic Superconductors and Spintronics Speaker(s): Francesco BIGAZZI (University of Florence and INFN)

Some issues in horizon thermodynamics Speaker(s): Sanved KOLEKAR (IUCAA, India)

Workshop on Effective Gravity in Fluids and Superfluids | (smr 2355) Organizer(s): Directors: Joseph Niemela (ICTP), W.G. Unruh (Canada), Silke Weinfurtner (SISSA) Cosponsor(s): Scuola Internazionale Superiore di Studi Avanzati (SISSA) - International School for Advanced Studies

Game theory is a relatively modern branch of mathematics with many applications, for instance in economics, biology and poker. We will describe some basic models of game theory, in particular exactly solvable games that can be used as toy models of poker situations: the 'game of chicken', the 'AKQ game' (in its basic Borel and von Neumann versions, plus some non-trivial extensions), the [0,1] game, and the No Limit Holdem Jam-or-Fold heads-up game (both in cash games and in tournaments). The two seminars will not assume any previous knowledge of game theory or poker.
Special seminar: "The mathematics of poker: Game theory" (Part I) Speaker(s): Sergio BENVENUTI (Imperial College, London)

In this talk I introduce a specific model of inflation with three matrix-valued $N \times N$ scalar inflaton fields, minimally coupled to Einstein gravity: M-flation. We argue about the possible string theory origins of M-flation. After discussing effective single-field inflationary trajectories of the model, we study the spectrum and role of the other "spectator" fields in the model. In particular, we focus on the "quantum eta-problem": in any scalar-driven inflationary model, due to quantum gravity effects, there is an $R\phi^2$ term in the effective action of the inflationary model, causing the slow-roll parameter $\eta$ to be of order one. We argue that in the M-flation case there are $1/N$ suppression factors in the one-loop corrections to $\eta$, resolving the quantum eta-problem in this model.
M-flation, its naturalness and 1/N resolution to quantum eta-problem Speaker(s): Shahin SHEIKH-JABBARI (IPM, Tehran, Iran)
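To make the "quantum eta-problem" quoted above concrete, here is the standard order-of-magnitude argument (a schematic sketch with assumed conventions, not taken from the abstract). With

\[
\eta \equiv M_p^2\,\frac{V''}{V}, \qquad \delta V = c\,R\,\phi^2 \;\Longrightarrow\; \delta\eta \simeq \frac{2\,c\,R\,M_p^2}{V} = \mathcal{O}(c),
\]

since during inflation $R \sim H^2 \sim V/M_p^2$. A gravitationally induced coefficient $c = \mathcal{O}(1)$ therefore spoils slow roll ($|\eta| \ll 1$) unless it is suppressed, which is the role played by the $1/N$ factors argued for in the talk.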
Special seminar: "The mathematics of poker: Game theory" (Part II)

How a referee writes a report is interesting for two main reasons. The general one is that it is an example of how humans perform tasks, and in this it is similar to maintaining correspondence, whether done by letters, email, or messaging systems. Here debates have recently raged about whether such activity leads - perhaps intrinsically - to power-law statistics, and how that would follow from priority-queue-type models. The second, not necessarily less important reason is that it gives us insight into how science, journals, and peer review work. Here we discuss empirical data from two physics journals, and what can be distilled from the data. We present a simple "many tasks" model, which fits the data fairly well. Conclusions are presented about both issues, and in particular about refereeing as seen in the light of both the data and the models.
Joint ICTP/SISSA Statistical Physics seminar: "Deciding about manuscripts: The dynamics of refereeing" Speaker(s): Mikko J. ALAVA (Aalto University)

I will explain the process of conifold transition from Calabi-Yau threefolds to connected sums of $S^3 \times S^3$. Following the work of Y. Kawamata, G. Tian and R. Friedman, the existence of this kind of conifold transition is reduced to the existence of pairwise disjoint isolated rational curves in these Calabi-Yau threefolds. As an example, I will deal with the complete intersection Calabi-Yau threefolds in products of projective spaces.
Conifold Transitions for Some Complete Intersection Calabi-Yau Threefolds Speaker(s): Dr. Jinxing Xu (USTC - Beijing, China)

The spectrum of gamma rays from the galactic center (Fermi+HESS) as well as the spectrum of cosmic-ray electrons (Fermi+HESS) both exhibit a cutoff at TeV energies. Such a cutoff can be naturally explained by the spectrum of annihilating dark matter in both cases. We update the previous constraints in the plane of thermal cross-section vs. dark matter mass. We also study the best fits to the data in the parameter space defined by the branching ratio to a particular annihilation channel and the mass of the dark matter particle. This is more robust than the boost factor against the uncertainties introduced by the dark matter distribution and the propagation of charged particles.
Testing the composition of the gamma rays from the galactic center and the cosmic ray electrons at TeV scale in search of dark matter Speaker(s): Alexander BELIKOV (Institut d'Astrophysique de Paris, France)

We clarify the problematic aspects of the gravitational interaction in the weak-field limit of Kaluza-Klein models. We explain why some models meet the classical gravitational tests, while others do not. We show that variation of the total volume of the internal spaces generates a fifth force. This is the main source of the problem. It happens for all considered models (linear in the scalar curvature and nonlinear $f(R)$, with toroidal and spherical compactifications). We explicitly single out the contribution of the fifth force to the nonrelativistic gravitational potentials.
In the case of models with toroidal compactification, we demonstrate how tension (with and without effects of nonlinearity) of the gravitating source can fix the total volume of the internal space, resulting in a vanishing fifth force and consequently in agreement with the observations. This takes place for latent solitons, black strings and black branes. We also demonstrate a particular example where non-vanishing variations of the internal space volume do not contradict the gravitational experiments. In the case of spherical compactification, the fifth force is replaced by a Yukawa interaction for models with a stabilized internal space. For large Yukawa masses, the effect of this interaction is negligibly small, and the considered models satisfy the gravitational tests at the same level of accuracy as general relativity. However, in this model gravitating masses acquire an effective relativistic pressure in the external space. Such pressure contradicts the observations. We demonstrate that tension is the only possibility to preserve the dustlike equation of state in the external space. Therefore, tension plays a crucial role for the considered models.
Kaluza-Klein models in a weak-field limit: state of the art Speaker(s): Alexander ZHUK (Astronomical Observatory and Department of Theoretical Physics, Odessa National University, Ukraine)

African School on Fundamental Physics and its Applications | (smr 2388), Kumasi - Ghana. Organizer(s): B. Acharya, K. Assamagan, C. Darve, J. Ellis, S. Muanza

Workshop on Quantum Simulations with Ultracold Atoms | (smr 2350) Organizer(s): I. Bloch, M. Inguscio, M. Lewenstein, A. Trombettoni, G. Mussardo (Local Organiser)

Summer School on Cosmology | (smr 2354) Organizer(s): S. Borgani (INAF - OATS), P. Creminelli (ICTP), A. Paranjape (ICTP), E. Sefusatti (ICTP), U. Seljak (UC & LBNL Berkeley & Zürich U.), R. Sheth (ICTP & UPenn) Collaborations: the Italian Institute for Nuclear Physics (INFN)

An electron glass (EG) is a disordered film with localized electronic states and hopping conduction. It exhibits long-time relaxation (hours and days), which usually manifests itself in a slow change of the hopping conduction. Moreover, such systems may have a memory: the conductivity of the film as a function of electron density memorizes the density at which it was cooled. Disordered films of InxO [1] and granular films of gold [2] or aluminum [3] are examples of EGs. Important low-energy excitations in such a system can be classified as single-electron excitations, which feature the so-called Coulomb gap, and pair excitations, which are random dipoles [4]. I argue that the interaction of these dipole excitations leads to a "polarization catastrophe". As a result, the dipoles form a dipole glass [5]. I propose that this is at the basis of the glassy properties of the system. The width of the "memory dip" is calculated.
[1] M. Ben-Chorin, D. Kowal, and Z. Ovadyahu, Phys. Rev. B 44, 342 (1991). [2] C.J. Adkins et al., J. Phys. C 17, 4693 (1984). [3] T. Grenet, Eur. Phys. J. B 32, 275 (2003). [4] A.L. Efros and B.I. Shklovskii, J. Phys. C 8, L49 (1975). [5] B.E. Vugmeister and M.D. Glinchuk, Rev. Mod. Phys. 62, 993 (1990).
Seminar on Disorder and strong electron correlations: "Electron glass" Speaker(s): Alexei L. EFROS (University of Utah, Salt Lake City)
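As a toy illustration of the Coulomb gap of reference [4] above, the sketch below relaxes a classical Efros-Shklovskii-type model (randomly placed sites, on-site disorder, half filling) to single-particle stability and histograms the resulting energies. System size, disorder strength and the greedy relaxation scheme are illustrative assumptions, not taken from the talk.

import numpy as np

rng = np.random.default_rng(1)
N = 200                                  # sites, half filled
pos = rng.random((N, 2))                 # random positions in the unit square
phi = rng.uniform(-1.0, 1.0, N)          # on-site disorder
occ = np.zeros(N, dtype=bool)
occ[: N // 2] = True
rng.shuffle(occ)

d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
inv_r = 1.0 / d                          # unscreened Coulomb kernel

def energies(occ):
    # single-particle energies with a neutralizing background charge 1/2
    return phi + inv_r @ (occ - 0.5)

# Greedy one-electron hops until Efros-Shklovskii stability holds:
# e[j] - e[i] - 1/r_ij >= 0 for every occupied i and empty j.
while True:
    e = energies(occ)
    o, v = np.flatnonzero(occ), np.flatnonzero(~occ)
    gain = e[v][None, :] - e[o][:, None] - inv_r[np.ix_(o, v)]
    i, j = np.unravel_index(np.argmin(gain), gain.shape)
    if gain[i, j] >= -1e-12:
        break                            # pseudo-ground state reached
    occ[o[i]], occ[v[j]] = False, True

# The density of states should be depleted around zero: the Coulomb gap.
hist, _ = np.histogram(energies(occ), bins=21, range=(-3, 3))
print(hist)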
We study a chain of identical thermodynamic systems in a constrained equilibrium where each bond of the chain is forced to remain at a preassigned distance from the previous one. We apply this description to glassy systems in the limit of a long chain where each bond is close to the previous one. We show that under specific conditions this pseudo-dynamic process can formally describe real relaxational dynamics in the long-time limit. In particular, in mean-field spin glass models we can recover in this way the equations of Langevin dynamics in the long-time limit at the dynamical transition temperature and below. We interpret this formal identity as evidence that in these situations the configuration space is explored in a quasi-equilibrium fashion. Our general formalism, which relates dynamics to equilibrium, puts slow dynamics in a new perspective and opens the way to the computation of new dynamical quantities in glassy systems.
Informal seminar on Statistical Physics: "Quasi-equilibrium in glassy dynamics: An algebraic view" Speaker(s): Silvio FRANZ (Université XI - Paris Sud)

Flavor symmetries of N=2 SYM and the 2D/4D correspondence Speaker(s): Gaston GIRIBET (University of Buenos Aires, Argentina & ICTP)

Probing Majorana Neutrinos in the Rare Decays of Mesons: Part I Speaker(s): C.S. KIM (Yonsei University, Korea)

We study a simple model of search where the searcher undergoes normal diffusion, but once in a while resets to its initial starting point stochastically with rate $r$. The effect of a finite resetting rate $r$ turns out to be rather drastic. It leads to a finite mean search time which, as a function of $r$, has a minimum at an optimal resetting rate $r_c$. This makes the search process efficient. We then generalize this model to study multiple searchers. Resetting fundamentally alters the late-time decay of the survival probability of a stationary target when there are multiple searchers: while the typical survival probability decays exponentially with time, the average decays as a power law with an exponent depending continuously on the density of searchers. We also consider various generalisations of this simple model. (A minimal simulation sketch of the single-searcher model appears after this group of listings.) Refs: M.R. Evans and S.N. Majumdar, "Diffusion with Stochastic Resetting", Phys. Rev. Lett. 106, 160601 (2011); M.R. Evans and S.N. Majumdar, "Diffusion with Optimal Resetting", J. Phys. A: Math. Theor. 44, 435001 (2011).
Informal seminar on Statistical physics: "Diffusion with stochastic resetting" Speaker(s): Satya N. MAJUMDAR (Université XI - Paris Sud)

A Model for Vector Dark Matter Speaker(s): Yasaman FARZAN (IPM, Tehran, Iran)
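As flagged in the Majumdar abstract above, a minimal Monte Carlo sketch of the single-searcher model: a 1D Brownian walker starts at the origin, is reset there with rate r, and searches for a target at distance L. The numerical values are illustrative assumptions; for this setup the exact mean time from the cited papers, T(r) = (exp(L*sqrt(r/D)) - 1)/r, can serve as a cross-check.

import numpy as np

rng = np.random.default_rng(2)

def mean_search_time(r, L=1.0, D=0.5, dt=1e-3, n_walkers=500):
    """Estimate the mean first-passage time to x >= L under resetting rate r."""
    step = np.sqrt(2.0 * D * dt)
    times = np.empty(n_walkers)
    for k in range(n_walkers):
        x = t = 0.0
        while x < L:
            if rng.random() < r * dt:    # stochastic reset to the origin
                x = 0.0
            x += step * rng.normal()     # diffusive increment
            t += dt
        times[k] = t
    return times.mean()

# The mean time is finite for any r > 0 and minimal near an optimal rate r_c.
for r in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(r, round(mean_search_time(r), 2))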
X-ray absorption spectroscopy (XAS) is a powerful tool to investigate matter at the molecular scale, but it requires a good theory of the excitation process as a prerequisite for a consistent experimental interpretation. New XAS calculations, based on the GW approximation for the self-energy of the excited quasi-electron in the presence of a static core hole, are in good agreement with experiment in liquid water, hexagonal (Ih) ice, high-pressure ice VIII, and the two amorphous ice forms LDA (low-density amorphous ice) and HDA (high-density amorphous ice). Good models of disorder are essential to achieve agreement with experiment, and disorder associated with the quantal dynamics of the nuclei plays an important role. Analysis of the spectral features yields valuable information on the unifying aspects of short- and intermediate-range order in these systems.
Joint ICTP/SISSA Colloquium on Condensed Matter physics: "X-ray absorption spectra of water and ice" Speaker(s): Roberto CAR (Princeton University)

An Alternative String Cosmology: avoiding Bizarreness Speaker(s): Louis John CLAVELLI (University of Alabama and TUFTS, USA)

The objective of the Workshop is to impart knowledge and develop the skills needed to design and build small- to large-scale distributed measurement and monitoring environments based on freely available Open Source technologies. The Workshop aims to teach how to reduce development time by using Open Source frameworks that allow quick and easy customisation to meet individual requirements. The recent introduction of smart phones and tablets has opened up possibilities hitherto not available to people in general, and scientists in particular, in developing countries. Among the smart phones and tablets on the market today, Android is an open-source platform, and this Workshop aims to exploit their capabilities to implement instrument and measurement systems for physics and other experiments. It is also the objective of the organizers to create a community of experimental scientists in developing countries with the necessary skills and willingness to contribute to a much wider-scale and long-term data collection/management experiment, with ICTP as the main focus, that would help to build a repository of data in a number of areas of immediate interest to scientists around the world. DEADLINE OPEN until 15 July 2012 ONLY for Malaysian applicants.
Kampar - Malaysia. First Regional Workshop on Distributed Embedded Systems with Emphasis on Open Source | (smr 2434) Organizer(s): A. Induruwa (UK), C. Kavka (Italy), U. Raich (Switzerland), M.L. Crespo, A. Cicuttin (ICTP), S.W. Lee, C.S. Ang (Local Organizers in Malaysia)

Targeted Training Activity: El Niño Southern Oscillation Monsoon in the Current and Future Climate | (smr 2356) Organizer(s): J. Shukla, E. Sarachik, J.M. Wallace, F. Kucharski (and Local Organiser) Cosponsor(s) and partner organizations: Indian Institute for Tropical Meteorology (IITM), Ministry of Earth Sciences (MoES) - Pune, India; International Union of Geodesy and Geophysics (IUGG) - Denmark; National Natural Science Foundation of China (NSFC); Center for Ocean-Land-Atmosphere Interactions (COLA) - USA

Workshop on Large Scale Structure | (smr 2419)

Tree/loop Superstring amplitudes and geometry - strategy for deriving AdS/CFT Speaker(s): Inyong PARK (Philander Smith College, Little Rock, USA)

6 Aug 2012 - 17 Aug 2012 Joint ICTP-IAEA Workshop on Nuclear Structure Decay Data: Theory and Evaluation | (smr 2358) Organizer(s): D.H. Abriola, J.K. Tuli. Local Organizer: C. Tuniz

Innovations in Strongly Correlated Electronic Systems: School and Workshop | (smr 2357) Room: LB (Main Lecture Hall) Organizer(s): P. Coleman, A. Chubukov, A. Schofield, Hai-Hu Wen, H. Takagi. Local Organiser: E. Tosatti
Cosponsor(s): ICAM-I2CAM, INTEL, BIOMAT (funded by ESF), University of Tokyo/RIKEN and Department of Physics, Nanjing University. Deadline date was originally 15 April 2012, NOW extended to 22 April 2012.

In Lie symmetry analysis, linearization is the conversion of a (system of) differential equation(s) to linear form, provided there exists a transformation of the independent and dependent variables that can do so. Lie provided criteria for determining if and when second-order scalar ODEs can be so transformed. Considerably later, the methods were extended to special classes of third-order scalar ODEs. Only recently has there been significant progress in this direction: it included the general third- and fourth-order scalar ODEs, general results about classes of linearizable nth-order scalar ODEs, and general results about classes of linearizable second-order systems of ODEs. Many more advances have been made recently by the use of geometry and complex analysis for this purpose. Further, linearization has been used to provide solutions without actually linearizing the system of equations. In this talk these developments will be reviewed.
Linearization of ODEs: Geometric, Complex and Conditional Speaker(s): Professor Asghar Qadir (NUST, Islamabad, Pakistan)

One-loop Kähler metric of D-branes at angles Speaker(s): Jin-U KANG (Kim Il Sung University, Democratic People's Republic of Korea)

Joint ICTP-IAEA Workshop on Physics of Radiation Effect and its Simulation for Non-Metallic Condensed Matter | (smr 2359) Organizer(s): IAEA: Aliz Simon and Andrej Zeman; Local Organiser: Sandro Scandolo Laboratories: AGH Infolab week 1, Eklund Lab week 2

Joint ICTP-KFAS Workshop on the Cooperative Experience for Integrating Land and Water Resources Management in Latin America | (smr 2389), Fortaleza - Brazil. Organizer(s): Humberto A. Barbosa (LAPIS/UFAL, Brazil), Adrian M. Tompkins (ICTP, Italy), Antonio G. Ferreira (FUNCEME, Brazil) Secretary: S. Henningsen Cosponsor(s): KFAS - Kuwait Foundation for the Advancement of Sciences, IUGG - International Union of Geodesy and Geophysics

We study [1, 2] electronic transport through a Luttinger liquid with an embedded weak scatterer (WS) or weak link (WL) in the presence of electron coupling to one-dimensional massless bosons (phonons). We have found that for an arbitrary fixed strength of boson scattering by the impurity, the scaling dimensions of the conductance in the WS and WL cases obey the duality condition $\Delta_{\rm WS}\,\Delta_{\rm WL} = 1$, established for the standard Luttinger liquid [3] (i.e., in the absence of the additional coupling). On the other hand, as the strength of both fermion and boson scattering by the impurity changes from the WS to the WL limit, the system has a rich phase diagram with a metal-insulator transition as a function of the bare backscattering amplitude. The transition exists in a certain interval defined by the relative strength of the electron-electron and electron-boson coupling and by the ratio of the Fermi and sound velocities. [1] A. Galda, I. V. Yurkevich, and I. V. Lerner, Phys. Rev. B 83, 041106 (2011); EPL 93, 17009 (2011). [2] I. V. Yurkevich, A. Galda, O. M. Yevtushenko, and I. V. Lerner, to be published (2012). [3] C. L. Kane and M. P. A. Fisher, Phys. Rev. Lett. 68, 1220 (1992); Phys. Rev. B 46, 15233-15262 (1992).
Seminar on Disorder and strong electron correlations: "Duality of weak and strong scatterer in Luttinger liquid coupled to massless bosons" Speaker(s): Igor V. LERNER (University of Birmingham)
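For context on the duality condition quoted above: in the standard Luttinger liquid with interaction parameter $K$, the Kane-Fisher scaling dimensions of the two perturbations are (standard results of reference [3], not claims of this abstract)

\[
\Delta_{\rm WS} = K, \qquad \Delta_{\rm WL} = \frac{1}{K}, \qquad \Delta_{\rm WS}\,\Delta_{\rm WL} = 1,
\]

so weak backscattering is relevant (the wire insulates) for $K < 1$, while weak tunneling across a link is relevant (the wire conducts) for $K > 1$. The talk's claim is that this product rule survives the additional coupling to massless bosons.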
Workshop on Majorana Fermions, Non-Abelian Statistics and Topological Quantum Information Processing | (smr 2360) Organizer(s): Y. Oreg, G. Rafael, A. Stern, F. von Oppen. Local Organiser: Mikhail Kiselev

School on Large Scale Problems in Machine Learning and Workshop on Common Concepts in Machine Learning and Statistical Physics | (smr 2361) Organizer(s): H.J. Kappen, M. Opper, R. Zecchina. Local Organiser: M. Marsili Cosponsor(s): SNN Adaptive Intelligence, Radboud University of Nijmegen, The Netherlands

Workshop on Complex Quantum Systems: Non-Ergodicity, Glassiness and Localization | (smr 2362) Organizer(s): C. Chamon (Boston), R. Moessner (MPI Dresden), M. Mueller (ICTP), A. Scardicchio (ICTP), F. Zamponi (ENS Paris)

Joint ICTP-TWAS Caribbean School on Electronic Structure Fundamentals and Methodologies (an Ab-initio Perspective) | (smr 2390), 27 Aug 2012 - 21 Sep 2012, Cartagena - Colombia. NEW APPLICATION DEADLINE: 13 MAY 2012. Organizer(s): S. Scandolo, J.A. Montoya, C. Beltran

ICTP-UNDP Wireless Training Workshop for Africa Adaptation Programme | (smr 2524) Organizer(s): M. Zennaro, E. Pietrosemoli Laboratories: Marconi Laboratory GGH Secretary: S. Tanaskovic

Joint ICTP-IAEA School of Nuclear Knowledge Management | (smr 2363) Organizer(s): IAEA: M. Sbaffoni; Local Organiser: C. Tuniz

ICTP-IPM Workshop and Conference in Combinatorics and Graph Theory | (smr 2364) Organizer(s): R.A. Brualdi and G.B. Khosrovshahi. Local Organiser: S. Luzzatto. Associate Director: Saieed Akbari

Constraints on Calabi-Yaus with large volume vacua Speaker(s): Joan SIMON (University of Edinburgh, UK)

Children's Voices: Exploring Interethnic Violence and Children's Rights in the School Environment | (smr H319) Room: Adriatico Guest House Lundqvist Lecture Hall

Probing Majorana Neutrinos in the Rare Decays of Mesons: Part II

AGT correspondence and bases in 2D CFT Speaker(s): Alexander BELAVIN (Landau Institute, Moscow)

College on Medical Physics | (smr 2365) Organizer(s): Directors: Slavik Tabakov, Anna Benini, Franco Milano, George D. Frey, Perry Sprawls, Luciano Bertocchi (ICTP)

This is the continuation of the Applied Physics Seminars. You will be especially welcome if you are visiting the Centre, or another Trieste institute, as Associate Members or TRIL Fellows, or if you are from Affiliated Institutes in the various areas of Applied Physics. The summary of the seminar is available at: http://www.ictp.it/~chelaf/ss302.html
Inhibitors of Key Enzymes Linked with Some Degenerative Diseases Speaker(s): Ganiyu Oboh (Biochemistry Department, Federal University of Technology, Akure, Nigeria)

I discuss a model of a bouncing universe from string theory. In our model, the bounce arises as a consequence of the unstable dynamics of a non-BPS brane.
Bouncing Universe from String Theory Speaker(s): Jin U KANG (Kim Il Sung University, Pyongyang, Democratic People's Republic of Korea)

A String Theory Explanation for Quantum Chaos in the Hadronic Spectrum Speaker(s): Leopoldo A. PANDO ZAYAS (University of Michigan, USA)
Behind Neutrino Mass - Workshop on theoretical aspects of the neutrino mass and mixing | (smr 2366) Room: Adriatico Guest House - Kastler Lecture Hall Area (Lower Level 1) Organizer(s): A. Smirnov, R. Mohapatra, E. Ma, F. Feruglio Collaborations: INFN (Italian Institute for Nuclear Physics)

In this talk I will report on recent results in the study of ferromagnetic long-range interactions for the Ising model. After recalling the classical results for this class of interactions, I will present results from a numerical study in two dimensions in the intermediate regime, and the connection with the short-range regime. Next I will discuss the relevance of the long-range interaction versus the short-range interaction. Finally, I will present a proposal for computing the influence of the long-range interactions as a dimensional change for a short-range model, and compare with the results of the numerical simulations.
SISSA, Santorio Building, Room 128 (1st Floor). Joint ICTP/SISSA Statistical physics seminar: "Long range interactions for the critical Ising model" Speaker(s): Marco PICCO (LPTHE, Paris)

This will be an informal and basic introduction to the Magma Computational Algebra System. Magma is a large, well-supported software package which provides a mathematically rigorous environment for defining and working with structures such as groups, rings, fields, modules, algebras, schemes, curves, graphs, designs, codes and many others. Magma also supports a number of databases designed to aid computational research in those areas of mathematics which are algebraic in nature.
Introduction to MAGMA Speaker(s): Dr. Mark Watkins (University of Sydney, Australia)

Workshop on Physical Virology | (smr 2367) Organizer(s): F. Livolant, V. Lorman, C. Micheletti, R. Podgornik. Local Organiser: M. Marsili

Motivated by recent experimental efforts to realize quantum phases of matter with cold atomic Rydberg gases, I will discuss the phase diagram and excitation spectrum of supersolids in a two-dimensional bosonic system with soft-core interactions. Previous work [1] showed exceptional agreement between Monte Carlo simulations and a numerical mean-field approach. I will present a variational analysis of the Gross-Pitaevskii equation with a non-local interaction term and show that we can quantitatively reproduce the superfluid-supersolid transition at finite interaction strength, in agreement with Monte Carlo results. We then test the validity of this mean-field analysis by perturbing the ground-state wave function and looking at the spectrum of the corresponding Bogoliubov equations. This approach provides intuitive physical insight into the low-energy dynamics of the system and is validated through comparison of our findings with recent Monte Carlo simulations [2]. References: [1] N. Henkel, F. Cinti, P. Jain, G. Pupillo, T. Pohl, Phys. Rev. Lett. 108, 265301 (2012); [2] S. Saccani, S. Moroni, and M. Boninsegni, Phys. Rev. Lett. 108, 175301 (2012).
SISSA, Santorio Building, Room 128 (1st Floor). Joint ICTP/SISSA Statistical Physics seminar: "Quantum phases of soft core bosons with Rydberg gases" Speaker(s): Tommaso MACRI (Max Planck Institute for the Physics of Complex Systems, Dresden)

19th Meeting of Senior Fellowships Officers of the UN System | (smr H311) Organizer(s): Prof. D. Treleani
We will first present a pedagogical introduction to entanglement entropy. Then we will define long-range harmonic oscillators as the building block of long-range interacting bosonic systems. Using Hamiltonian techniques, we will calculate the von Neumann and Rényi entanglement entropies for this system. Different kinds of boundary conditions, as well as massive harmonic oscillators, will be discussed. (A toy numerical sketch of the correlation-matrix method for harmonic chains appears after this group of listings.) Reference: M. Ghasemi Nezhadhaghighi, M. A. Rajabpour, arXiv:1209.1883.
Joint ICTP/SISSA Statistical Physics seminar: "Entanglement entropy in long-range harmonic oscillators" Speaker(s): M.A. RAJABPOUR (SISSA)

Noncommutative versions of conformal structures Speaker(s): Dr. William J. Ugalde (Universidad de Costa Rica)

We explore the combined physics potential of T2K and NOvA in light of the moderately large measured value of $\theta_{13}$. For $\sin^2 2\theta_{13} = 0.1$, which is close to the best-fit value, 90% C.L. evidence for the hierarchy can be obtained only for the combinations (NH, $-170^\circ \leq \delta_{CP} \leq 0^\circ$) and (IH, $0^\circ \leq \delta_{CP} \leq 170^\circ$) with the currently planned runs of NOvA and T2K. However, the hierarchy can essentially be determined for any value of $\delta_{CP}$ if the statistics of NOvA are increased by 50% and those of T2K are doubled. Such an increase will also give an allowed region of $\delta_{CP}$ around its true value, except for the CP-conserving cases $\delta_{CP} = 0$ or $\pm 180^\circ$. The NOvA experiment has reoptimized its event selection criteria after the recent measurements of $\theta_{13}$ with reactor neutrinos. We study the improvement in the sensitivity to the neutrino mass hierarchy and to leptonic CP violation due to these new features. For favourable values of $\delta_{CP}$, the NOvA sensitivity to the mass hierarchy and to leptonic CP violation is increased by 20%. The addition of 5 years of neutrino data from T2K to NOvA more than doubles the range of $\delta_{CP}$ for which leptonic CP violation can be discovered, compared to stand-alone NOvA. But for unfavourable values of $\delta_{CP}$, the combination of NOvA and T2K is not enough to provide even a 90% C.L. hint of hierarchy discovery. Therefore, we further explore the improvement in the hierarchy and CP-violation sensitivities due to the addition of a 10 kt liquid argon detector placed close to the NOvA site.
Determination of Mass hierarchy and Leptonic CP violation with upgraded NOvA and T2K in light of large theta13 Speaker(s): Suprabh PRAKASH (Indian Institute of Technology Bombay, India)

Many scenarios have been proposed to explain the 511 keV line from the center of the galaxy in terms of decaying/annihilating dark matter with a lifetime larger than the age of the Universe. I will discuss an alternative scenario where a subdominant fraction of dark matter decays to positrons early on. The positrons cool down by scattering off the cosmic microwave background and eventually annihilate when they fall into Galactic potential wells. The resulting 511 keV flux not only places constraints on this class of models but might even be consistent with that observed by the INTEGRAL satellite.
Cold positrons from decaying dark matter Speaker(s): Lotfi BOUBEKEUR (University of Valencia, Spain)

1 Oct 2012 - 12 Oct 2012 Joint ICTP-IAEA College on Plasma Physics | (smr 2369) Organizer(s): Directors: S. Mahajan, Z. Yoshida, R. Kamendje (IAEA), D. Gomez. Local Organiser: J. Niemela
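As flagged in the Rajabpour abstract above, a toy sketch of the standard correlation-matrix method for Gaussian (harmonic) ground states. For brevity it uses a nearest-neighbour chain rather than the genuinely long-range couplings of the seminar; system size, block size and the mass are illustrative assumptions.

import numpy as np

def block_entropy(N=60, block=20, mass=0.1):
    """Von Neumann entropy of a block in the ground state of a harmonic chain,
    H = (1/2) sum p_i^2 + (1/2) x^T K x, via ground-state correlators."""
    K = np.diag((mass**2 + 2.0) * np.ones(N)) \
        + np.diag(-np.ones(N - 1), 1) + np.diag(-np.ones(N - 1), -1)
    w, U = np.linalg.eigh(K)
    X = 0.5 * (U / np.sqrt(w)) @ U.T       # <x_i x_j> = (1/2) K^(-1/2)
    P = 0.5 * (U * np.sqrt(w)) @ U.T       # <p_i p_j> = (1/2) K^(+1/2)
    a = slice(0, block)
    nu = np.sqrt(np.linalg.eigvals(X[a, a] @ P[a, a]).real)  # symplectic spectrum
    nu = nu[nu > 0.5 + 1e-12]              # nu = 1/2 modes carry no entropy
    return float(np.sum((nu + 0.5) * np.log(nu + 0.5)
                        - (nu - 0.5) * np.log(nu - 0.5)))

print(block_entropy())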
Joint ICTP-IAEA Training in Radiation Protection for Patients | (smr 2368) Organizer(s): IAEA: M. Rehani; ICTP Local Organiser: L. Bertocchi

Joint ICTP-IAEA Workshop on Sustainable Energy Development: Pathways and Strategies after Rio+20 | (smr 2372) Organizer(s): IAEA: F.L. Toth. ICTP Local Organiser: R. Gebauer

IAEA TC Regional Europe Project Workshop on Strengthening Nuclear Safety Assessment Capabilities | (smr H296) Organizer(s): M. Mellinger (IAEA)

ICTP-IAEA-BATAN Workshop on Wireless Sensor Networks for Radiation Monitoring | (smr 2521), Jakarta - Indonesia. Organizer(s): H. Haditjahyono (BATAN), M. Zennaro (ICTP)

The Physics of Paleo-dentistry: Discovery of a tooth filling during the Stone Age Speaker(s): Federico Bernardini (Multidisciplinary Laboratory, ICTP Applied Physics Section, Trieste)

Gaseous at room temperature or low pressures and solid at low temperatures or high pressures, cryocrystals (rare-gas solids, solid hydrogens, oxygen, nitrogen, etc.) are one of the richest areas of contemporary physics research. As the simplest materials resembling solid-state models, they are the best testing ground for studying the physics of condensed matter. A short review will be given of new results and open problems in the physics of cryocrystals that were under discussion at the recent conference Cryocrystals 2012 (Odessa, Ukraine, 2-8 September 2012).
Luigi Stasi Seminar Room, ICTP Leonardo Building (1st floor). Joint ICTP/SISSA Colloquium on Condensed Matter Physics: "Actual problems of physics of cryocrystals" Speaker(s): Yuri A. FREIMAN (Verkin Institute for Low Temperature Physics & Engineering, Kharkov)

ALGEBRAIC GEOMETRY SEMINAR - Geometry of character varieties Speaker(s): Professor Fernando Rodriguez Villegas (ICTP)

School and Training Course on Dense Magnetized Plasma as a Source of Ionizing Radiations, their Diagnostics and Applications | (smr 2370) Organizer(s): Directors: V. Gribkov, R. Miklaszewski, C. Tuniz. Local Organiser: M.L. Crespo

The goal of this meeting is to discuss and exchange experiences on the use and future development of the automated recording system openEyA of the ICTP-SDU. This is also a unique opportunity to meet other participants of the DxD.tv initiative, to fine-tune the project and the published rich-media outputs. The "Didactic for Development" initiative aims at producing free educational video lectures delivered in Spanish and Portuguese, to attain "Quality Education for All" and preserve "Cultural Diversity" in local languages by supporting the production of more educational content from around the continent(s). The relevance of openEyA and of attendance certification for massive open online courses (MOOC) is also part of the working agenda.
Trieste - Italy. First openEyA Users Meeting - Didactica para el Desarrollo (DxD) | (smr 2528) Organizer(s): E. Canessa, SDU Team

Advanced Workshop on Energy Transport in Low-Dimensional Systems: Achievements and Mysteries | (smr 2371) Organizer(s): A. Dhar, M.N. Kiselev, Yuriy A. Kosevich and R. Livi. Local Organiser: M. Marsili
First CLIM-RUN Workshop on Climate Services | (smr 2429) Organizer(s): Directors: Paolo Ruti (ENEA, Italy), Filippo Giorgi (ICTP, Italy) Laboratories: AGH Eklund Informatics Lab

The measurement of the Baryon Acoustic Oscillation (BAO) feature imprinted in the clustering distribution of galaxies is becoming a mainstream method for large galaxy surveys to determine the relation between distance and redshift. However, obtaining precise determinations of the distance-redshift relation requires a low uncertainty in the position of the BAO, which is usually achieved by extending the survey to probe a larger volume. In this talk I will describe how a simple method dubbed "reconstruction" is able to reduce the uncertainties in these measurements by partially undoing the effects of the non-linear evolution responsible for washing away the BAO. The reduction in the error bar therefore comes for free, without the need for further observations, since the method uses information from the density field that is not captured by two-point clustering statistics. I will describe how reconstruction has been successfully applied for the first time this year to the LRG and CMASS galaxy samples from the Sloan Digital Sky Survey, and how cosmological constraints benefit from these high-precision BAO measurements.
Measuring BAO in the reconstructed field of SDSS galaxies Speaker(s): Antonio CUESTA VAZQUEZ (Yale University, New Haven, USA)

Workshop on the Physics of Star Formation and its Role in Galaxy Evolution | (smr H312) Organizer(s): INAF; Local Organizer: R. Sheth

In this talk I will review the current status of pulsar science. Pulsars are rapidly spinning, highly magnetized neutron stars that can emit radiation in different parts of the electromagnetic spectrum, such as radio, optical, X-rays and gamma-rays. These stars are unique laboratories in which very dense forms of matter and extremely curved space-time can be studied. They are also excellent tools to measure time with high precision and to probe the interstellar and potentially the intergalactic medium. I will review some of these aspects and the exciting results that are coming from the pulsar searches made at different facilities. I will focus particularly on how, through these searches, we can detect new astrophysical phenomena such as single millisecond radio bursts that may be associated with the coalescence of two compact objects.
Pulsars: wonderful objects to probe different aspects of our Universe Speaker(s): Eduardo RUBIO (Instituto de Astronomia, UNAM, Mexico)

Electron glasses are disordered systems of localized electrons with long-range Coulomb interaction, as found at low temperature in doped semiconductors, amorphous semiconductors, and granular metals. Experiments on several of these systems demonstrate a very slowly relaxing conductivity and striking non-equilibrium effects that suggest glassy physics of electronic origin. I will review our recent studies of the electron glass problem, mostly based on large-scale numerical computations, addressing the existence of a thermodynamic glass phase, the shape of the Coulomb gap in the single-particle density of states (which affects the variable-range hopping conductivity), and the statistics of the charge avalanches induced by a small perturbation of the system. Throughout the talk I will emphasize analogies and differences between electron glasses and spin glasses.
Seminar on Disorder and strong electron correlations: "Glassy physics, excitations, and avalanches in electron glasses" Speaker(s): Matteo PALASSINI (Universitat de Barcelona & ICTP)
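For reference, the Coulomb gap discussed in this and the earlier electron-glass seminar has the classic Efros-Shklovskii form (a standard result, not taken from the abstracts): in $d$ dimensions the single-particle density of states near the chemical potential $\mu$ is suppressed as

\[
g(\varepsilon) \sim \frac{\kappa^d}{e^{2d}}\,\left|\varepsilon - \mu\right|^{\,d-1}, \qquad d = 2, 3,
\]

with $\kappa$ the dielectric constant; this soft gap turns Mott variable-range hopping into the Efros-Shklovskii law $\sigma \sim \exp[-(T_{ES}/T)^{1/2}]$.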
Quantum Entropy of Extremal Black Holes Speaker(s): Rajesh Kumar GUPTA (ICTP)

The new free-electron laser (FEL) source FERMI@Elettra, seeded at 260 nanometers with an external laser, has produced the first coherent emission from the FEL-1 undulator chain tuned at wavelengths of 65 nanometers (fourth harmonic of the seed laser) and 43 nanometers (sixth harmonic). This marks the first successful operation of FERMI@Elettra in its planned configuration, i.e., as a next-generation seeded free-electron laser source. Over the coming months the commissioning team will continue to improve the overall performance of the system, and the light will be provided to the first experimental programs. In the meantime the construction of FEL-2 will be completed and its commissioning started. Here I will present the scientific program for the first experiments, along with the strategic road map for the commissioning of the entire facility and future upgrades.
Joint ICTP/SISSA/Dipartimento di Fisica Colloquium on Condensed Matter physics: "First coherent radiation from the FERMI@Elettra free electron laser: Status report and plan for the first experiments" Speaker(s): Fulvio PARMIGIANI (University of Trieste & FERMI@Elettra Laboratory - Sincrotrone Trieste)

ERCOFTAC Autumn Festival | (smr H317)

REFINE Meeting & Industry Day 2012 | (smr H318)

Modular Invariance and Elliptic Genera Speaker(s): Qingtao Chen (ICTP, Trieste, Italy)

A systems biology approach to thermo-dynamical optimization in E. coli Speaker(s): Moises Santillan (Centro de Investigación y Estudios Avanzados del IPN, Unidad Monterrey, Mexico)

29 Oct 2012 - 3 Nov 2012 Workshop on Geophysical Data Analysis and Assimilation | (smr 2373) Organizer(s): A. Ismail-Zadeh, B. Bukchin, G. Panza. Local Organiser: J. Niemela

Connections between Dahmen-Micchelli spaces, splines and index theory (I) Speaker(s): Professor Claudio Procesi (Università di Roma 'La Sapienza')

Neutrino anarchy in warped space Speaker(s): Mariano QUIROS (Universidad Autonoma de Barcelona, Spain)

Connections between Dahmen-Micchelli spaces, splines and index theory (II) Speaker(s): Professor Claudio Procesi (Università di Roma 'La Sapienza')

I present a functional renormalization group investigation of a Euclidean three-dimensional matrix Yukawa model with U(N) symmetry, which describes N = 2 Weyl fermions that effectively interact via a short-range repulsive interaction. This system is related to an effective low-energy theory of spinless electrons on the honeycomb lattice and can be seen as a simple model for suspended graphene. A continuous phase transition is found, characterized by large anomalous dimensions both for the fermions and for the composite degrees of freedom. The critical exponents define a new universality class distinct from the Gross-Neveu-type models typically considered in this context.
Special seminar: "Effect of short-range interactions on the quantum critical behavior of spinless fermions on the honeycomb lattice" Speaker(s): David MESTERHAZY (Universitaet Heidelberg)
The purpose of this school is to provide a unique international educational experience aimed at building future leadership in managing nuclear energy programmes, drawing on promising young professionals from developing countries, particularly newcomer countries that seek to develop nuclear power or other nuclear applications, who show promise as future leaders of the nuclear industry, academia and public-sector entities in their country. It will enable the transfer of IAEA-specific knowledge to Member States in support of their capacity-building efforts. The prospect of steady world-wide growth in the use of nuclear technology – for the generation of electricity and other energy services and in a diversity of nuclear applications in medicine, agriculture, and industry – points to the need for a greatly expanded global cadre of nuclear professionals. Highly competent management is vital to success at all stages of the development or re-launching of a nuclear programme. This school will focus mainly on training young professionals from developing countries with managerial potential on aspects of the industry that will ensure a broad understanding of the current issues that need to be tackled in their various countries. During this training programme, the participants will gain awareness of the most recent developments in nuclear energy and of the IAEA's specific knowledge and broad international perspective on issues related to the peaceful use of nuclear technology. The School will consist of a series of keynote presentations by leading IAEA specialists on topics relevant to managing nuclear energy programmes, followed by practical sessions discussing the issues raised and the difficulties envisaged. All chosen participants will be expected to be actively involved in discussions, assigned projects, panel reviews and all school activities. At the end of the School all participants are expected to pass a final test. The following topics will be covered:
• World Energy Balance, Geopolitics and Climate Issues;
• Energy Planning, Energy Economics and Nuclear Power Economics and Finance;
• National Nuclear Industry from Planning to Decommissioning;
• Nuclear Power – Technology, Nuclear Fuel Cycle and Waste Management;
• Nuclear Safety and Security;
• Nuclear Law, International Conventions and Relevant Mechanisms;
• Nuclear Non-Proliferation and Safeguards;
• Human Resource Development and Knowledge Management;
• Leadership and Management in Nuclear Industry;
• Nuclear Sociology and Public Communication.
Trieste - Italy. Joint ICTP-IAEA School of Nuclear Energy Management | (smr 2374) Organizer(s): IAEA: A. Bychkov, Y. Yanev and T. Karseka. ICTP: F. Quevedo; Local Organiser: C. Tuniz

I will present a way to calculate the Particle Entanglement Spectrum (PES) and the Real Space Entanglement Spectrum (RSES) of multi-particle systems from their real-space wave functions. Our technique allows numerical calculations for much larger systems than were previously feasible. I will illustrate the method with results on the RSES and PES for the Laughlin and Jain composite-fermion states and for the Moore-Read state at filling $\nu = 5/2$, for system sizes of up to 70 particles.
SISSA, Santorio Building, Room 128 (1st Floor). Joint ICTP/SISSA Statistical Physics seminar: "Real space entanglement spectra for large systems" Speaker(s): I. RODRIGUEZ (SISSA)

Joint ICTP/SISSA String Seminars: SCAPEZILLA: the backreaction of antibranes in flux compactifications Speaker(s): Iosif BENA (Centre d'Etudes de Saclay - CEA, France)

Joint ICTP/SISSA String Seminars: Resilience of meta-stable states in String Theory and supergravity Speaker(s): Anatoly DYMARSKY (Institute for Advanced Study, Princeton, USA)

We investigate number fluctuations in small cells of quantum gases, pointing out important deviations from the thermodynamic limit fixed by the isothermal compressibility. Both quantum and thermal fluctuations in weakly as well as highly compressible fluids are considered. For the two-dimensional (2D) superfluid Bose gas we find a significant quenching of fluctuations with respect to the thermodynamic limit, in agreement with recent experimental findings. An enhancement of the thermal fluctuations is instead predicted for the 2D dipolar superfluid Bose gas, which becomes dramatic when the size of the sample cell is of the order of the wavelength of the rotonic excitation induced by the interaction.
Seminar on Disorder and strong electron correlations: "Local atom-number fluctuations in quantum gases at finite temperature" Speaker(s): Alessio RECATI (INO-CNR BEC Center, University of Trento)

The Western Tropical Pacific (WTP) is a region where sea surface temperatures reach their highest values. Its warm waters are the source of intense convective systems, and the associated latent heat release acts as an important forcing for planetary waves across the whole globe. SST variability in the WTP, although weaker in amplitude than the variability in the central and eastern parts of the tropical Pacific, has the potential to affect atmospheric teleconnections on a variety of scales. On the sub-seasonal scale, WTP SST affects the intensity and propagation of organized convection within the MJO and other modes of tropical low-frequency variability. On the interannual time scale, the monsoon systems of East and South Asia are affected by WTP anomalies, as are the planetary waves propagating into the northern extra-tropics. On decadal and longer time scales, variations in the zonal and meridional gradients of SST in the WTP region affect long-term changes in the Walker and Hadley circulations, and links with decadal variability in the North Pacific and the North Atlantic have been explored by many scientists. The aims of the proposed five-day workshop are to review current knowledge on the origins and consequences of variability in the WTP, and to discuss the ability of the latest generation of coupled GCMs to simulate such variability and teleconnections.
Trieste - Italy. Workshop on Variability in the Western Tropical Pacific: Mechanisms, Teleconnections and Impacts on Sub-Seasonal, Inter-Annual and Inter-Decadal Time Scales | (smr 2341) Organizer(s): F. Molteni, H. Hendon; Local Organisers: F. Kucharski, R. Farneti

It has recently been suggested that competition for a finite pool of microRNAs (miRNAs) gives rise to effective interactions among their common targets (competing endogenous RNAs, or ceRNAs) that could prove crucial for post-transcriptional regulation (PTR).
I shall discuss a minimal model of PTR where the emergence and the nature of such interactions can be characterized in detail at steady state. Sensitivity analysis shows that binding free energies and repression mechanisms are the key ingredients for the cross-talk between ceRNAs to arise. Interactions emerge in specific ranges of repression values, can be symmetrical (one ceRNA influences another and vice versa) or asymmetrical (one ceRNA influences another but not the reverse), and may be highly selective, while possibly limited by noise. However, I will also show that non-trivial correlations among ceRNAs can emerge in experimental readouts even in the absence of miRNA-mediated cross-talk, simply due to transcriptional fluctuations.
Joint ICTP/SISSA Statistical Physics seminar: "MicroRNAs: A selective, post-transcriptional channel of communication between RNAs" Speaker(s): Andrea DE MARTINO (Università di Roma 'La Sapienza')

In the talk I will introduce a cosmology/QFT correspondence and present a holographic model for slow-roll inflation. In particular, I will express cosmological data such as power spectra and non-gaussianities in terms of correlation functions of a dual QFT. Finally, I will propose alternative models of inflation, where the early-time dynamics involves strongly coupled gravity.
Holographic Slow-Roll Inflation Speaker(s): Adam BZOWSKI (University of Amsterdam, The Netherlands)

The Abdus Salam International Centre for Theoretical Physics (ICTP) is organizing a Workshop on Quantification of Earthquake Hazards in the Indo/Asian collision zone, to be held from 15 to 24 November 2012 in Kathmandu, Nepal. SUMMARY AND PURPOSE: More than 50 million people in the Himalayan region are at risk from future earthquakes and the associated landslides and outburst floods. The population between Pakistan and Assam continues to rise, and there is an increasing need for informed engineering guidelines to aid local planning and construction in the vulnerable countries and to increase their resilience against future earthquakes. The workshop will focus on quantifying collisional processes in the Himalaya and the consequent tectonics imposed on the regions to their south in Pakistan, India and Bangladesh. The goals of the workshop are to build capacity in the theory and application of modern seismological and geodetic methods, as well as to facilitate new initiatives for cooperation in the Himalayan region and to develop networks for the exchange of ideas and expertise. The workshop will include invited lectures, panel discussions and training sessions in paleoseismology, seismology, GPS geodesy, InSAR, and seismic hazard and risk. PARTICIPATION: The workshop is open to researchers and students from all countries which are members of the United Nations, UNESCO, or IAEA. The principal objective of the ICTP is to help researchers from developing countries through a programme of training activities within a framework of international cooperation. Participants should have an adequate working knowledge of English. Due to budget limitations, every effort should be made by candidates to secure either total or partial support for their expenses. However, funds are available exclusively for a limited number of participants who are nationals of, and working in, Himalayan countries. Participants are required to take part in all aspects of this activity for its entire duration. There is no registration fee.
REQUEST FOR PARTICIPATION: The on-line application form can be accessed by clicking on the "application form" link below. Once on the website, comprehensive instructions will guide you step by step on how to fill out and submit the application form. DEADLINE for applications: 23 September 2012.
ACTIVITY SECRETARIAT: Secretary: Ms. Gabriella De Meo. Telephone: +39-040-2240-355. Telefax: +39-040-2240-585.

Seminar on Disorder and strong electron correlations: "Non-equilibrium steady states of boundary-driven open quantum chains". Speaker(s): Tomaz PROSEN (University of Ljubljana). Venue: ICTP.
Abstract: We will discuss a general non-equilibrium setup by which one can approach the quantum transport problem in one dimension. One considers a strongly interacting quantum chain with fully coherent bulk dynamics, driven out of equilibrium in terms of Lindblad dissipators which act only on degrees of freedom near the boundary, i.e. at the ends of the chain. The non-equilibrium state carrying the physical currents is then approximated as the steady state of the corresponding Markovian master equation. In this talk I will describe several interesting results which have recently been obtained along these lines. Firstly, using numerical methods inspired by the density matrix renormalization group, one can, quite remarkably, find examples of diffusive transport in clean, and even Bethe-ansatz integrable, strongly interacting systems (at high-temperature near-equilibrium conditions), such as the Heisenberg XXZ spin-1/2 chain and the one-dimensional fermionic Hubbard model. Secondly, under far-from-equilibrium conditions, one can find several exact analytical solutions which describe steady states with anomalous transport properties. For example, in the Heisenberg XXZ chain under extreme boundary driving we show that the steady-state density operator of a finite system of size N is, apart from a normalization constant, a polynomial of degree 2N-2 in the coupling constant. The perturbative (weak coupling) version of our ansatz is used to derive a novel quasi-local conservation law of the anisotropic Heisenberg model, by means of which we rigorously estimate the spin Drude weight (the ballistic transport coefficient) in the easy-plane regime.

"Discrete logarithms, elliptic curves and cryptography". Speaker(s): Professor Rene' Schoof (Universita' di Roma 'Tor Vergata'). Venue: ICTP.
Abstract: This is an introductory talk aimed at the non-expert. Index calculus provides a relatively efficient method to compute discrete logarithms modulo a prime number $p$. This affects the security of the Diffie-Hellman key-exchange algorithm. The non-existence of an analogue of the index calculus algorithm for elliptic curves gives rise to elliptic curve cryptography.

"The Lorenz attractor and beyond". Speaker(s): Professor Maria Jose' Pacifico (Universidade Federal do Rio de Janeiro, Brazil).

Workshop on Recent Developments in Astronuclear and Astroparticle Physics (smr 2347). Organizer(s): M. Busso, C. Ciofi degli Atti, S. Fantoni, M. Feroci,
D. Treleani, A. Vacchi. Local Organiser: R. Sheth. Collaborations: INFN - Italian Institute for Nuclear Physics.

Joint ICTP-TWAS Latin-American Advanced Course on FPGA Design for Scientific Instrumentation (smr 2384), 19 Nov 2012 - 7 Dec 2012. Organizer(s): Directors: L. Hernandez Tabares (Cuba), N. Abdallah (USA), A. Cicuttin and M.L. Crespo (ICTP). Venue: Havana, Cuba.

Informal seminar on Statistical Physics: "Noise-induced metastability in biochemical networks". Speaker(s): Tommaso BIANCALANI (The University of Manchester). Venue: ICTP.
Abstract: With recent advances in experimental techniques, it is becoming increasingly clear that the dynamics of cellular biochemical reactions are subject to a great deal of noise. This poses a significant challenge to our understanding of such systems, as it has been known for some time that the effects of noise may lead to substantial differences in the macroscopic behaviour. Here, we report analytical progress on this problem made by studying a simple class of reaction networks whose dynamical behaviour is radically affected by intrinsic stochasticity in finite-volume cells. In particular, we show how networks of this type give rise to a separation of timescales between fast almost-deterministic oscillations and slow stochastic metastability. Our class includes the influential Togashi-Kaneko (TK) reaction scheme, which has been found via numerical simulations to undergo a noise-induced dynamical transition. Despite the importance of their work, a satisfactory analytic treatment of this effect has not been achieved in over a decade. Here we provide such a treatment as an application of our theory.

"Higher-spin holography and cubic interactions". Speaker(s): Euihun JOUNG (Scuola Normale Superiore, Pisa). Venue: ICTP.
Abstract: Some issues of higher-spin (HS) gauge theory (in d+1>3), mainly related to its holography, are discussed. After providing a very brief overview of the topic, we begin with the boundary side: the HS symmetry, the effective action and the HS Weyl anomaly of scalar CFT are discussed, with comments on the generic 3pt functions of HS currents. Moving to the bulk side, we focus on the metric-like description of HS gauge theory, whose interacting action is not yet known. For the quadratic action, we show through a holographic renormalization that both sides match, being free from the HS Weyl anomaly (for d+1>3). For the cubic interaction, using the ambient-space formulation, we construct all gauge-consistent vertices, whose number matches that of the 3pt functions. Only one of those vertices corresponds to the free scalar CFT on the boundary, hence to the metric-like version of the vertex encoded in Vasiliev's equation. The construction of vertices can be generalized to dS (where we also have partially-massless HS fields), showing a much richer structure of cubic interactions. We conclude with some remarks on the conformal HS action which arises from the effective action of the free scalar field: its spectrum and its possible link to the unknown action of interacting HS fields.

Joint ICTP/SISSA Statistical physics seminar: "Thermodynamics and the quantum stress-energy and spin tensor". Speaker(s): F. BECATTINI (University of Florence). Venue: SISSA, Santorio Building, Room 128 (1st Floor).
Abstract: In this seminar we show that thermodynamics is sensitive to the existence of a fundamental spin tensor, predicted by extensions of general relativity. In general, thermodynamics is not invariant under a change of the stress-energy tensor of a fundamental quantum field by a divergence transformation leaving the total energy, momentum and angular momentum unchanged.
Among the quantities which are changed by such a transformation there are densities at equilibrium with rotation, and non-equilibrium ones like transport coefficients and the total entropy. Therefore, at least in principle, it could be possible to probe the existence of a spin tensor with a thermodynamics experiment.

Seminar on Disorder and strong electron correlations: "Spin liquids in quantum antiferromagnetic models on two dimensional frustrated lattices". Speaker(s): Yasir IQBAL (ICTP). Venue: ICTP.
Abstract: For many decades physicists have been searching for "hot" enough playgrounds that can melt magnetic freezing even at extremely low temperatures, purely due to strong quantum fluctuations. The resulting quantum paramagnetic phases of matter, called spin liquids, do not spontaneously break any symmetries, possess quantum and topological orders, and consequently fall outside the paradigm of traditional condensed matter theory based on Landau's Fermi liquid theory and Landau's theory of phase transitions. Remarkably enough, the "deceptively" simple spin-1/2 Heisenberg antiferromagnetic model, when put on the most frustrated lattice, the kagome lattice, has been shown, both experimentally and theoretically, to host such an exotic state. I will present my research work dealing with the precise identification of this state, which is currently being debated very intensely. I will show, within fermionic slave-particle approaches, that a certain "marginally" stable spin liquid with a U(1) low-energy gauge structure (the so-called Dirac spin liquid) is remarkably robust to all known perturbations towards stable Z2 spin liquids and valence-bond crystals, despite having strongly interacting gapless fermionic excitations down to zero energy. Finally, using state-of-the-art numerical techniques, such as the application of Lanczos steps on variational wave functions combined with the Green's function Monte Carlo technique, I will show that such an exotic algebraic spin liquid can in fact exist as a real physical spin liquid ground state.
PUBLICATIONS: 1) Phys. Rev. B 83, 100404 (2011) - Y. Iqbal, F. Becca, and D. Poilblanc. 2) Phys. Rev. B 84, 020407 (2011) (Editor's suggestion) - Y. Iqbal, F. Becca, and D. Poilblanc. 3) New J. Phys. 14 (in press) (2012); arXiv:1203.3421 - Y. Iqbal, F. Becca, and D. Poilblanc. 4) arXiv:1209.1858 [cond-mat] (2012) - Y. Iqbal, F. Becca, S. Sorella, and D. Poilblanc.

"Ultra-high energy neutrinos at the IceCube and the Glashow resonance". Speaker(s): Atri BHATTACHARYA (Harish-Chandra Research Institute, Allahabad, India).

School on Numerical Methods for Materials Science Related to Renewable Energy Applications (smr 2376). Organizer(s): F. De Angelis, S. Fabris, R. Gebauer and
N. Seriani. Cosponsor(s): South Africa's National Institute for Theoretical Physics (NITheP).

Physware 2012: A Collaborative Workshop on Low-Cost Equipment and Appropriate Technologies that Promote Undergraduate-level, Hands-on Physics Education throughout the Developing World (smr 2375). Organizer(s): Directors: Pratibha Jolly (University of Delhi, India), Priscilla Laws (Dickinson College, USA), Elena Sassi (Federico II University, Italy), Dean Zollman (Kansas State University, USA); ICTP Co-ordinator: Joe Niemela (ICTP, Italy); Local organizers: Pratibha Jolly and Mallika Verma (University of Delhi, India). Cosponsor(s): International Union of Pure and Applied Physics (IUPAP), International Commission on Physics Education (ICPE), India Sri Lanka Foundation, Indian National Science Academy, Council of Scientific and Industrial Research (CSIR, India), Vigyan Prasar (India), Department of Science and Technology (DST, India), University Grants Commission (UGC, India), D S Kothari Centre for Research and Innovation in Science Education (DSKC, India), Miranda House (University of Delhi, India). Venue: New Delhi, India.

Winter School on Quantitative Systems Biology (smr 2415). Organizer(s): Directors: Vijay Balasubramanian, Anirvan Sengupta, Michele Vendruscolo; Local Organizer: Matteo Marsili. Laboratories: EKLUND (26-30 Nov); INFOLAB (3-7 Dec). Venue: Trieste, Italy.
Abstract: New experimental techniques are opening windows on biological mechanisms inside the cell and in the brain, making these systems accessible to quantitative investigation. These advances have shown the importance of the concerted interaction of many agents in producing overall behaviors, and call for an understanding of biological functions at the systemic level. The present school responds to the need to provide physicists with a broad exposure to quantitative problems in the study of living systems. The school is particularly targeted at young researchers, especially at the Ph.D. and postdoc level, who either work in this area or hope to do so. Lectures providing a broad overview of topics in Systems Biology are planned by an array of distinguished lecturers. Research seminars associated with the school will also provide a venue for scientific exchange at the cutting edge.

"The global electroweak fit after the discovery of a new boson at the LHC". Speaker(s): Roman KOGLER (University of Hamburg, Germany). Venue: ICTP.
Abstract: The global Standard Model (SM) electroweak fit has a long tradition in particle physics. It exploits the enormous amount of pioneering work in the calculation of radiative corrections in electroweak physics, as well as precision measurements at the Z-peak by the LEP and SLD collaborations. In the recent past, precision measurements of the top quark and W boson mass by the Tevatron experiments have complemented the available precision data. In view of the discovery of a new boson with Higgs-like properties by the ATLAS and CMS collaborations, an update of the global fit to electroweak precision data is presented. The new boson is assumed to be the SM Higgs boson, which allows for the first time to overconstrain the SM at the electroweak scale and assert its validity. In this talk I will review the status of the electroweak fit and the constraints on new physics through the oblique vacuum corrections. I will conclude with an outlook showing the impact of precision measurements at a possible future linear collider.
Joint ICTP/SISSA Statistical physics seminar: "Infinite family of second-law-like inequalities". Speaker(s): Alejandro B. KOLTON (Centro Atomico Bariloche). Venue: ICTP.
Abstract: The probability distribution function for an out-of-equilibrium system may sometimes be approximated by a physically motivated "trial" distribution. A particularly interesting case is when a driven system (e.g., active matter) is approximated by a thermodynamic one. We show here that every set of trial distributions yields an inequality playing the role of a generalization of the second law. This suggests a variational procedure that may be implemented numerically, and even experimentally, to approximate the distribution function in terms of effective interactions. The fluctuation relation behind this inequality, a natural and practical extension of the Hatano-Sasa theorem, does not rely on a priori knowledge of the stationary probability distribution.

"Magnetic field induced lattice ground states from holography". Speaker(s): Jonathan SHOCK (Max-Planck-Institute for Physics, Munich, Germany).

Joint ICTP/SISSA Statistical physics seminar: "An alternative road to Bethe Ansatz equations". Speaker(s): R. TATEO (University of Torino). Venue: SISSA, Santorio Building, Room 138 (1st Floor).
Abstract: In the last few years, a link between Bethe Ansatz equations for conformal field theories and spectral functions of simple ordinary differential operators has emerged. Assuming no prior knowledge of this topic, this talk will review the key facts about this surprising link and discuss recent important findings concerning its off-critical generalisation.

Seminar on Disorder and strong electron correlations: "The phase diagram of one-dimensional disordered bosons at ultralow temperature". Speaker(s): Chiara D'ERRICO (LENS & University of Florence & INO-CNR). Venue: ICTP.
Abstract: The interplay of disorder and interactions can have a large impact on the properties of quantum systems. It is well known that, whereas non-interacting particles can be localized by destructive interference in a disordered potential, a weak repulsive interaction tends to establish coherence between single-particle localized states, thus weakening or destroying such localization. The interplay of disorder and strong interactions is instead still a challenging problem. One paradigmatic problem is that of low-temperature bosonic particles confined to a one-dimensional disordered environment, which, in addition to the standard superfluid and Mott phases, has been predicted to show a peculiar gapless insulating phase, the Bose glass [1,2], which is still proving elusive in experiments. We investigate the behavior of a bosonic quantum gas close to T=0 confined in a one-dimensional quasi-periodic lattice, where interactions and disorder can be independently tuned. By studying its momentum distribution, mobility and excitation spectrum features, we trace out a phase diagram showing a coherent phase surrounded by a gapless insulator. The latter has a bosonic character at low interactions and a fermionic one at large interactions, consistently with the expectations for the presence of a Bose glass phase. Our results confirm long-standing theoretical predictions and indicate the way to study still-open questions in 1D bosons and analogous disordered problems in higher dimensions and/or with fermions. [1] D. S. Fisher, Phys. Rev. B 40, 546 (1989). [2] T. Giamarchi, Phys. Rev. A 78, 023628 (2008).
"Semiclassical Cross Sections in Classicalizing Theories". Speaker(s): Lasma ALBERTE (Ludwig-Maximilians University Munich, Germany). Venue: ICTP.
Abstract: It has been suggested by Dvali et al. that certain derivatively coupled non-renormalizable scalar field theories might restore the perturbative unitarity of high-energy hard scatterings by classicalization, i.e. the formation of multiparticle states of soft quanta. I will discuss our recent work where we apply the semiclassical method of calculating multiparticle production rates to the scalar DBI theory, which is suggested to classicalize. We find that the semiclassical method is applicable for energies in the final state above the cutoff scale of the theory, $L_*^{-1}$. We find that the cross section of the process two to N ceases to be exponentially suppressed for a particle number in the final state N smaller than a critical particle number $N_{crit} \sim (E L_*)^{4/3}$. This coincides with the typical particle number produced in two-particle collisions at high energies predicted by classicalization arguments.

"Large N limit of Chern-Simons matter theories and their higher spin duals". Speaker(s): Spenta WADIA (International Centre for Theoretical Sciences (ICTS-TIFR), IISc Campus, Bangalore, and Department of Theoretical Physics, TIFR, Mumbai, India).

Joint ICTP-IAEA Course on Natural Circulation Phenomena and Passive Safety Systems in Advanced Water Cooled Reactors (smr 2349), 3 Dec 2012 - 7 Dec 2012. Organizer(s): IAEA: Jong Ho Choi; ICTP Local Organizer: Joe Niemela. Laboratories: AGH Eklund Lab.

Workshop on Nanophotonics (smr 2377). Organizer(s): F. Capasso, D. Lopez, E. Isaacs, A. Fainstein; Local Organizer: E. Tosatti. Cosponsor(s): Argonne National Laboratory (ANL), US National Science Foundation (NSF) and International Institute for Complex Adaptive Matter (ICAM-I2CAM).

"Nonstable classical Algebraic K-theory".

"On the Fermi gamma-ray lines: observations and particle physics models". Speaker(s): Michel TYTGAT (Universite' Libre de Bruxelles, Belgium). Venue: ICTP.
Abstract: I give a brief overview of the claim of a gamma-ray line in the Fermi-LAT data, and its possible explanation in terms of dark matter models. In the last part of the talk, I focus on the possibility that gamma-ray lines from dark matter annihilation may come with gluon lines, and discuss the phenomenological consequences and the experimental constraints on such a scenario.

Joint ICTP/SISSA Statistical Physics seminar: "Relaxation after a sudden quench: Insights from exactly solvable models". Speaker(s): M. FAGOTTI (Oxford University). Venue: SISSA, Santorio Building, Room 134 (1st Floor).
Abstract: I consider the time evolution of observables and reduced density matrices after a sudden quench of a Hamiltonian parameter in one-dimensional systems. I discuss the issue of relaxation and show quite generally that if subsystems relax and can be described in terms of a statistical ensemble, dynamical correlations are described by the same ensemble as well. Then I consider quenches in the transverse field Ising chain, where many exact results have been recently obtained. I focus on the approach to the stationary state (which, in the specific case, is a generalized Gibbs ensemble) and discuss the relation between time averages and late-time dynamics.
"The non-linear evolution of the neutrino cosmic background". Speaker(s): Francisco Navarro VILLAESCUSA (Osservatorio Astronomico di Trieste). Venue: ICTP.
Abstract: In this seminar I will extend the properties of the cosmic neutrino background to the non-linear regime. By means of state-of-the-art N-body hydrodynamic simulations, I will investigate the neutrino dynamics down to small scales and redshifts that are not probed by linear theory. In particular, I will discuss the following neutrino properties: the density profile of the neutrino haloes around collapsed objects, the matter clustering as a function of neutrino mass, the neutrino peculiar velocity field, and the dark matter halo mass function. These results could be particularly relevant in view of future lensing/clustering experiments like the Euclid satellite, which are expected to measure the total neutrino mass with an error below 30 meV.

Seminar on Disorder and strong electron correlations: "Quantum quench in p+ip superfluids: Non-equilibrium topological gapless state". Speaker(s): Matthew S. FOSTER (Rice University, Houston). Venue: ICTP.
Abstract: Ground state "topological protection" has emerged as a main theme in quantum condensed matter physics. A key question is the robustness of physical properties, including topological quantum numbers, to perturbations such as disorder or non-equilibrium driving. In this work we investigate the dynamics of a p+ip superfluid following a zero-temperature quantum quench. The model describes a 2D topological superconductor with a non-trivial (trivial) BCS (BEC) phase. We work with the full interacting BCS Hamiltonian, which we solve exactly in the thermodynamic limit using classical integrability. The non-equilibrium phase diagram is obtained for generic quenches. A large region of the phase diagram describes strong-to-weak-pairing quenches wherein the order parameter vanishes in the long-time limit, due to pair fluctuations. Despite this, we find that the topological winding number survives for quenches in this regime, leading to the prediction of a "gapless topological" state. We discuss potential realizations, including a proximity-effect quench on the surface of a 3D topological insulator.

"Derivative expansion of the Casimir energy". Speaker(s): Francisco Diego MAZZITELLI (Centro Atomico Bariloche, Argentina).

Joint ICTP/SISSA Statistical Physics seminar: "Heat fluctuations in a quenched ferromagnet". Speaker(s): G. GONNELLA (University of Bari). Venue: SISSA, Santorio Building, Room 138 (1st Floor).
Abstract: The off-equilibrium probability distribution of the heat exchanged by a ferromagnet in a time interval after a quench below the critical point is calculated analytically in the large-N limit. Here N is the number of components of the vectorial order parameter. The distribution shows a singular behavior, with a critical threshold below which a macroscopic fraction of heat is released by the k=0 Fourier mode of the order parameter. The mathematical structure producing this phenomenon is the same as that responsible for the order parameter condensation in the equilibrium low-temperature phase. The heat exchanged by the individual Fourier modes follows a non-trivial pattern, with the unstable modes at small wave vectors warming up the modes around a characteristic finite wave vector k_M.
Two internal temperatures, associated with the k=0 and k=k_M modes, rule the heat currents through a fluctuation relation similar to the one for stationary systems in contact with two thermal reservoirs.

Joint ICTP-IAEA International Training Workshop on Transitioning from 2D to 3D Conformal Radiotherapy and Intensity Modulated Radiation Therapy (smr 2378). Organizer(s): IAEA: D. van der Merwe; ICTP: L. Bertocchi (Local Organiser); AAPM: H. Amols.

Joint ICTP-TWAS Conference and Advanced School on Quantification of Earthquake Hazards in the Caribbean: The Gonave Microplate (smr 2380). Organizer(s): L. Alvarez, B. Moreno, A. Aoudia. Secretary: P. Malchose. Venue: Santiago de Cuba, Cuba.

Seeding and Self-seeding at New FEL Sources (smr H316). Organizer(s): Sincrotrone Trieste S.c.p.A.

Science @C-Eric (smr H327).

Joint ICTP/SISSA Statistical Physics seminar: "Entanglement, nonlocality, signaling and cloning". Speaker(s): Gian Carlo GHIRARDI (University of Trieste). Venue: ICTP.
Abstract: Quantum entanglement between two far-away constituents of a composite system, when considered in conjunction with a measurement process on one of them, raises some serious problems due to the instantaneous reduction of the state of the other. In some sense one could state that an action performed in a given space-time region can immediately affect a system which is space-like separated from it. It is not surprising that this peculiar aspect of quantum mechanics has given rise, from the seventies up to now, to many proposals to achieve superluminal communication between distant observers. Almost all such proposals have been shown to be wrong on the basis of general theorems by P. Eberhard and by our group. However, in 1982 a quite tricky new proposal was put forward by N. Herbert, whose refutation required the derivation of a new theorem which has played, and still plays, an important role in quantum computation and quantum communication: the no-cloning theorem. The seminar intends to briefly review the debate about this interesting issue, which involves many subtle aspects of quantum mechanics, and to make clear the events which led to the derivation of the no-cloning theorem. Finally, attention will be paid to the general problem of the compatibility of the reduction process "at a distance" with relativistic requirements, specifically in the light of the recent proposals of relativistic generalizations of "collapse theories".

"Neutrinos: race for the mass hierarchy". Speaker(s): Alexei SMIRNOV (ICTP).

"On the Mahler measure of some tempered polynomials". Speaker(s): Professor Guo Xuejun (Nanjing University, P.R. China). Venue: ICTP.
Abstract: I will talk on the Mahler measure of some tempered polynomials, and construct a nontrivial integral element of the second K-group for certain tempered elliptic curves. I will also give a proof of Deninger's conjecture on the Mahler measure of $x+1/x+y+1/y+1$ by means of Beilinson's regulator map. (This conjecture was proved by M. Rogers and W. Zudilin in 2011.)

"Anomalous primes of the elliptic curve $E_D : y^2 = x^3 + D$". Speaker(s): Professor Qin Hourong (Nanjing University, P.R. China).
Seminar on Disorder and strong electron correlations: "Condensation vs lasing and superfluidity of coupled light-matter systems". Speaker(s): Jonathan M.J. KEELING (University of St Andrews). Venue: ICTP.
Abstract: The great experimental progress in realising and studying polariton condensates has made it possible to now study experimentally a number of fundamental questions about the relation between lasing, condensation, coherence and superfluidity. In my talk, I will discuss the consequences of finite particle lifetime for several of these questions. I will discuss the relation between lasing as described by the Maxwell-Bloch equations and polariton condensation, and discuss the consequences of this for both polariton and photon condensation [1]. I will also address aspects of superfluidity [2], pattern formation [3] and coherence [4], and discuss how these are affected by the finite lifetime of polaritons. [1] J. Keeling, M. H. Szymanska, and P. B. Littlewood, p. 293 of Optical Generation and Control of Quantum Coherence in Semiconductor Nanostructures (2010), Eds. G. Slavcheva and P. Roussignol. [2] J. Keeling, Phys. Rev. Lett. 107, 080402 (2011). [3] J. Keeling and N. G. Berloff, Phys. Rev. Lett. 100, 250401 (2008). [4] G. Roumpos et al., Proc. Natl. Acad. Sci. 109, 6467 (2012).

Joint ICTP/SISSA Statistical Physics seminar: "Quantum quench in the sinh-Gordon model". Speaker(s): Spyros SOTIRIADIS (Pisa University). Venue: SISSA, Santorio Building, Room 128 (1st Floor).
Abstract: We study a quantum quench in the sinh-Gordon model starting from a large initial mass and zero initial interaction and quenching to any value of the mass and interaction. Such a quench can be approximated by a Dirichlet initial state which, being an example of a so-called "squeezed vacuum" state, can be evolved in time using earlier work [1]. However, the use of an idealized Dirichlet state leads to UV divergences, and it turns out that the UV behaviour of the actual initial state is relevant to the calculation of physical observables. Based on a general method to expand the initial state on the post-quench energy eigenstates [2], we verify that the actual state is of the squeezed vacuum form and derive the correct UV modification that has to be applied to the Dirichlet state in order to describe a quantum quench. We compare with different UV regularization schemes that have been proposed and comment on the applicability of renormalization group tools in quantum-quench problems. [1] D. Fioretto, G. Mussardo, Quantum Quenches in Integrable Field Theories, New J. Phys. 12, 055015 (2010). [2] S. Sotiriadis, D. Fioretto, G. Mussardo, Zamolodchikov-Faddeev Algebra and Quantum Quenches in Integrable Field Theories, J. Stat. Mech. (2012) P02017.

Joint ICTP/SISSA Statistical Physics seminar: "Ultracold atoms with tunable interactions in confined geometry". Speaker(s): H.-C. NÄGERL (Innsbruck University). Venue: SISSA, Santorio Building, Room 128 (1st Floor).
Abstract: Ensembles of ultracold atoms and molecules loaded into lattice potentials are ideally suited to perform model studies of condensed matter systems. In fact, the hope is that ultracold atoms and molecules will some day find use as "quantum simulators" to simulate the properties of a complex many-body system in a fully quantum way. I will present a series of experiments with ultracold atoms in lattices for which we tune the interaction properties. For example, we produce a meta-stable Mott state pinned by attractive interactions. In a one-dimensional geometry, we investigate the transition to a pseudo-antiferromagnetic state. I will outline our attempts to go beyond local interactions by preparing dipolar quantum gases of heteronuclear molecules.
"Refined curve counting with tropical geometry". Speaker(s): Dr. Florian Block (University of California at Berkeley, USA). Venue: ICTP.
Abstract: Recently, Goettsche and Shende defined a notion of "refined Severi degree", replacing the degree of the Severi variety by a univariate polynomial. We give a (conjectural) interpretation of these invariants for plane curves in terms of tropical geometry: Goettsche and Shende's invariant is computed by enumeration of tropical curves, weighted by a new (Laurent) polynomial multiplicity. This weight specializes to Mikhalkin's 'complex' and 'real' multiplicity of tropical curves. This is joint work with Lothar Goettsche.

"Cosmology of the DFSZ axino". Speaker(s): Eung Jin CHUN (Korea Institute for Advanced Study - KIAS, Seoul).

Joint ICTP/SISSA Statistical Physics seminar: "Topological blockade and braiding of non-Abelian anyons via interactions". Speaker(s): M. BURRELLO (Leiden University). Venue: SISSA, Santorio Building, Room 128 (1st Floor).
Abstract: The fractionally charged quasiparticles appearing in the 5/2 fractional quantum Hall plateau are predicted to have an extra non-local degree of freedom, known as topological charge. I will show how this topological charge can block the tunnelling of these particles, and how such 'topological blockade' can be used to read out their topological charge. Moreover, I will examine more generally the role that interactions between non-Abelian anyons play in topological quantum computation, and how these interactions can be exploited to implement braiding operators.

Seminar on Disorder and strong electron correlations: "The Yang-Baxter equation and diffraction". Speaker(s): Austen LAMACRAFT (Cambridge University, U.K.). Venue: ICTP.
Abstract: Integrable interactions are synonymous with non-diffractive scattering, meaning that the set of incoming momenta for any scattering event coincides with the set of outgoing momenta. A system is integrable if the two-particle scattering matrix obeys a particular relation known as the Yang-Baxter equation. Non-integrable interactions correspond to diffractive scattering, where the set of outgoing momenta may take on all values consistent with energy and momentum conservation. Such processes play a vital role in the kinetics of one-dimensional gases, where binary collisions are unable to alter the distribution function. In this talk I'll discuss how diffractive scattering arises when the Yang-Baxter equation is violated, and show how to calculate the diffraction amplitude when this violation is small.
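The Yang-Baxter equation mentioned in the last abstract can be made concrete with a small numeric check. The following sketch (my illustration, not material from the talk) verifies that the standard rational six-vertex R-matrix, R(u) = u·Id + P with P the permutation operator on two qubits, satisfies the equation for random spectral parameters:

```python
# Numeric check that the rational six-vertex R-matrix R(u) = u*Id + P
# satisfies R12(u-v) R13(u) R23(v) = R23(v) R13(u) R12(u-v).
import numpy as np

I2 = np.eye(2)
# P swaps the two factors of C^2 (x) C^2 (basis |00>, |01>, |10>, |11>).
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)

def R(u):
    return u * np.eye(4) + P

def R12(u):  # acts on legs 1,2 of C^2 (x) C^2 (x) C^2
    return np.kron(R(u), I2)

def R23(u):  # acts on legs 2,3
    return np.kron(I2, R(u))

P23 = np.kron(I2, P)  # permutation of legs 2 and 3

def R13(u):  # conjugate R12 by the (2,3) swap so it acts on legs 1,3
    return P23 @ R12(u) @ P23

rng = np.random.default_rng(1)
for _ in range(5):
    u, v = rng.uniform(-2, 2, size=2)
    lhs = R12(u - v) @ R13(u) @ R23(v)
    rhs = R23(v) @ R13(u) @ R12(u - v)
    assert np.allclose(lhs, rhs)
print("Yang-Baxter equation holds for random spectral parameters.")
```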
CommonCrawl
N,N-Dimethyl-D-ribo-phytosphingosine Modulates Cellular Functions of 1321N1 Astrocytes. Lee, Yun-Kyung; Kim, Hyo-Lim; Kim, Kye-Ok; Sacket, Santosh J.; Han, Mi-Jin; Jo, Ji-Yeong; Lim, Sung-Mee; Im, Dong-Soon. p. 73. https://doi.org/10.4062/biomolther.2007.15.2.073
N,N-Dimethyl-D-ribo-phytosphingosine (DMPH) is an N-methyl derivative of sphingosine. In the present paper, we studied the effects of DMPH on intracellular Ca$^{2+}$ concentration, pH, glutamate uptake, and cell viability in human 1321N1 astrocytes. DMPH increased intracellular Ca$^{2+}$ concentration and cytosolic pH significantly in a dose-dependent manner. DMPH also inhibited glutamate uptake by 1321N1 astrocytes. Finally, treatment of cells with DMPH for 24 h reduced cell viability markedly and concentration-dependently. In summary, DMPH increased intracellular Ca$^{2+}$ concentration and pH, inhibited glutamate uptake and evoked cytotoxicity in 1321N1 astrocytes. Our observations with DMPH in 1321N1 astrocytes should enhance understanding of DMPH actions in the brain.

Effects of Proto-oncogene Protein DEK on PCAF Localization. Lee, In-Seon; Lee, Seok-Cheol; Lee, Jae-Hwi; Seo, Sang-Beom. p. 78.
The proto-oncogene protein DEK is a nuclear binding phosphoprotein that has been associated with various human diseases including leukemia. Histone acetylation is an important post-translational modification which plays an important role in transcriptional regulation. Auto-acetylation of the histone acetyltransferase PCAF results in an increase of its HAT activity and facilitation of its nuclear localization. In this study, we report that DEK inhibits PCAF auto-acetylation through direct interaction. The C-terminal acidic domains of DEK are responsible for the interaction with PCAF. Using confocal microscopy, we have shown that nuclear localization of PCAF is severely inhibited by DEK. Taken together, our results suggest that DEK may be involved in various cellular signal transduction pathways accommodated by PCAF through the regulation of PCAF auto-acetylation.

Neurotrophic Factors Mediate Memory Enhancing Property of Ethanolic Extract of Liriope platyphylla in Mice. Mun, Jung-Hyun; Lee, Sang-Gon; Kim, Dong-Hyun; Jung, Ji-Wook; Yoon, Byung-Hoon; Shin, Bum-Young; Kim, Sun-Ho; Ryu, Jong-Hoon. p. 83.
The roots of Liriope platyphylla (Liliaceae) are widely used in traditional Chinese medicine. In the present study, we investigated the effects of a 70% ethanol extract of the roots of Liriope platyphylla (ELP70) on learning and memory, using behavioral and immunohistochemical methods in mice. Control animals were treated with vehicle (10% Tween 80). With sub-chronic treatment with ELP70 (p.o.) for 14 days, the latency time was significantly increased compared with that of the vehicle-treated control group (50, 100 and 200 mg/kg; P<0.05). Moreover, immunopositive cells for brain-derived neurotrophic factor (BDNF) were significantly increased in the hippocampal dentate gyrus and CA1 regions after ELP70 treatment for 14 days (50, 100 and 200 mg/kg; P<0.05). In addition, immunopositive cells for nerve growth factor (NGF) were also increased in the hippocampal dentate gyrus region (50, 100 and 200 mg/kg; P<0.05). These results suggest that sub-chronic administration of ELP70 improves learning and memory, and that these beneficial effects are mediated, in part, by the enhancement of BDNF or NGF expression.
Fermented Ginseng with Bifidobacterium Inhibits Angiogenesis of Human Umbilical Endothelial Cells in vitro and in vivo. Ko, Yu-Jin; Park, Seung-Hee; Park, Byung-Chul; Lee, Yong-Hwa; Kim, Jung-Ae. p. 89.
Ginseng is a widely used alternative medicine for the treatment of cancer, diabetes, and cardiovascular diseases. The active components of P. ginseng absorbed through the gastrointestinal tract are the ginsenosides fermented by intestinal microorganisms. In the present study, we investigated the inhibitory effects of ginseng fermented with bifidobacterium (FGb) on angiogenesis by analyzing in vitro tube formation and invasion using human umbilical vein endothelial cells (HUVECs), and in vivo angiogenesis using the chick chorioallantoic membrane (CAM) assay. Treatment with FGb inhibited tube-like structure formation in a concentration-dependent manner. In addition, FGb significantly suppressed HUVEC invasion through Matrigel. Moreover, FGb dose-dependently inhibited VEGF-induced angiogenesis in the CAM assay. These results suggest that FGb is a valuable anti-angiogenic remedy.

Growth Inhibition and G2/M Phase Cell Cycle Arrest by 3,4,5-Trimethoxy-4'-bromo-cis-stilbene in Human Colon Cancer Cells. Heo, Yeon-Hoi; Min, Hye-Young; Kim, Sang-Hee; Lee, Sang-Kook. p. 95.
Resveratrol (3,5,4'-trihydroxy-trans-stilbene), a naturally occurring phytoalexin abundant in grapes and several other plants, has been shown to inhibit proliferation and induce apoptosis in several human cancer cell lines. In line with the biological activity of resveratrol, a variety of resveratrol analogs were synthesized and evaluated for their growth inhibitory effects against several human cancer cell lines. In the present study, we found that one of the resveratrol analogs, 3,4,5-trimethoxy-4'-bromo-cis-stilbene, markedly suppressed human colon cancer cell proliferation (EC$_{50}$ = 0.01 ${\mu}$g/ml), with inhibitory activity superior to that of its corresponding trans-isomer (EC$_{50}$ = 1.6 ${\mu}$g/ml) and of resveratrol (EC$_{50}$ = 18.7 ${\mu}$g/ml). Prompted by the strong growth inhibitory activity in cultured human colon cancer cells (Col2), we investigated its mechanism of action. 3,4,5-Trimethoxy-4'-bromo-cis-stilbene induced arrest of cell cycle progression at the G2/M phase and increased the sub-G1 phase DNA content of the cell cycle in a time- and dose-dependent manner. Colony formation was also inhibited in a dose-dependent manner, indicating the inhibitory activity of the compound on cell proliferation. Moreover, the morphological changes and the condensation of cellular DNA upon treatment with the compound correlated well with the induction of apoptosis. These data suggest that 3,4,5-trimethoxy-4'-bromo-cis-stilbene might serve as a cancer chemotherapeutic or chemopreventive agent for human colon cancer cells by virtue of arresting the cell cycle and inducing apoptosis.

Effect of Cimetidine on the Transport of Quinolone Antibiotics in Caco-2 Cell Monolayers. Kim, Seon-Hwa; Jung, Seo-Jeong; Um, So-Young; Na, Mi-Ae; Choi, Min-Jin; Chung, Myeon-Woo; Oh, Hye-Young. p. 102.
Cimetidine, a substrate for P-glycoprotein (P-gp), is a well-known drug that interacts with a variety of drugs, resulting in the alteration of pharmacokinetic parameters upon concomitant administration. The aim of the present study was to investigate whether cimetidine affects the transport of various quinolone antibiotics in the human colorectal cancer cell line (Caco-2) system, which has been typically used to investigate drug transport via P-gp.
The apparent permeability coefficient (P$_{app}$) values of 9 quinolone antibiotics in co-treatment with cimetidine were examined. Apical-to-basolateral (AP-to-BL) transport of fleroxacin in co-treatment with cimetidine was increased 1.5-fold (p<0.01) compared with that of fleroxacin alone, whereas basolateral-to-apical (BL-to-AP) transport of fleroxacin was significantly decreased, to 0.83-fold (p<0.05). Ofloxacin transport was significantly decreased, to 0.8-fold (p<0.01) and 0.72-fold (p<0.01) in the AP-to-BL and BL-to-AP directions, respectively, by cimetidine co-treatment. The P$_{app}$ values of gatifloxacin, moxifloxacin, ciprofloxacin and rufloxacin were also changed by cimetidine. These results suggest that cimetidine may influence the pharmacokinetics of quinolone antibiotics, and that careful drug monitoring and dosage adjustment may be necessary during the co-administration of quinolone antibiotics with cimetidine.

Influence of Glibenclamide on Catecholamine Secretion in the Isolated Rat Adrenal Gland. No, Hae-Jeong; Woo, Seong-Chang; Lim, Dong-Yoon. p. 108.
The aim of the present study was to investigate the effect of glibenclamide, a hypoglycemic sulfonylurea which selectively blocks ATP-sensitive K$^+$ channels, on the secretion of catecholamines (CA) evoked by cholinergic stimulation and membrane depolarization in isolated perfused rat adrenal glands. Perfusion of glibenclamide (1.0 mM) into an adrenal vein for 90 min time-dependently enhanced the CA secretory responses evoked by ACh (5.32 mM), high K$^+$ (a direct membrane depolarizer, 56 mM), DMPP (a selective neuronal nicotinic receptor agonist, 100 ${\mu}$M for 2 min), McN-A-343 (a selective muscarinic M1 receptor agonist, 100 ${\mu}$M for 2 min), Bay-K-8644 (an activator of L-type dihydropyridine Ca$^{2+}$ channels, 10 ${\mu}$M for 4 min) and cyclopiazonic acid (an activator of cytoplasmic Ca$^{2+}$-ATPase, 10 ${\mu}$M for 4 min). In adrenal glands simultaneously preloaded with glibenclamide (1.0 mM) and nicorandil (a selective opener of ATP-sensitive K$^+$ channels, 1.0 mM), the CA secretory responses evoked by ACh, high potassium, DMPP, McN-A-343, Bay-K-8644 and cyclopiazonic acid recovered to a considerable extent of the control release in comparison with glibenclamide treatment alone. Taken together, the present study demonstrates that glibenclamide enhances adrenal CA secretion in response to stimulation of cholinergic (both nicotinic and muscarinic) receptors as well as to membrane depolarization in isolated perfused rat adrenal glands. It seems that this facilitatory effect of glibenclamide may be mediated by enhancement of both Ca$^{2+}$ influx and Ca$^{2+}$ release from intracellular stores through the blockade of K$_{ATP}$ channels in rat adrenomedullary chromaffin cells. These results suggest that glibenclamide-sensitive K$_{ATP}$ channels may play a regulatory role in rat adrenomedullary CA secretion.

Bioequivalence Assessment of Nabumetone Tablets in Healthy Korean Volunteers. Park, Moon-Hee; Shin, In-Chul. p. 118.
This study was performed to evaluate the bioequivalence of the Osmetone$^{TM}$ Tablet (Myeongmoon Pharm. Co., Ltd.) as a test formulation and the Relafen$^{TM}$ Tablet (Handok Pharm. Co., Ltd.) as a reference formulation.
Twenty-four healthy male volunteers were administered the formulations according to a randomized Latin square crossover design, and the plasma samples were analyzed by high performance liquid chromatography (HPLC) with ultraviolet (UV) detection. AUC$_t$, C$_{max}$ and T$_{max}$ were obtained from the plasma concentration-time curves, and the log-transformed AUC$_t$ and C$_{max}$ and untransformed T$_{max}$ values for the two formulations were compared by statistical tests and analysis of variance. AUC$_t$ was determined to be 897.8${\pm}$431.1 ug.hr/ml for the reference formulation and 902.3${\pm}$408.4 ug.hr/ml for the test formulation. The mean values of C$_{max}$ for the reference and test formulations were 24.2${\pm}$8.9 and 24.0${\pm}$9.5 ug/ml, respectively. The mean differences in AUC$_t$ and C$_{max}$ between the test Osmetone$^{TM}$ Tablet and the reference Relafen$^{TM}$ Tablet were +5.01% and -0.83%, respectively, satisfying the acceptance criterion of mean differences within 20%. The results of the analysis of variance for log-transformed AUC$_t$ and C$_{max}$ indicated that no sequence effects were exerted between groups, and the 90% confidence limits of the mean differences for AUC$_t$ and C$_{max}$ were located within the range from log 0.80 to log 1.25, satisfying the KFDA bioequivalence acceptance criteria. The Osmetone$^{TM}$ Tablet as the test formulation was therefore considered to be bioequivalent to the Relafen$^{TM}$ Tablet as its reference formulation, based on the AUC$_t$ and C$_{max}$ values.

Simplified HPLC Method for the Determination of Pseudoephedrine Hydrochloride from Allegra D Tablets.
A sensitive, simple and highly selective liquid chromatography method for the determination of pseudoephedrine hydrochloride extracted from Allegra D tablets was developed. The chief benefit of the present method is the minimal sample preparation: the only procedure required is filtration through a syringe filter. The two drugs (pseudoephedrine hydrochloride and fexofenadine) were separated on a C$_{18}$ column and analyzed by high performance liquid chromatography (HPLC). The method had a chromatographic run time of 8.0 min. One ml of pseudoephedrine hydrochloride solution (1 mg/ml) was filtered through a 0.22 um pore syringe filter. Fifty ul of the filtered solution was injected into the HPLC system, and the retention time of pseudoephedrine hydrochloride (1.85 min) was determined using a UV detector at 280 nm. We used a C$_{18}$ column (4.6 mm${\times}$250 mm) and a mobile phase of <0.05 mol/L NaH$_2$PO$_4$, 2 ml/L H$_3$PO$_4$> / CH$_3$CN / sodium dodecyl sulfate = 60 ml / 40 ml / 1 g. We separated pseudoephedrine hydrochloride at a retention time of 1.85 min from Allegra D tablet solution (1 mg/ml) filtered through a 0.22 um pore syringe filter, using UV detection at 280 nm. The flow rate was set at 1.0 ml/min and the column temperature at 40$^{\circ}C$. The pseudoephedrine hydrochloride solution (1 mg/ml) separated from the Allegra D tablet was filtered through a 0.22 um pore syringe filter and 50 ul was injected. We confirmed the pseudoephedrine hydrochloride peak at the same retention time, and the separated solution was freeze-dried. In conclusion, a simple isocratic reverse-phase HPLC method has been developed that provides excellent separation of pseudoephedrine from Allegra D tablets.
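The 90% confidence-interval criterion used in the bioequivalence study above can be illustrated with a short computation. This is a simplified sketch with made-up per-subject values, not the study's data or its full crossover ANOVA (which also models sequence and period effects):

```python
# Simplified sketch of the bioequivalence criterion: the 90% CI of the
# geometric mean test/reference ratio must lie within (0.80, 1.25).
import math
from scipy import stats

def gmr_ci_90(test, ref):
    # Paired analysis on the log scale (ignores sequence/period effects).
    diffs = [math.log(t) - math.log(r) for t, r in zip(test, ref)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    half = stats.t.ppf(0.95, n - 1) * sd / math.sqrt(n)
    return math.exp(mean - half), math.exp(mean), math.exp(mean + half)

# Hypothetical per-subject AUC_t values (ug*hr/ml), illustration only:
auc_test = [850, 910, 1020, 880, 940, 790]
auc_ref  = [870, 900,  990, 860, 960, 810]
lo, gmr, hi = gmr_ci_90(auc_test, auc_ref)
print(f"GMR = {gmr:.3f}, 90% CI = ({lo:.3f}, {hi:.3f})")
```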
Single Oral Dose Toxicity Test of Water Extracts of Radix Araliae Cordatae in ICR Mice. Leem, Moon-Jeong; Ryu, Jei-Man; Ku, Sae-Kwang. p. 127.
The object of this study was to evaluate the acute toxicity of a lyophilized water extract of Radix Araliae Cordatae (RA) in male and female mice. The extract was administered to female and male ICR mice as a single oral dose of 2000 mg/kg (body wt.) according to the recommendation of the KFDA Guidelines. Animals were monitored for mortality and for changes in body weight, clinical signs and gross findings during the 14 days after dosing; upon necropsy, organ weights and the histopathology of 12 principal organs were examined. As a result, we did not find any mortality, clinical signs, changes in body weight or abnormal gross findings, except for an increase in hypertrophy of lymph nodes in the male RA extract-dosed group. In addition, no RA extract treatment-related abnormal changes in organ weight or in the histopathology of the principal organs were detected, except for some sporadic accidental findings. The results obtained in this study suggest that the RA extract does not cause any toxicological signs. The LD$_{50}$ and approximate LD of the RA extract in both female and male mice were considered to be over 2000 mg/kg.
CommonCrawl
Calculus (3rd Edition), Chapter 11 - Infinite Series - 11.6 Power Series - Exercises - Page 577: 24

The series converges for all $x$ in the interval $[-1,1)$.

We apply the ratio test and, using L'Hopital's rule, we have $$ \rho=\lim _{n \rightarrow \infty}\left|\frac{a_{n+1}}{a_{n}}\right|=\lim _{n \rightarrow \infty} \left|\frac{ x^{3n+5}/\ln(n+1) }{ x^{3n+2}/\ln n}\right|\\ =|x^3|\lim _{n \rightarrow \infty} \frac{\ln n}{\ln(n+1)}=|x^3|\lim _{n \rightarrow \infty} \frac{n+1}{n}=|x^3|. $$ Hence, the series $\Sigma_{n=2}^{\infty} x^{3n+2}/\ln n$ converges when $|x^3|\lt1$ and diverges when $|x^3|\gt1$; that is, it converges absolutely on the open interval $(-1,1)$. Now, we check the endpoints: At $x=-1$, $\Sigma_{n=2}^{\infty} x^{3n+2}/\ln n=\Sigma_{n=2}^{\infty}(-1)^{3n+2}/\ln n$ converges (Alternating Series Test). At $x=1$, $\Sigma_{n=2}^{\infty} x^{3n+2}/\ln n=\Sigma_{n=2}^{\infty}1/\ln n$ diverges (by the comparison test, since $\frac{1}{\ln n}\geq \frac{1}{n}$ for $n\geq 2$). Hence, the series converges for all $x$ in the interval $[-1,1)$.
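As a quick numerical sanity check of the endpoint behaviour (illustrative only, not part of the textbook solution), one can compare partial sums at $x=-1$ and $x=1$:

```python
# Partial sums of sum_{n>=2} x^(3n+2)/ln(n) at the two endpoints.
import math

def partial_sum(x, N):
    return sum(x ** (3 * n + 2) / math.log(n) for n in range(2, N + 1))

for N in (100, 1000, 10000):
    print(N, partial_sum(-1.0, N), partial_sum(1.0, N))
# The x = -1 partial sums drift slowly toward the alternating-series
# limit, while the x = +1 partial sums grow without bound, consistent
# with divergence by comparison with the harmonic series.
```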
CommonCrawl
The Annals of Probability, Volume 41, Number 3B (2013), 2013-2046.

Subcritical percolation with a line of defects
S. Friedli, D. Ioffe, and Y. Velenik

Abstract: We consider the Bernoulli bond percolation process $\mathbb{P}_{p,p'}$ on the nearest-neighbor edges of $\mathbb{Z}^{d}$, which are open independently with probability $p<p_{c}$, except for those lying on the first coordinate axis, for which this probability is $p'$. Define \[\xi_{p,p'}:=-\lim_{n\to\infty}n^{-1}\log\mathbb{P}_{p,p'}(0\leftrightarrow n\mathbf{e}_{1})\] and $\xi_{p}:=\xi_{p,p}$. We show that there exists $p_{c}'=p_{c}'(p,d)$ such that $\xi_{p,p'}=\xi_{p}$ if $p'<p_{c}'$ and $\xi_{p,p'}<\xi_{p}$ if $p'>p_{c}'$. Moreover, $p_{c}'(p,2)=p_{c}'(p,3)=p$, and $p_{c}'(p,d)>p$ for $d\geq 4$. We also analyze the behavior of $\xi_{p}-\xi_{p,p'}$ as $p'\downarrow p_{c}'$ in dimensions $d=2,3$. Finally, we prove that when $p'>p_{c}'$, the following purely exponential asymptotics holds: \[\mathbb{P}_{p,p'}(0\leftrightarrow n\mathbf{e}_{1})=\psi_{d}e^{-\xi_{p,p'}n}\bigl(1+o(1)\bigr)\] for some constant $\psi_{d}=\psi_{d}(p,p')$, uniformly for large values of $n$. This work gives the first results on the rigorous analysis of pinning-type problems that go beyond effective models and don't rely on exact computations.

First available in Project Euclid: 15 May 2013. https://projecteuclid.org/euclid.aop/1368623518 doi:10.1214/11-AOP720
Mathematics Subject Classification: Primary 60K35; 82B43.
Keywords: percolation; local limit theorem; renewal; Russo formula; pinning; random walk; correlation length; Ornstein-Zernike; analyticity.
Citation: Friedli, S.; Ioffe, D.; Velenik, Y. Subcritical percolation with a line of defects. Ann. Probab. 41 (2013), no. 3B, 2013-2046. doi:10.1214/11-AOP720.
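As a rough illustration of the quantity $\xi_{p,p'}$ studied in the paper (my sketch, not the authors' code), one can Monte Carlo estimate the point-to-point connection probability on a finite box of $\mathbb{Z}^2$ with a reinforced line of bonds and read off a crude inverse correlation length. In $d=2$ the paper's result $p_c'(p,2)=p$ says that any reinforcement $p'>p$ should lower $\xi$:

```python
# Monte Carlo estimate of P_{p,p'}(0 <-> n e1) for 2D bond percolation
# with a line of reinforced bonds along the first coordinate axis, and a
# crude finite-box estimate of xi = -log(P)/n.
import math
import random
from collections import deque

def sample_connect(n, L, p, pp, rng):
    """One sample: is (0,0) joined to (n,0) by open bonds in the box
    [-L, n+L] x [-L, L]?  Horizontal bonds on the y = 0 axis are open with
    probability pp, every other bond with probability p.  Bonds are drawn
    lazily during the BFS but cached so each bond is sampled only once."""
    cache = {}
    def is_open(a, b):
        key = (a, b) if a < b else (b, a)
        if key not in cache:
            on_axis = (a[1] == 0 and b[1] == 0)  # horizontal bond on the axis
            cache[key] = rng.random() < (pp if on_axis else p)
        return cache[key]
    seen = {(0, 0)}
    queue = deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        if (x, y) == (n, 0):
            return True
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if -L <= nb[0] <= n + L and -L <= nb[1] <= L \
                    and nb not in seen and is_open((x, y), nb):
                seen.add(nb)
                queue.append(nb)
    return False

def xi_hat(n=12, L=8, p=0.45, pp=0.45, trials=3000, seed=0):
    rng = random.Random(seed)
    hits = sum(sample_connect(n, L, p, pp, rng) for _ in range(trials))
    prob = max(hits, 1) / trials  # crude guard against log(0)
    return -math.log(prob) / n

# p = 0.45 is subcritical for 2D bond percolation (p_c = 1/2); raising
# the line parameter p' above p should visibly lower the estimate of xi.
for pp in (0.45, 0.47, 0.49):
    print(f"p'={pp:.2f}  xi_hat={xi_hat(pp=pp):.3f}")
```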
Definition:Parallel (Geometry)/Lines
From ProofWiki (redirected from Definition:Parallel Lines)

In the words of Euclid:
Parallel straight lines are straight lines which, being in the same plane and being produced indefinitely in either direction, do not meet one another in either direction.
(The Elements: Book $\text{I}$: Definition $23$)

Different geometries allow different conditions for the existence of parallel lines. Euclidean geometry allows that exactly one line through a given point can be constructed parallel to a given line; hyperbolic geometry allows for an infinite number of such lines; elliptic geometry does not allow the construction of such lines.

Reflexivity
The contemporary interpretation of the concept of parallelism declares that a straight line is parallel to itself.

An attempt can be made to define parallelism by suggesting that the perpendiculars dropped from one line or plane to another line or plane have the same length everywhere along the line or plane, but this interpretation does not work in the context of non-Euclidean geometries, and is in fact no more than a derivable consequence of the definition of parallel as given here.

Results about parallel lines can be found here.

Sources: 2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.): Entry: parallel
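As a small worked illustration of the Euclidean clause above (the coordinates and slopes are our own addition, not part of the original page): in the Euclidean plane, distinct lines with a common slope never meet,
$$\ell_1: y = m x + b_1, \qquad \ell_2: y = m x + b_2, \qquad b_1 \neq b_2,$$
since a common point would force $b_1 = b_2$. Through a point $(x_0, y_0)$ the line $y = m(x - x_0) + y_0$ is then the unique parallel to $\ell_1$, matching the uniqueness statement for Euclidean geometry; allowing $b_1 = b_2$ recovers the reflexivity convention that a line is parallel to itself.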
Radius of convergence of Taylor series of a function of a real variable

We know that if we have $f(z)=1/(1+z^2)$, then the radius of convergence of the Taylor series associated with $f(z)$ is the distance to the nearest singularity of $f(z)$. But suppose that instead of $f(z)$ we are given $f(x)=1/(1+x^2)$ and asked where the Taylor series of $f(x)$ about a point converges. Since $f(x)$ is infinitely many times differentiable at each real number, can I say that the Taylor series of $f(x)$ converges on all of $\mathbb{R}$?

real-analysis complex-analysis
asked by prakash nainwal (edited by Sangchul Lee)

$\begingroup$ No, certainly not. There are infinitely differentiable functions whose Taylor series based at $x_0$ fail to converge for every $x\ne x_0.$ $\endgroup$ – zhw. Jan 13 '18 at 18:32

Answer (BenB):
Taking "the radius of convergence of a Taylor series associated with $f(z)$ is the distance to the nearest singularity of $f(z)$", where the distance is measured from the center of the Taylor series, we see the singularities of $f(z) = \frac{1}{1 + z^2}$ to be $i$ and $-i$. Thus for any $x \in \mathbb{R} \subset \mathbb{C}$, we have $\min(|x - i|, |x - (-i)|) \geq 1 > 0$, and thus $f|_{\mathbb{R}}: \mathbb{R} \rightarrow \mathbb{R}$ is analytic over $\mathbb{R}$; i.e., a Taylor series centered at each point $x_0 \in \mathbb{R}$ with positive radius of convergence exists.

$\begingroup$ Thanks. Will you please elaborate on analyticity of a function on $\mathbb{R}$? Is there any other way of finding the radius of convergence of the Taylor series of a function of a real variable, other than treating it as a function of a complex variable? $\endgroup$ – prakash nainwal Jan 15 '18 at 5:39
$\begingroup$ Sure, you can find the radius of a Taylor series for a function on the reals without treating the function as a function of a complex variable; it's just that considering the function as a complex function often makes the analysis simpler. $\endgroup$ – BenB Jan 15 '18 at 5:43
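A worked computation making the answer's criterion explicit (our own addition): for a center $x_0 \in \mathbb{R}$, the distance to the nearest singularity of $f(z)=1/(1+z^2)$ gives the radius
$$R(x_0) = \min(|x_0 - i|, |x_0 + i|) = \sqrt{x_0^2 + 1}.$$
In particular $R(0)=1$: the Maclaurin series $\sum_{k\ge 0}(-1)^k x^{2k}$ diverges for $|x|>1$, even though $f$ is infinitely differentiable on all of $\mathbb{R}$. So smoothness on $\mathbb{R}$ alone does not give convergence everywhere, which answers the question in the negative.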
Mark Transtrum
Model Reduction: The Manifold Boundary Approximation Method

Removing Parameters from Sloppy Models
Sloppy models have lots of parameters and most of them cannot be inferred from data. Statistical uncertainties in the inferred values are often huge, even infinite. You might wonder, why don't we just remove the sloppy parameters from the model? In other words, why don't we simplify or reduce the model to include only the important pieces? Simple models are a powerful tool for understanding the behavior of a physical system. In fact, science is full of examples of mathematical models that are simple descriptions of otherwise complex processes. Because these models concisely capture the important parts of the system, they vividly illustrate the physical mechanisms that control the system. Indeed, identifying the control mechanisms is a large part of complex systems research. Unfortunately, simplifying models in a way that reveals the important parts and ignores the unimportant pieces is a very difficult problem. There are several challenges to doing this generally for sloppy models:
It is often combinations rather than individual parameters that are important.
These combinations are often nonlinear functions of the parameters.
Even if the right nonlinear combination could be found, how do you remove it from the model?
To overcome these challenges, we developed a new approach to model reduction based on our geometric understanding of models. We call this method the Manifold Boundary Approximation Method (MBAM).

Manifold Boundaries
By thinking about models geometrically, we can better understand how a model reduction scheme ought to work. Although sloppy models have many parameters, their behavior exhibits a low effective dimensionality. This low dimensionality is quantified by the hyper-ribbon structure of the model manifold. Just as a ribbon is a three-dimensional object that can be approximated by a two-dimensional surface or even a one-dimensional curve, we would like to approximate our hyper-ribbon by an object of lower dimension. The approach we ultimately settled on is to use the boundary of the model manifold itself as the approximation. Superficially, the boundaries make a nice approximation scheme since they naturally follow the curvature of the original model manifold. However, what makes the boundary approximation method so powerful are the physical interpretations that can be attached to the new, simplified models. To illustrate, consider what happens to a geodesic curve as it approaches the boundary of a model manifold. Geodesic curves are approximately straight lines on the model manifold, but correspond to highly nonlinear curves in parameter space. As these curves approach a manifold boundary, however, several things happen that indicate the boundaries of the manifold have special physical meaning. First, as they approach a boundary, the geodesic paths in parameter space stop bending and straighten out, as in the figure below. Second, although the initial direction of the geodesic may have involved a complicated combination of parameters, as it approaches a boundary it rotates into a simple, physically relevant combination. Finally, as the geodesic approaches the boundary, the smallest eigenvalue of the Fisher Information peels away from the others and approaches zero. These three observations about geodesics as they approach a boundary indicate that the boundaries correspond to limiting approximations of the model. In the figure above, for example, the boundary corresponds to the limit that the second parameter becomes zero. The physical meaning of the boundaries varies with the model. They might correspond to chemical reactions equilibrating or saturating in systems biology. They may correspond to removing high-frequency fluctuations in the Ising model. Whatever the model, the boundaries always correspond to limits that can be given simple, physical interpretations.
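The two geometric quantities in play here, the Fisher Information Matrix (FIM) and its sloppiest eigendirection, are easy to compute numerically. A minimal sketch in Python (the two-exponential model, parameter values, and step size are invented for illustration):

```python
import numpy as np

def predictions(theta, t):
    # Toy sloppy model: a sum of two decaying exponentials.
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t)

def jacobian(theta, t, h=1e-6):
    # Forward-difference derivatives of the predictions w.r.t. parameters.
    base = predictions(theta, t)
    cols = [(predictions(theta + h * e, t) - base) / h
            for e in np.eye(len(theta))]
    return np.column_stack(cols)

t = np.linspace(0.0, 5.0, 40)
theta = np.array([1.0, 1.2])           # nearly degenerate rates -> sloppy
J = jacobian(theta, t)
fim = J.T @ J                          # Fisher Information (unit noise)
evals, evecs = np.linalg.eigh(fim)     # eigenvalues in ascending order
print("FIM eigenvalues:", evals)
print("sloppiest direction:", evecs[:, 0])
```

For sloppy models the eigenvalues typically span many orders of magnitude, and the eigenvector of the smallest one picks out the parameter combination a reduction scheme should remove first.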
Understanding the nature of the manifold boundaries enables an approach to model reduction that we call the Manifold Boundary Approximation Method (MBAM). By approximating the original model manifold with the boundary along its most narrow width, we retain most of the predictive power of the original model while removing the least important parameter combination. The new approximate model is only marginally less sloppy than the previous one, but it can likewise be simplified in the same way. MBAM therefore consists of the following four-step iterative algorithm (a minimal numerical sketch follows the list):
1. Find the (locally) least important parameter combination from the eigenvalues of the FIM.
2. Follow a geodesic path oriented along this direction until the manifold boundary is discovered.
3. Identify the limiting approximation corresponding to this boundary and explicitly evaluate it in the model.
4. Calibrate the new model by fitting its behavior to the behavior of the original model.
The MBAM Algorithm.
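A sketch of steps 1 and 2 (our own toy model, and a greedy stand-in for the true geodesic, which would integrate a second-order ODE; all values are illustrative):

```python
import numpy as np

def predictions(theta, t):
    # Same toy two-exponential model as in the previous sketch.
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t)

def fim(theta, t, h=1e-6):
    base = predictions(theta, t)
    cols = [(predictions(theta + h * e, t) - base) / h
            for e in np.eye(len(theta))]
    J = np.column_stack(cols)
    return J.T @ J

def toward_boundary(theta, t, step=0.02, n_steps=5000):
    """Greedy stand-in for the geodesic of step 2: repeatedly step along
    the locally sloppiest FIM eigendirection until a parameter runs off
    toward a limit (0 or infinity here), signalling a manifold boundary."""
    v_prev = None
    for _ in range(n_steps):
        _, evecs = np.linalg.eigh(fim(theta, t))
        v = evecs[:, 0]
        if v_prev is not None and v @ v_prev < 0:
            v = -v                     # keep a consistent orientation
        theta = theta + step * v
        v_prev = v
        if theta.min() < 1e-3 or theta.max() > 1e3:
            break                      # boundary limit detected
    return theta

t = np.linspace(0.0, 5.0, 40)
print(toward_boundary(np.array([1.0, 1.2]), t))
```

In step 3 of the real method, the limit reached here would be evaluated analytically in the model, producing a new model with one fewer parameter.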
Iterating these four steps removes parameter combinations one at a time until the model is sufficiently simple. What constitutes "sufficiently simple" will vary based on context. The final model will then correspond to a hyper-corner of the original model manifold. The manifold boundary approximation method naturally overcomes the three challenges to model reduction listed above. By repeatedly evaluating limiting approximations in the model, the irrelevant parameters are removed and the remaining parameters naturally group into the physically important combinations. These combinations are often nonlinear combinations of the original bare parameters. MBAM naturally connects microscopic physics with emergent, macroscopic physics in a semi-automatic way. This is perhaps best illustrated by an example.

Systems Biology Example
One of the first models to be identified as sloppy was a systems biology model describing signaling in a developing rat cell. This particular model had 48 parameters and 29 differential equations. However, because the model was made of many inhomogeneous components interacting in highly nonlinear ways, it was not obvious which parameter combinations were important, nor could any of the traditional model reduction methods be used to simplify it. MBAM, however, was able to identify a series of limiting approximations that could be applied to the model. The result was a new effective model that had only 12 parameters and 6 differential equations.
Original and Simplified PC12 Network Diagrams.
The figure shows the network diagram for the original model and the new, effective diagram for the simplified model. Notice how most of the superfluous components have been removed. What remains is a dramatic illustration of the negative feedback loop (from Erk to P90/RSK to Ras) that is the mechanism responsible for the system behavior. Qualitatively, this negative feedback loop is how the biologists had understood the behavior of the system. MBAM naturally recovered the same result and, in the process, produced a quantitative, mathematical model describing this interpretation. What remains after the simplifying process are precisely the 12 parameters that are necessary for the model to be predictive. This can be seen by noting that there are no small eigenvalues in the FIM for the new model. All the sloppy parameter combinations have been removed and all of the stiff parameter combinations have been kept! The nice thing about the approximate model is that it is not just a "black box". Instead, it explains the system's collective behavior in terms of the components of the original model. For example, consider the effective interaction between C3G and Erk in the new model. The strength of this interaction is controlled by a parameter (labeled \(\phi\) in the diagram). This parameter has a definition in terms of the original model parameters that emerges through the sequence of limiting approximations: $$ \phi = \frac{ (\mathrm{k_{Rap1ToBRaf}}) (\mathrm{K_{mdBRaf}}) (\mathrm{k_{pBRaf}}) (\mathrm{K_{mdMek}}) }{ (\mathrm{k_{dBRaf}}) (\mathrm{K_{mRap1ToBRaf}}) (\mathrm{k_{dMek}}) }.$$ This parameter is the "renormalized" parameter of the effective model. Notice that it is a nonlinear combination of 7 other parameters. Each of the seven parameters can influence the behavior of the model, but only through its effect on \(\phi\). Since this model has 12 parameters, there are 11 other expressions (of varying complexity) similar to the one for \(\phi\) above. Each one of these parameters controls an important feature of the model's behavior. In fact, if the simplified model were fit to data, all of the parameters could be inferred with small error bars (notice the eigenvalues of the simplified model in the figure). The new model is not sloppy!

Unifying Model Reduction Methods
The manifold boundary approximation method makes very few assumptions about the model. We have seen that it performs spectacularly on our systems biology model. To what other models, and in what contexts, can it be applied? At its heart, MBAM is a tool for identifying limiting approximations. There are a wide variety of model reduction techniques that have grown up searching for these types of approximations in different contexts. For example, in dynamical systems, singular perturbation theory explores the behavior of differential equations in the limit that one of the time scales is much faster than the others. If there is a parameter in the model that explicitly controls the time scale, then the limit in which that parameter becomes zero will correspond to a boundary of the model manifold. Therefore, singular perturbation theory can be recovered as a special case of the MBAM procedure. Many other approximation techniques can similarly be subsumed as special cases of the MBAM procedure. Continuum limits and thermodynamic limits are two other obvious examples of limiting approximations. However, other approximation methods that are not obviously limiting approximations can likewise be reproduced by MBAM. For example, the renormalization group, a powerful tool in statistical physics, as well as balanced truncation, the cornerstone of dynamical-systems order reduction in control theory, can both be recovered as special cases of MBAM. MBAM provides a new way to think about model reduction that is very general. By unifying and generalizing many time-tested model reduction techniques, it is a step toward a unified theory of model reduction.
Quantum Artificial Life in an IBM Quantum Computer
U. Alvarez-Rodriguez^{1,2,3}, M. Sanz^{3} (ORCID: 0000-0003-1615-9035), L. Lamata^{3} (ORCID: 0000-0002-9504-8685) & E. Solano^{3,4,5}
Scientific Reports volume 8, Article number: 14793 (2018)
Subjects: Computational biophysics

We present the first experimental realization of a quantum artificial life algorithm in a quantum computer. The quantum biomimetic protocol encodes tailored quantum behaviors belonging to living systems, namely, self-replication, mutation, interaction between individuals, and death, into the cloud quantum computer IBM ibmqx4. In this experiment, entanglement spreads throughout generations of individuals, where genuine quantum information features are inherited through genealogical networks. As a pioneering proof-of-principle, the experimental data fit the ideal model with accuracy. Thereafter, these and other models of quantum artificial life, whose quantum supremacy evolution no classical device may predict, can be further explored in novel generations of quantum computers. Quantum biomimetics, quantum machine learning, and quantum artificial intelligence will move forward hand in hand through more elaborate levels of quantum complexity.

As described by Deutsch, a quantum computer is a device that intends to fulfill the Deutsch-Church-Turing principle, namely, to efficiently simulate a finitely realizable physical system in the framework of quantum mechanics1. In this context, quantum supremacy would be reached when a quantum processor outperforms classical computers in realizing a given task. Along these lines, several proposals to achieve quantum supremacy for a variety of quantum algorithms and quantum simulations have been put forward2,3,4,5,6,7. The keyword "quantum" has overflowed the limits to which it was initially constrained and currently spreads incessantly through the interdisciplinary scientific literature. Indeed, it is a source of inspiration for extending already existing models with their quantum counterparts8,9,10,11,12,13,14,15,16,17. Besides the appealing intellectual exercise, this hybridization is often motivated by a plausible improvement in the conditions and an enhancement in the efficiency of the developed protocols.
From all possible ramifications, including quantum machine learning and quantum artificial intelligence, our research in quantum biomimetics is concerned with the design of a framework for quantum algorithms based on the imitation of biological processes, belonging to macroscopic classical complexity and brought down by design to the microscopic quantum realm18,19,20,21,22,23,24,25,26. There may not always be a neat analogy between the physical models underlying our protocols and those used to describe real biological systems, but our proposed effective dynamics aims only partially at emulating core aspects of the mimicked process. From a wide perspective in the history of arts and science, close imitation is a natural first layer and wish in the aesthetic process. In this sense, plain simulation is a valid and fruitful engineering playground, where analogies abound and serve as communicating vessels between unconnected fields. Our central goal in quantum simulations and quantum computing is to go beyond it, through a higher creativity challenge, in the search of a second layer of a major art. In the particular scenario of artificial life, simple models of organisms are able to undergo the most common stages of life in a controlled virtual environment27,28,29. When extending this to the quantum realm, particularities of quantum physics, such as its limitation to linear dynamics, the no-cloning theorem, or the exponentially growing dimensionality of Hilbert spaces, play a relevant role. The quantum artificial life protocol we have engineered and implemented goes beyond the straightforward quantization of existing classical models. In this sense, and in a similar spirit to other contributions30,31,32,33, it is noteworthy to mention that we leave open the question whether the origin of life is genuinely quantum mechanical. What we prove here is that microscopic quantum systems can efficiently encode quantum features and biological behaviors, usually associated with living systems and natural selection.

In this article, we report the first experimental implementation of a model for quantum artificial life20 in a quantum computer. To this end, we make use of the facilities provided by the IBM ibmqx4 quantum computing chip34. This work should be aligned with the rapid developments in classical and quantum machine learning and artificial intelligence: the development of algorithms and devices with the capacity to interpret and mimic human behaviors in order to solve useful problems and improve the interaction with human beings. Along these lines, we may foresee a future in which these idealized machines hybridize the knowledge in machine learning, artificial intelligence, and artificial life, with an internal structure and dynamics following the laws of quantum physics, as is already happening in the classical domain35.

We begin with a brief description of the model for quantum artificial life20, whose most important elements are the quantum living units, or individuals. Each of them is expressed in terms of two qubits that we call genotype and phenotype. The genotype contains the information describing the type of living unit, information that is transmitted from generation to generation. The state of the phenotype is determined by two factors: the genetic information and the interaction between the individual and its environment. This state, together with the information it encodes, is degraded during the lifetime of the individual.
The goal of the proposed model is to reproduce the characteristic processes of Darwinian evolution, adapted to the language of quantum algorithms and quantum computing. The self-replication mechanism is based on two partial quantum cloning events, an operation that entangles either the genotype or the phenotype with a blank state and copies a certain expectation value of the original qubit into both of the outcome qubits. In this set of experiments, self-replication consists in duplicating the expectation value of σz of the genotype in a blank state that will become the genotype of the individual in the next generation19. The process is completed by copying again σz of the new genotype in another blank state that will become the phenotype of the new individual. The next subprotocol in the algorithm is the interaction between the individuals and the environment, which emulates the aging of living units towards an asymptotic state that represents their death. This evolution is encoded in a dissipative dynamics that couples a bath to each of the phenotype qubits, with σ = |0⟩⟨1| as Lindblad operator. The effective lifetime, i.e., the time the phenotype needs to arrive at the dark state of the Lindbladian up to a given error, depends implicitly on the genotype. The protocol also accounts for mutations, performed via random single-qubit rotations in the genotype qubits or via errors in the self-replication process20. The final ingredient is the interaction between individuals, which conditionally exchange their phenotypes depending on the genotypes20. This behavior is achieved via a four-qubit unitary operation, where genotypes and phenotypes play the role of control and target qubits, respectively. The conjunction of these components leads to a minimal but consistent Darwinian quantum scenario. The protocol may be enriched by including spatial information, either quantum or classical, or by increasing the model complexity through a larger set of observables.

The first step for this implementation is to express each of the building blocks of the previous paragraph in terms of the quantum gates available in the superconducting circuit architecture of the IBM cloud quantum computer34. Since we have selected σz as the observable to clone, every partial quantum cloning event requires the realization of a UCNOT gate, which can be directly performed in the experiment. Regarding the interaction with the environment, we have adapted our protocol, because the experimental device does not allow a conditional projection of the quantum state onto |0⟩ in the phenotypes. The alternative we propose is to implement the transition between the basis states as a sequence of small rotations in σy for the phenotype qubits, \(e^{-i\sigma_y\theta}\), with θ tuned according to the duration of each simulated time step. We have employed the gates u2(ϕ, λ) and u3(θ, ϕ, λ), available in the experimental platform, to implement the single-qubit gates:
$$u_2(\varphi,\lambda)=\frac{1}{\sqrt{2}}\begin{pmatrix}1 & -e^{i\lambda}\\ e^{i\varphi} & e^{i(\lambda+\varphi)}\end{pmatrix},\qquad u_3(\theta,\varphi,\lambda)=\begin{pmatrix}\cos\frac{\theta}{2} & -e^{i\lambda}\sin\frac{\theta}{2}\\ e^{i\varphi}\sin\frac{\theta}{2} & e^{i(\varphi+\lambda)}\cos\frac{\theta}{2}\end{pmatrix}.$$
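As a quick numerical check of the partial cloning step (a sketch in our own conventions, with an arbitrary angle; it makes no claim about the device's native gate set beyond the matrices above), a CNOT acting on the genotype and a blank qubit copies ⟨σz⟩ into both outgoing qubits:

```python
import numpy as np

def u3(theta, phi=0.0, lam=0.0):
    # Single-qubit gate with the matrix given above.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -np.exp(1j * lam) * s],
                     [np.exp(1j * phi) * s, np.exp(1j * (phi + lam)) * c]])

CNOT = np.array([[1, 0, 0, 0],        # basis order |00>, |01>, |10>, |11>
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
SZ = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def expval(psi, op):
    return np.real(np.vdot(psi, op @ psi))

genotype = u3(np.pi / 4) @ np.array([1, 0], dtype=complex)  # precursor
blank = np.array([1, 0], dtype=complex)
pair = CNOT @ np.kron(genotype, blank)   # partial cloning event

print(expval(genotype, SZ))              # <sz> of the original qubit
print(expval(pair, np.kron(SZ, I2)))     # same value on the genotype
print(expval(pair, np.kron(I2, SZ)))     # ... and on the cloned qubit
```

The three printed values coincide, which is exactly the sense in which σz is "cloned" while the full quantum state, by the no-cloning theorem, is not.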
The gate u3(θ, 0, 0) acting on genotype qubits can be used for the mutation events. Ideally, in order to emulate their randomness both in the phase θ and in the presence or absence of the event, we could design the experimental runs following a classical program. To make the procedure tractable, we could discretize the range of θ into n values and divide the total experimental runs into n + 1 groups to account for each of the different possibilities. The weight, or number of runs for each group, would depend on our selection for the mutation rate as well as on the random parameters obtained with the external program. However, constrained by the flexibility of the experimental device, we propose a less realistic but pragmatic procedure: assume that the mutations will only be of a specific θ, and therefore eliminate one source of randomness and diversity in the protocol. The single-qubit gate accounting for the mutations will be σx. Regarding the randomness in the presence or absence of mutation events, we will have to adapt our algorithm to perform the mutations in groups of 1024 experimental runs, and achieve the mutation rate accordingly. The last subprotocol, the inter-individual interaction, requires the implementation of the interaction gate UI, whose effect is to exchange two pairs of quantum levels while leaving the rest unaltered: UI|xxyy⟩ = |xyyx⟩ and UI|xyyx⟩ = |xxyy⟩, for {x, y} ∈ {0, 1}. The challenge is to decompose UI in terms of the gates offered by the experimental setup. Our solution is given by \(U_{I}=S_{23}U_{12}(\mathbb{1}\otimes F)U_{12}S_{23}\), with \(F=U_{43}C_{34}C_{24}U_{23}C_{34}^{\dagger}U_{43}U_{23}\). Here, the first and second subindices denote the control and target qubit, respectively; U is the controlled-NOT gate, S is the SWAP gate, and C is the controlled square root of NOT gate. These can be rewritten in terms of the controlled-NOT gate as \(S_{ij}=U_{ij}U_{ji}U_{ij}\) and \(C_{12}=(T\otimes P u_{3}(-\pi/4,0,0))U_{12}(\mathbb{1}\otimes u_{3}(\pi/4,0,0))U_{12}(\mathbb{1}\otimes P^{\dagger})\), with \(P=\sqrt{\sigma_{z}}\) and \(T=\sqrt{P}\). An additional relation to point out is that the control-target roles in the controlled-NOT gate can be exchanged by introducing Hadamard gates, U21 = (H ⊗ H)U12(H ⊗ H). This is a useful formula for designing the quantum circuit in an experimental platform that only allows a single direction for the implementation of the UCNOT.
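Two of the rewriting rules just quoted can be verified directly at the matrix level (a small sketch; basis ordering |q1 q2⟩ with the first qubit most significant):

```python
import numpy as np

CNOT12 = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                   [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
HH = np.kron(H, H)

CNOT21 = HH @ CNOT12 @ HH         # U21 = (H x H) U12 (H x H)
SWAP = CNOT12 @ CNOT21 @ CNOT12   # S12 = U12 U21 U12

SWAP_expected = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                          [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)
CNOT21_expected = np.array([[1, 0, 0, 0], [0, 0, 0, 1],
                            [0, 0, 1, 0], [0, 1, 0, 0]], dtype=complex)

print(np.allclose(SWAP, SWAP_expected))       # True
print(np.allclose(CNOT21, CNOT21_expected))   # True
```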
Interaction between two individuals
We start with a quantum circuit designed to reproduce the dynamics of two interacting individuals. Two precursor genotypes are initialized in $|\psi\rangle_{g_1}=\cos\frac{\pi}{8}|0\rangle+\sin\frac{\pi}{8}|1\rangle$ and $|\psi\rangle_{g_2}=\cos\frac{3\pi}{8}|0\rangle+\sin\frac{3\pi}{8}|1\rangle$ with u3. Afterwards, both individuals are completed by copying the genotype qubits into blank states via the UCNOT gate, $|\psi\rangle_{1}=\cos\frac{\pi}{8}|00\rangle+\sin\frac{\pi}{8}|11\rangle$ and $|\psi\rangle_{2}=\cos\frac{3\pi}{8}|00\rangle+\sin\frac{3\pi}{8}|11\rangle$. In terms of $\theta_1=\pi/8$ and $\theta_2=3\pi/8$, the complete state, $|\psi_1\rangle\otimes|\psi_2\rangle$, reads
$$|\psi\rangle=\cos\theta_1\cos\theta_2|0000\rangle+\cos\theta_1\sin\theta_2|0011\rangle+\sin\theta_1\cos\theta_2|1100\rangle+\sin\theta_1\sin\theta_2|1111\rangle.$$
We now apply the interaction gate UI to conclude this building block,
$$U_I|\psi\rangle=\cos\theta_1\cos\theta_2|0000\rangle+\cos\theta_1\sin\theta_2|0110\rangle+\sin\theta_1\cos\theta_2|1001\rangle+\sin\theta_1\sin\theta_2|1111\rangle.$$
Notice that the interaction fully exchanges the phenotypes, $\langle\sigma_z\rangle_2$ and $\langle\sigma_z\rangle_4$, which are now equal to the opposite genotype: $\langle\sigma_z\rangle_1=\cos^2\theta_1-\sin^2\theta_1=\langle\sigma_z\rangle_4$ and $\langle\sigma_z\rangle_3=\cos^2\theta_2-\sin^2\theta_2=\langle\sigma_z\rangle_2$. The experiment is planned to reduce the total errors induced by the use of two-qubit gates. Consequently, we have reordered the initial Hilbert space $|g_1 p_1 g_2 p_2\rangle$, where $g_i$ is genotype and $p_i$ is phenotype, as $|p_2 g_2 p_1 g_1\rangle$, and assigned each of these qubits to the experimental ones $|Q_0 Q_1 Q_2 Q_3\rangle$. See Fig. 1 for the remaining quantum circuit diagram.
Quantum circuit diagram for the protocol of two interacting individuals. Squares with a continuous line denote the phase values of the u3 gate, while squares with a dashed line denote the phase values of the u2 gate, both in units of π. When possible we reduce the expression to the value of θ and avoid writing the additional phases.
The results, in Table 1, agree with the ideal case with a 71.58% fidelity according to $F(p,q)=\sum_{j}\sqrt{p_j q_j}$, which compares the probability distribution obtained when measuring in the computational basis with the theoretical prediction. Therefore, this result is valid, but not equivalent to the one expected when the complete wave function is considered, which would require full state tomography and the fidelity $F(\rho_1,\rho_2)=\mathrm{Tr}\sqrt{\sqrt{\rho_1}\,\rho_2\sqrt{\rho_1}}$36. The expectation values extracted from the data show a reasonable overlap between p1 and g2, as expected, and a considerable distance between g1 and p2.
Table 1 Interaction between two individuals.
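The classical fidelity used above is simple bookkeeping over shot counts; a minimal sketch (the counts below are made-up placeholders, not the published data):

```python
import numpy as np

def classical_fidelity(counts_a, counts_b):
    """F(p, q) = sum_j sqrt(p_j q_j) over the computational basis."""
    keys = sorted(set(counts_a) | set(counts_b))
    p = np.array([counts_a.get(k, 0) for k in keys], dtype=float)
    q = np.array([counts_b.get(k, 0) for k in keys], dtype=float)
    return float(np.sum(np.sqrt((p / p.sum()) * (q / q.sum()))))

# Made-up shot counts, only to illustrate the bookkeeping.
ideal = {"0000": 520, "0110": 150, "1001": 85, "1111": 269}
measured = {"0000": 430, "0110": 180, "1001": 140, "1111": 274}
print(classical_fidelity(ideal, measured))
```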
Interaction with the environment
In this round of experiments we test the combination of partial quantum cloning events and dissipation. A precursor genotype is initialized in $|\psi\rangle_{g_1}=\cos\frac{\pi}{3}|0\rangle+\sin\frac{\pi}{3}|1\rangle$, and the individual is completed with a first partial quantum cloning event via UCNOT and a blank state, $|\psi\rangle_{1}=\cos\frac{\pi}{3}|00\rangle+\sin\frac{\pi}{3}|11\rangle$. Then, a single-qubit rotation, u3(π/8, 0, 0), is applied to the phenotype; this substitutes the dissipation in a discrete manner, losing its exponential character. The course of time is simulated by this gate, implementing one of them for every simulated time step. Subsequently, a second individual is created in a complete self-replication event with two partial quantum cloning operations. To conclude, u3(π/8, 0, 0) is implemented again on both genotypes, associated with the next time step. We assign the Hilbert space of the simulating device as $|Q_0 Q_1 Q_2 Q_4\rangle \to |p_2 g_2 g_1 p_1\rangle$ to maximize the efficiency of the protocol. See Fig. 2 for the quantum circuit diagram. The results, shown in Table 2, show similar probability distributions for the ideal and the real data, with a fidelity of 91.18%, as before computed only in the computational basis.
The initialization of a genotype before three partial quantum cloning events. The first of these will produce an initial individual and the remaining two will replicate it into a second one. The protocol continues with single-qubit gates that emulate the dissipation. The squares denote u3(θ, ϕ, λ) gates, where the number indicates the value of θ.
Table 2 Self-replication and interaction with the environment in the σz basis.
For the self-replication instance, there is an additional property of the model that only arises when measuring some purely quantum correlations of the system. The partial quantum cloning operation entangles the qubits involved in it, transmitting ⟨σx⟩ of the original state into ⟨σx ⊗ σx⟩. Note that this data can be extracted from the experiment by measuring in the σx basis, which is done by introducing a Hadamard gate at every entry before projecting. Therefore, one has to compute ⟨σz ⊗ σz ⊗ σz ⊗ σz⟩ in the new basis to retrieve ⟨σx ⊗ σx ⊗ σx ⊗ σx⟩. This technique is based on the equality Tr[σxρ] = Tr[σzHρH], since σx = HσzH. Even if the calculation of the fidelity yields a satisfactory 93.45%, the value of ⟨σx ⊗ σx ⊗ σx ⊗ σx⟩ still shows a sizable error with respect to the ideal one, as we show in Table 3.
Table 3 Self-replication and interaction with the environment in the σx basis.
Even if this implementation does not coincide with the time evolution presented in the original model, it is able to emulate its results when focusing only on the σz or σx basis, but not when comparing both measurements in general. Accordingly, if the lifetimes of the living qubits undergo a dynamics similar to the one proposed in the model, the effect of the environment on the correlations cannot be correctly reproduced, and vice versa, unless the gates are specifically selected for a given precursor genotype. The theoretical value of ⟨σx ⊗ σx ⊗ σx ⊗ σx⟩ for a system that only undergoes self-replication events and dissipation decreases as $\langle\sigma_x\rangle e^{-\gamma(t_1+t_2)/2}$, with ⟨σx⟩ calculated over the precursor genotype and $t_i$ being the time between self-replication events. For the variant with single-qubit gates analyzed here, the theoretical value goes as $\langle\sigma_x\rangle\cos\theta_1\cos\theta_2$, where $\theta_i$ indicates the phase of each $u_3(\theta_i,0,0)$. This part of the dissipative dynamics should match the evolution of ⟨σz⟩, so the following set of equations should be fulfilled:
$$\begin{array}{c}e^{-\gamma(t_1+t_2)/2}\langle\sigma_x\rangle=\cos\theta_1\cos\theta_2\langle\sigma_x\rangle\\ 1-2e^{-\gamma(t_1+t_2)}(1-a)=(2a-1)\cos\theta_1\\ 1-2e^{-\gamma t_2}(1-a)=(2a-1)\cos\theta_2\end{array}\qquad(2)$$
where a is the $|0\rangle\langle 0|$ component of the precursor genotype. Given that there is no solution for θ1 and θ2 which is independent of a, the method of single-qubit gates for mimicking the dissipation is not valid as a general protocol, because it has to be tuned for each case. Nevertheless, the important quantum feature of the model, namely the existence of quantum correlations and their role as witnesses of the interactions between quantum living units, can be correctly represented with the approach followed here, even if their time dependence differs from the one presented in the original model. In more practical terms, the implementations summarized in Tables 2 and 3 are realistic, but not mutually compatible, because both can be associated with dissipative dynamics but with different representative parameters, as we have seen in Eq. (2). Furthermore, we believe that the ideal realization of the experiment will soon be feasible, at least for a small number of individuals. Our proposal for introducing the dissipation is to exploit the natural decoherence present in quantum platforms and use error correction protocols only in the genotype qubits. This phenotype-genotype asymmetry in the decay probability is the key element in the emulation of the interaction between individuals and environment.
Mutations
The implementation of mutations requires combining the outcomes of different quantum circuit diagram designs and, therefore, of different experimental runs. In this case, we consider that a mutation event, which can affect both individuals, is simulated with a σx. The complete result is achieved when gathering data from 4 different groups of experiments, which correspond to the cases of mutation on the first genotype, mutation on the second genotype, mutation on both genotypes, and no mutation. We have performed 1024 experimental runs for each of the three cases with mutations and 8192 runs for the no-mutation case. These results have been combined with the ones shown in Table 2, which coincide with the no-mutation case, with the goal of reducing the mutation rate for each individual, which takes a final value of 2/19. Table 4 contains the aggregated data of the mutation experiments. In IVb the mutation occurs before the self-replicating event, therefore affecting the second individual; in IVc the mutation can only occur after the second individual has been created; and IVd contains both mutations. See the illustration of this process in Fig. 3. See Table 4 for the measured data, with a fidelity of 94.86% with respect to the ideal case in the σz basis.
Table 4 Self-replication, interaction with the environment and mutations.
Visualization of the ideal processes in experiments I and IV. We depict the individuals as combinations of two diamonds that represent the genotype and phenotype qubits. The color in the genotype qubit, the upper diamond of each pair, depends on the value of σz as indicated in the color bar. The color in the phenotype qubit is the same as in the genotype one, as the color is meant to show the genetic information. Moreover, the opacity of this color is modified according to the expectation value of σz, being limited by the value of 1 that corresponds to the blank qubits. In both cases the right arrow separates two consecutive time steps. Following these clarifications, we can see the exchange of phenotypes in I and the self-replication followed by different mutation possibilities in IV.
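The 2/19 figure follows from simple bookkeeping over the run groups (our own arithmetic, matching the counts quoted above): each individual is mutated in its own 1024-run group and in the 1024 runs where both genotypes mutate, out of three 1024-run mutation groups plus two 8192-run no-mutation batches,
$$\frac{1024+1024}{3\times 1024 + 2\times 8192}=\frac{2048}{19456}=\frac{2}{19}.$$
The same bookkeeping with three no-mutation batches yields the 2/27 rate used in the complete model below.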
Realization of the complete model of quantum artificial life
The last round of experiments is devoted to reproducing the aggregate of properties of the quantum artificial life algorithm. In order to keep the fidelity at values that allow us to claim that the experiment is indeed behaving according to the protocol, we restrict our analysis to the case of two interacting individuals, which undergo mutations and dissipation. The quantum circuit diagram, shown in Fig. 4, is an upgraded version of the one shown in Fig. 1 that includes u3(π/8, 0, 0) for simulating the dissipation in the phenotypes. For the mutations, we follow the same strategy as in the previous subsection, combining the data generated with different quantum circuit diagrams, each of them emulating a specific case of the presence or absence of mutation instances. In particular, 3 rounds of 8192 runs emulating the no-mutation case and 1024 runs for each of the mutation cases determine a mutation rate of 2/27. The post-processing of the data, in Table 5, matches the ideal probability distribution in the computational basis with a fidelity of 93.94%.
Quantum circuit diagram for the complete quantum artificial life protocol. Squares with a continuous line denote the phase values of the u3 gate, while squares with a dashed line denote the phase values of the u2 gate, both in units of π. When possible we reduce the expression to the value of θ and avoid writing the additional phases.
Table 5 Complete model.
Quantum vs Classical
A natural method to evaluate the quantum artificial life framework is to clearly describe the similarities and differences between the quantum model and a classical analogue approach. On the one hand, all the indicators based on measurements in the σz basis can be reproduced by classical probability distributions. This means that one can create a classical model of interacting individuals with identical ingredients, in which the single-qubit results for each of the living units would be equal to the ones achieved in the quantum version. On the other hand, nonzero quantum correlations, in particular ⟨σx ⊗ σx⟩ and its generalization to more pairs of qubits, can only be achieved in the quantum case. These introduce a new feature when compared with the classical version of the model, as they can be interpreted as time correlations between the quantum living units. Let this be illustrated with the following example. Suppose we have two pairs of quantum living units, all of them with the same value of ⟨σz⟩ in the genotype qubit but with different values for the phenotype, in a simplified system without mutations and inter-individual interactions. When the individuals are not causally connected, the value of ⟨σx ⊗ σx ⊗ σx ⊗ σx⟩ in the four qubits of both individuals would yield a value of α². However, if these individuals are related by a self-replication operation, the measurement would yield a value of α, where α accounts for ⟨σx⟩ in the precursor genotype. In other words, the nonzero quantum correlation between the subspaces of the first and second individuals allows one to discriminate two different operations from the timeline perspective.
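This distinction can be checked numerically (a sketch; the angle is arbitrary, and the text's example with unequal phenotypes is simplified here to two identical individuals):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)

def kron_all(ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

c, s = np.cos(np.pi / 8), np.sin(np.pi / 8)
alpha = 2 * c * s                     # <sx> of the precursor genotype

# One individual after a partial cloning event: c|00> + s|11>.
ind = np.zeros(4, dtype=complex)
ind[0], ind[3] = c, s

# (a) two causally unrelated individuals: a product state.
unrelated = np.kron(ind, ind)
# (b) parent plus self-replicated child: c|0000> + s|1111>.
lineage = np.zeros(16, dtype=complex)
lineage[0], lineage[15] = c, s

X4 = kron_all([SX] * 4)
print(np.real(np.vdot(unrelated, X4 @ unrelated)), alpha ** 2)  # match
print(np.real(np.vdot(lineage, X4 @ lineage)), alpha)           # match
```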
One can show that a classical counterpart of this model would not be able to store the information of the time correlations. The correlations we are interested in originate in the initial nonzero value of ⟨σx⟩ of the precursor genotype, and are afterwards propagated in each partial quantum cloning event because of the property mentioned above. In order to validate our claim, let us assume that the precursor genotype is given by an incoherent mixture of states of the form ρ = a|0⟩⟨0| + (1 − a)|1⟩⟨1|. Already from this point, one can see that ⟨σx⟩ = 0, and therefore no information is going to be propagated to the global correlations when more than one qubit is present. In other words, the purely quantum information is produced by a quantum superposition between the basis elements and afterwards propagated through the entanglement of quantum living units. The interpretation of this phenomenon is that the quantum model allows one to keep track of the relation between the individuals without the need of introducing additional variables beyond the ones that describe the genotype and phenotype.
Experimental Errors
Regarding errors in the experimental protocol, even if the fidelities achieved are satisfactory, they do not correspond to the fidelities of the complete quantum state. In this sense, the prediction for the number of events to measure is done by simply multiplying the probability distribution by the number of runs. These predictions do not exactly match the experimental data (see Table 6 for all experimental outcomes). Indeed, the distances have to be properly weighted, since the probability distribution is the key quantity to be extracted (see Fig. 5). Moreover, the overlap between expectation values of observables in the measurement basis is lower than the fidelity of the probability distribution for all cases analyzed here.
Table 6 Experimental data.
Experimental and ideal probability distributions for all the cases analyzed. In each pair of columns, the blue one on the left denotes the experimental value, while the yellow one on the right denotes the ideal estimation. The labels in the subplots correspond to the tables for the different quantum artificial life instances.
In parallel, the assignment between the simulated and the simulating Hilbert spaces is designed to maximize the fidelity according to the calibration parameters provided by IBM. Nevertheless, the recalibration of the circuit changes the gate and readout errors, so we reevaluate our circuit according to the new parameters and adapt it when the fidelity outcome can be improved with a different labeling of qubits. Consequently, the performance of the different experiments is not directly comparable, since they have been implemented under unequal conditions. Despite the different factors degrading the implementation, the performed experiments reproduce the characteristic properties of the sought quantum natural selection scenario. We have observed how the partial quantum cloning events allow us to inherit the information of ⟨σz⟩ from qubit to qubit, and we use this property to encode the self-replication process. We have also seen how nonzero quantum correlations ensure that both individuals have been part of the same event in their timelines, in this case self-replication. Another relevant characteristic of the analysis is the inclusion of mutations as a source of randomness that, counterintuitively, significantly improved the fidelity of the quantum algorithm outcome. Our explanation is that mutations tend to homogenize the probability distribution, which is the same effect as the one produced by the errors naturally present in the experimental platform. Surprisingly, in the classical realm, mutations also help species adapt to changing environments.
Scope of Quantum Artificial Life
Regarding the emergence of complexity, the route towards the scalability of our quantum algorithm is intrinsically related to the inclusion of more degrees of freedom in the description of quantum living units.
These may be introduced by simply increasing the number of qubits and making them part of an enlarged genotype and phenotype. Part of the dynamics would be adapted by repeating the partial cloning processes and extending the dissipation to the new phenotype qubits. Another part of the dynamics would deal with the properties introduced by the new degrees of freedom. In the same way as the genotype in the current model rules the individual-environment and inter-individual interactions, additional observables in the genotype would enable the exploration of more characteristics: different self-replication rates, independent lifetime and interaction roles, or the capacity to move along the associated Hilbert space, all of them encoded in the genotype. An alternative is to encode the information in quantum states of higher dimensions. The general result of partial quantum cloning for qudits of any dimension makes this family of hypothetical models feasible, conditional on the availability of high-dimensional entangling operations. A different question is the scalability of the current model without including any modification. Notice that, in our protocol, the information about the lifetime and the predator-prey character is classical and encoded in the mean value ⟨σz⟩ of the phenotype and the genotype of the individual, respectively. Therefore, this partial dynamics can be predicted with a simpler classical analogue. However, the full quantum description of our protocol allows one to retrieve the connections between the quantum living units, linked through entanglement, which is a useful complement that is absent in the classical analogue. In other words, the extra free parameters available in the superposition and entanglement of quantum states are used for describing questions regarding the collective dynamics of individuals, and this is precisely the new source of complex behavior our algorithm is able to create. In this sense, the complexity of our quantum algorithm may only be reached by a larger quantum computer, currently being built in academic institutions and companies. This might yield unexpectedly interesting outcomes but, at the same time, will increase the sensitivity to decoherence during the self-replication process.
This experimental realization of the proposed quantum algorithm represents the consolidation of the theoretical framework of quantum artificial life. The improvement in scalable quantum computers will soon allow for more accurate quantum emulations with growing complexity towards quantum supremacy, even considering spatial variables for the individuals and a mechanism for tracing out dead living units. These future developments should lead towards an autonomous character of the set of individuals, i.e., the evolution will be an intrinsic property of the system, and the desired behavior will emerge without following the instructions of a previously designed quantum algorithm. In this context, the system would be transformed into an intelligent source of quantum complexity whose evolutionary plot for a large number of individuals may not be predicted classically and, consequently, has the capacity to produce unexpected results when scaled up. An interesting question to address is to establish the relation between the parameters defining the fundamental processes of the model and the emergent multiqubit quantum state.
Along these lines, and following the frame of artificial-life-oriented genetic algorithms37, we speculate about the idea of channeling this complexity to encode optimization problems by tuning the self-replication, mutation, and dissipation rates that define the evolution. Furthermore, recent advances in quantum machine learning constitute promising material to work with in the study of algorithms combining the properties of both fields, pursuing the design of intelligent and replicating quantum agents. Therefore, the creation of these quantum living units and their possible applications are expected to have deep implications in the community of quantum simulation and quantum computing in a variety of quantum platforms. All in all, the experiments presented here entail the validation of quantum artificial life in the lab and, in particular, in cloud quantum computers such as that of IBM. Still another interesting step would be the development of autonomous quantum devices following the theoretical and experimental results in quantum cellular automata38,39,40,41,42. Our quantum individuals are driven by an adaptation effort along the lines of a quantum Darwinian evolution, which effectively transfers the quantum information through generations of larger multiqubit entangled states. We believe that the presented results and vision, both in theory and experiments, should hoist this innovative research line as one of the leading banners in the future of quantum technologies.

References
1. Deutsch, D. Quantum theory, the Church-Turing principle and the universal quantum computer. Proc. Roy. Soc. London A 400, 97 (1985).
2. Feynman, R. P. Simulating physics with computers. Int. J. Theor. Phys. 21, 467 (1982).
3. Deutsch, D. & Josza, R. Rapid solutions of problems by quantum computation. Proc. Roy. Soc. London A 439, 553 (1992).
4. Grover, L. K. A fast quantum mechanical algorithm for database search. Proceedings, 28th Annual ACM Symposium on the Theory of Computing (STOC) 212 (1996).
5. Lloyd, S. Universal quantum simulators. Science 273, 1073 (1996).
6. Shor, P. W. Polynomial time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Sci. Statist. Comput. 26, 1484 (1997).
7. Aaronson, S. & Arkhipov, A. The Computational Complexity of Linear Optics. Proceedings, 43rd Annual ACM Symposium on the Theory of Computing (STOC) 333 (2011).
8. Wiesner, S. Conjugate coding. SIGACT News 15, 78 (1983).
9. Bennett, C. H. & Brassard, G. Quantum cryptography: Public key distribution and coin tossing. In Proceedings of IEEE International Conference on Computers, Systems and Signal Processing, volume 175, page 8 (New York, 1984).
10. Ekert, A. K. Quantum cryptography based on Bell's theorem. Phys. Rev. Lett. 67, 661 (1991).
11. Aharonov, Y., Davidovich, L. & Zagury, N. Quantum random walks. Phys. Rev. A 48, 1687 (1993).
12. Shenvi, N., Kempe, J. & Whaley, K. B. Quantum random-walk search algorithm. Phys. Rev. A 67, 052307 (2003).
13. Biamonte, J. et al. Quantum machine learning. Nature 549, 195 (2017).
14. Dunjko, V. & Briegel, H. J. Machine learning & artificial intelligence in the quantum domain. Rep. Prog. Phys. 81, 074001 (2018).
15. Schuld, M., Sinayskiy, I. & Petruccione, F. An introduction to quantum machine learning. Contemporary Phys. 56, 172 (2014).
16. Alvarez-Rodriguez, U., Lamata, L., Escandell-Montero, P., Martín-Guerrero, J. D. & Solano, E. Supervised Quantum Learning without Measurements. Scientific Reports 7, 13645 (2017).
17. Lamata, L. Basic protocols in quantum reinforcement learning with superconducting circuits. Scientific Reports 7, 1609 (2017).
18. Schrödinger, E. What is Life? (Cambridge University Press, Cambridge, UK, 1944).
19. Alvarez-Rodriguez, U., Sanz, M., Lamata, L. & Solano, E. Biomimetic Cloning of Quantum Observables. Scientific Reports 4, 4910 (2014).
20. Alvarez-Rodriguez, U., Sanz, M., Lamata, L. & Solano, E. Artificial Life in Quantum Technologies. Scientific Reports 6, 20956 (2016).
21. Alvarez-Rodriguez, U., Di Candia, R., Casanova, J., Sanz, M. & Solano, E. Algorithmic quantum simulation of memory effects. Phys. Rev. A 95, 020301 (2017).
22. Alvarez-Rodriguez, U. et al. Advanced-Retarded Differential Equations in Quantum Photonic Systems. Scientific Reports 7, 42933 (2017).
23. Las Heras, U., Alvarez-Rodriguez, U., Solano, E. & Sanz, M. Genetic Algorithms for Digital Quantum Simulations. Phys. Rev. Lett. 116, 230504 (2016).
24. Pfeiffer, P., Egusquiza, I. L., Di Ventra, M., Sanz, M. & Solano, E. Quantum Memristors. Scientific Reports 6, 29507 (2016).
25. Salmilehto, J., Deppe, F., Di Ventra, M., Sanz, M. & Solano, E. Quantum Memristors with Superconducting Circuits. Scientific Reports 7, 42044 (2017).
26. Sanz, M., Lamata, L. & Solano, E. Quantum memristors in quantum photonics. APL Photonics 3, 080801 (2018).
27. Langton, C. G. Artificial Life: An Overview. (MIT Press, Cambridge, USA, 1997).
28. Gardner, M. The fantastic combinations of John Conway's new solitaire game life. Sci. Am. 223, 120 (1970).
29. Ray, T. S. Ecology and Optimization of Digital Organisms. Santa Fe Institute. Available at: http://samoa.santafe.edu/media/workingpapers/92-08-042.pdf (Accessed: 10th October 2017) (1992).
30. Martin-Delgado, M. A. On Quantum Effects in a Theory of Biological Evolution. Sci. Rep. 2, 302 (2012).
31. Abbott, D., Davies, P. C. W. & Pati, A. K. Quantum Aspects of Life. (Imperial College Press, London, 2008).
32. Arrighi, P. & Grattage, J. A Quantum Game of Life. Paper presented at Journées Automates Cellulaires, Turku, Finland (15 December 2010).
33. Bleh, D., Calarco, T. & Montangero, S. Quantum Game of Life. EPL 97, 20012 (2012).
34. IBM Quantum Experience, https://www.research.ibm.com/ibm-q/ (last accessed 10 October 2017).
35. Steels, L. The Artificial Life Roots of Artificial Intelligence. Artif. Life 1, 75 (1994).
36. Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information. (Cambridge University Press, Cambridge, UK, 2000).
37. Mitchell, M. & Forrest, S. Genetic Algorithms and Artificial Life. Artif. Life 1, 267 (1994).
38. Lent, C. S., Tougaw, P. D., Porod, W. & Bernstein, G. H. Quantum cellular automata. Nanotechnology 4, 1 (1993).
39. Orlov, A. O., Amlani, I., Bernstein, G. H., Lent, C. S. & Snider, G. L. Realization of a Functional Cell for Quantum-Dot Cellular Automata. Science 277, 928 (1997).
40. Amlani, I. et al. Digital Logic Gate Using Quantum-Dot Cellular Automata. Science 284, 289 (1999).
41. Cowburn, R. P. & Welland, M. E. Room Temperature Magnetic Quantum Cellular Automata. Science 287, 1466 (2000).
42. Imre, A. et al. Majority Logic Gate for Magnetic Quantum-Dot Cellular Automata. Science 311, 205 (2006).

Acknowledgements
We thank Armando Pérez-Leija and Alexander Szameit for enthusiastic and enlightening discussions. We acknowledge the use of IBM Quantum Experience for this work. The views expressed are those of the authors and do not reflect the official policy or position of IBM or the IBM Quantum Experience team.
We acknowledge support from Spanish MINECO/FEDER FIS2015-69983-P, the UPV/EHU new PhD program, the Basque Government Programa Posdoctoral de Perfeccionamiento de Personal Investigador Doctor, Basque Government IT986-16, and Ramón y Cajal Grant RYC-2012-11391.
Author information
Basque Centre for Climate Change (BC3), 48940 Leioa, Spain: U. Alvarez-Rodriguez
School of Mathematical Sciences, Queen Mary University of London, London E1 4NS, United Kingdom: U. Alvarez-Rodriguez
Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain: U. Alvarez-Rodriguez, M. Sanz, L. Lamata & E. Solano
IKERBASQUE, Basque Foundation for Science, Maria Diaz de Haro 3, 48013 Bilbao, Spain: E. Solano
Department of Physics, Shanghai University, 200444 Shanghai, China: M. Sanz
Contributions
U.A.-R. performed the experiments on the IBM Quantum Experience. U.A.-R., M.S., L.L. and E.S. developed the model and analyzed the data. All authors contributed to writing the manuscript.
Correspondence to L. Lamata.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Alvarez-Rodriguez, U., Sanz, M., Lamata, L. et al. Quantum Artificial Life in an IBM Quantum Computer. Sci Rep 8, 14793 (2018). https://doi.org/10.1038/s41598-018-33125-3
A remark on blow up criterion of three-dimensional nematic liquid crystal flows
Yinxia Wang (School of Mathematics and Information Sciences, North China University of Water Resources and Electric Power, Zhengzhou 450011, China)
Received January 2016; Revised March 2016; Published June 2016
In this paper, we study the initial value problem for the three-dimensional nematic liquid crystal flows. A blow-up criterion for smooth solutions is established by the energy method, which refines previous results.
Keywords: blow-up criterion, smooth solutions, nematic liquid crystal flows.
Mathematics Subject Classification: Primary: 76A15; Secondary: 76B0.
Citation: Yinxia Wang. A remark on blow up criterion of three-dimensional nematic liquid crystal flows. Evolution Equations & Control Theory, 2016, 5 (2): 337-348. doi: 10.3934/eect.2016007
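For context, works in this line typically study the simplified Ericksen-Leslie system for incompressible nematic liquid crystal flows; in one common normalization (this display is our addition, and the exact system and coefficients in the paper may differ) it reads:
$$ \begin{aligned} & u_t + (u\cdot\nabla)u - \mu\Delta u + \nabla P = -\lambda\,\nabla\cdot\big(\nabla d \odot \nabla d\big), \\ & \nabla\cdot u = 0, \\ & d_t + (u\cdot\nabla)d = \gamma\big(\Delta d + |\nabla d|^{2} d\big), \qquad |d|=1, \end{aligned} $$
where u is the velocity field, P the pressure, and d the unit director field, and \(\nabla d \odot \nabla d\) denotes the matrix whose (i,j)-th entry is \(\partial_i d \cdot \partial_j d\). A blow-up criterion identifies a norm whose finiteness up to a time T allows a local smooth solution to be continued beyond T.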
HiSSI: high-order SNP-SNP interactions detection based on efficient significant pattern and differential evolution
Xia Cao1, Jie Liu1, Maozu Guo2,3 & Jun Wang (ORCID: 0000-0002-5890-0365)1
Detecting single nucleotide polymorphism (SNP) interactions is an important and challenging task in genome-wide association studies (GWAS). Various efforts have been devoted to detecting SNP interactions. However, the large volume of SNP datasets yields such a huge number of high-order SNP combinations that the power of detecting interactions is restricted. In this paper, to address this challenge, we propose a two-stage approach (called HiSSI) to detect high-order SNP-SNP interactions. In the screening stage, HiSSI employs a statistically significant pattern that takes the family wise error rate into account, to control false positives and to effectively screen the candidate set of two-locus combinations. In the searching stage, HiSSI applies two different search strategies (exhaustive search, and heuristic search based on differential evolution along with the χ2-test) on candidate pairwise SNP combinations to detect high-order SNP interactions. Extensive experiments on simulated datasets are conducted to evaluate HiSSI and recently proposed, related approaches on both two-locus and three-locus disease models. A real genome-wide dataset, a breast cancer dataset collected from the Wellcome Trust Case Control Consortium (WTCCC), is also used to test HiSSI. Simulated experiments on both two-locus and three-locus disease models show that HiSSI is more powerful than other related approaches. The experiment on the real breast cancer dataset, in which HiSSI detects two-locus and three-locus interactions significantly associated with breast cancer, again corroborates the effectiveness of HiSSI in high-order SNP-SNP interaction identification.
It has been widely recognized that single nucleotide polymorphisms (SNPs) are associated with a variety of complex human diseases. Genome-wide association studies (GWAS) have become a powerful tool for detecting SNPs and have identified hundreds of single SNPs associated with complex diseases [1]. However, these single SNPs can only explain a portion of the theoretically estimated heritability of complex diseases [2]. Complex diseases are influenced by various genetic variants and environmental factors. Therefore, SNP-SNP interactions, defined as the joint effects of genetic variations, should also be considered to better understand the etiology of complex diseases.
Existing approaches for searching two-locus SNP interactions can be grouped into three categories: exhaustive search, stochastic search and machine learning based search. Methods based on exhaustive search enumerate all possible two-locus SNP combinations and perform interaction tests for each combination. Ritchie et al. [3] proposed the multifactor dimensionality reduction (MDR) approach, which partitions genotype combinations into two classes and exhaustively searches for the best SNP combination by predicting the disease status. Stochastic methods use random sampling procedures to search the space of SNP combinations. Zhang et al. [4] proposed a Bayesian epistasis association mapping approach, which iteratively uses Markov chain Monte Carlo to search for two-locus interactions. Machine learning methods [5], such as random forest, neural networks and support vector machines, have also been applied to discover SNP interactions. Bureau et al.
[6] focused on measures of predictive importance and applied random forest to discover predictive polymorphisms or markers of a phenotype, which are likely to affect disease susceptibility.
There are some challenges in detecting high-order SNP interactions. The first is the computational challenge. Although the overall complexity is linear in the number of individuals, it grows exponentially with the order of the interaction. For example, for a dataset containing 1 million SNPs, the number of combinations to be tested is tremendous: \(5 \times 10^{11}\) pairwise interactions, \(1.7 \times 10^{17}\) 3-way interactions, and \(8.3 \times 10^{27}\) 5-way interactions [7]. Therefore, exhaustively searching for high-order epistatic interactions imposes a heavy computational burden. The second is the statistical challenge. To balance the false-positive rate and the false-negative rate, stringent significance thresholds must be applied.
Several approaches for detecting high-order SNP interactions have been developed to address the aforementioned challenges. Xie et al. [8] proposed EDCF (Epistasis Detector based on the Clustering of relatively Frequent items) to detect multi-locus epistatic interactions based on two-locus interaction models. EDCF is a two-stage method: it first groups all genotype combinations into three clusters and then evaluates the significance of interaction modules based on the χ2-test. Guo et al. [9] proposed a two-stage method called DCHE (Dynamic Clustering for High-order genome-wide Epistatic interactions detecting). DCHE dynamically groups all genotype combinations into three to six subgroups, and then separately adopts the χ2-test to evaluate the candidate pairwise combination in each subgroup. Yang et al. [10] proposed a stochastic search method (DECMDR). DECMDR combines the differential evolution algorithm [11] with a classification based multifactor-dimensionality reduction to detect significant associations between cases and controls among all possible SNP combinations.
These high-order SNP interaction detection approaches still have some limitations. Most of them do not control false positives and instead apply Bonferroni correction [12] for multiple hypothesis testing in GWAS. Bonferroni correction is simple, but it is often overly conservative when the number of SNPs is very large. The correction comes at the cost of increasing the probability of producing false negatives, i.e., reducing statistical power [13, 14].
In this paper, we propose a two-stage approach named HiSSI to detect high-order SNP interactions based on candidate pairwise SNP combinations. In the screening stage, a statistically significant pattern considering the family wise error rate (FWER) is introduced to control false positives in multiple hypothesis testing. HiSSI makes the statistically significant pattern faster and more memory-efficient via fast Westfall-Young permutation testing [15], and obtains a corrected significance threshold to screen significant candidate pairwise SNP combinations. In the search stage, HiSSI employs two different strategies to search for high-order SNP interactions. For a small candidate set, HiSSI uses exhaustive search. For a large candidate set, HiSSI employs a heuristic search technique, the differential evolution (DE) algorithm [10, 11], along with the χ2-test. We conduct simulation studies with various two-locus and three-locus disease models to comparatively study the power of HiSSI and that of state-of-the-art approaches, including EDCF [8], DCHE [9] and DECMDR [10].
The empirical study demonstrates that our proposed HiSSI is generally more powerful than these approaches. A further study on a real breast cancer (BC) dataset shows that HiSSI also detects some two-locus and three-locus combinations that are significantly associated with breast cancer. These experiments prove that HiSSI is capable of identifying high-order interactions from genome-wide data.
Suppose a genotype dataset includes N samples and M SNPs. We use y to denote the phenotype (case or control), and P(s(i,j)) to denote the pattern of the pairwise combination of the i-th and j-th SNPs. Let N1 and N0 denote the number of affected samples (i.e., cases) and the number of controls, respectively. Consider a SNP with a major allele A and a minor allele a. The three genotypes of a SNP are the homozygous reference genotype (AA), the heterozygous genotype (Aa), and the homozygous variant genotype (aa). Generally, these three genotypes are encoded as {0, 1, 2}. In this paper, we encode the k-th genotype (k∈{0,1,2}) of the i-th SNP (i∈{1,2,…,M}) as {0, 1} according to the ratio of control and case counts, calculated as:
$$ R_{ik}=\frac{N_{0ik}}{N_{1ik}} \times \omega $$
where N0ik and N1ik denote the number of samples with the k-th genotype of the i-th SNP in the control and case sets, respectively; \(\omega =\frac {N_{1}}{N_{0}}\) is a balance factor to control the influence of unbalanced GWAS datasets. If Rik>1, the genotype is encoded as 1; otherwise, it is encoded as 0. In this way, each SNP is encoded over {0, 1}. Each pairwise SNP combination P(s(i,j)) is likewise encoded over {0, 1}, instead of nine genotype combinations, as follows (see the code sketch below):
$$ \begin{aligned} P(s(i,j))= \left\{\begin{array}{ll} 0 & \quad S_{i}=0 \ and \ S_{j}=0\\ 1 & \quad otherwise \end{array}\right. \end{aligned} $$
where Si and Sj denote the encoded i-th and j-th SNPs.
In the screening stage, HiSSI attempts to find all significant candidate pairwise SNP combinations (snpi,snpj) such that P(s(i,j)) is statistically associated with the phenotype y after correction for multiple hypothesis testing. In the search stage, HiSSI tries to find high-order SNP interactions based on the candidate set. The whole procedure of HiSSI is illustrated in Fig. 1. The following two subsections elaborate on these two stages, respectively.
Fig. 1 Procedure overview of HiSSI
Stage 1: screening pairwise SNP combinations
For each pairwise SNP combination P(s(i,j)), we can obtain the 2×2 contingency table for P(s(i,j)) and the phenotype y, as in Table 1.
Table 1 Contingency table for two-locus combinations and phenotype
HiSSI evaluates the association between the phenotype y and the variable P(s(i,j)) by the χ2-test [16]. Suppose pi,j is the corresponding p-value of the two-locus combination (snpi,snpj) derived from the contingency table. If pi,j≤δ∗ (δ∗ is the corrected significance threshold), HiSSI deems the two-locus combination significant and places it into the candidate set.
HiSSI utilizes the minimum attainable p-value and the set of testable SNP combinations at significance level δ to make the permutation testing faster and more memory-efficient. Since the minimum attainable p-value Ψ(x) is symmetric about N/2 [17], there are only \(\lfloor \frac N2 \rfloor \)+1 different values of Ψ(x), denoted as \(\{\delta _{0},\delta _{1},\ldots,\delta _{\lfloor \frac N2 \rfloor }\}\), which form a monotonically decreasing sequence. Σ(δ) denotes the testable region; a two-locus combination (snpi,snpj) is testable if and only if its margin x∈Σ(δ).
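To make these ingredients concrete, the following minimal Python sketch (our illustration rather than the authors' released implementation; NumPy and SciPy are assumed, and all function names are ours) encodes a SNP pair according to Eqs. (1)-(2), computes the χ2-test p-value from the 2×2 table of Table 1, and evaluates the minimum attainable p-value Ψ(x) that defines testability.

```python
import numpy as np
from scipy.stats import chi2


def encode_snp(g, y):
    """Eq. (1): encode one SNP from {0,1,2} to {0,1} via case/control ratios.

    g: genotype vector with values in {0,1,2}; y: phenotype (1 = case, 0 = control).
    """
    n1, n0 = int((y == 1).sum()), int((y == 0).sum())
    omega = n1 / n0                                    # balance factor omega = N1/N0
    code = np.zeros(3, dtype=int)
    for k in range(3):
        n0k = int(((g == k) & (y == 0)).sum())
        n1k = int(((g == k) & (y == 1)).sum())
        r = (n0k / n1k) * omega if n1k > 0 else np.inf  # R_ik (inf: our edge-case choice)
        code[k] = 1 if r > 1 else 0
    return code[g]                                     # per-sample binary code


def pair_pattern(si, sj):
    """Eq. (2): P(s(i,j)) is 0 iff both encoded SNPs are 0, and 1 otherwise."""
    return ((si != 0) | (sj != 0)).astype(int)


def chi2_pvalue(a, x, n1, n):
    """chi2 p-value of the 2x2 table of Table 1, parameterized by one cell.

    a: cases with pattern 1; x: samples with pattern 1 (the margin);
    n1: number of cases; n: total number of samples.
    """
    n0 = n - n1
    b, c, d = x - a, n1 - a, n0 - (x - a)              # remaining cells of the table
    den = x * (n - x) * n1 * n0
    if den == 0:
        return 1.0                                     # degenerate margin
    stat = n * (a * d - b * c) ** 2 / den
    return float(chi2.sf(stat, df=1))


def min_attainable_pvalue(x, n1, n):
    """Psi(x): the chi2 statistic is maximized at the extreme feasible cell counts."""
    n0 = n - n1
    return min(chi2_pvalue(a, x, n1, n) for a in {max(0, x - n0), min(x, n1)})


def screen_pair(gi, gj, y, delta):
    """Return (testable at level delta, chi2 p-value) for one encoded SNP pair."""
    p = pair_pattern(encode_snp(gi, y), encode_snp(gj, y))
    n, n1 = len(y), int((y == 1).sum())
    x, a = int(p.sum()), int(p[y == 1].sum())
    return min_attainable_pvalue(x, n1, n) <= delta, chi2_pvalue(a, x, n1, n)
```

With such a routine, only pairs whose margin x falls in the testable region need the full test; the next paragraphs describe how that region is computed and iteratively shrunk.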
\(\Sigma _{k}=[\sigma _{l}^{k},\sigma _{r}^{k}]\bigcup [N-\sigma _{r}^{k},N-\sigma _{l}^{k}]\) can be computed by starting from Σ(δ0)=[0,N] and iteratively shrinking it to obtain Σ(δk) from Σ(δk−1). At initialization, HiSSI generates J phenotypes based on J permutations, initializes the J minimum p-values \(\{p_{min}^{(j)}\}_{j=1}^{J}\) to 1 (the maximum value a p-value can take), and initializes the corrected significance threshold as δ=δ1, where δ1 is the largest value that Ψ(x) can take other than the trivial value δ0=1, which would deem all SNP pairs testable and significant. Then, HiSSI computes the corresponding testability region Σk and \(\sigma _{l}^{k}\). For each two-locus combination, HiSSI computes x(i,j) and checks whether the combination is testable at the current corrected significance level δ. If xi,j∈Σk, the combination is testable and needs to be processed. Hence, HiSSI does not need to exhaustively analyze all two-locus combinations; it only needs to analyze those combinations whose margin x lies in the testable region Σ. By updating the minimum p-values over the J permutations, the FWER can be estimated. If FWER(δ)>α, k needs to be increased so as to decrease FWER(δ) and control the false positives. The corrected significance threshold δ∗ can be calculated as follows:
$$ \delta^{*}=\text{max}\{\delta|\text{FWER}(\delta) \le \alpha\} $$
The above process is summarized in Algorithm 1. Once the corrected significance threshold δ∗ is obtained, for each two-locus combination, HiSSI computes the margin xi,j and ai,j, the number of case samples with pattern value 1, and then computes the corresponding p-value via the χ2-test. If pi,j≤δ∗, HiSSI deems the combination significant and places it into the candidate set.
Stage 2: high-order SNP interactions detection
In the search stage, HiSSI provides two strategies (exhaustive search and DE-based search) to search for high-order SNP interactions based on the candidate set.
Exhaustive search for small candidate set
Exhaustive search is affordable when the candidate set is small, and it has a larger chance of detecting high-order SNP interactions than heuristic search. HiSSI therefore applies exhaustive search to a small candidate set. To exhaustively search K-SNP (K≥3) interactions, HiSSI combines candidate SNP pairs into K-SNP sets and computes the corresponding χ2-test p-value for each set. HiSSI reports those combinations whose p-values are smaller than the corrected significance threshold δ∗ for K SNPs, obtained by Algorithm 1.
Heuristic search for large candidate set
For a large candidate set, HiSSI employs a heuristic search approach based on the differential evolution (DE) algorithm [11, 18–23] with the χ2-test to identify high-order SNP interactions. DE is a powerful heuristic and parallel direct search approach with few control variables. Here, we take K=3 as an example to illustrate the process of detecting high-order interactions. The DE-based search strategy proceeds as follows.
Initialization: for the candidate set C obtained from the first stage, a target vector is employed to represent a combination of three SNPs from C, defined as:
$$ X_{i,g} = (f_{1,i,g},f_{2,i,g},f_{3,i,g}|{f \in \mathrm{C}}), \ \ i=1,2,\ldots,ps $$
where ps is the population size, i.e., the number of randomly generated target vectors, and g denotes the g-th generation.
Here, i indexes the i-th target vector in the population, and fj,i,g (j=1,2,3) represents one of the three SNPs in the i-th target vector in the g-th generation. At initialization (g=0), the fj,i,0 (j=1,2,3) are randomly generated as follows:
$$ \begin{aligned} f_{j,i,0} = \text{rand}_{j}([0,1))\times(upper-lower)+lower, \ \ j=1,2,3 \end{aligned} $$
where upper and lower are the upper and lower bounds of the indexes of the candidate set, and randj([0,1)) generates a uniformly distributed random value within the range [0,1).
Mutation: in the mutation operation, each target vector generates a mutant vector:
$$ \begin{aligned} V_{i,g+1} = X_{r1,g}+F \cdot (X_{r2,g}{-}X_{r3,g}), \ \ i=1,2,\ldots,ps \end{aligned} $$
where r1, r2 and r3∈(1,2,…,ps) are random indices of the population, mutually different; Xr1,g, Xr2,g and Xr3,g are the three selected target vectors; and F∈[0,2] is a real, constant factor that controls the amplification of the differential variation (Xr2,g−Xr3,g).
Recombination: in the recombination operation, the mutant vector Vi,g+1 and the current target vector Xi,g are combined to generate a trial vector:
$$ U_{i,g+1} = (u_{1,i,g+1},u_{2,i,g+1},u_{3,i,g+1}) $$
$$ \begin{aligned} u_{j,i,g+1}\,=\, \begin{cases}{} \!v_{j,i,g+1}, \ \text{if}\ randb(j) \!\leq\! CR \ \text{or} \ j\,=\,rnbr(i)\\ \!x_{j,i,g}, \ \ \ \ \text{if} \ randb(j) \!>\! CR \ \text{and} \ j\!\neq\! rnbr(i) \end{cases} \ \ \!\!\!\!\!\!\!j=1,2,3 \end{aligned} $$
where randb(j) is the j-th evaluation of a uniform random number generator with an outcome in [0,1], and CR∈[0,1] is the crossover constant. rnbr(i) is a randomly chosen index in (1,2,3); it ensures that Ui,g+1 obtains at least one parameter from Vi,g+1.
Boundary constraints [10]: a trial vector must be checked to ensure it is a feasible SNP combination (i.e., no parameter of the trial vector lies outside the problem space), and can be adjusted as follows:
$$ \begin{aligned} u_{j,i,g+1}= \begin{cases}{} \text{rand}_{j}([0,1))\times(upper-lower)+lower, \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{if }(u_{j,i,g+1}\textless lower \ \text{or} \ u_{j,i,g+1}\textgreater upper)\\ x_{j,i,g}, \ \ \ \ \ \ \ \text{otherwise} \end{cases} \end{aligned} $$
Selection: the selection operation determines whether the target vector Xi,g is replaced by the trial vector Ui,g+1 in the next generation. An evaluation function is used to evaluate the target and trial vectors; here, HiSSI employs the chi-square test as the evaluation function. If the p-value of the trial vector Ui,g+1 obtained by the chi-square test is better than that of the target vector Xi,g, namely p(Ui,g+1)<p(Xi,g), then the target vector Xi,g+1 is set to Ui,g+1 in the next generation; otherwise, Xi,g+1 is set to Xi,g.
Through the above four iterative operations (steps (2)-(5)), the value of the target vector is improved through competition between target vectors and trial vectors. These four operations are repeated until the maximum number of generations (gmax) is reached, and the target vector with the best fitness value is reported as the detected high-order SNP interaction (a compact code sketch is given below).
FWER control
In GWAS, SNP interaction detection leads to a multiple hypothesis testing problem that generates many false positives. To alleviate this problem, Bonferroni correction [12] and permutation testing [24] are widely used to correct for multiple testing. However, Bonferroni correction only works when the number of test patterns is known in advance and small [14].
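The compact sketch promised above condenses the DE search of steps (2)-(5) into code. It is our illustration under assumed names, with the χ2-test p-value of a SNP set supplied as the fitness function; the permutation-based FWER control that HiSSI uses instead of Bonferroni correction is described immediately after.

```python
import numpy as np


def de_search(n_candidates, fitness, ps=500, gmax=500, F=0.5, CR=0.5, k=3, rng=None):
    """DE over real-coded index triples into the candidate set C (Eqs. (4)-(9)).

    fitness(indices) should return the chi2-test p-value of the SNP set addressed
    by the integer indices (lower is better); in practice it should also penalize
    triples containing duplicated indices, an aspect this sketch leaves out.
    """
    if rng is None:
        rng = np.random.default_rng()
    lower, upper = 0, n_candidates
    X = rng.uniform(lower, upper, size=(ps, k))        # Eq. (5): initialization
    pvals = np.array([fitness(x.astype(int)) for x in X])
    for g in range(gmax):
        for i in range(ps):
            r1, r2, r3 = rng.choice([r for r in range(ps) if r != i],
                                    size=3, replace=False)
            V = X[r1] + F * (X[r2] - X[r3])            # Eq. (6): mutation
            U, jrand = X[i].copy(), rng.integers(k)
            for j in range(k):                          # Eq. (8): recombination
                if rng.random() <= CR or j == jrand:
                    U[j] = V[j]
            out = (U < lower) | (U >= upper)            # Eq. (9): boundary repair
            if out.any():
                U[out] = rng.uniform(lower, upper, size=int(out.sum()))
            p = fitness(U.astype(int))
            if p < pvals[i]:                            # selection on p-values
                X[i], pvals[i] = U, p
    best = int(np.argmin(pvals))
    return X[best].astype(int), float(pvals[best])
```

The real-coded indices are floored to integers before each fitness evaluation, so the search moves through the continuous space while always scoring discrete SNP combinations.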
HiSSI applies a fast permutation-testing method [15] to strictly control the family wise error rate (FWER), defined as the probability of producing at least one false positive. In the permutation testing, HiSSI generates a re-sampled dataset by randomly permuting the phenotype. Then, HiSSI computes the minimum p-value across all genotype combinations. Repeating the permutation a sufficiently large number (J) of times yields J different minimum p-values \(\{p_{min}^{(t)}\}_{t=1}^{J}\). The FWER can be evaluated as:
$$ \text{FWER}(\delta)=\frac{1}{J}\sum_{t=1}^{J}1[p_{min}^{(t)} \le \delta] $$
where 1[·] is an indicator function that takes value 1 if its argument is true and 0 otherwise, and δ is the corrected significance threshold. FWER control requires FWER ≤ α, with α being the desired significance level, so the corrected significance threshold δ must be chosen appropriately. The optimal δ∗ is obtained by solving the same optimization problem as Equation (3). This optimization also yields the highest power (the probability of detecting true positives) while strictly controlling the FWER.
Results
In this section, we evaluate the performance of HiSSI on both simulated and real datasets. In the simulated study, we compare HiSSI with EDCF [8], DCHE [9], DECMDR [10] and HiSSI-BC on different disease models (two-locus and three-locus) with different parameter settings. HiSSI-BC is a variant of HiSSI that obtains the corrected significance threshold using the Bonferroni correction. We adopt the same measure of power suggested by Wan et al. [25]:
$$ Power=\frac{D^{\prime}}{D} $$
where D′ is the number of datasets in which the true SNP interactions are detected, and D is the number of all datasets. The definition of the marginal effect size λ of a disease locus is the same as that used in Zhang et al. [4]:
$$ \lambda=\frac{p_{Aa} / p_{AA}}{(1-p_{Aa}) / (1-p_{AA})}-1 $$
where pAA and pAa denote the penetrance of genotypes AA and Aa, respectively. For the real study, we apply HiSSI to the real breast cancer (BC) GWAS dataset collected from the Wellcome Trust Case Control Consortium (WTCCC) [26].
Experiments on simulated datasets
To perform a comprehensive experimental comparison, we conduct simulation experiments on both two-locus and three-locus disease models. Since the number of candidate SNP combinations is small after the screening stage, we apply exhaustive search to detect high-order interactions.
Two-locus disease models
Three two-locus disease models (Model1, Model2 and Model3) are used to compare HiSSI with EDCF [8], DCHE [9], DECMDR [10] and HiSSI-BC. Model1 and Model2 were proposed by Marchini et al. [27]; Model1 has a threshold effect and Model2 a multiplicative effect. Model3 was proposed by Zhang et al. [4] and has an additive effect. The marginal effect size is relatively small in the simulation study: λ=0.2 for Model1, Model2 and Model3. Minor allele frequencies (MAFs) are the same for both loci at three levels: MAF = 0.1, 0.2 and 0.4. For linkage disequilibrium (LD), r2 is set to 0.7 and 1.0: r2=0.7 simulates ungenotyped disease loci whose LD markers are genotyped, and r2=1.0 simulates directly genotyped disease loci. We use the same simulation program as [4] to simulate 100 datasets under each parameter setting for each disease model. Each dataset contains 100 SNPs, and the sample size is set to 1000, 2000 and 4000. Figure 2 shows the performance of the different approaches on these three models.
The power of all methods improves significantly when the sample size increases from 2000 to 4000 and when r2 changes from 0.7 to 1. However, the power of most approaches decreases as the MAFs of the disease-associated markers vary from 0.2 to 0.4. This trend is consistent with the results in [4, 27].
Fig. 2 Powers of different approaches on three two-locus disease models (Models 1-3) with 100 SNPs. Powers of DCHE, DECMDR, EDCF, HiSSI and HiSSI-BC on three two-locus disease models with different minor allele frequency (MAF), sample size (N) and linkage disequilibrium (LD); HiSSI-BC is a variant of HiSSI that uses the Bonferroni correction to obtain the corrected significance threshold. N0 is the number of controls, N1 is the number of cases, and M is the number of SNPs. The absence of a bar indicates no power. (a) Model1; (b) Model2; (c) Model3
Across all models, HiSSI outperforms HiSSI-BC, which evidences that the adopted permutation test is more effective than Bonferroni correction for controlling false positives in multiple hypothesis testing. In addition, HiSSI also performs better than the other approaches (EDCF, DCHE and DECMDR) on Model1-Model3, except in some cases: Model1 with N = 2000, r2 = 0.7, MAF = 0.4, and Model2 with r2 = 1.0, MAF = 0.4. In these cases, HiSSI has lower power than EDCF and DCHE. This is because HiSSI may lose some genetic associations, since it partitions two-locus genotype combinations into only two groups, which is much smaller than the number of genotype combinations. By contrast, EDCF and DCHE partition genotype combinations into more groups than HiSSI (EDCF into three, DCHE into three to six), which retains more genetic information. In most cases, DECMDR has the lowest power, since it applies heuristic search and only reports the optimal solution. Another interesting observation is that the power of EDCF drastically decreases when N = 4000 with r2 = 0.7 and MAF = 0.2. One possible reason is that EDCF divides each three-locus combination into three groups and uses the chi-square test with two degrees of freedom to measure significance, resulting in more false positives.
In addition, high-dimensional simulation datasets with 1000 SNPs and 2000 and 4000 samples on Model2 are also used to test HiSSI and the comparing approaches. The settings of MAF and LD are the same as before, and the simulation datasets are also generated by the same simulation program as Zhang et al. [4]. Figure 3 shows the performance of the different approaches on Model2 with 1000 SNPs. Similarly, the power of all approaches significantly increases when the sample size increases from 2000 to 4000 and r2 varies from 0.7 to 1, and decreases when MAF varies from 0.2 to 0.4. For the model with 1000 SNPs, HiSSI still outperforms HiSSI-BC, which confirms the effectiveness of the permutation test on high-dimensional datasets. HiSSI performs better than the other approaches except with r2 = 1.0, MAF = 0.4; in that case, HiSSI has lower power than EDCF and DCHE. EDCF loses its power when N = 4000 with r2 = 0.7 and MAF = 0.2. All these results are consistent with the results on the small simulation datasets with Model2.
Fig. 3 Powers of different approaches on Model 2 with 1000 SNPs. Powers of DCHE, DECMDR, EDCF, HiSSI and HiSSI-BC on Model2 under different minor allele frequency (MAF) and linkage disequilibrium (LD) with 1000 SNPs, 2000 and 4000 samples; HiSSI-BC is a variant of HiSSI which uses the Bonferroni correction to obtain the corrected significance threshold.
N0 is the number of controls, N1 is the number of cases, and M is the number of SNPs. The absence of a bar indicates no power.
Three-locus disease models
We use two three-locus disease models (Model4 and Model5) to test the ability of HiSSI to detect high-order SNP interactions. Model4 is a three-locus interaction model proposed by Zhang et al. [4]. Model5 is the extension of Model1, a two-locus interaction model with a threshold effect. The sample size increases from 2000 to 4000; the minor allele frequencies (MAFs) are set to 0.1, 0.2 and 0.4; r2 is set to 0.7 and 1.0; and the marginal effect is set to λ=0.3 for Model4 and Model5. We use the same simulation program as in Zhang et al. [4] to simulate 100 datasets under each parameter setting for each disease model, and each dataset contains 100 SNPs.
Figure 4 shows the performance of the different approaches on the two three-locus disease models for high-order interaction detection. The power of all approaches improves significantly as the sample size increases from 2000 to 4000 and r2 changes from 0.7 to 1. Besides, for Model5, the power of most approaches decreases as the MAFs of the disease-associated markers vary from 0.2 to 0.4. This trend is consistent with the results on the two-locus disease models. However, the trend is not obvious for Model4 with MAF varying from 0.1 to 0.4.
Fig. 4 Powers of different approaches on two three-locus disease models (Models 4-5) with 100 SNPs. Powers of DCHE, DECMDR, EDCF, HiSSI and HiSSI-BC on two three-locus disease models with different minor allele frequency (MAF), sample size (N) and linkage disequilibrium (LD); HiSSI-BC is a variant of HiSSI that uses the Bonferroni correction to obtain the corrected significance threshold. N0 is the number of controls, N1 is the number of cases, and M is the number of SNPs. The absence of a bar indicates no power. (a) Model4; (b) Model5
For these two models, HiSSI again performs better than HiSSI-BC, which shows that the permutation test is also more effective than Bonferroni correction for detecting high-order interactions. In addition, HiSSI obtains the highest power on Model4-Model5 except in some cases: Model4 with N = 2000, r2 = 1.0, MAF = 0.2, 0.4; and with N = 4000, MAF = 0.4. In these cases, HiSSI has lower power than EDCF and DCHE, for the same reason as in the two-locus disease models, namely that the two-group partition may lose some genetic associations. For both models, the power of EDCF still drastically decreases when N = 4000 with r2 = 0.7 and MAF = 0.2, which is consistent with the results on the two-locus disease models. Since EDCF, DCHE and HiSSI-BC all employ the Bonferroni correction to calculate the threshold, from the power gap between HiSSI and these methods we can conclude that the permutation test is more effective than Bonferroni correction for controlling false positives in multiple hypothesis testing. In most cases, DECMDR has the lowest power, since it applies heuristic search in a larger search space and only reports the optimal solution.
Besides, high-dimensional simulation datasets with 1000 SNPs and 2000 and 4000 samples on Model4 and Model5 are also used to test HiSSI and the comparing approaches. The settings of MAF and LD are the same as for the simulation datasets with 100 SNPs. Figure 5 shows the performance of the different approaches on Model4 and Model5 with 1000 SNPs. The trend in power for all approaches is consistent with that on the small simulation datasets.
For both models, HiSSI still has better power than HiSSI-BC, and HiSSI obtains the highest power except for Model4 with MAF = 0.4 and with N = 2000, r2 = 1.0, MAF = 0.2. In these cases, the power of HiSSI is lower than that of EDCF and DCHE. All these results are consistent with the results on the small simulation datasets.
Fig. 5 Powers of different approaches on two three-locus disease models (Models 4-5) with 1000 SNPs. Powers of DCHE, DECMDR, EDCF, HiSSI and HiSSI-BC on two three-locus models under different minor allele frequency (MAF) and linkage disequilibrium (LD) with 1000 SNPs, 2000 and 4000 samples; HiSSI-BC is a variant of HiSSI that uses the Bonferroni correction to obtain the corrected significance threshold. N0 is the number of controls, N1 is the number of cases, and M is the number of SNPs. The absence of a bar indicates no power. (a) Model4; (b) Model5
In addition, we also conduct experiments on two models (a two-locus and a three-locus model) without marginal effect, with 100 and 1000 SNPs. The experimental settings, results and analysis can be found in Additional file 1. All simulated models used in the simulation experiments are shown in Additional file 2.
Experiment on the breast cancer dataset
A real breast cancer (BC) dataset collected from the WTCCC project [26] is used to further evaluate HiSSI. It is reported that breast cancer is caused by a combination of genetic and environmental risk factors [28]. The BC dataset contains 15347 SNPs from 1045 affected individuals and 3893 normal individuals. Quality control is performed to exclude samples and SNPs with very low call rates. A SNP is excluded if its call rate across all samples is below 95%, its Hardy-Weinberg equilibrium p-value in controls is below 0.0001, or its MAF is below 0.1; a sample is excluded if its call rate is below 98% (see the code sketch below). After quality control, the BC dataset contains 1045 case samples and 3893 control samples with 5607 SNPs.
Some significant two-locus and three-locus combinations identified by HiSSI on the BC dataset are shown in Table 2. Among the two-locus combinations, (rs1108842, rs2289247) is in gene GNL3 on chromosome 3. The protein encoded by GNL3 may interact with p53 and may be involved in tumorigenesis. (rs2242665, rs2856705) is on chromosome 6, where rs2856705 is reported to be associated with breast cancer susceptibility [29]. (rs1801197, rs6971091) is on chromosome 7, where rs1801197 is located in gene CALCR; there is evidence that rs1801197/CALCR is implicated in breast cancer [29]. (rs365990, rs7158731) is on chromosome 14, where rs365990 is in gene MYH6 and rs7158731 is in gene ZNF839. MYH6 encodes the alpha heavy chain subunit of cardiac myosin, and mutations in this gene cause familial hypertrophic cardiomyopathy and atrial septal defect 3. It is reported that MYH6 and ZNF839 are associated with breast cancer [29]. (rs8059973, rs3785181) is on chromosome 16, where rs8059973 is in gene DBNDD1 and rs3785181 is in gene GAS11. rs8059973/DBNDD1 is associated with breast cancer [29]. GAS11 includes 11 exons spanning 25 kb and maps to a region of chromosome 16 that is sometimes deleted in breast and prostate cancer. This gene is a putative tumor suppressor gene and is reported as being associated with breast cancer [30]. (rs2822558, rs2822787) is on chromosome 21, where rs2822558 is located in gene ABCC13. ABCC13 is a member of the superfamily of genes encoding ATP-binding cassette (ABC) transporters. It is reported that rs2822558/ABCC13 is related to breast cancer [29].
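The quality-control filter mentioned above can be written compactly. The sketch below is our illustration and assumes genotypes are stored as a samples-by-SNPs matrix with missing calls coded as -1 and per-SNP Hardy-Weinberg p-values precomputed in controls; none of these conventions are taken from the paper.

```python
import numpy as np


def qc_filter(G, hwe_p_controls):
    """Apply the stated thresholds: SNP call rate >= 95%, HWE p >= 1e-4 in
    controls, MAF >= 0.1; then sample call rate >= 98% on the retained SNPs."""
    called = G >= 0
    snp_call_rate = called.mean(axis=0)
    # allele frequency from called genotypes (G counts copies of one allele)
    alt = np.where(called, G, 0).sum(axis=0) / (2 * called.sum(axis=0))
    maf = np.minimum(alt, 1 - alt)
    keep_snp = (snp_call_rate >= 0.95) & (hwe_p_controls >= 1e-4) & (maf >= 0.1)
    sample_call_rate = called[:, keep_snp].mean(axis=1)
    return keep_snp, sample_call_rate >= 0.98
```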
Table 2 Significant two-locus and three-locus combinations identified by HiSSI on the WTCCC BC data
In the three-locus combination, (rs879882, rs2523608, rs805262) is on chromosome 6. rs879882 is in gene POU5F1, which plays a key role in embryonic development and stem cell pluripotency [31]. rs2523608 is located in gene HLA-B, which belongs to the human leukocyte antigen (HLA) class I heavy chain paralogs. HLA class I antigen expression plays a central role in the immune system and is closely related to the aggressiveness and prognosis of BC [32]. rs805262 belongs to gene BAG6, which was first characterized as part of a cluster of genes located within the human major histocompatibility complex class III region. In addition, BAG6 is implicated in the control of apoptosis and is associated with basal cell carcinoma [33]. These identified significant two-locus and three-locus combinations demonstrate that HiSSI is capable of detecting SNP interactions on genome-wide data.
Parameter setting
In the screening stage, we set J=100 (the number of permutations) and α=0.05 (the target FWER). In the search stage, there are four common parameters of the DE algorithm: the population size (ps), the number of generations (g), the scaling factor (F) and the crossover constant (CR). We set these parameters according to previous studies [10, 34]. For the real dataset, we set ps=500, g=500, F=0.5 and CR=0.5.
Comparison between HiSSI and other approaches
Comparison between HiSSI and HiSSI-BC: HiSSI-BC is a variant of HiSSI; the main difference is that HiSSI employs a fast permutation test to obtain the corrected significance threshold, while HiSSI-BC uses the Bonferroni correction. On all simulation datasets for the different disease models (both two-locus and three-locus), HiSSI always outperforms HiSSI-BC, which demonstrates that the permutation test is more effective than Bonferroni correction for correcting multiple testing.
Comparison between HiSSI and EDCF, DCHE: HiSSI utilizes a statistically significant pattern combined with a permutation test to partition genotype combinations into two subgroups, taking the FWER into account to control false positives, while EDCF partitions genotype combinations into three subgroups and DCHE dynamically partitions them into three to six subgroups. Moreover, both EDCF and DCHE utilize the Bonferroni correction to correct for multiple testing. The results on the simulation datasets reveal that HiSSI performs better than EDCF and DCHE, which proves the effectiveness of the significant pattern in controlling false positives.
Comparison between HiSSI and DECMDR: both DECMDR and HiSSI utilize the differential evolution (DE) algorithm to identify SNP interactions. DECMDR applies the DE algorithm to the whole search space and uses classification based multifactor-dimensionality reduction (CMDR) as the fitness measure to evaluate solutions in the DE process, whereas HiSSI applies the DE algorithm to a smaller search space based on the candidate set, with the chi-square test as the fitness measure, so it has a higher probability of finding the true interactions. Since MDR is time-consuming and only reports the optimal solution, DECMDR has lower power than the other approaches in most cases.
The advantages and limitations of HiSSI
HiSSI was developed to overcome the limitations of existing approaches in detecting high-order SNP interactions from genome-wide data.
HiSSI displays several advantages over existing methods:
HiSSI applies a FWER-constrained statistically significant pattern to strictly control false positives in multiple hypothesis testing.
HiSSI utilizes fast permutation testing to obtain the corrected significance threshold, which avoids analyzing all two-locus combinations, greatly reduces the total runtime, and also avoids the conservatism of Bonferroni correction.
HiSSI provides two alternative search strategies, exhaustive search and heuristic search, for different sizes of GWAS datasets.
The running time of HiSSI is relatively long compared with other approaches. This is a general problem for approaches that employ permutation tests. Although HiSSI utilizes a fast permutation test, which is faster than the traditional permutation test, it is still time-consuming compared with heuristic algorithms and with approaches using Bonferroni correction. In addition, HiSSI does not directly control the main effects, which may introduce the negative influence of main effects on pairwise SNP combinations; and HiSSI only partitions genotype combinations into two groups, which may lose some genetic associations. These limitations may degrade the performance of HiSSI. Future work will be devoted to addressing these limitations.
Conclusions
Detecting potential SNP-SNP interactions in GWAS is an indispensable and challenging problem. In this paper, we proposed a two-stage method called HiSSI to address this problem. In the screening stage, HiSSI controls the false positives using an efficient statistically significant pattern that considers the family wise error rate, and obtains significant candidate pairwise SNP combinations. In the search stage, HiSSI utilizes two different strategies, exhaustive search and DE-based search, to detect high-order SNP interactions. Exhaustive search is applied to a small candidate set, and DE-based search is used for a large candidate set. A series of simulation experiments on both two-locus and three-locus disease models shows that HiSSI is more powerful than other related approaches in detecting SNP interactions. A further experiment on a real WTCCC dataset corroborates that HiSSI is capable of identifying high-order SNP interactions from genome-wide data.
The source code of HiSSI is available at http://mlda.swu.edu.cn/codes.php?name=HiSSI. The simulated disease models are specified in Additional file 2, and the simulated datasets are generated by the program in BEAM (http://bioinformatics.ust.hk/SNPHarvester.html). The genome-wide breast cancer dataset was requested from the Wellcome Trust Case Control Consortium (WTCCC); its accession number is "EGAD00000000013". WTCCC datasets cannot be shared without permission from WTCCC. Researchers interested in WTCCC datasets can apply for them from WTCCC (https://www.wtccc.org.uk/).
Abbreviations
DE: Differential evolution
FWER: Family wise error rate
GWAS: Genome-wide association study
HiSSI-BC: High-order SNP-SNP interactions detection with Bonferroni correction
HiSSI: High-order SNP-SNP interactions detection
LD: Linkage disequilibrium
MAF: Minor allele frequency
MDR: Multifactor dimensionality reduction
SNP: Single nucleotide polymorphism
WTCCC: Wellcome Trust Case Control Consortium
References
1. Welter D, MacArthur J, Morales J, Burdett T, Hall P, Junkins H, Klemm A, Flicek P, Manolio T, Hindorff L, et al. The NHGRI GWAS Catalog, a curated resource of SNP-trait associations. Nucleic Acids Res. 2013;42(D1):1001-6.
2. Kraft P, Hunter DJ. Genetic risk prediction – are we there yet? New Engl J Med. 2009;360(17):1701-3.
3. Ritchie MD, Hahn LW, Roodi N, Bailey LR, Dupont WD, Parl FF, Moore JH. Multifactor-dimensionality reduction reveals high-order interactions among estrogen-metabolism genes in sporadic breast cancer. Am J Hum Genet. 2001;69(1):138-47.
4. Zhang Y, Liu JS. Bayesian inference of epistatic interactions in case-control studies. Nat Genet. 2007;39(9):1167.
5. Upstill-Goddard R, Eccles D, Fliege J, Collins A. Machine learning approaches for the discovery of gene-gene interactions in disease data. Brief Bioinform. 2012;14(2):251-60.
6. Bureau A, Dupuis J, Falls K, Lunetta KL, Hayward B, Keith TP, Van Eerdewegh P. Identifying SNPs predictive of phenotype using random forests. Genet Epidemiol. 2005;28(2):171-82.
7. Ritchie MD. Finding the epistasis needles in the genome-wide haystack. Methods Mol Biol. 2015;2015:19-33.
8. Xie MZ, Li J, Jiang T. Detecting genome-wide epistases based on the clustering of relatively frequent items. Bioinformatics. 2012;28(1):5-12.
9. Guo X, Meng Y, Yu N, Pan Y. Cloud computing for detecting high-order genome-wide epistatic interaction via dynamic clustering. BMC Bioinformatics. 2014;15(1):102.
10. Yang C, Chuang L, Lin Y. CMDR based differential evolution identifies the epistatic interaction in genome-wide association studies. Bioinformatics. 2017;33(15):2354-62.
11. Storn R, Price K. Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim. 1997;11(4):341-59.
12. Weisstein EW. Bonferroni correction. From MathWorld - A Wolfram Web Resource. 2004 (2019 update). http://mathworld.wolfram.com/BonferroniCorrection.html.
13. Nakagawa S. A farewell to Bonferroni: the problems of low statistical power and publication bias. Behav Ecol. 2004;15(6):1044-5.
14. Li Y, Zhao Y, Wang G, Wang Z, Gao M. ELM-based large-scale genetic association study via statistically significant pattern. IEEE Trans Syst Man Cybern Syst. 2017;PP(99):1-14.
15. Llinares-López F, Sugiyama M, Papaxanthos L, Borgwardt K. Fast and memory-efficient significant pattern mining via permutation testing. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM; 2015. p. 725-34.
16. Pearson K. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Vol. 50; 1900. p. 157-75.
17. Llinares-López F, Grimm DG, Bodenham DA, Gieraths U, Sugiyama M, Rowan B, Borgwardt K. Genome-wide detection of intervals of genetic heterogeneity associated with complex traits. Bioinformatics. 2015;31(12):240-9.
18. Yang M, Guan J, Li C. Differential evolution with auto-enhanced population diversity: the experiments on the CEC'2016 competition. In: Evolutionary Computation; 2016. p. 4785-9.
19. Yang M, Li C, Cai Z, Guan J. Differential evolution with auto-enhanced population diversity. IEEE Trans Cybern. 2015;45(2):302.
20. Yang M, Cai Z, Li C, Guan J. An improved JADE algorithm for global optimization. In: Evolutionary Computation; 2014. p. 806-12.
21. Yang M, Guan J, Cai Z, Li C. A self-adaptive differential evolutionary algorithm based on population reduction with minimum distance. Int J Innov Comput Appl. 2014;6(1):13-24.
22. Yang M, Guan J, Cai Z, Wang L. Self-adapting differential evolution algorithm with chaos random for global numerical optimization. In: International Symposium on Intelligence Computation and Applications; 2010. p. 112-22.
23. Fang Z, Yang M, Zhang G, Guan J.
A hybrid differential evolutionary algorithm based on the hierarchical clustering. In: Evolutionary Computation; 2016. p. 2367-74.
24. Chaubey YP. Resampling-based multiple testing: examples and methods for p-value adjustment. Taylor & Francis; 1993.
25. Wan X, Yang C, Yang Q, Xue H, Fan X, Tang NL, Yu W. BOOST: a fast approach to detecting gene-gene interactions in genome-wide case-control studies. Am J Hum Genet. 2010;87(3):325-40.
26. Burton PR, Clayton DG, Cardon LR, Craddock N, Deloukas P, Duncanson A, Kwiatkowski DP, McCarthy MI, Ouwehand WH, Samani NJ, et al. Association scan of 14,500 nonsynonymous SNPs in four diseases identifies autoimmunity variants. Nat Genet. 2007;39(11):1329.
27. Marchini J, Donnelly P, Cardon LR. Genome-wide strategies for detecting multiple loci that influence complex diseases. Nat Genet. 2005;37(4):413-7.
28. Michailidou K, Hall P, Gonzalez-Neira A, Ghoussaini M, Dennis J, Milne RL, Schmidt MK, Chang-Claude J, Bojesen SE, Bolla MK. Large-scale genotyping identifies 41 new loci associated with breast cancer risk. Nat Genet. 2013;45(4):1-2.
29. Milne RL, Burwinkel B, Michailidou K, Arias-Perez J-I, Zamora MP, Menéndez-Rodríguez P, Hardisson D, Mendiola M, González-Neira A, Pita G, et al. Common non-synonymous SNPs associated with breast cancer susceptibility: findings from the Breast Cancer Association Consortium. Hum Mol Genet. 2014;23(22):6096-111.
30. Whitmore SA, Settasatian C, Crawford J, Lower KM, McCallum B, Seshadri R, Cornelisse CJ, Moerland EW, Cleton-Jansen AM, Tipping AJ. Characterization and screening for mutations of the growth arrest-specific 11 (GAS11) and C16orf3 genes at 16q24.3 in breast cancer. Genomics. 1998;52(3):325-31.
31. Cai S, Geng S, Jin F, Liu J, Qu C, Chen B. POU5F1/Oct-4 expression in breast cancer tissue is significantly associated with non-sentinel lymph node metastasis. BMC Cancer. 2016;16(1):175.
32. Hicklin DJ, Marincola FM, Ferrone S. HLA class I antigen downregulation in human cancers: T-cell immunotherapy revives an old story. Mol Med Today. 1999;5(4):178-86.
33. Zhang M, Liang L, Morar N, Dixon AL, Lathrop GM, Ding J, Moffatt MF, Cookson WOC, Kraft P, Qureshi AA. Integrating pathway analysis and genetics of gene expression for genome-wide association study of basal cell carcinoma. Hum Genet. 2012;131(4):615-23.
34. Price K, Storn RM, Lampinen JA. Differential Evolution: A Practical Approach to Global Optimization; 2006.
Acknowledgements
The two-page short paper of this work was presented orally at the 14th International Symposium on Bioinformatics Research and Applications (ISBRA 2018).
About this supplement
This article has been published as part of BMC Medical Genomics, Volume 12 Supplement 7, 2019: Selected articles from the 14th International Symposium on Bioinformatics Research and Applications (ISBRA-18): medical genomics. The full contents of the supplement are available at https://bmcmedgenomics.biomedcentral.com/articles/supplements/volume-12-supplement-7.
Funding
This work is supported by the Natural Science Foundation of China (61873214, 61872300, 61871020 and 61562054), and partially supported by the Chongqing Graduate Student Research Innovation Project (Grant No. CYS19113), the National Key Research and Development Plan Task of China (Grant No.
2016YFC0901902), Natural Science Foundation of CQ CSTC (cstc2018jcyjAX0228 and cstc2016jcyjA0351), Fundamental Research Funds for the Central Universities (XDJK2019B024), the Open Research Project of The Hubei Key Laboratory of Intelligent Geo-Information Processing (KLIGIP-2017A05), the Natural Science Fundamental Research Plan of Shaanxi Province (2016JM6038), and the Fundamental Research Funds for the Central Universities, NWSUAF, China (2452015060). The funders had no role in the study design, data collection, analysis and interpretation, decision to publish, or preparation of the manuscript. College of Computer and Information Science, Southwest University, Beibei, Chongqing, 400715, China Xia Cao, Jie Liu & Jun Wang School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing, 100044, China Maozu Guo Beijing Key Laboratory of Intelligent Processing for Building Big Data, Beijing, 100044, China Xia Cao Jie Liu XC and JL implemented and conducted the experiments; JW and MG initialized and conceived the whole program; XC, JL and JW analyzed the results, drafted and finalized the manuscript; MG revised the manuscript. All the authors read and approved the final manuscript. Correspondence to Jun Wang. Additional file 1 Experiments on models without marginal effect. Two disease models (a two-locus and a three-locus models) without marginal effect are used to test the performances of different approaches under different parameter settings. Simulated disease models. Simulated two-locus and three-locus models used in the simulation experiments are listed in tables. Cao, X., Liu, J., Guo, M. et al. HiSSI: high-order SNP-SNP interactions detection based on efficient significant pattern and differential evolution. BMC Med Genomics 12 (Suppl 7), 139 (2019). https://doi.org/10.1186/s12920-019-0584-6 High-order SNP interactions Statistically significant pattern
Well-posedness for the three-dimensional compressible liquid crystal flows

Xiaoli Li (College of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China) and Boling Guo (Institute of Applied Physics & Computational Math., Beijing 100088)

Discrete & Continuous Dynamical Systems - S, December 2016, 9(6): 1913-1937. doi: 10.3934/dcdss.2016078. Received July 2015; revised September 2016; published November 2016.

This paper is concerned with the initial-boundary value problem for the three-dimensional compressible liquid crystal flows. The system consists of the Navier-Stokes equations describing the evolution of a compressible viscous fluid, coupled with kinematic transport equations for the heat flow of harmonic maps into $\mathbb{S}^2$. Assuming that the initial density may contain vacuum and that the initial data satisfies a natural compatibility condition, existence and uniqueness are established for the local strong solution with large initial data, and also for the global strong solution with initial data close to an equilibrium state. The existence result is proved via local well-posedness and uniform estimates for a suitably linearized system with convective terms.

Keywords: compressible liquid crystals, strong solution, vacuum, existence and uniqueness.

Mathematics Subject Classification: Primary: 35A05, 76D10, 76D0.

Citation: Xiaoli Li, Boling Guo. Well-posedness for the three-dimensional compressible liquid crystal flows. Discrete & Continuous Dynamical Systems - S, 2016, 9 (6): 1913-1937. doi: 10.3934/dcdss.2016078
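For orientation, the following display gives one commonly studied simplified form of the compressible nematic liquid crystal system (Ericksen-Leslie type), as it appears in the literature on this problem; this is offered as background only, and the precise system treated in the paper may differ in lower-order terms:

$$ \begin{cases} \rho_t + \operatorname{div}(\rho u) = 0, \\ (\rho u)_t + \operatorname{div}(\rho u \otimes u) + \nabla P(\rho) = \mu \Delta u + (\mu+\lambda)\,\nabla \operatorname{div} u - \operatorname{div}\!\left( \nabla d \odot \nabla d - \tfrac{1}{2} |\nabla d|^2 \, \mathbb{I} \right), \\ d_t + u \cdot \nabla d = \Delta d + |\nabla d|^2 d, \qquad |d| = 1, \end{cases} $$

where $\rho$ is the fluid density, $u$ the velocity, $P(\rho)$ the pressure and $d$ the unit director field; the third equation is the transported heat flow of harmonic maps into $\mathbb{S}^2$ mentioned in the abstract.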
A double-blind placebo-controlled trial of azithromycin to reduce mortality and improve growth in high-risk young children with non-bloody diarrhoea in low resource settings: the Antibiotics for Children with Diarrhoea (ABCD) trial protocol

The ABCD study team. Trials volume 21, Article number: 71 (2020)

Acute diarrhoea is a common cause of illness and death among children in low- to middle-income settings. World Health Organization guidelines for the clinical management of acute watery diarrhoea in children focus on oral rehydration, supplemental zinc and feeding advice. Routine use of antibiotics is not recommended except when diarrhoea is bloody or cholera is suspected. Young children who are undernourished or have dehydrating diarrhoea are more susceptible to death in the 90 days after onset of diarrhoea. Given this mortality risk, expanding the use of antibiotics for this subset of children could be an important intervention to reduce diarrhoea-associated mortality and morbidity. We designed the Antibiotics for Childhood Diarrhoea (ABCD) trial to test this intervention.

ABCD is a double-blind, randomised trial recruiting 11,500 children aged 2–23 months presenting with acute non-bloody diarrhoea who are dehydrated and/or undernourished (i.e. have a high risk of mortality). Enrolled children in Bangladesh, India, Kenya, Malawi, Mali, Pakistan and Tanzania are randomised (1:1) to oral azithromycin 10 mg/kg or placebo once daily for 3 days and followed up for 180 days. Primary efficacy endpoints are all-cause mortality during the 180 days post-enrolment and change in linear growth 90 days post-enrolment.

Expanding the treatment of acute watery diarrhoea in high-risk children to include an antibiotic may offer an opportunity to reduce deaths. These benefits may result from direct antimicrobial effects on pathogens or from other, incompletely understood mechanisms including improved nutrition, alterations in immune responsiveness or improved enteric function. The expansion of indications for antibiotic use raises concerns about the emergence of antimicrobial resistance, both within treated children and in the communities in which they live. ABCD will monitor antimicrobial resistance.

The ABCD trial has important policy implications. If the trial shows significant benefits of azithromycin use, this may provide evidence to support reconsideration of antibiotic indications in the present World Health Organization diarrhoea management guidelines. Conversely, if there is no evidence of benefit, these results will support the current avoidance of antibiotics except in dysentery or cholera, thereby avoiding inappropriate use of antibiotics and reaffirming the current guidelines.

Trial registration: Clinicaltrials.gov, NCT03130114. Registered on April 26, 2017.

Acute diarrhoea continues to be one of the most common illnesses in infants and young children, especially in low-income settings. Approximately half a million children continue to die annually as a result of acute diarrhoeal episodes [1], mostly in sub-Saharan Africa or southeast Asia. The current World Health Organization (WHO) recommended management guidelines for acute diarrhoea (rehydration, supplemental zinc, feeding advice and appropriate follow-up) [2] have contributed to significant reductions in diarrhoea-associated mortality [3].
These guidelines do not suggest a role for antibiotics except in cases of bloody diarrhoea (as a proxy for Shigella or other invasive bacterial infections) or suspected cholera. Data from the large Global Enteric Multicentre Study (GEMS) [4] show that at least one putative pathogen can be identified in over 80% of children presenting with moderate–severe diarrhoea. Similar results were seen in a more recent molecular re-analysis of stool samples from children in low-income settings, which identified a putative pathogen in 65% of children [5]. A bacterial aetiology was identified in a quarter of all stool samples studied. The recent global burden of disease study on childhood diarrhoea indicates that, while rotavirus is the leading cause of diarrhoea-related deaths (28%) in young children, bacterial species including Shigella spp. (14.5%), non-typhoid Salmonella spp. (8.4%), Campylobacter spp. (9%) and Escherichia coli (6.5%) also contribute to a significant number of deaths in this age group [1]. These studies suggest a significant role for bacterial pathogens in diarrhoea-associated deaths [1].

Other studies have demonstrated that visible blood in the stool is a poor indicator of a bacterial aetiology in children with diarrhoea [6, 7], suggesting that the use of blood in the stool as a proxy for Shigella spp. may be inadequate and that the indications for antibiotic use in young children with acute diarrhoea could be expanded. Taken together, these studies suggest that current treatment guidelines may be missing the opportunity to appropriately provide antibiotics to a highly selected group of young children who, because of aetiology, disease severity and/or undernutrition, are at a particularly high risk of diarrhoea-associated mortality [1, 8, 9].

Azithromycin, a macrolide with a broad spectrum of antibacterial activity and immunomodulatory properties, has been administered to children in the context of mass drug administration programmes, largely for trachoma prevention [10]. Observations of mortality sparing related to azithromycin administration in these trachoma interventions prompted several large community trials of azithromycin administered via mass drug administration, which have shown significant mortality reductions in children in low-resource settings [11,12,13]. Given these findings, we are conducting a randomised, placebo-controlled trial, the Antibiotics for Childhood Diarrhoea (ABCD) trial, to determine whether the addition of an antibiotic (azithromycin) to the standard management of acute non-bloody watery diarrhoea in a subset of children 2–23 months of age who are dehydrated or undernourished could reduce mortality and improve growth in settings where such deaths commonly occur.

The main aim of the ABCD trial is to compare rates of all-cause mortality in the 180 days following enrolment for an episode of acute non-bloody diarrhoea among high-risk children (dehydrated and/or undernourished) aged 2 to 23 months, living in low-resource settings, who are randomised to receive a 3-day course of azithromycin or placebo in addition to the WHO recommended management of acute watery diarrhoea. A second main aim is to compare the change in linear growth 90 days after enrolment between the same groups. The secondary aims include a comparison of indicators of acute malnutrition, hospitalisations and/or death. These are described in detail under the section 'Outcomes'.
Given the risk of antimicrobial resistance, we also plan to compare resistance profiles in nasopharyngeal (Streptococcus pneumoniae) and stool (E. coli) isolates from a sample of children in the placebo and treatment arms. The protocol for the main trial is described here.

The ABCD trial is a double-blind, individually randomised, parallel-group superiority trial comparing azithromycin with placebo in 11,500 high-risk children aged 2–23 months presenting with non-bloody diarrhoea in seven countries (Bangladesh, India, Kenya, Malawi, Mali, Pakistan and Tanzania). The trial protocol was developed by collaborators at the WHO Department of Maternal, Newborn, Child and Adolescent Health (Geneva, Switzerland) together with teams in Dhaka, Bangladesh (the International Centre for Diarrhoeal Disease Research), New Delhi, India (the Centre for Public Health Kinetics), Nairobi, Kenya (the Kenya Medical Research Institute), Blantyre, Malawi (the Malawi Liverpool Wellcome Trust), Bamako, Mali (the Centre pour le Développement des Vaccins), Karachi, Pakistan (the Aga Khan University), Dar es Salaam, Tanzania (the Muhimbili University of Health and Allied Sciences), Boston, MA, US (the Boston Children's Hospital and Harvard TH Chan School of Public Health), the University of Maryland, the University of Washington Department of Global Health and the Centre for the Integrated Health of Women, Adolescents and Children.

The study design is described in Fig. 1.

[Fig. 1. The Antibiotics for Children with Diarrhoea (ABCD) trial study design. AMR, antimicrobial resistance.]

Study setting and population

The study will be implemented in health facilities in the seven sub-Saharan African and south Asian countries listed above. The countries were selected based on study site characteristics as well as the strengths and experience of the investigator teams in conducting large intervention trials. Within each country, 2–10 individual health facilities will be the sites of patient enrolment.

Inclusion criteria. Children aged 2–23 months, presenting to a designated health care facility at a participating study site with:
- Diarrhoea as per caregiver perception and at least three loose or watery stools in the previous 24 h
- Diarrhoea for less than 14 days prior to screening, with at least one of the following risk criteria at presentation:
  - signs of some or severe dehydration as per the WHO Pocket Book 2013 [14]
  - moderate wasting, defined as a mid-upper arm circumference (MUAC) <125 mm (but ≥115 mm) or a weight-for-length z score (WLZ) greater than −3 and less than or equal to −2 after rehydration during a stabilisation period
  - severe stunting (length-for-age z score (LAZ) less than −3 based on WHO norms)
- Parent or guardian (caregiver) willing to allow household visits on day 2 and day 3 and willing to return to the facility on day 90 post-enrolment
- Parent or guardian (caregiver) provides written consent for trial participation on behalf of the child

Exclusion criteria:
- Dysentery (blood in stool reported by caregiver or observed by health care worker)
- Clinically suspected Vibrio cholerae infection
- Previously or currently enrolled in the ABCD study
- Concurrently enrolled in another interventional clinical trial
- Sibling or other child in the household enrolled in the ABCD study and currently taking study medication
- Signs of associated infections (pneumonia, severe febrile illness, meningitis, mastoiditis or acute ear infection) requiring antibiotic treatment
- Documented antibiotic use in the 14 days prior to screening (not including standard use of prophylactic antibiotics, i.e. co-trimoxazole use in HIV-exposed children)
- Documented use of metronidazole within the last 14 days
- Known allergy or contraindication to azithromycin
- Severe acute malnutrition, defined as a weight-for-length z score less than −3, or MUAC less than 115 mm, or oedema of both feet
- Living at a distance from the enrolment health centre that prevents adequate directly observed therapy on day 2 and day 3

Site-specific recruitment plans were made. Each site will discuss the protocol and its implementation with medical staff, community health workers in the area and community leaders in order to increase awareness of the trial, as well as to encourage referral of eligible children to enrolling sites.

Screening and enrolment procedures

After a potentially eligible child has been identified, the study staff will screen the child for eligibility, based on the above inclusion and exclusion criteria, using a standardised screening form. If the child presents with diarrhoea and no signs of dehydration, the child is enrolled if the anthropometric criteria are met. If the child has signs of "some" or "severe" dehydration, or if the child requires urgent care, they will be kept under observation. During this "stabilisation" period, oral and/or intravenous rehydration will be provided, and all urgent conditions will be treated according to the WHO Pocket Book of Hospital Care for Children, 2013 [14]. Assignment to either study drug (azithromycin or placebo) will not require alteration to the current standard of care. Once rehydration of the child is successfully completed, and urgent care has been provided, the child is enrolled, if eligible. However, if the child is not stabilised, or requires additional treatment, they will not be screened further.

Anthropometric measurements will be carried out in accordance with the methods specified in the WHO module on measuring a child's growth [15]. Anthropometric measurements will be taken after rehydration and stabilisation as required. Weight will be measured using an electronic scale with a sensitivity of ±10 g by two independent observers; length will be measured using a length board to the nearest 0.1 cm. Calibration of the instruments used to measure length and weight will be done daily. MUAC will be measured using non-stretchable tape to the nearest 0.1 cm.

After confirming eligibility, the accompanying primary caregiver will provide written informed consent. For caregivers who are illiterate, documented witnessed verbal consent and a thumbprint will be obtained. During consent, the purpose of the study, as well as all the study procedures, will be explained to the caregiver by a member of the trial team in the local language. Consent is taken for the collection, storage and use of the samples collected. Caregivers are informed that they can request discontinuation of storage and destruction of samples collected from the child, and that they can choose to participate in the trial without providing samples.

Randomisation, allocation concealment and blinding

Stratified randomisation (by site) will be carried out in permuted blocks (block sizes 4, 6 and 8); a minimal sketch of this procedure is given below. A computer-generated randomisation list will be converted into unique serial numbers for each enrolled child at each site. Each enrolled child's supply of study medicine will be provided to each site in advance, labelled with this serial number.
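To make the randomisation scheme concrete, here is a minimal illustrative sketch (not the trial's actual list-generation software): a site-stratified list built from randomly ordered permuted blocks of sizes 4, 6 and 8, with the 1:1 allocation and serial-number labelling described above. The site code and serial-number format are hypothetical.

```python
# Illustrative sketch only: site-stratified permuted-block randomisation,
# block sizes 4/6/8, 1:1 azithromycin:placebo, as described in the protocol.
# The site code and serial-number format below are hypothetical.
import random

def make_randomisation_list(site_code, n_children, block_sizes=(4, 6, 8), seed=None):
    rng = random.Random(seed)
    arms = []
    while len(arms) < n_children:
        size = rng.choice(block_sizes)
        # each permuted block is balanced 1:1 between the two arms
        block = ["azithromycin"] * (size // 2) + ["placebo"] * (size // 2)
        rng.shuffle(block)
        arms.extend(block)
    # one unique serial number per enrolled child at this site
    return [(f"{site_code}-{i + 1:04d}", arm) for i, arm in enumerate(arms[:n_children])]

if __name__ == "__main__":
    for serial, arm in make_randomisation_list("BGD01", 12, seed=2017):
        print(serial, arm)
```

In practice, only the third-party agency holding the code would ever see the arm column; sites receive pre-labelled, identical-looking drug kits keyed to the serial numbers, as described next.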
To ensure blinding, the containers and doses for the two groups will be identical, and the active and placebo medications will be similar in all aspects including colour, smell and taste. Treatment allocation (once assigned) will remain blinded to the participant, the site Principal Investigator, the site study staff and the hospital clinicians during all data collection phases of the study. The randomisation code is kept with a third-party agency. The code will only be broken if the Data Safety and Monitoring Board (DSMB) requests this information for a participant (because of a suspected relationship between a serious adverse event and the study drug) or for unmasking a study arm (as part of the interim analysis).

Children will be randomised to one of two arms. Those randomised to the treatment arm receive azithromycin 10 mg/kg as a single dose on 3 consecutive days, reconstituted from a dry powder and given orally. Children randomised to the placebo arm receive an inactive placebo identical in appearance, also as a dry powder, reconstituted similarly and given for 3 consecutive days. The first dose of the study medicine is given under direct observation at the health facility by a trained study worker. On days 2 and 3, a study team member will visit the home of all enrolled children to provide the subsequent doses of the study medicine, or to observe the caregiver giving it. Both groups will receive the standard of care for diarrhoeal disease, including rehydration, supplemental zinc, nutritional counselling, follow-up and guidance on when to return, as per the WHO guidelines. Children with some or severe dehydration will be rehydrated and stabilised prior to completion of screening.

Primary outcomes:
- All-cause mortality in the 180 days following enrolment, compared between the placebo and treatment arms
- Change in linear growth, measured as change in length-for-age z score (∆LAZ), in the 90 days following enrolment, compared between the placebo and treatment arms

Secondary outcomes:
- Change in markers of acute malnutrition between arms (∆MUAC, ∆WLZ and ∆weight)
- The proportion of children who are hospitalised in the 90 days following enrolment
- The proportion of children who are hospitalised or have died in the 90 days following enrolment
- Cause-specific mortality as determined by verbal and social autopsy
- Proportion of strains of E. coli, isolated from stools, resistant to azithromycin and other antibiotics at enrolment among the study population
- Proportion of strains of E. coli, isolated from stools, and S. pneumoniae, isolated from nasopharyngeal swabs, resistant to azithromycin and other antibiotics at day 90 and day 180 in a randomly selected sub-sample of children enrolled in the study and their siblings or close household contacts

Trial assessments and follow-up

Children enrolled into the trial will be followed up for 180 days post-enrolment or until the first primary end point is reached, whichever is earlier. The schedule of enrolment, intervention and follow-up assessments is shown in Fig. 2, which follows the SPIRIT guidelines.

[Fig. 2. Schedule of enrolment, intervention and follow-up assessments for the Antibiotics for Children with Diarrhoea (ABCD) trial. *Screening is expected to be completed on the same single day, except if the child needs treatment for dehydration or urgent care for an illness, in which case it can be completed once the child has been stabilised. #Stool samples and nasopharyngeal isolates are only collected for the 15% of participants in the antimicrobial resistance study. D, day; MUAC, mid-upper arm circumference.]

At day 45, telephone or in-person contact will be made by a non-medical member of the trial team to ascertain the vital status of the child and any hospital admissions, and to remind the caregiver of the day 90 appointment. At day 90, participants will be followed up in the clinic to ascertain the vital status, hospitalisations and health of the enrolled child, as well as their nutritional status (weight, length and MUAC). Participant retention will be encouraged by the frequent follow-up during days 1–3, the provision to caretakers of a study telephone number for questions, and the follow-up schedule noted in Fig. 2. Caretakers of enrolled children will provide study staff with their home address and the closest available telephone number on which they can be contacted. At day 180, study personnel will contact study participants by telephone or in person. They will ascertain the vital status of the enrolled child at day 180 and document any hospital admissions since the day 90 visit.

If a child is reported to have died at any time during the study follow-up period, specially trained study staff will review the hospital record (if available) and conduct a standardised verbal and social autopsy interview to ascertain the date, cause and context of death. The verbal autopsy effort will include capturing all relevant hospital/facility-based information on a child that dies in a facility.

A subset of 1700 ABCD participants will be enrolled in an antimicrobial resistance (AMR) detection study within the trial. This AMR component will study the development of AMR among E. coli (stool isolates) and S. pneumoniae (nasopharyngeal isolates) in study participants and their close household contacts. A faecal sample will be collected from all children at enrolment for culture and sensitivity testing of E. coli. Children enrolled in the AMR sub-study will also be asked to provide a stool sample and a nasopharyngeal swab on day 90 and day 180. In addition, a stool sample and a nasopharyngeal swab will be collected from a sibling or other close household contact of the enrolled child at day 90 and day 180. The protocols for this AMR detection study will be described elsewhere.

Diarrhoea aetiology

Each site will undertake an assessment of the potential viral and bacterial pathogens associated with the diarrhoeal episode in a separate study. A total of 7000 stool specimens will be tested. Identification will be facilitated by quantitative molecular diagnostic methods.

Recording of serious adverse events

Serious adverse events are defined as any death, hospitalisation or life-threatening event occurring in the period from enrolment to day 10. Although it is relatively safe for use in young children, azithromycin can cause adverse events including nausea, abdominal pain [16] and diarrhoea arising from its effects on the gut microbiota [17, 18]. Adverse events in the ABCD trial are monitored by the Trial Steering Committee (TSC) and reviewed by the DSMB. These events will be recorded by the trial staff and confirmed by the study physicians. Study physicians can, at any time, withdraw a participant from the trial if a risk to their safety from continued participation is perceived.
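The anthropometric indices above (LAZ, WLZ) are computed from the WHO growth standards using the standard LMS method. The sketch below shows that calculation; the L, M, S values hard-coded here are placeholders for illustration, not actual WHO reference values, which in practice are looked up by age, sex and measurement from the published WHO tables.

```python
# Illustrative sketch of the LMS z-score calculation used for indices such
# as LAZ and WLZ. The L, M, S values below are made-up placeholders; real
# analyses look them up from the WHO growth-standard tables by age and sex.
import math

def lms_z_score(x, L, M, S):
    """Standard LMS transformation: z = ((x / M)**L - 1) / (L * S) for L != 0."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical example: length 72.0 cm for a child whose WHO reference row
# (by age and sex) had L=1, M=75.0, S=0.035 -- placeholder numbers only.
laz = lms_z_score(72.0, L=1.0, M=75.0, S=0.035)
print(f"LAZ = {laz:.2f}")  # negative: below the reference median

# Delta-LAZ at day 90 is then simply LAZ(day 90) - LAZ(enrolment).
```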
The following measures provide quality assurance:
- Extensive initial and ongoing training sessions for data collectors in the protocol procedures, with a focus on anthropometry measurements and dehydration assessment, to ensure high reliability between data collectors.
- Real-time electronic data capture to ensure data validation, such as range and logical checks, and data integrity.
- Principal Investigators will receive brief monthly progress reports from the Data Coordinating Centre during the entire study period and will participate in regular telephone conferences with WHO staff. The monthly progress reports will include the number of children assessed, the number of children recruited, home visits due to be conducted, actual visits conducted, child hospitalisations, deaths, and verbal and social autopsies [19] conducted.
- Field supervisors will be responsible for ensuring that the training of the field staff is rigorous and of high quality. They will schedule testing and retraining as required at their individual sites. Assessing individual study personnel's ability to apply the standardised enrolment criteria and to conduct the anthropometric measurements and dehydration assessment consistently across the study population is a key responsibility of the field supervisor.
- WHO study coordinators, and others identified by them, will ensure that at least two structured monitoring visits are conducted at each site every year. The monitoring visits will have as their primary aim quality control and the improvement of study implementation. The monitors will make direct observations of all relevant study procedures and data management activities.

No financial compensation will be provided for participation in the trial. The presenting diarrhoeal episode and any serious illnesses will be treated at the time of presentation.

An external data management agency will ensure harmonisation of data collection and data management processes across all sites. All sites will collect information on a core set of variables with standard definitions, and a set of range and consistency checks to be applied to these variables will be available. Each site will be responsible for data entry and initial cleaning of the data, including running range and consistency checks, as well as periodic reviews of distributions and identification of outliers. Each study site will resolve any inconsistencies within its database, in consultation with the field data collection team, and with field verification if needed. Individual sites will be required to enter data on the core set of variables into a REDCap database. The data management team will run a monthly set of range and consistency checks, resolve inconsistencies or queries with the sites and provide data summaries as the trial progresses; queries will be resolved before the next monthly round of checks. A minimal sketch of such checks is given below.
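To illustrate what a monthly range and consistency check might look like, here is a minimal sketch over a tabular extract of the core variables. The column names and thresholds are hypothetical stand-ins; the protocol does not reproduce the actual REDCap data dictionary.

```python
# Minimal sketch of automated range/consistency checks on a core-variable
# extract. Column names and limits are hypothetical, not the trial's actual
# REDCap data dictionary.
import pandas as pd

def run_checks(df: pd.DataFrame) -> pd.DataFrame:
    issues = []
    # Range check: eligible age is 2-23 months
    bad_age = df[(df["age_months"] < 2) | (df["age_months"] > 23)]
    issues += [(i, "age_months out of 2-23 range") for i in bad_age.index]
    # Range check: MUAC of enrolled children should be >= 115 mm (SAM is excluded)
    bad_muac = df[df["muac_mm"] < 115]
    issues += [(i, "muac_mm below exclusion threshold") for i in bad_muac.index]
    # Consistency check: day-90 visit date must follow the enrolment date
    swapped = df[pd.to_datetime(df["visit_d90"]) <= pd.to_datetime(df["enrolled"])]
    issues += [(i, "day-90 visit not after enrolment") for i in swapped.index]
    return pd.DataFrame(issues, columns=["row", "query"])

demo = pd.DataFrame({
    "age_months": [6, 30, 14],
    "muac_mm": [121, 130, 112],
    "enrolled": ["2018-01-10", "2018-01-11", "2018-01-12"],
    "visit_d90": ["2018-04-10", "2018-04-11", "2018-01-01"],
})
print(run_checks(demo))  # flagged rows become queries sent back to the site
```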
Statistical considerations

Sample size and power

To determine the sample size for the ABCD trial, we estimated that the 180-day mortality in the control group would be 2.7%. The estimated overall mortality in the control group was based on the earlier GEMS study [4] as well as the assumed proportions of children with various risk factors in the sample and the respective sub-group-specific mortality. Assuming this baseline mortality, a relative risk in the intervention group of 0.65 (a 35% reduction in mortality in the intervention group), 90% power, 95% confidence, an assumed loss to follow-up of 10%, and a 1:1 ratio of participants in the control and intervention groups, the required sample size would be 5696 per group, or 11,392 in total. Since there is a plan to conduct one interim analysis, the sample size will be inflated by a factor of 1.009 [20]. The total planned sample size will be 5750 per group and 11,500 in total. As the study is implemented in seven countries, each country will on average need to enrol 1645 children (approximately 822 per study group).

The second primary aim is to compare ∆LAZ between the control and intervention groups from enrolment to 90 days. We have estimated that, with the above sample size, we will have 80% power to detect at least a 0.04 difference in mean ∆LAZ between study groups, using a two-sided t test with α = 0.05 and a standard deviation (SD) of 0.7 in both groups. This is comparable to results observed in GEMS. As the SD of ∆LAZ has been shown to vary, this sample size will be adequate to detect a difference in ∆LAZ ranging between 0.04 and 0.06 given SDs varying between 0.5 and 0.7, with a power of 80–90%.

An intention-to-treat (ITT) approach, including all randomised participants, is the primary analysis approach. For the first primary outcome, i.e. the mortality outcome, the period prevalence of death will be compared between the randomly assigned treatment groups using relative risk regression. Mortality (a binary variable) will be defined as any event of death from the time of randomisation to the end of day 180. Participants with a missing mortality outcome will be assigned 'alive'. For the primary ITT analysis, the following model will be used:

$$ E(Y \mid \text{randomisation arm}) = e^{\beta_0 + \beta_1 x_{azm}} $$

where $x_{azm}$ is an indicator variable specifying randomisation to the azithromycin group ($x_{azm} = 1$) or not ($x_{azm} = 0$). The risk ratio comparing the risk of death in children randomised to azithromycin versus placebo is given by $e^{\beta_1}$. The statistical significance of this comparison will be determined by a Wald test. In a per-protocol analysis, secondary to the ITT analysis, the same relative risk regression will be fit among the subset of children with documented completion of the full course of treatment.

For the co-primary outcome, i.e. ∆LAZ, a linear regression model will be used to compare mean ∆LAZ across the treatment groups in surviving children. ∆LAZ is operationalised as the difference between LAZ at the 90-day follow-up visit and LAZ at baseline. For the primary ITT analysis, the LAZ outcome will only be analysed for those children with a measured outcome. The following model will be used:

$$ E(Y \mid \text{randomisation arm}) = \beta_0 + \beta_1 x_{azm} $$

where $Y$ is mean ∆LAZ and $x_{azm}$ is an indicator variable of randomisation to the azithromycin group ($x_{azm} = 1$) or not ($x_{azm} = 0$). The mean difference in ∆LAZ in children randomised to azithromycin versus placebo is given by $\beta_1$. The statistical significance of this comparison will be determined by an independent t test. (Minimal sketches of the sample-size calculation and the relative risk regression are given below.)
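As a check on the numbers above, here is a minimal sketch of a two-proportion sample-size calculation under the stated assumptions (2.7% control mortality, relative risk 0.65, 90% power, two-sided α = 0.05, 10% loss to follow-up). The normal-approximation formula used here is one standard choice; the protocol's exact figure of 5696 may come from a slightly different variant.

```python
# Sketch: normal-approximation sample size for comparing two proportions,
# under the protocol's stated assumptions. Not the trial's actual software.
from math import sqrt
from statistics import NormalDist

p1 = 0.027          # assumed 180-day mortality, control arm
p2 = 0.65 * p1      # relative risk 0.65 in the intervention arm
alpha, power, loss = 0.05, 0.90, 0.10

z_a = NormalDist().inv_cdf(1 - alpha / 2)
z_b = NormalDist().inv_cdf(power)
p_bar = (p1 + p2) / 2

num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
       + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
n = num / (p1 - p2) ** 2          # per group, before attrition
n_adj = n / (1 - loss)            # inflate for 10% loss to follow-up
print(round(n), round(n_adj))     # roughly 5100 and 5700 per group
```

And a sketch of estimating the risk ratio for the mortality outcome. Log-binomial models of the form above can be fragile to fit, so this sketch uses the closely related modified Poisson approach (Poisson family, log link, robust standard errors), a common way to estimate the same risk ratio; the data here are simulated, and the protocol does not specify which fitting approach will be used.

```python
# Sketch: estimating the risk ratio e^{beta_1} by modified Poisson regression
# (log link + robust standard errors) on simulated data; illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 11_500
x_azm = rng.integers(0, 2, size=n)                 # 1 = azithromycin arm
p = np.where(x_azm == 1, 0.65 * 0.027, 0.027)      # true risk ratio = 0.65
died = rng.binomial(1, p)

X = sm.add_constant(x_azm)
fit = sm.GLM(died, X, family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params[1]))                       # estimated risk ratio, ~0.65
```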
In a per-protocol analysis, also secondary to the ITT analysis, a linear regression model will be fit as described above among the subset of children with documented completion of the full course of treatment. A secondary per-protocol analysis will exclude participants who are not adherent to the full treatment schedule or who have missing outcomes. Effect modification by site, age, sex, anthropometry and socioeconomic status will be explored. These analyses are planned to be exploratory; no testing within any stratum is envisaged, as there is unlikely to be adequate power within strata.

Trial governance

The ABCD trial is overseen by the TSC, which consists of the site Principal Investigators and the WHO Trial Coordinators. The TSC is responsible for overall supervision of the trial. A second committee, the Trial Advisory Group (TAG), consists of external experts with experience in diarrhoeal disease in low-income contexts. The TAG provides advice to the TSC periodically during the life of the trial. An independent DSMB has been established by the WHO to monitor severe adverse events and to approve the statistical analysis plan and the associated stopping rules for benefit, futility or harm, determined using O'Brien-Fleming stopping boundaries. The DSMB includes five members with expertise in clinical trials, statistics, child mortality assessment, ethics, and paediatric care in resource-limited settings. When approximately half of the person-time has accrued in the study, the DSMB will review an interim data analysis by arm to determine whether the stopping boundaries have been crossed. The SPIRIT checklist for the present study is provided in Additional file 1.

ABCD is a large, multi-site, paediatric trial testing the potential benefits of azithromycin in reducing mortality and improving linear growth when targeted to high-risk children with non-bloody diarrhoea. A recently concluded cluster randomised trial of mass drug administration of azithromycin has shown a reduction in all-cause mortality among children [11,12,13]. However, the effect appeared limited to a single site in this trial. Another large trial of azithromycin in addition to seasonal malaria chemoprophylaxis [21] has not documented any benefit on mortality, although a reduction in gastrointestinal infections was seen. The potential effect of azithromycin on mortality in children at high risk of a diarrhoea-related death has not been studied to date. Young children who are undernourished or have dehydrating diarrhoea are at higher risk of death in the 3-month period after the onset of diarrhoea [4]. More recently, global data suggest that multiple bacterial and parasitic diarrhoeal pathogens are significantly associated with death, particularly in sub-Saharan Africa [1]. In low-resource settings, the high burden of bacterial causes of diarrhoea [1, 3, 5] has led to suggestions that antibiotics be used more widely, even in the absence of dysentery, as most children with these infections do not have bloody stools [6, 7]. Despite WHO recommendations for the management of diarrhoea, which suggest that only children with bloody diarrhoea receive antibiotics, over 40% of children with non-bloody diarrhoea currently receive antibiotics as part of non-standard treatment in low-income settings [22]. This overuse of antibiotics contributes to the development and spread of antimicrobial resistance, both at individual and population levels.
While concerns have been raised about expanding empiric antibiotic treatment for diarrhoea, restricting such treatment to a clearly defined sub-group of vulnerable children at high risk of death may potentially lower antimicrobial resistance, if prescribers perceive that antibiotic treatment is reserved for the children at highest risk. Prior studies in pneumonia have documented reduced antibiotic use when guidelines were updated to clarify which risk groups should be treated [23]. The ABCD trial will determine whether providing azithromycin to children with acute diarrhoea who are at high risk reduces mortality and/or diarrhoea-related morbidity, including linear growth faltering. These data will inform the global debate regarding the potential role of antibiotics in reducing child mortality. If the ABCD trial demonstrates benefit, this will provide evidence to support a reconsideration of the present WHO guidelines for the management of diarrhoea in a clearly identified high-risk population. Conversely, if there is no evidence of benefit, a convincing case can be made for more rigorous control over the inappropriate use of antibiotics in diarrhoeal case management and for further strengthening global efforts to expand and improve coverage of the current Integrated Management of Childhood Illness guidelines.

Trial status

Recruitment to the ABCD trial began in December 2017 and is currently ongoing. It is expected to conclude in March 2020. The current protocol is version 9.0, dated 21 December 2018.

The datasets generated during the current study are available from the corresponding author on reasonable request.

References

1. GBD 2016 Diarrhoeal Diseases Collaborators. Estimates of the global, regional, and national morbidity, mortality, and aetiologies of diarrhoea in 195 countries: a systematic analysis for the Global Burden of Disease Study 2016. Lancet Infect Dis. 2018;18:1211–28.
2. World Health Organization. Integrated management of childhood illness. https://apps.who.int/iris/bitstream/handle/10665/43993/9789241597289_eng.pdf?sequence=1. Accessed 22 June 2019.
3. Chopra M, Mason E, Borrazzo J, Campbell H, Rudan I, Liu L, Black RE, Bhutta ZA. Ending of preventable deaths from pneumonia and diarrhoea: an achievable goal. Lancet. 2013;381(9876):1499–506.
4. Kotloff KL, Nataro JP, Blackwelder WC, et al. Burden and aetiology of diarrhoeal disease in infants and young children in developing countries (the Global Enteric Multicenter Study, GEMS): a prospective, case-control study. Lancet. 2013;382(9888):209–22.
5. Platts-Mills JA, et al. Use of quantitative molecular diagnostic methods to assess the aetiology, burden, and clinical characteristics of diarrhoea in children in low-resource settings: a reanalysis of the MAL-ED cohort study. Lancet Global Health. 2018;6(12):e1309–18.
6. Pavlinac PB, Denno DM, John-Stewart GC, Onchiri FM, Naulikha JM, Odundo EA, Hulseberg CE, Singa BO, Manhart LE, Walson JL. Failure of syndrome-based diarrhea management guidelines to detect Shigella infections in Kenyan children. J Pediatric Infect Dis Soc. 2016;5(4):366–74.
7. Pernica JM, Steenhoff AP, Welch H, Mokomane M, Quaye I, Arscott-Mills T, Mazhani L, Lechiile K, Mahony J, Smieja M, Goldfarb DM. Correlation of clinical outcomes with multiplex molecular testing of stool from children admitted to hospital with gastroenteritis in Botswana. J Pediatric Infect Dis Soc. 2016;5(3):312–8.
8. Rice AL, Sacco L, Hyder A, Black RE.
Malnutrition as an underlying cause of childhood deaths associated with infectious diseases in developing countries. Bull World Health Organ. 2000;78:1207–21.
9. Wasihun AG, Dejene TA, Teferi M, et al. Risk factors for diarrhoea and malnutrition among children under the age of 5 years in the Tigray Region of Northern Ethiopia. PLoS One. 2018;13(11):e0207743.
10. World Health Organization. Report of the 20th meeting of the WHO alliance for the global elimination of trachoma by 2020. Sydney, Australia; 2019.
11. Porco TC, Gebre T, Ayele B, et al. Effect of mass distribution of azithromycin for trachoma control on overall mortality in Ethiopian children: a randomized trial. JAMA. 2009;302:962–8.
12. Keenan JD, Bailey RL, West SK, et al. Azithromycin to reduce childhood mortality in sub-Saharan Africa. N Engl J Med. 2018;378:1583–92.
13. Keenan JD, Arzika AM, Maliki R, et al. Longer-term assessment of azithromycin for reducing childhood mortality in Africa. N Engl J Med. 2019;380:2207–14.
14. World Health Organization. Pocket book of hospital care for children. Guidelines for the management of common childhood illnesses. Geneva: World Health Organization; 2013.
15. World Health Organization. Measuring a child's growth. https://www.who.int/childgrowth/training/module_b_measuring_growth.pdf. Accessed 22 June 2019.
16. Lakoš AK, Pangerčić A, Gašparić M, Kukuruzović MM, Kovačić D, Baršić B. Safety and effectiveness of azithromycin in the treatment of respiratory infections in children. Curr Med Res Opin. 2012;28(1):155–62.
17. Seidman JC, Coles CL, Silbergeld EK, et al. Increased carriage of macrolide-resistant fecal E. coli following mass distribution of azithromycin for trachoma control. Int J Epidemiol. 2014;43:1105–13.
18. Parker EPK, Praharaj I, John J, Kaliappan SP, Kampmann B, Kang G, Grassly NC. Changes in the intestinal microbiota following the administration of azithromycin in a randomised placebo-controlled trial among infants in south India. Sci Rep. 2017;7(1):9168.
19. Koffi AK, Maina A, Yaroh AG, Habi O, Bensaïd K, Kalter HD. Social determinants of child mortality in Niger: results from the 2012 national verbal and social autopsy study. J Glob Health. 2016;6(1):010603.
20. Jennison C, Turnbull BW. Group sequential methods with applications to clinical trials. Boca Raton: Chapman & Hall/CRC; 2000.
21. Chandramohan D, Dicko A, Zongo I, et al. Effect of adding azithromycin to seasonal malaria chemoprevention. N Engl J Med. 2019;380:2197–206.
22. Rogawski ET, Platts-Mills JA, Seidman JC, et al. Use of antibiotics in children younger than two years in eight countries. Bull World Health Organ. 2017;95(1):49–61.
23. ISCAP Study Group. Three day versus five day treatment with amoxicillin for non-severe pneumonia in young children: a multicentre randomised controlled trial. BMJ. 2004;328(7443):791.

The ABCD team would like to thank the children and families that have participated in the trial to date. All the staff at all the participating sites are acknowledged for their dedication. The authors also appreciate the independent oversight provided by the DSMB members, including Mathuram Santosham (chair), Yin Bun Cheung, Shinjini Bhatnagar, Elizabeth Molyneux and Godwin Ndossi. The data management team at RTI International are acknowledged for supporting the data management function and organising monitoring visits at the sites.
Group authorship

A list of members of the ABCD study team and their affiliations follows:

T Alam, International Centre for Diarrhoeal Disease Research, Bangladesh
D Ahmed, International Centre for Diarrhoeal Disease Research, Bangladesh
T Ahmed, International Centre for Diarrhoeal Disease Research, Bangladesh
MJ Chisti, International Centre for Diarrhoeal Disease Research, Bangladesh
MW Rahman, International Centre for Diarrhoeal Disease Research, Bangladesh
AK Asthana, Subharti Medical College, Swami Vivekananda Subharti University, Meerut, India
PK Bansal, District Hospital Meerut (UP), India
A Chouhan, Center for Public Health Research (CPHK), New Delhi, India
S Deb, Center for Public Health Research (CPHK), New Delhi, India
P Dhingra, Center for Public Health Research (CPHK), New Delhi, India
U Dhingra, Center for Public Health Research (CPHK), New Delhi, India
A Dutta, Center for Public Health Research (CPHK), New Delhi, India
VK Jaiswal, Department of Pediatrics, LLRM Medical College, Meerut, India
J Kumar, Center for Public Health Research (CPHK), New Delhi, India
A Pandey, Subharti Medical College, Swami Vivekananda Subharti University, Meerut, India
S Sazawal, Center for Public Health Research (CPHK), New Delhi, India
AK Sharma, Pediatrician, CHC Mawana, Meerut, India
C McGrath, University of Washington, Department of Global Health
C Nyabinda, University of Washington, Kenya
M Okello, University of Washington, Kenya
PB Pavlinac, University of Washington, Department of Global Health
B Singa, Kenya Medical Research Institute
JL Walson, University of Washington, Department of Global Health
N Bar-Zeev, International Vaccine Access Center, Department of International Health, Bloomberg School of Public Health, Johns Hopkins University, Baltimore
Q Dube, Malawi-Liverpool-Wellcome Trust Clinical Research Programme, Queen Elizabeth Central Hospital, Blantyre, Malawi
B Freyne, Malawi-Liverpool-Wellcome Trust Clinical Research Programme, Malawi, and Institute of Infection and Global Health, University of Liverpool
C Ndamala, Malawi-Liverpool-Wellcome Trust Clinical Research Programme, Malawi
L Ndeketa, Malawi-Liverpool-Wellcome Trust Clinical Research Programme, Malawi
H Badji, Centre pour le Développement des Vaccins (CVD-Mali), Bamako, Mali
JP Booth, Department of Pediatrics and Medicine, Center for Vaccine Development and Global Health, University of Maryland School of Medicine, Baltimore, Maryland, USA
F Coulibaly, Centre pour le Développement des Vaccins (CVD-Mali), Bamako, Mali
F Haidara, Centre pour le Développement des Vaccins (CVD-Mali), Bamako, Mali
K Kotloff, Department of Pediatrics and Medicine, Center for Vaccine Development and Global Health, University of Maryland School of Medicine, Baltimore, Maryland, USA
D Malle, Centre pour le Développement des Vaccins (CVD-Mali), Bamako, Mali
A Mehta, Department of Pediatrics and Medicine, Center for Vaccine Development and Global Health, University of Maryland School of Medicine, Baltimore, Maryland, USA
S Sow, Centre pour le Développement des Vaccins (CVD-Mali), Bamako, Mali
M Tapia, Department of Pediatrics and Medicine, Center for Vaccine Development and Global Health, University of Maryland School of Medicine, Baltimore, Maryland, USA
S Tennant, Department of Pediatrics and Medicine, Center for Vaccine Development and Global Health, University of Maryland School of Medicine, Baltimore, Maryland, USA
A Hotwani, Aga Khan University, Karachi, Pakistan
F Kabir, Aga Khan University, Karachi, Pakistan
F Qamar, Aga Khan University, Karachi, Pakistan
S Qureshi, Aga Khan University, Karachi, Pakistan
S Shakoor, Aga Khan University, Karachi, Pakistan
R Thobani, Aga Khan University, Karachi, Pakistan
MT Yousufzai, Aga Khan University, Karachi, Pakistan
M Bakari, Muhimbili University of Health and Allied Sciences, Dar es Salaam, Tanzania
C Duggan, Boston Children's Hospital, Boston
U Kibwana, Muhimbili University of Health and Allied Sciences, Dar es Salaam, Tanzania
R Kisenge, Muhimbili University of Health and Allied Sciences, Dar es Salaam, Tanzania
K Manji, Muhimbili University of Health and Allied Sciences, Dar es Salaam, Tanzania
S Somji, Muhimbili University of Health and Allied Sciences, Dar es Salaam, Tanzania
C Sudfeld, Harvard TH Chan School of Public Health, Boston
P Ashorn, Department of Maternal, Newborn, Child and Adolescent Health, World Health Organization, Geneva
R Bahl, Department of Maternal, Newborn, Child and Adolescent Health, World Health Organization, Geneva
A De Costa*, Department of Maternal, Newborn, Child and Adolescent Health, World Health Organization, Geneva
J Simon, Department of Maternal, Newborn, Child and Adolescent Health, World Health Organization, Geneva

The ABCD trial is funded through a grant from the Bill and Melinda Gates Foundation (grant no. OPP1126331). The funders had no role in the study design. The trial is an investigator-led trial coordinated by the Department of Maternal, Newborn, Child and Adolescent Health, World Health Organization, Geneva.
All named authors adhere to the authorship guidelines of Trials, and all authors have agreed to publication. Correspondence to A. De Costa.

The trial has been approved by the WHO Ethics Review Committee and by the ethics committees of all participating sites in the seven countries: Bangladesh (Ethical Review Committee of the International Centre for Diarrhoeal Disease Research), India (Institutional Ethics Committee, Subharti Medical College & Hospital, Swami Vivekanand Subharti University), Kenya (Kenya Medical Research Institute Ethical Review Committee and the University of Washington Institutional Review Board), Malawi (University of Malawi College of Medicine Research Ethics Committee), Mali (Comité d'Ethique de la FMPOS (ERC at USTTB) and the University of Maryland, Baltimore Research Ethics Committee), Pakistan (Aga Khan University Ethical Review Committee), Tanzania (Muhimbili University of Health and Allied Sciences Senate Research and Publications Committee; Tanzanian Food and Drug Administration; National Institute for Medical Research) and Boston (Boston Children's Hospital Institutional Review Board). Each updated version of the approved protocol will be submitted to the ethics committees above. Written informed consent (from parents/caregivers) is obtained by study staff before any trial procedures are carried out, and participant confidentiality is maintained throughout the trial in line with standard ICH-GCP principles.

Additional file 1. SPIRIT checklist for the ABCD trial.

Alam, T., Ahmed, D., Ahmed, T. et al. A double-blind placebo-controlled trial of azithromycin to reduce mortality and improve growth in high-risk young children with non-bloody diarrhoea in low resource settings: the Antibiotics for Children with Diarrhoea (ABCD) trial protocol. Trials 21, 71 (2020). https://doi.org/10.1186/s13063-019-3829-y. Received: 05 August 2019.

Keywords: Paediatric diarrhoea
Dynamics lunch: Elon Lindenstrauss (HUJI) - Bilu's theorem
I will describe Bilu's equidistribution theorem for roots of polynomials, and explain some implications this has for the entropy of toral automorphisms.
Dynamics lunch: Asaf Katz - Uniqueness of measure of maximal entropy
Dynamics lunch: Sebastian Donoso (HUJI) - Automorphism groups of low complexity subshifts
Abstract: The automorphism group of a subshift $(X,\sigma)$ is the group of homeomorphisms of $X$ that commute with $\sigma$. It is known that such groups can be extremely large for positive entropy subshifts (like full shifts or mixing SFTs). In this talk I will present some recent progress in the understanding of the opposite case, the low complexity one. I will show that automorphism groups are highly constrained for low complexity subshifts. For instance, for a minimal subshift with sublinear complexity the automorphism group is generated by the shift and a finite set.
Dynamics lunch: Yuri Kifer (HUJI) - On the Erdős-Rényi law of large numbers and its extensions
Dynamics lunch: Ori Gurel Gurevitch (HUJI) - Stationary random graphs
Dynamics lunch seminar: Brandon Seward (HUJI) - Entropy theory for non-amenable groups (part III)
Dynamics lunch: Tom Gilat (HUJI) - "Measure rigidity for `dense' multiplicative semigroups (following Einsiedler and Fish)"
Topology & geometry, Egor Shelukhin (IAS), "The L^p diameter of the group of area-preserving diffeomorphisms of S^2"
Ross building, Hebrew University (Seminar Room 70A)
Abstract: We use a geometric idea to give an analytic estimate for the word-length in the pure braid group of S^2. This yields that the L^1-norm (and hence each L^p-norm, including L^2) on the group of area-preserving diffeomorphisms of S^2 is unbounded. This solves an open question arising from the work of Shnirelman and Eliashberg-Ratiu. Joint work in progress with Michael Brandenbursky.
Topology & geometry, Ailsa Keating (Columbia University), "Homological Mirror Symmetry for singularities of type Tpqr"
Topology & geometry, Mikhail Katz (Bar Ilan University), "Determinantal variety and bi-Lipschitz equivalence"
Abstract: The unit circle viewed as a Riemannian manifold has diameter (not 2 but rather) π, illustrating the difference between intrinsic and ambient distance. Gromov proceeded to erase the difference by pointing out that when a Riemannian manifold is embedded in L∞, the intrinsic and the ambient distances coincide in a way that is as counterintuitive as it is fruitful. Witness the results of his 1983 Filling paper.
Topology & geometry: Pavel Paták (HUJI), "Homological non-embeddability and a qualitative topological Helly-type theorem"
Abstract: The classical theorem of Van Kampen and Flores states that the $k$-dimensional skeleton of the $(2k+2)$-dimensional simplex cannot be embedded into $\mathbb{R}^{2k}$. We present a version of this theorem for chain maps, and as an application we prove a qualitative topological Helly-type theorem.
If we define the Helly number of a finite family of sets to be one if all sets in the family have a point in common, and to be the largest size of an inclusion-minimal subfamily with empty intersection otherwise, the theorem can be stated as follows:
Topology & geometry, Sari Ghanem (Université Joseph Fourier - Grenoble I), "The decay of SU(2) Yang-Mills fields on the Schwarzschild black hole with spherically symmetric small energy initial data"
Levi building, Hebrew University (Room 06) **Note the special location**
Topology & geometry, Amitai Zernik (Hebrew University), "Fixed-point Expressions for Open Gromov-Witten Invariants - overview and $A_{\infty}$ perspective"
In this pair of talks I will discuss how to obtain fixed-point expressions for open Gromov-Witten invariants. The talks will be self-contained, and the second talk will only require a small part of the first talk, which we will review. The Atiyah-Bott localization formula has become a valuable tool for the computation of symplectic invariants given in terms of integrals on the moduli spaces of closed stable maps. In contrast, the moduli spaces of open stable maps have boundary, which must be taken into account
Volume 414 - 41st International Conference on High Energy Physics (ICHEP2022) - Poster Session
Level 1 Muon Triggers for the CMS Experiment at the HL-LHC
M. Glazewska* and M. Konecki
Pre-published on: November 05, 2022
The High Luminosity LHC (HL-LHC) is expected to commence operation in 2029 with a luminosity of 7.5$\times$10$^{34}$ cm$^{-2}$s$^{-1}$. While this luminosity will bring a higher data rate beneficial to the search for interesting physics, it comes with the consequence of a pileup of $\sim$200. These harsher operating conditions will require an upgrade of the trigger system. The track-reconstruction algorithms of the Level-1 Trigger for the three Muon Track Finder (MTF) regions (Barrel, Overlap, Endcap) will be developed in order to maintain and possibly improve the current event-selection precision. Stand-alone candidates from the outer tracking system will also be taken into account in the decision of the Global Muon Trigger (GMT). The MTF and GMT will be implemented in a custom X2O board.
DOI: https://doi.org/10.22323/1.414.1219
Metadata are provided both in "article" format (very similar to INSPIRE), as this helps create very compact bibliographies which can be beneficial to authors and readers, and in "proceeding" format, which is more detailed and complete.
Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The Most Crucial Financial Ratios for Penny Stocks
By Marianna Galstyan. Marianna Galstyan has 6+ years of experience as a business consultant and 3+ years as a financial policy analyst.
Fact checked by Skylar Clarine. Skylar Clarine is a fact-checker and expert in personal finance with a range of experience including veterinary technology and film studies.
Wise investors who want to keep their money usually stay away from penny stocks. But once in a while, a penny stock can hit the jackpot. Ford Motor Co. (F) and American Airlines Group Inc. (AAL), for example, both started out as penny stocks and are now at the blue-chip end of the trading spectrum. Investors who are willing to brave the volatile and lightly regulated world of penny stocks can study key financial ratios to mitigate risks and possibly even make a good investment.
What Are Penny Stocks?
Penny stocks, as defined by the U.S. Securities and Exchange Commission, are securities issued by companies that have a market cap of less than $250 million or $300 million. Some experts choose to adopt a cut-off value of $1 per share. These stocks trade in over-the-counter (OTC) markets. Unlike conventional exchanges like Nasdaq or the New York Stock Exchange, over-the-counter markets do not hold companies to minimum standard requirements to remain on the exchange. These may be companies that have no proven track record, unpredictable revenues or earnings, shaky management, and very little disclosure about their operations. Other penny stock companies operate in unproven sectors of the economy or have products or services that are yet to be tested in the market.
Penny stocks are attractive because they are cheap. Investors dream of finding that future Ford Motor or American Airlines and reaping the rewards of exponential growth. However, these low share prices often come with considerable liabilities. Penny stocks are highly volatile and lack adequate liquidity. This means that even if stock prices rise, investors may not be able to sell shares before prices fall again. The speculative nature of penny stocks requires due diligence and analysis to make the investment in these securities something other than a pure gamble.
How to Reduce the Risks of Penny Stocks
One way to reduce the risk associated with the inadequate disclosure of penny stocks is to pick from companies in the OTCQX tier of the over-the-counter markets. OTCQX has stricter financial standards for the listed companies. These companies must comply with U.S. securities laws and meet higher standards of operations when compared to the other two OTC market tiers, OTCQB and OTC Pink. Investors should be especially wary of companies listed on OTC Pink, as they are not required to file with the SEC and are therefore not regulated.
To uncover a sound penny stock investment, use fundamental analysis to identify factors affecting the company and to assess the strength of its operations. Keep in mind, though, that with penny stocks the lack of timely and pertinent public information may make good fundamental analysis difficult to complete. Given adequate financial disclosure, we can apply some of the same analytical methods we use for larger companies to determine if a given penny stock is worth our investment dollars.
Strong numbers and a positive trend on the balance sheet, income statement, and cash flow statement are important because so much of the penny stock's value is based on future expectations of performance.
Liquidity Ratios: Liquidity ratios (such as the current ratio, quick ratio, cash ratio, and operating cash flow ratio) are the first ratios that an investor should compute for penny stocks. Often, penny stocks are unable to cover their short-term liabilities in a given time frame. Lower liquidity ratios (say, less than 0.5) are a good indication that the company is struggling to stay in business or to advance its operations.
Leverage Ratios: Another important subset of ratios are leverage ratios. They are similar to liquidity ratios in that they focus on the company's ability to cover debt. In this case, it is the long-term debt that we are concerned about. Two important leverage ratios are the debt ratio and the interest coverage ratio.
$$\text{Debt Ratio} = \frac{\text{Total Liabilities}}{\text{Total Assets}}$$
Here, we are looking for trends such as whether the debt load is shrinking or expanding. If it is expanding, then it should only be for the reason of supporting future growth opportunities and business development. The interest coverage ratio is computed to determine if the debt load is manageable and if the company generates an adequate level of earnings to service its outstanding debt.
$$\text{Interest Coverage Ratio} = \frac{\text{Earnings Before Interest and Taxes}}{\text{Interest Expense}}$$
Higher interest coverage ratio numbers are better. Anything less than 2 signals trouble in servicing long-term debt in the future.
Performance Ratios: Performance ratios (such as gross profit margin, operating profit margin, net profit margin, return on assets, and return on equity) help quantify the money made at each level of the company's income statement. The challenge is that profit margins of penny stocks are often very small in the early stages of growth. Healthy and consistent growth in operating earnings is more critical in the context of penny stocks.
Valuation Ratios: Finally, valuation ratios help us measure the attractiveness of the stock at its current price. Penny stock shares can be seriously overvalued. The most common ratio measuring value is the price-to-earnings (P/E) ratio.
$$\text{Price-to-Earnings Ratio} = \frac{\text{Current Share Price}}{\text{Earnings Per Share}}$$
Generally speaking, a lower P/E ratio signifies better value per dollar of earnings. This ratio, however, becomes meaningless if company earnings are nonexistent or negative, which is often the case with penny stocks. A better measure of penny stock value is the price-to-earnings-to-growth (PEG) ratio, which incorporates the company's annual earnings growth rate into the above equation. It is derived by dividing the P/E ratio by the expected annual growth rate in earnings per share (EPS). Provided that the growth rate estimation is reliable, the PEG ratio is a useful measure of value for penny stocks since much of their value rests in the anticipated future growth of the company's earnings.
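To make these screening ratios concrete, here is a minimal Python sketch (an editorial illustration, not part of the original article; all sample figures are hypothetical) that computes the ratios defined above from raw financial-statement numbers:

```python
# Screening ratios for a penny stock; all input figures are hypothetical.

def debt_ratio(total_liabilities: float, total_assets: float) -> float:
    return total_liabilities / total_assets

def interest_coverage(ebit: float, interest_expense: float) -> float:
    return ebit / interest_expense

def pe_ratio(share_price: float, eps: float) -> float:
    return share_price / eps

def peg_ratio(share_price: float, eps: float, eps_growth_pct: float) -> float:
    # PEG = (P/E) divided by the expected annual EPS growth rate (percent)
    return pe_ratio(share_price, eps) / eps_growth_pct

print(f"Debt ratio:        {debt_ratio(4.2e6, 9.0e6):.2f}")
print(f"Interest coverage: {interest_coverage(0.9e6, 0.5e6):.2f}")  # < 2 signals trouble
print(f"P/E:               {pe_ratio(0.80, 0.05):.1f}")
print(f"PEG:               {peg_ratio(0.80, 0.05, 25.0):.2f}")
```

Note that the P/E and PEG helpers assume positive earnings; as discussed next, both break down when earnings are zero or negative.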
As mentioned above, P/E and PEG ratios are useless when company earnings are zero or negative. In this scenario, we can use the price-to-sales and price-to-cash-flow ratios, which are far more effective with regard to penny stocks.
$$\text{Price-to-Sales Ratio} = \frac{\text{Current Share Price}}{\text{Sales Per Share}}$$
A price-to-sales ratio of two or less is generally considered a good share value.
$$\text{Price-to-Cash Flow Ratio} = \frac{\text{Current Share Price}}{\text{Total Cash Flow Per Share}}$$
The price-to-cash-flow ratio is a variation of price-to-sales. It is especially useful to compute if the quality of earnings is in question.
Once these financial ratios are calculated, we can compare them with the same ratios for previous reporting periods or forecast the ratios into the future. We can also compare these ratios to those of direct competitors and the market overall to gain useful insight into the company's performance and value.
Penny stock shares rise and fall based on trading demand and are often only loosely related to company fundamentals and the balance sheet. It is often not possible to calculate a penny stock's correct intrinsic value. Their prices are highly unpredictable and reflect perceived potential over actual value. The company disclosure level is at best mediocre, and often nonexistent. Stocks trading on OTCQX require periodic and accurate disclosure of company fundamentals. Investors who wish to trade in penny stocks should stick to the OTCQX market and use financial ratio analysis to mitigate risks.
Sources: U.S. Securities and Exchange Commission, "Microcap Stock," accessed July 28, 2021; OTC Markets, "OTCQX U.S.," accessed July 28, 2021; OTC Markets, "OTCQB," accessed July 28, 2021; OTC Markets, "Information for Pink Companies," accessed July 28, 2021.
Physics of Newtonian Dynamics
By Ragil Priya, Wednesday, October 25, 2017
Newtonian Dynamics
Although our discussion of the geometry of motion has led to major advances in our understanding of measurements of space and time in different inertial systems, we have yet to come to the crux of the matter, namely a discussion of the effects of forces on the motion of two or more interacting particles. This key branch of Physics is called Dynamics. It was founded by Galileo and Newton and perfected by their followers, most notably Lagrange and Hamilton. We shall see that the Newtonian concepts of mass, momentum and kinetic energy require fundamental revisions in the light of Einstein's Special Theory of Relativity. The revised concepts come about as a result of Einstein's recognition of the crucial rôle of the Principle of Relativity in unifying the dynamics of all mechanical and optical phenomena. In spite of the conceptual difficulties inherent in the classical concepts (difficulties that will be discussed later), the subject of Newtonian dynamics represents one of the great triumphs of Natural Philosophy. The successes of the classical theory range from accurate descriptions of the dynamics of everyday objects to a detailed understanding of the motions of galaxies.
The law of inertia
Galileo (1564-1642) was the first to develop a quantitative approach to the study of motion. He addressed the question: what property of motion is related to force? Is it the position of the moving object? Is it the velocity of the moving object? Is it the rate of change of its velocity? The answer to the question can be obtained only from observations; this is a basic feature of Physics that sets it apart from Philosophy proper. Galileo observed that force influences the changes in velocity (accelerations) of an object and that, in the absence of external forces (e.g., friction), no force is needed to keep an object in motion that is travelling in a straight line with constant speed. This observationally based law is called the Law of Inertia. It is, perhaps, difficult for us to appreciate the impact of Galileo's new ideas concerning motion. The fact that an object resting on a horizontal surface remains at rest unless something we call force is applied to change its state of rest was, of course, well known before Galileo's time. However, the fact that the object continues to move after the force ceases to be applied caused considerable conceptual difficulties for the early Philosophers (see Feynman, The Character of Physical Law). The observation that, in practice, an object comes to rest due to frictional forces and air resistance was recognized by Galileo to be a side effect, and not germane to the fundamental question of motion. Aristotle, for example, believed that the true or natural state of motion is one of rest.
It is instructive to consider Aristotle's conjecture from the viewpoint of the Principle of Relativity: is a natural state of rest consistent with this general Principle? According to the general Principle of Relativity, the laws of motion have the same form in all frames of reference that move with constant speed in straight lines with respect to each other. An observer in a reference frame moving with constant speed in a straight line with respect to the reference frame in which the object is at rest would conclude that the natural state of motion of the object is one of constant speed in a straight line, and not one of rest. All inertial observers, in an infinite number of frames of reference, would come to the same conclusion. We see, therefore, that Aristotle's conjecture is not consistent with this fundamental Principle.
Newton's laws of motion
During his early twenties, Newton postulated three Laws of Motion that form the basis of Classical Dynamics. He used them to solve a wide variety of problems including the dynamics of the planets. The Laws of Motion, first published in the Principia in 1687, play a fundamental rôle in Newton's Theory of Gravitation; they are:
1. In the absence of an applied force, an object will remain at rest or in its present state of constant speed in a straight line (Galileo's Law of Inertia).
2. In the presence of an applied force, an object will be accelerated in the direction of the applied force, and the product of its mass multiplied by its acceleration is equal to the force.
3. If a body A exerts a force of magnitude $\left| \mathrm{F}_{AB} \right|$ on a body B, then B exerts a force of equal magnitude $\left| \mathrm{F}_{BA} \right|$ on A. The forces act in opposite directions, so that $\mathrm{F}_{AB} = -\mathrm{F}_{BA}$.
In law number 2, the acceleration lasts only while the applied force lasts. The applied force need not, however, be constant in time; the law is true at all times during the motion. Law number 3 applies to "contact" interactions. If the bodies are separated, and the interaction takes a finite time to propagate between the bodies, the law must be modified to include the properties of the "field" between the bodies.
FRANK W. K. FIRK, Professor Emeritus of Physics
University Mechanics (Mekanika Universitas). Author (Penulis): Ragil Priya
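As a postscript to the second law discussed above, here is a minimal Python sketch (an editorial illustration, not part of Firk's text; the mass, force, and time step are made-up values). It integrates F = ma with the explicit Euler method for a constant force and compares the result with exact uniformly accelerated motion; the small discrepancy in the position reflects the first-order accuracy of the Euler scheme.

```python
# Newton's second law, F = m*a, for a constant applied force, integrated
# with the explicit Euler method. All numerical values are hypothetical.
m = 2.0           # mass (kg)
F = 10.0          # constant applied force (N)
dt = 1.0e-3       # time step (s)
v, x = 0.0, 0.0   # object initially at rest at the origin

for _ in range(int(1.0 / dt)):  # integrate over 1 second
    a = F / m                   # second law: acceleration produced by the force
    v += a * dt                 # update velocity
    x += v * dt                 # update position

print(f"v(1 s) = {v:.4f} m/s   (exact: {F / m:.4f})")
print(f"x(1 s) = {x:.4f} m     (exact: {0.5 * F / m:.4f})")
```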
Equipartition of energy for nonautonomous damped wave equations
Discrete & Continuous Dynamical Systems - S, February 2021, 14(2): 597-613. doi: 10.3934/dcdss.2020364
Marcello D'Abbicco (1), Giovanni Girardi (1), Giséle Ruiz Goldstein (2), Jerome A. Goldstein (2) and Silvia Romanelli (1)
(1) Dipartimento di Matematica, University of Bari Aldo Moro, Via E. Orabona 4, 70125 Bari, Italy
(2) Department of Mathematical Sciences, University of Memphis, 373 Dunn Hall, Memphis, TN 38152-3240, USA
* Corresponding author: Jerome A. Goldstein
Dedicated to Michel Pierre on his seventieth birthday
Received December 2019. Published May 2020.
The kinetic and potential energies for the damped wave equation
$$u'' + 2Bu' + A^2 u = 0 \qquad (\mathrm{DWE})$$
are defined by $K(t) = \Vert u'(t)\Vert^2$ and $P(t) = \Vert Au(t)\Vert^2$, where $A, B$ are suitable commuting selfadjoint operators. Asymptotic equipartition of energy means
$$\lim_{t\to\infty} \frac{K(t)}{P(t)} = 1 \qquad (\mathrm{AEE})$$
for all (finite-energy) non-zero solutions of (DWE). The main result of this paper is the proof of a result analogous to (AEE) for a nonautonomous version of (DWE).
Keywords: Equipartition of energy, nonautonomous systems, asymptotics, damped wave equations.
Mathematics Subject Classification: Primary: 34G10, 35L90; Secondary: 76D33.
Citation: Marcello D'Abbicco, Giovanni Girardi, Giséle Ruiz Goldstein, Jerome A. Goldstein, Silvia Romanelli. Equipartition of energy for nonautonomous damped wave equations. Discrete & Continuous Dynamical Systems - S, 2021, 14 (2): 597-613. doi: 10.3934/dcdss.2020364
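For orientation, a classical special case (standard background, not a result taken from the paper above): for the free wave equation on the real line with smooth, compactly supported data, equipartition holds exactly after a finite time. A short derivation via d'Alembert's formula, with $K(t) = \Vert u_t(t)\Vert^2_{L^2}$ and $P(t) = \Vert u_x(t)\Vert^2_{L^2}$:

```latex
% Classical 1D equipartition (assumed background, not from the paper):
% u_{tt} = u_{xx} on \mathbb{R}, data smooth with compact support.
% d'Alembert: u(x,t) = f(x - t) + g(x + t), so u_t = -f' + g', u_x = f' + g'.
\begin{align*}
  K(t) - P(t)
    &= \int_{\mathbb{R}} \left( u_t^2 - u_x^2 \right) dx
     = -4 \int_{\mathbb{R}} f'(x - t)\, g'(x + t)\, dx ,
\end{align*}
% which vanishes as soon as the supports of f'(\cdot - t) and g'(\cdot + t)
% are disjoint; hence K(t) = P(t) exactly for all sufficiently large t.
```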
Causal system, order of numerator and denominator [duplicate]
This question already has an answer here: Do Causal Discrete-time systems have proper transfer functions? (1 answer)
Is it true that if the order of the numerator of a system function is larger than the order of its denominator, then we can say that the system is causal? (Without finding the R.O.C.) Is there a similar criterion for stability?
signal-analysis
Asked by David
Comment: Have you looked at "Why more poles than zeroes?" It looks like a duplicate to me. – Valentin Tihomirov, Mar 15 '16 at 7:05
When $H(z)$ is rational, then the system is causal if and only if its ROC is the exterior of a circle outside the outermost pole, and the order of the numerator is no greater than the order of the denominator.
A causal LTI system with a rational transfer function $H(z)$ is stable if and only if all poles of $H(z)$ are inside the unit circle of the z-plane, i.e., the magnitudes of all poles are smaller than 1.
So you also need to find the R.O.C.
– Behind The Sciences
for causal systems (which are the only ones i know that are physically realizable), which means the number of zeros may not exceed the number of poles, the system is stable if and only if all poles are inside of the unit circle. that's the only criterion you need to worry about. there is a discrete-time counterpart to the Routh-Hurwitz criterion called the Jury test that allows you to determine stability without factoring the denominator of $H(z)$ into factors that identify the poles, but the basic criterion is the same. whether you factor the denominator into factors with poles $(z-p_i)$ or use the Jury test, the salient test is "are there any poles, $p_i$, such that $|p_i| \ge 1$?" if so, the system is not stable. if all poles satisfy $|p_i| < 1$, the system is stable (we must include any poles that are cancelled by zeros).
Comment: I think it's clearer to state the sentence "the number of zeros may not exceed the number of poles" as "the degree of the numerator polynomial may not exceed the degree of the denominator polynomial", because the first statement is wrong if we include poles at infinity (and why shouldn't we?). E.g., $$H(s)=\frac{(s+1)(s+2)}{s+3}$$ has two zeros, but also two poles (one at $s=-3$ and one at $s=\infty$). So the number of zeros doesn't exceed the number of poles, yet the system is not causal and stable. – Matt L., Mar 15 '16 at 20:17
Comment: @MattL, you're right about the specific fact, but your example is problematic. how are you going to implement your $H(s)$ with integrators? if you try to multiply both numerator and denominator with $s^{-2}$ and then represent the missing factor (and pole at $\infty$) with $(1-p z^{-1})$, you will have a scaling problem with the whole transfer function. so, in fact, if the number of zeros exceeds the number of poles, i see realization as problematic. perhaps divide out one $s$ and implement that as a differentiator summed to the output of a 1 pole, 1 zero system. – robert bristow-johnson, Mar 17 '16 at 20:15
Comment: Nobody can implement that system because it's neither causal nor stable. But the point is that its number of poles equals its number of zeros (if poles at infinity are counted). This was just to show that you can have a non-causal system even if the number of zeros does not exceed the number of poles. So it's easier and clearer to talk about the degrees of numerator and denominator instead of the number of poles and zeros. – Matt L.,
Mar 17 '16 at 20:51
Comment: how many poles at $\infty$ are you gonna toss in there? why not 2 or 4 or some other contrived number? – robert bristow-johnson, Mar 18 '16 at 0:37
Comment: No, no need for that. The wickedly contrived number '1' was sufficient to prove my point. – Matt L., Mar 18 '16 at 8:10
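To make the pole test concrete, here is a minimal Python sketch (an editorial illustration, not part of the original thread). It checks properness, i.e. that the numerator degree does not exceed the denominator degree, which is what makes a causal interpretation possible once the ROC is taken as the exterior of the outermost pole, and it checks stability by verifying that all poles lie strictly inside the unit circle:

```python
import numpy as np

def check_causal_stable(num, den):
    """num, den: coefficients of H(z) in descending powers of z."""
    proper = len(num) <= len(den)   # deg(numerator) <= deg(denominator)
    poles = np.roots(den)           # poles are the zeros of the denominator
    stable = proper and np.all(np.abs(poles) < 1.0)
    return proper, poles, stable

# Example: H(z) = (z + 0.5) / (z^2 - 0.9 z + 0.2), poles at 0.5 and 0.4
proper, poles, stable = check_causal_stable([1, 0.5], [1, -0.9, 0.2])
print(proper, poles, stable)        # True [0.5 0.4] True
```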
Quasistatic damage evolution with spatial $\mathrm{BV}$-regularization
Discrete & Continuous Dynamical Systems - S, February 2013, 6(1): 235-255. doi: 10.3934/dcdss.2013.6.235
Marita Thomas, Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstraße 39, 10117 Berlin, Germany
Received May 2011. Revised July 2011. Published October 2012.
An existence result for energetic solutions of rate-independent damage processes is established. We consider a body consisting of a physically linearly elastic material undergoing infinitesimally small deformations and partial damage. In [23] an existence result in the small strain setting was obtained under the assumption that the damage variable $z$ satisfies $z\in W^{1,r}(\Omega)$ with $r\in(1,\infty)$ for $\Omega\subset \mathbb{R}^d$. We now cover the case $r=1$. The lack of compactness in $W^{1,1}(\Omega)$ requires us to do the analysis in $\mathrm{BV}(\Omega)$. This setting allows us to consider damage variables with values in $\{0,1\}$. We show that such a brittle damage model is obtained as the $\Gamma$-limit of functionals of Modica-Mortola type.
Keywords: Partial damage, damage evolution with spatial regularization, energetic formulation, $\Gamma$-convergence of rate-independent systems, functions of bounded variation.
Mathematics Subject Classification: Primary: 74C05, 74R05, 49J45; Secondary: 49S05, 74R2.
Citation: Marita Thomas. Quasistatic damage evolution with spatial $\mathrm{BV}$-regularization. Discrete & Continuous Dynamical Systems - S, 2013, 6 (1): 235-255. doi: 10.3934/dcdss.2013.6.235
References:
L. Ambrosio and G. Dal Maso, A general chain rule for distributional derivatives, Proceedings of the American Mathematical Society, 108 (1990), 691-702. doi: 10.1090/S0002-9939-1990-0969514-3.
L. Ambrosio, N. Fusco and D. Pallara, "Functions of Bounded Variation and Free Discontinuity Problems," Oxford University Press, 2005.
G. Alberti, Variational models for phase transitions, an approach via gamma-convergence, 1998, in "Differential Equations and Calculus of Variations" (eds. G. Buttazzo et al.), Springer-Verlag, 2000.
B. Bourdin, G. Francfort and J. J. Marigo, The variational approach to fracture, J. Elasticity, 91 (2008), 5-148. doi: 10.1007/s10659-007-9107-3.
G. Francfort and A. Garroni, A variational view of partial brittle damage evolution, Arch. Rational Mech. Anal., 182 (2006), 125-152. doi: 10.1007/s00205-006-0426-5.
A. Fiaschi, D. Knees and U. Stefanelli, Young-measure quasi-static damage evolution, Arch. Ration. Mech. Anal., 203 (2012), 415-453.
G. Francfort and J.-J. Marigo, Stable damage evolution in a brittle continuous medium, Eur. J. Mech., A/Solids, 12 (1993), 149-189.
G. Francfort and A. Mielke, Existence results for a class of rate-independent material models with nonconvex elastic energies. doi: 10.1515/CRELLE.2006.044.
M. Frémond and B. Nedjar, Damage, gradient of damage and principle of virtual power, Internat. J. Solids Structures, 33 (1996), 1083-1103. doi: 10.1016/0020-7683(95)00074-7.
A. Giacomini, Ambrosio-Tortorelli approximation of quasi-static evolution of brittle fracture, Calc. Var. Partial Differ. Equ., 22 (2005), 129-172.
E. Giusti, "Minimal Surfaces and Functions of Bounded Variation," Birkhäuser, Boston, 1984.
A. Garroni and C.
Larsen, Threshold-based quasi-static brittle damage evolution, Arch. Ration. Mech. Anal., 194 (2009), 585-609. doi: 10.1007/s00205-008-0174-9.
K. Hackl and H. Stumpf, Micromechanical concept for the analysis of damage evolution in thermo-viscoelastic and quasi-static brittle fracture, Int. J. Solids Structures, 30 (2003), 1567-1584.
A. Mielke, Evolution of rate-independent systems, in "Evolutionary Equations" (edited by C. M. Dafermos and E. Feireisl), Handb. Differ. Equ., Elsevier/North-Holland, Amsterdam, 2 (2005), 461-559.
A. Mielke, Differential, energetic and metric formulations for rate-independent processes, in "Nonlinear PDE's and Applications," 87-170, Lecture Notes in Math., 2028, Springer, Heidelberg, 2011.
L. Modica and S. Mortola, Un esempio di $\Gamma$-convergenza, Boll. U. Mat. Ital. B, 14 (1977), 285-299.
A. Mainik and A. Mielke, Existence results for energetic models for rate-independent systems, Calc. Var. PDEs, 22 (2005), 73-99. doi: 10.1007/s00526-004-0267-8.
L. Modica, The gradient theory of phase transitions and the minimal interface criterion, Arch. Rational Mech. Anal., 98 (1987), 123-142. doi: 10.1007/BF00251230.
A. Mielke and T. Roubíček, Rate-independent damage processes in nonlinear elasticity, M$^3$AS Math. Models Methods Appl. Sci., 16 (2006), 177-209. doi: 10.1142/S021820250600111X.
A. Mielke, T. Roubíček and U. Stefanelli, $\Gamma$-limits and relaxations for rate-independent evolutionary problems, Calc. Var. Partial Differ. Equ., 31 (2008), 387-416.
A. Mielke, T. Roubíček and M. Thomas, From damage to delamination in nonlinearly elastic materials at small strains, J. Elasticity, 109 (2012), 235-273.
M. Thomas, "Rate-independent Damage Processes in Nonlinearly Elastic Materials," PhD thesis, Humboldt-Universität zu Berlin, 2010.
M. Thomas and A. Mielke, Damage of nonlinearly elastic materials at small strain: existence and regularity results, Zeit. angew. Math. Mech., 90 (2010), 88-112.
== Chapter 6 Real-number calculus ==
=== 6.1 What makes an honest function? ===
Calculus is built from two ingredients, [https://en.wikipedia.org/wiki/Derivative differentiation] and [https://en.wikipedia.org/wiki/Integral integration]. Differentiation is a local notion which concerns the rates at which things change, whereas integration is a more global quantity which measures totality. Incredibly, these two ingredients are inverses of one another ([https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus fundamental theorem of calculus]). These two operate on 'functions', and Penrose notes that these can be thought of as 'mappings' from some array of numbers (domain) to another (target). Penrose gives the three examples below, which will be used later in the chapter, as mappings of the real number system to itself:
[[File:Fig 6p2.png|thumb|center]]
=== 6.2 Slopes of functions ===
Differentiation is concerned with and calculates the rates at which things change, or the 'slopes' of these things. For the curves given in section 6p1 above, two of the three do not have unique slopes at the origin and are said to be not ''differentiable'' at the origin, or not ''smooth'' there. Further, the curve of $$\theta(x)$$ has a jump at the origin, which is to say that it is discontinuous there, whereas $$|x|$$ and $$x^2$$ are continuous everywhere.
Taking differentiation a step further, Penrose shows us two plots which look very similar but are represented by different functions, $$x^3$$ and $$x|x|$$. Each is differentiable and continuous, but the difference has to do with the curvature ([https://en.wikipedia.org/wiki/Second_derivative second derivative]) at the origin. $$x|x|$$ does not have a well-defined curvature here and is said to not be ''twice differentiable''.
=== 6.3 Higher derivatives; $$C^\infty$$-smooth functions ===
Looking closer at the concept of two derivatives of the same function (the second derivative, or curvature), Penrose shows us the functions from 6p2 and their first and second derivatives. Note that the first derivative of $$f(x)$$, written $$f'(x)$$, meets the x-axis at places where $$f(x)$$ has a local minimum or maximum, and the second derivative of $$f(x)$$, written $$f''(x)$$, meets the x-axis where the curvature of $$f(x)$$ goes to $$0$$; such a place is said to be a point of inflection.
[[File:Fig 6p5.png|thumb|center]]
In general, a function can be smooth for many derivatives, and the mathematical terminology for general smoothness is to say that $$f(x)$$ is $$C^n$$-smooth. It can be seen that $$x|x|$$ is $$C^1$$-smooth but not $$C^2$$-smooth, since its second derivative is discontinuous at the origin. In general $$x^n|x|$$ is $$C^n$$-smooth but not $$C^{n+1}$$-smooth. In fact, a function is $$C^\infty$$-smooth if it is $$C^n$$-smooth for every positive integer $$n$$. Note that this fails immediately for negative integer powers: already $$x^{-1}$$ is discontinuous (indeed undefined) at the origin. Penrose notes that Euler would have required $$C^\infty$$-smooth functions to be the ones admitted as functions, and then gives the function:
:<math>h(x) = \begin{cases} 0, & \mbox{if } x \le 0 \\ e^{-\frac{1}{x}}, & \mbox{if } x > 0 \end{cases} </math>
as an example of a $$C^\infty$$-smooth function, but one that Euler would still not be happy with, since it is two functions stuck together.
=== 6.4 The "Eulerian" notion of a function? ===
How, then, do we define the notion of a 'Eulerian' function? This can be accomplished in two ways. The first uses complex numbers and is incredibly simple.
If we extend $$f(x)$$ to $$f(z)$$ in the complex plane, then all we require is for $$f(z)$$ to be once differentiable (a kind of $$C^1$$-smooth function). That's it, magically. We will see that this can be stated as $$f(x)$$ being an [https://en.wikipedia.org/wiki/Analytic_function analytic function]. The second method involves power series manipulations, and Penrose notes that 'the fact that complex differentiability turns out to be equivalent to power series expansions is one of the truly great pieces of complex-number magic'.
For the second method, the power series of $$f(x)$$ is introduced, <math>f(x) = a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots</math> For this series to exist, $$f(x)$$ must be $$C^\infty$$-smooth: we must take and evaluate derivatives of $$f(x)$$ to find the coefficients, so derivatives of every positive integer order must exist. If we expand about the origin, we call this a power series expansion about the origin; an expansion about any other point $$p$$ would be a power series expansion about $$p$$. (This is the Maclaurin Series about the origin; see also the [https://en.wikipedia.org/wiki/Taylor_series Taylor Series] for the general case.) The function is considered analytic at $$p$$ if its power series expansion about $$p$$ actually sums to the function near $$p$$. If it is analytic at all points of its domain, we call it an analytic function or, equivalently, a $$C^\omega$$-smooth function. Euler would be pleased with this notion of an analytic function, which is a stricter requirement than $$C^\infty$$-smoothness ($$h(x)$$ from 6p3 is $$C^\infty$$-smooth but not $$C^\omega$$-smooth).
* Physics tries to understand reality by approximating it.
=== 6.5 The rules of differentiation ===
* Armed with these few rules (and loads and loads of practice), one can become an "expert at differentiation" without needing to have much in the way of actual understanding of why the rules work!
=== 6.6 Integration ===
As stated in section 6p1, integration is the inverse of differentiation, as stated in the fundamental theorem of calculus. Penrose provides the following visual and explanation. If we start with the differentiated curve, the area underneath the derivative curve between two points on the x-axis is equal to the difference in heights of the original curve evaluated at those two points.
[[File:Fig6p8 and6p9.png|thumb|center]]
Integration is noted as making the function smoother and smoother, whereas differentiation continues to make things 'worse' until some functions reach a discontinuity and become 'non-differentiable'. Penrose ends the chapter noting that there are approaches which enable the process of differentiation to be continued indefinitely, even if the function is not differentiable. One example is the [https://en.wikipedia.org/wiki/Dirac_delta_function Dirac Delta Function], which is of 'considerable importance in quantum mechanics'. This extends our notion of $$C^n$$-functions into the negative integer space ($$C^{-1},C^{-2},...$$) and will be discussed later with complex numbers. Penrose notes that this leads us further away from the 'Eulerian' functions, but complex numbers provide us with an irony that expresses one of their finest magical feats of all.
* If we integrate and then differentiate, we get the same function back; the two operations do not commute the other way around.
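The function $$h(x)$$ from 6p3 makes the gap between $$C^\infty$$ and $$C^\omega$$ concrete, and the distinction can even be checked by machine. The following small computation (an editorial illustration, not part of the chapter notes; it assumes the SymPy library is available) verifies that every one-sided derivative of $$e^{-1/x}$$ vanishes at the origin, so the Maclaurin series of $$h$$ is identically zero and cannot reproduce $$h$$ for $$x > 0$$:
<syntaxhighlight lang="python">
# Illustrative check: h(x) = exp(-1/x) for x > 0 is C-infinity at the
# origin but not analytic there, because every one-sided derivative
# vanishes; its Maclaurin series is thus identically zero, unlike h.
import sympy as sp

x = sp.symbols('x', positive=True)
h = sp.exp(-1/x)

for n in range(6):
    deriv = sp.diff(h, x, n)                   # n-th derivative of h
    print(n, sp.limit(deriv, x, 0, dir='+'))   # prints 0 for every n
</syntaxhighlight>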
In a mixture, the ratio of milk and water is $8:7$. If 11 litres of water is mixed in, then the ratio becomes $8:9$. Find the amount of milk in the initial mixture.
Gitesh kumar Garg, Asked in Mathematics, Feb 16, 2022
(a) 63 litres (b) 88 litres (c) 22 litres (d) 44 litres
Let the amounts of milk and water be $8x$ and $7x$.
$$\therefore \frac{8x}{7x+11} = \frac{8}{9} \Rightarrow 72x = 56x + 88 \Rightarrow 72x - 56x = 88 \Rightarrow 16x = 88 \Rightarrow x = \frac{88}{16}$$
$\therefore$ The amount of milk $= 8x = 8 \times \frac{88}{16} = 44$ litres. Ans.
Correct option: (d)
Trick: In the ratio Milk : Water, the milk term stays at $8$ while the water term rises from $7$ to $9$, so a difference of $2$ parts corresponds to $11$ litres.
$\therefore$ Amount of milk $= \frac{11}{2} \times 8 = 44$ litres. Ans.
Gitesh kumar Garg, Answered Feb 16, 2022
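A quick editorial check of the arithmetic in exact rational arithmetic (not part of the original answer):

```python
# Verify: milk/water = 8:7; adding 11 litres of water gives 8:9.
# From 8x/(7x + 11) = 8/9 we get 16x = 88.
from fractions import Fraction

x = Fraction(88, 16)
milk = 8 * x
print(milk)                    # 44 litres, option (d)
print((8 * x) / (7 * x + 11))  # 8/9, the new milk:water ratio
```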
Frontiers of Parameterized Complexity
A blog for the parameterized complexity community
Talks in 2021
The details for joining the talk are given below. The link to join the zoom talk is https://uib.zoom.us/j/4231169675. Password: Name of the W[1]-complete problem, six letters, all capital. Set of pairwise adjacent vertices.
Date and Time: May 27, 2021 at 17:00 hours Bergen time (GMT+2)
Speaker: Tuukka Korhonen, University of Helsinki
Title: Single-Exponential Time 2-Approximation Algorithm for Treewidth
Abstract: We give an algorithm which, given an n-vertex graph G and an integer k, in time 2^O(k) n either outputs a tree decomposition of G of width at most 2k + 1 or determines that the treewidth of G is larger than k. The previous best approximation ratio in time 2^O(k) n was 5, and in time 2^O(k) n^O(1) was 3, both given by Bodlaender et al. [SICOMP '16]. Our algorithm is based on a proof of Bellenbaum and Diestel [Comb. Probab. Comput. '02].
Speaker: Michal Wlodarczyk, Eindhoven University of Technology
Title: Vertex Deletion Parameterized by Elimination Distance and Even Less
Abstract: We study the parameterized complexity of various classic vertex deletion problems such as Odd cycle transversal, Vertex planarization, and Chordal vertex deletion under hybrid parameterizations. Existing FPT algorithms for these problems either focus on the parameterization by solution size, detecting solutions of size k in time f(k)*n^O(1), or width parameterizations, finding arbitrarily large optimal solutions in time f(w)*n^O(1) for some width measure w like treewidth. We unify these lines of research by presenting FPT algorithms for parameterizations that can simultaneously be arbitrarily much smaller than the solution size and the treewidth. We consider two recently introduced classes of parameterizations which are relaxations of either treedepth or treewidth. They are related to graph decompositions in which subgraphs that belong to a target class H (e.g., bipartite or planar) are considered simple. First, we present a framework for computing approximately optimal decompositions for miscellaneous classes H. Namely, if the cost of an optimal decomposition is k, we show how to find a decomposition of cost k^O(1) in time f(k)*n^O(1). Secondly, we exploit the constructed decompositions for solving vertex-deletion problems under the new parameterizations. Joint work with Bart M. P. Jansen and Jari J. H. de Kroon.
Speaker: Dimitrios M. Thilikos, LIRMM, Univ Montpellier, CNRS
Title: Parameterized Algorithms for Vertex Deletion to Minor-closed Graph Classes
Abstract: Let ${\cal G}$ be a minor-closed graph class. We say that a graph $G$ is a {\em $k$-apex} of ${\cal G}$ if $G$ contains a set $S$ of at most $k$ vertices such that $G\setminus S$ belongs to ${\cal G}$. We denote by ${\cal A}_k ({\cal G})$ the set of all graphs that are $k$-apices of ${\cal G}$. In the first paper of this series we obtained upper bounds on the size of the graphs in the minor-obstruction set of ${\cal A}_k ({\cal G})$, i.e., the minor-minimal set of graphs not belonging to ${\cal A}_k ({\cal G})$. We design an algorithm that, given a graph $G$ on $n$ vertices, runs in time $2^{{\sf poly}(k)}\cdot n^3$ and either returns a set $S$ certifying that $G \in {\cal A}_k ({\cal G})$, or reports that $G \notin {\cal A}_k ({\cal G})$.
Here ${\sf poly}$ is a polynomial function whose degree depends on the maximum size of a minor-obstruction of ${\cal G}$. In the special case where ${\cal G}$ excludes some apex graph as a minor, we give an alternative algorithm running in $2^{{\sf poly}(k)}\cdot n^2$ time.

Speaker: Sebastian Siebertz, University of Bremen
Title: Twin-width and Permutations
Abstract: Twin-width is a structural width measure that was recently introduced by Bonnet, Kim, Thomassé and Watrigant. We study the logical properties of twin-width and prove that a class of binary relational structures (edge-coloured partially directed graphs) has bounded twin-width if and only if it is a first-order transduction of a proper permutation class. Joint work with Édouard Bonnet, Jaroslav Nešetřil, Patrice Ossona de Mendez and Stéphan Thomassé.

Date and Time: April 08, 2021 at 17:00 hours Bergen time (GMT+2)
Speaker: Eduard Eiben, Royal Holloway University of London
Title: Removing Connected Obstacles in the Plane is FPT
Abstract: Given two points in the plane, a set of obstacles defined by closed curves, and an integer k, does there exist a path between the two designated points intersecting at most k of the obstacles? This is a fundamental and well-studied problem arising naturally in computational geometry, graph theory, wireless computing, and motion planning. It remains NP-hard even when the obstacles are very simple geometric shapes (e.g., unit-length line segments). In this talk, we show that the problem is fixed-parameter tractable (FPT) parameterized by k, by giving an algorithm with running time $k^{O(k^3)}n^{O(1)}$. Here n is the number of connected areas in the plane drawing of all the obstacles.

Date and Time: March 25, 2021 at 17:00 hours Bergen time (GMT+1)
Speaker: Sitan Chen, Massachusetts Institute of Technology
Title: Learning Deep ReLU Networks is Fixed-Parameter Tractable
Abstract: We consider the problem of learning an unknown ReLU network with an arbitrary number of layers under Gaussian inputs and obtain the first nontrivial results for networks of depth more than two. We give an algorithm whose running time is a fixed polynomial in the ambient dimension and some (exponentially large) function of only the network's parameters. These results provably cannot be obtained using gradient-based methods and give the first example of a class of efficiently learnable neural networks that gradient descent will fail to learn. In contrast, prior work for learning networks of depth three or higher requires exponential time in the ambient dimension, while prior work for the depth-two case requires well-conditioned weights and/or positive coefficients to obtain efficient run-times. Our algorithm does not require these assumptions. Our main technical tool is a type of filtered PCA that can be used to iteratively recover an approximate basis for the subspace spanned by the hidden units in the first layer. Our analysis leverages properties of lattice polynomials from tropical geometry. Based on joint work with Adam Klivans and Raghu Meka.

Speaker: Sándor Kisfaludi-Bak, Max Planck Institute for Informatics
Title: Planar Steiner Tree With Terminals On Few Faces
Abstract: In the Steiner tree problem, we are given a graph and a set S of vertices (terminals), and the goal is to find the shortest tree T containing S. The problem is often studied with the number of terminals (|S|) as parameter, for which it is fixed-parameter tractable. The problem is also well-studied in planar graphs.
Interestingly, if all terminals are on the outer face of a plane graph, then Steiner tree is polynomial-time solvable, which motivates a parameter smaller than |S|: let k be the minimum number of faces in the plane graph that can cover S. We will discuss how we can go beyond the natural n^O(k) algorithm to obtain a running time of 2^O(k) * n^O(sqrt(k)). On the other hand, we will see some of the main ideas of a reduction that yields a lower bound of n^Omega(sqrt(k)) under the Exponential Time Hypothesis, which nearly matches the running time, and also proves W[1]-hardness for this parameter. The talk is based on joint work with Jesper Nederlof and Erik Jan van Leeuwen.

Speaker: Fahad Panolan, Indian Institute of Technology Hyderabad
Title: Hitting topological minors is FPT
Abstract: In the Topological Minor Deletion (TM-Deletion) problem, the input consists of an undirected graph G, a family of undirected graphs F, and an integer k. The task is to determine whether G contains a set S of vertices of size at most k, such that the graph G-S contains no graph from F as a topological minor. We give an algorithm for TM-Deletion with running time f(h,k) poly(|V(G)|), where h is the maximum size of a graph in F, and f is a computable function of h and k. This is the first fixed-parameter tractable (FPT) algorithm for the problem. This is joint work with Fedor V. Fomin, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi.

Speaker: Danil Sagunov, Steklov Institute of Mathematics
Title: Algorithmic Extensions of Dirac's Theorem
Abstract: Dirac's theorem states that every n-vertex 2-connected graph contains a cycle twice as long as its minimum degree. This bound is tight, and the corresponding cycle can be found in polynomial time following Dirac's original proof. Moreover, the problem of finding a cycle whose length is at least (2+eps) times the minimum degree is NP-hard for any constant eps > 0. We study the problem of finding a cycle slightly longer than the bound of Dirac's theorem in a 2-connected graph, i.e. a cycle of length at least twice its minimum degree plus k. Using several known and new methods and constructions, we show that this problem is solvable in FPT time with single-exponential dependence on k. Interestingly, for obtaining such an algorithm, we design algorithms for several related parameterized problems; prior to this work, the parameterized complexity of some of them had been posed as open questions and remained unresolved in the literature. Apart from that, the proof builds on a new graph-theoretical result that is interesting on its own. In this talk, we will discuss the main points of the central result of this work, its main constructions and ideas, and directions for future research. This is joint work with Fedor V. Fomin, Petr A. Golovach, and Kirill Simonov.

Date and Time: February 25, 2021 at 17:00 hours Bergen time (GMT+1)
Speaker: Bart Jansen, Eindhoven University of Technology
Title: Algebraic Sparsification for Decision and Maximization Constraint Satisfaction Problems
Abstract: We survey polynomial-time sparsification for NP-complete Boolean Constraint Satisfaction Problems (CSPs). The goal in sparsification is to reduce the number of constraints in a problem instance without changing the answer, such that a bound on the number of resulting constraints can be given in terms of the number of variables n.
We investigate how the worst-case sparsification size depends on the types of constraints allowed in the problem formulation (the constraint language), and how sparsification algorithms can be obtained by modeling constraints as low-degree polynomials. We also consider the corresponding problem for Maximum CSP problems, where the notion of characteristic polynomials turns out to characterize optimal compressibility. Based on joint work with Hubie Chen, Astrid Pieterse, and Michal Wlodarczyk.

Speaker: Magnus Wahlström, Royal Holloway University of London
Title: Quasipolynomial multicut-mimicking networks and kernelization of multiway cut problems
Abstract: We show the existence of so-called mimicking networks of k^O(log k) edges, mimicking the behaviour of multicuts over a set of terminals of total capacity k. We also show an efficient (randomized) construction that produces such a network with k^O(log^3 k) edges, using an approximation algorithm for Small Set Expansion. This improves on a recent result (W., ICALP 2020), which showed only the existence. As a consequence, a range of problems have quasipolynomial kernels, in particular including Edge Multiway Cut, Group Feedback Edge Set and Subset Feedback Edge Set (in its general formulation, with undeletable edges). The existence of a polynomial kernel for Multiway Cut is one of the biggest open questions in kernelization, and this result is the first improvement in general graphs since the introduction of the representative sets approach to kernelization in 2012 (Kratsch and W.). The talk assumes no previous knowledge of matroids or other previous work on the problem.

Posted by Roohani Sharma at 3:16 AM No comments:

Frontiers of PC 2.0
Thank you for tuning in with us so far. We break now for a short summer break and will revive the talks again in August 2020. Wishing you all a happy summer break!
To get notifications about the next talks via email, please send an email to [email protected] with the subject line 'Subscribe FrontPC' (if you haven't already). For more information, please contact any one of the following: Fedor Fomin ([email protected]), Saket Saurabh ([email protected]), Roohani Sharma ([email protected]).
The recordings of the previous talks are available on our YouTube channel 'Frontiers of Parameterized Complexity'. The links to the available slides of the past talks can be found after their abstracts in this blog.
Missed out on your favourite talks? Catch up on them at our YouTube channel 'Frontiers of Parameterized Complexity': https://www.youtube.com/channel/UCdfML-PShQNSCeqbz9Ol_oA/

Talks schedule
By default, the talks start at 17:00 Bergen time (GMT+1). Link to the zoom meeting: https://uib.zoom.us/j/4231169675. Password: name of the W[1]-complete problem, 6 letters, all capital. Also known as a set of pairwise adjacent vertices.

Speaker: Parinya Chalermsook, Aalto University
Title: Vertex Sparsification for Edge Connectivity
Abstract: Graph compression is a basic information theoretic and computational question that has found its applications in many areas of algorithm design. In this work, we initiate the study of a parameterized variant of graph compression that preserves cut sizes between designated terminals. We present applications in designing fast dynamic graph algorithms and in survivable network design problems in low-treewidth graphs. (Joint work with Syamantak Das, Bundit Laekhanukit, Yunbum Kook, Yang P. Liu, Richard Peng, Mark Sellke, Daniel Vaz.)
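To make concrete the quantities that such a cut-preserving compression must reproduce, the following toy Python sketch (ours, not the construction from the talk) uses networkx to tabulate the minimum cut values between every pair of designated terminals in a small capacitated graph; the graph and the terminal names are invented for illustration.

```python
import networkx as nx

# Hypothetical capacitated graph with three designated terminals.
G = nx.Graph()
G.add_edge("t1", "a", capacity=2)
G.add_edge("a", "t2", capacity=3)
G.add_edge("a", "b", capacity=1)
G.add_edge("b", "t3", capacity=2)
G.add_edge("t2", "b", capacity=1)
terminals = ["t1", "t2", "t3"]

# A vertex sparsifier for edge connectivity must preserve exactly
# these pairwise terminal min-cut values.
for i, s in enumerate(terminals):
    for t in terminals[i + 1:]:
        print(s, t, nx.minimum_cut_value(G, s, t))
```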
Karol Węgrzycki, Saarland University and Max Planck Institute for Informatics
Title: Improving Schroeppel and Shamir's Algorithm for Subset Sum via Orthogonal Vectors
Abstract: We present an O*(2^{0.5n}) time and O*(2^{0.249999n}) space randomized algorithm for Subset Sum. This is the first improvement over the long-standing O*(2^{n/2}) time and O*(2^{n/4}) space algorithm due to Schroeppel and Shamir. This talk will also feature an introduction to the representative families framework developed by Fomin, Lokshtanov, Panolan and Saurabh (J. ACM 2016) and the representation technique introduced by Howgrave-Graham and Joux (EUROCRYPT 2010). Finally, we will uncover a tight and curious relation between the Subset Sum and Orthogonal Vectors problems. This is joint work with Jesper Nederlof (Utrecht University). Full version: https://arxiv.org/abs/2010.08576

Karthik Chandrasekaran, University of Illinois, Urbana-Champaign
Title: Hypergraph k-cut for fixed k in deterministic polynomial time
Abstract: In the hypergraph k-cut problem, the input is a hypergraph along with a fixed constant k, and the goal is to find a smallest subset of hyperedges whose removal ensures that the remaining hypergraph has at least k connected components. The graph k-cut problem is solvable in polynomial time (Goldschmidt and Hochbaum, 1988), while the complexity of the hypergraph k-cut problem was open. In this talk, I will present the first deterministic polynomial-time algorithm to solve the hypergraph k-cut problem. Our algorithm relies on novel structural results that provide new insights even for the graph k-cut problem. Based on joint work with Chandra Chekuri.

Speaker: Ariel Kulik, Computer Science Department, Technion
Title: Analysis of Two-variable Recurrence Relations with Application to Parameterized Approximations
Abstract: In this paper we introduce randomized branching as a tool for parameterized approximation and develop the mathematical machinery for its analysis. Our algorithms improve the best known running times of parameterized approximation algorithms for Vertex Cover and 3-Hitting Set for a wide range of approximation ratios. One notable example is a simple parameterized random 1.5-approximation algorithm for Vertex Cover, whose running time of O*(1.01657^k) substantially improves the best known running time of O*(1.0883^k) [Brankovic and Fernau, 2013]. For 3-Hitting Set we present a parameterized random 2-approximation algorithm with running time O*(1.0659^k), improving the best known O*(1.29^k) algorithm of [Brankovic and Fernau, 2012]. The running times of our algorithms are derived from an asymptotic analysis of a wide class of two-variable recurrence relations. We show an equivalence between these recurrences and a stochastic process, which we analyze using the Method of Types, by introducing an adaptation of Sanov's theorem to our setting. We believe our novel analysis of recurrence relations, which is of independent interest, is a main contribution of this paper.

Speaker: William Lochet, University of Bergen
Title: A polynomial time algorithm for the $k$-disjoint shortest paths problem
Abstract: The disjoint paths problem is a fundamental problem in algorithmic graph theory. For a given graph $G$ and a set of $k$ pairs of terminals in $G$, it asks for the existence of $k$ vertex-disjoint paths connecting each pair of terminals. Very famously, Robertson and Seymour proved the existence of an $n^3$ algorithm for any fixed $k$ in 1995 as part of the Graph Minors project.
In this talk, we focus on the version of this problem where all the paths are required to be shortest paths. This was first introduced as the disjoint shortest paths problem by Eilam-Tzoreff in 1998, where she proved that the case $k = 2$ admits a polynomial time algorithm. She also asked for the existence of a polynomial time algorithm for any fixed $k$, a question which remained open even for the case $k = 3$. The goal of this talk is to prove that, for any fixed $k$, there exists an $n^{f(k)}$ algorithm for the $k$-disjoint shortest paths problem, answering Eilam-Tzoreff's question.

Speaker: Edouard Bonnet, Universite Claude Bernard Lyon
Title: Twin-width
Abstract: Inspired by an invariant defined on permutations by Guillemot and Marx [SODA '14], we introduce the notion of twin-width on graphs and on matrices. Proper minor-closed classes, bounded rank-width graphs, $K_t$-free unit ball graphs, posets with bounded-size antichains, proper subclasses of permutation graphs, and some cubic expanders based on repeated 2-lifts all have bounded twin-width. On all these classes we can compute in polynomial time a d-contraction sequence, witnessing that the twin-width is at most d. FO model checking asks if a given first-order sentence $\phi$ is true on a given structure G. We show that this problem is FPT in the parameter $|\phi|$ on classes of bounded twin-width binary structures, provided the witness is given. More precisely, given a d-contraction sequence for G, our algorithm runs in time $f(d,|\phi|)\,|V(G)|$ where f is a computable but non-elementary function. Practical algorithms do exist for typical FO-expressible problems, like k-Independent Set and k-Dominating Set. Furthermore twin-width is robust as far as first-order logic is concerned: any transduction of a bounded twin-width class has bounded twin-width. Bounded twin-width classes have other interesting properties: they are small, $\chi$-bounded, satisfy the strong Erdos-Hajnal property, and their intersection with the sparse world has many equivalent characterizations. We will survey our current understanding of twin-width, putting the emphasis on fixed-parameter tractability of FO model checking and intriguing/pressing open questions. This is joint work with Colin Geniet, Eun Jung Kim, Stéphan Thomassé, and Rémi Watrigant.

Speaker: Meirav Zehavi, Ben-Gurion University
Title: Lossy Kernelization for (Implicit) Hitting Set Problems
Abstract: We re-visit the complexity of polynomial time pre-processing (kernelization) for the $d$-Hitting Set problem, one of the classic problems in Parameterized Complexity. This problem encompasses a number of fundamental parameterized problems: Vertex Cover, as well as special cases of 3-Hitting Set, namely Feedback Vertex Set in Tournaments (FVST) and Cluster Vertex Deletion (CVD). In fact, it implies polynomial kernels for deletion to any hereditary property that can be characterized by a finite set of forbidden induced subgraphs. With respect to bit size, the kernelization complexity of $d$-Hitting Set is essentially settled: there exists a kernel with $O(k^d)$ bits ($O(k^d)$ sets and $O(k^{d-1})$ elements), and this is tight by the result of Dell and van Melkebeek [STOC 2010, JACM 2014]. On the other hand, it is a major open problem in the field to determine whether there exists a kernel for $d$-Hitting Set with fewer elements.
We show that if we allow the kernelization to be lossy with a qualitatively better loss than the best possible approximation ratio of polynomial time approximation algorithms, then one can obtain kernels where the number of elements is linear for every fixed $d$. Further, we show that there exist approximate Turing kernelizations for $d$-Hitting Set that even beat the established bit-size lower bounds for exact kernelizations; in fact, we use a constant number of oracle calls, each with "near linear" ($O(k^{1+\epsilon})$) bit size. For two special cases of implicit 3-Hitting Set, namely FVST and CVD, we obtain the "best of both worlds" type of results: $(1+\epsilon)$-approximate kernelizations with a linear number of vertices. In terms of size, this substantially improves the exact kernels of Fomin et al. [SODA 2018, TALG 2019], with simpler arguments. On the way, we use an interesting fact about the natural Linear Programming (LP) relaxation of $d$-Hitting Set: there exists a set $S$ of at most $k\cdot d$ elements that contains the support of every optimum LP solution to the LP-relaxation of $d$-Hitting Set.

Speaker: Peter Gartland, University of California Santa Barbara
Title: Independent Set on P_k-Free Graphs in Quasi-Polynomial Time
Abstract: We present an algorithm that takes as input a graph G with weights on the vertices, and computes a maximum weight independent set S of G. If the input graph G excludes a path P_k on k vertices as an induced subgraph, the algorithm runs in time n^{O(k^2 log^3 n)}. Hence, for every fixed k our algorithm runs in quasi-polynomial time. This resolves in the affirmative an open problem of [Thomasse, SODA'20 invited presentation]. Previous to this work, polynomial time algorithms were only known for P_4-free graphs [Corneil et al., DAM'81], P_5-free graphs [Lokshtanov et al., SODA'14], and P_6-free graphs [Grzesik et al., SODA'19]. For larger values of k, only 2^{O(sqrt(kn log n))} time algorithms [Bacso et al., Algorithmica'19] and quasi-polynomial time approximation schemes [Chudnovsky et al., SODA'20] were known. Thus, our work is the first to offer conclusive evidence that Independent Set on P_k-free graphs is not NP-complete for any integer k.
The recording of this talk is now available at https://www.youtube.com/watch?v=RldGIFoJHtY.

No talk

Speaker: Karthik C.S., Tel Aviv University
Title: Towards a Unified Framework for Hardness of Approximation in P
Abstract: Currently there are two popular techniques to create a gap and prove fine-grained/fixed-parameter inapproximability results in P. One is the Distributed PCP framework established by Abboud, Rubinstein, and Williams (2017), and the other is the Threshold Composition technique introduced by Lin (2015). In this talk we will survey results proved using these two techniques and also explore the connections between them.
The recording of this talk is now available at https://www.youtube.com/watch?v=Pn9OlmF008Q.

Speaker: Pasin Manurangsi, Google Research
Title: The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise
Abstract: We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on $L_p$ perturbations. We give a computationally efficient learning algorithm and a nearly matching running time lower bound for this problem. An interesting implication of our findings is that the $L_{\infty}$ perturbations case is provably computationally harder than the case $2 \leq p < \infty$.
Specifically, when parameterized by the margin, the former is not FPT whereas the latter is. Based on joint work with Ilias Diakonikolas and Daniel M. Kane.
The recording of this talk is now available at https://www.youtube.com/watch?v=QLik4RZ-ew4. The slides of the talk are available here.

Speaker: Kevin Pratt, Carnegie Mellon University
Title: Parameterized applications of multivariate polynomial differentiation
Abstract: The apolar inner product of two polynomials over a field is defined as the dot product of their coefficient vectors (with appropriate scaling). Specific instances of computing this inner product lead to the fastest-known algorithms for several intractable problems, including computing the permanent of a matrix and detecting simple paths in a graph. In this talk I will explore two different approaches for computing this inner product, one numeric and one symbolic. I will show how these yield, in a very general manner, non-trivial algorithms for problems such as detecting and approximately counting simple paths in a graph. Our approaches raise concrete algebraic questions that could provide a path to faster algorithms for these (and other) problems. Part of this talk is based on joint work with Cornelius Brand.
The recording of the talk is now available at https://www.youtube.com/watch?v=qAEazQQ6dK8.

Speaker: Marcin Pilipczuk, University of Warsaw
Title: Optimal Discretization is Fixed-parameter Tractable
Abstract: Given two disjoint sets W_1 and W_2 of points in the plane, the Optimal Discretization problem asks for the minimum size of a family of horizontal and vertical lines that separate W_1 from W_2, that is, in every region into which the lines partition the plane there are either only points of W_1, or only points of W_2, or the region is empty. Equivalently, Optimal Discretization can be phrased as a task of discretizing continuous variables: we would like to discretize the range of x-coordinates and the range of y-coordinates into as few segments as possible, maintaining that no pair of points from W_1 \times W_2 is projected onto the same pair of segments under this discretization. We provide a fixed-parameter algorithm for the problem, parameterized by the number of lines in the solution. Our algorithm works in time 2^O(k^2 log k) n^O(1), where k is the bound on the number of lines to find and n is the number of points in the input. Our result answers in the positive a question of Bonnet, Giannopoulos, and Lampis [IPEC 2017] and of Froese (PhD thesis, 2018), and is in contrast with the known intractability of two closely related generalizations: the Rectangle Stabbing problem and the generalization in which the selected lines are not required to be axis-parallel. (Joint work with Stefan Kratsch, Tomáš Masařík, Irene Muzi, Manuel Sorge.)
The recording of this talk is now available at https://www.youtube.com/watch?v=y8ab7XfqvMg.

Speaker: Martin Grohe, RWTH Aachen University
Title: Polylogarithmic Parameterized Algorithms for the Graph Isomorphism Problem: From Bounded Degree to Excluded Minors
Abstract: Building on Babai's quasipolynomial graph isomorphism test, we designed a number of parameterized isomorphism tests that exhibit an unusual running time that is quasipolynomial in the parameter (that is, n^{polylog(k)}). The first parameter for which we could find such an algorithm is the maximum degree, the latest the Hadwiger number, that is, the maximum h such that K_h is a minor. In this talk, I'll give a high-level introduction to the techniques used in these algorithms.
(Joint work with Daniel Neuen, Pascal Schweitzer, and Daniel Wiebking.)
The recording of this talk is now available at https://www.youtube.com/watch?v=6WODAEleYBw.

Speaker: Edith Elkind, University of Oxford
Title: Hedonic diversity games
Abstract: We consider a setting where players belong to two types (men and women, vegetarians and carnivores, junior and senior researchers) and need to split into groups, with each player having preferences over the proportion of the two player types in his or her group. We study the problem of finding a stable partition, for several game-theoretic notions of stability; while some of the problems we consider turn out to be polynomial-time solvable, others are NP-hard, in which case we also explore their parameterized complexity. Based on joint work with Ayumi Igarashi, Robert Bredereck and Niclas Boehmer.
The recording of this talk is now available at https://www.youtube.com/watch?v=zzmk0R_thF4. The slides of the talk are available here. There was an error on slide 13 of the presentation, which is corrected in the updated slides: for dichotomous diversity games an outcome in the core always exists and can be found in polynomial time. The proof can be found in the MSc thesis of Niclas Boehmer.

Speaker: Hans Bodlaender, Utrecht University
Title: Typical Sequences Revisited - Computing Width Parameters of Graphs
Abstract: In this talk, we revisit a key element of the currently asymptotically fastest parameterized algorithms for treewidth and pathwidth: typical sequences. These were introduced in 1991 by Lagergren and Arnborg, and independently by Bodlaender and Kloks. We give a structural result on the join of typical sequences, which speeds up one step in these algorithms for treewidth and pathwidth, but does not yet give better time bounds. The result is used to obtain polynomial time algorithms for some problems on directed series-parallel graphs. The talk also discusses the current (open) status of the parameterized complexity of pathwidth and treewidth. (This is joint work with Lars Jaffke and Jan Arne Telle.)
The recording of this talk is now available at https://www.youtube.com/watch?v=uVr3wVxmYL4. The slides of the talk are available here.

Speaker: Daniel Marx, Max-Planck-Institut für Informatik
Title: Incompressibility of H-free edge modification problems: Towards dichotomy
Abstract: Given a graph G and an integer k, the H-free Edge Editing problem is to determine whether there exist at most k pairs of vertices in G such that changing the adjacency of the pairs in G results in a graph without any induced copy of H. The existence of polynomial kernels for H-free Edge Editing has received significant attention in the parameterised complexity literature. Nontrivial polynomial kernels are known to exist for some graphs H with at most 4 vertices, but starting from 5 vertices, polynomial kernels are known only if H is either complete or empty. This suggests the conjecture that there is no H with at least 5 vertices for which H-free Edge Editing admits a polynomial kernel. Towards this goal, we obtain a set of nine 5-vertex graphs such that if H-free Edge Editing is incompressible for every H in this set, then H-free Edge Editing is incompressible for every graph H with at least five vertices that is neither complete nor empty (under the usual complexity assumptions). That is, proving incompressibility for these nine graphs would give a complete classification of the kernelization complexity of H-free Edge Editing for every H with at least 5 vertices. We obtain a similar result also for H-free Edge Deletion.
Here the picture is more complicated due to the existence of another infinite family of graphs H where the problem is trivial (graphs with exactly one edge). We obtain a larger set of 19 graphs whose incompressibility would give a complete classification of the kernelization complexity of H-free Edge Deletion for every graph H with at least 5 vertices. Analogous results follow also for the H-free Edge Completion problem by simple complementation. (Joint work with R. B. Sandeep.)
The recording of this talk is now available at https://www.youtube.com/watch?v=MAD5IR6mT-A. The slides of the talk are available here.

Speaker: Michal Pilipczuk, University of Warsaw
Title: Beyond Sparsity
Abstract: The last decade has witnessed impressive progress on the understanding of structure in sparse graphs. Much of this progress can be attributed to the systematic exploration of abstract definitions of structural sparsity: the concepts of bounded expansion and nowhere denseness. From the parameterized perspective, these two notions have brought new techniques to the toolbox and helped to explain the parameterized complexity of problems definable in First Order logic. However, contemporary graph theory offers multiple quantitative notions (aka parameters) of "well-structuredness" that reach beyond sparse graphs. To name just a few, we may bound the rankwidth of a graph, exclude a vertex-minor, or stipulate VC parameters like VC dimension or VC density. In the talk we will survey the recent advances in lifting the developments of sparsity to classes of well-structured dense graphs. We will particularly focus on connections with concepts from model theory: stable and NIP classes, and the notion of FO transductions.
The recording of this talk is now available at https://www.youtube.com/watch?v=JHjh3u4ZUWo. The slides of the talk are available here.

Speaker: Vincent Cohen-Addad, Google Zürich
Title: On the Parameterized Complexity of Various Clustering Problems
Abstract: Clustering problems are fundamental computational problems. On the one hand they are at the heart of various data analysis, bioinformatics and machine learning approaches; on the other hand, they can be seen as variants of 'set cover' problems (involving e.g. metrics, graphs with a special structure, etc.) and so are very basic problems. For many applications, finding efficient fixed-parameter algorithms is a very natural approach. Popular choices of parameters are either parameters of the problem itself (e.g. the number of clusters) or of the underlying structure of the input (e.g. the dimension in the metric case). We will focus on classic metric clustering objectives such as k-median and k-means and review recent results on the fixed-parameter complexity of these problems, and the most important open problems.
The recording of this talk is now available at https://www.youtube.com/watch?v=Fgh-20qHzfg. The slides of the talk are available here.

Speaker: Jason Li, CMU
Title: Detecting Feedback Vertex Sets of Size k in O*(2.7^k) Time
Abstract: In the Feedback Vertex Set problem, one is given an undirected graph $G$ and an integer $k$, and one needs to determine whether there exists a set of $k$ vertices that intersects all cycles of $G$ (a so-called feedback vertex set). Feedback Vertex Set is one of the most central problems in parameterized complexity: it served as an excellent test bed for many important algorithmic techniques in the field such as Iterative Compression [Guo et al. (JCSS'06)], Randomized Branching [Becker et al. (J. Artif. Intell. Res'00)] and Cut & Count [Cygan et al. (FOCS'11)].
In particular, there has been a long race for the smallest dependence $f(k)$ in run times of the type $O^\star(f(k))$, where the $O^\star$ notation omits factors polynomial in $n$. This race seemed to have been run in 2011, when a randomized $O^\star(3^k)$ time algorithm based on Cut & Count was introduced. In this work, we show the contrary and give an $O^\star(2.7^k)$ time randomized algorithm. Our algorithm combines all the mentioned techniques with substantial new ideas: First, we show that, given a feedback vertex set of size $k$ of bounded average degree, a tree decomposition of width $(1-\Omega(1))k$ can be found in polynomial time. Second, we give a randomized branching strategy inspired by the one from [Becker et al. (J. Artif. Intell. Res'00)] to reduce to the aforementioned bounded average degree setting. Third, we obtain significant run time improvements by employing fast matrix multiplication.
The recording of this talk is now available at https://www.youtube.com/watch?v=mhyS7cqYoYw.

Speaker: Daniel Lokshtanov, UCSB
Title: A Parameterized Approximation Scheme for Min k-Cut
Abstract: In the Min k-Cut problem, the input is an edge-weighted graph G and an integer k, and the task is to partition the vertex set into k non-empty sets, such that the total weight of the edges with endpoints in different parts is minimized. When k is part of the input, the problem is NP-complete and hard to approximate within any factor less than 2. Recently, the problem has received significant attention from the perspective of parameterized approximation. Gupta et al. [SODA 2018] initiated the study of FPT-approximation for the Min k-Cut problem and gave a 1.9997-approximation algorithm running in time 2^{O(k^6)}n^{O(1)}. Later, the same set of authors [FOCS 2018] designed a (1+\epsilon)-approximation algorithm that runs in time (k/\epsilon)^{O(k)}n^{k+O(1)}, and a 1.81-approximation algorithm running in time 2^{O(k^2)}n^{O(1)}. More recently, Kawarabayashi and Lin [SODA 2020] gave a (5/3+\epsilon)-approximation for Min k-Cut running in time 2^{O(k^2 \log k)}n^{O(1)}. In this paper we give a parameterized approximation algorithm with the best possible approximation guarantee, and the best possible running time dependence on said guarantee (up to the Exponential Time Hypothesis (ETH) and constants in the exponent). In particular, for every \epsilon > 0, the algorithm obtains a (1+\epsilon)-approximate solution in time (k/\epsilon)^{O(k)}n^{O(1)}. The main ingredients of our algorithm are: a simple sparsification procedure, a new polynomial time algorithm for decomposing a graph into highly connected parts, and a new exact algorithm with running time s^{O(k)}n^{O(1)} on unweighted (multi-)graphs. Here, s denotes the number of edges in a minimum k-cut. The latter two are of independent interest.
The recording of this talk is now available at https://www.youtube.com/watch?v=6EvV8Ljn8VI.

Speaker: Jesper Nederlof, Utrecht University
Title: Bipartite TSP in O(1.9999^n) Time, Assuming Quadratic Time Matrix Multiplication
Abstract: The symmetric traveling salesman problem (TSP) is the problem of finding the shortest Hamiltonian cycle in an edge-weighted undirected graph. In 1962 Bellman, and independently Held and Karp, showed that TSP instances with n cities can be solved in O(n^2*2^n) time. Since then it has been a notorious problem to improve the runtime to O((2-eps)^n) for some constant eps > 0.
In this work we establish the following progress: if (s x s)-matrices can be multiplied in s^{2+o(1)} time, then all instances of TSP in bipartite graphs can be solved in O(1.9999^n) time by a randomized algorithm with constant error probability. We also indicate how our methods may be useful to solve TSP in non-bipartite graphs. On a high level, our approach is via a new problem called the MINHAMPAIR problem: given two families of weighted perfect matchings, find a combination of minimum weight that forms a Hamiltonian cycle. As our main technical contribution, we give a fast algorithm for MINHAMPAIR based on a new sparse cut-based factorization of the 'matchings connectivity matrix', introduced by Cygan et al. [JACM'18].
The recording of this talk is now available at https://www.youtube.com/watch?v=BHvegY-xYBE. The slides of the talk are available here.

Posted by Fedor Fomin at 2:04 AM No comments:

This blog contains the list of talks on Frontiers of Parameterized Complexity. The initial plan is to have talks every week, on Thursdays, at 17:00 Bergen time (GMT+2). The recordings of the talks can be found at our YouTube channel Frontiers of Parameterized Complexity: https://www.youtube.com/channel/UCdfML-PShQNSCeqbz9Ol_oA/

Fedor Fomin
Roohani Sharma
Subscribe To Frontiers PC
Efficacy of sarolaner (Simparica®) against induced infestations of Haemaphysalis longicornis on dogs

Kenji Oda1, Wakako Yonetake2, Takeshi Fujii2, Andrew Hodge3, Robert H. Six4, Steven Maeder (ORCID: orcid.org/0000-0002-6823-6883)4 & Douglas Rugg4

Parasites & Vectors volume 12, Article number: 509 (2019)

Haemaphysalis longicornis is the major tick affecting dogs in most of the East Asia/Pacific region and has recently been detected in a number of areas of the USA. This tick is a vector for a number of pathogens of dogs, other mammals and humans. In this study, the efficacy of a single oral administration of sarolaner (Simparica®, Zoetis) at the minimum label dosage (2 mg/kg) was evaluated against an existing infestation of H. longicornis and subsequent weekly reinfestations for 5 weeks after treatment. Sixteen dogs were ranked on pretreatment tick counts and randomly allocated to treatment on Day 0 with sarolaner at 2 mg/kg or a placebo. The dogs were infested with H. longicornis nymphs on Days −2, 5, 12, 19, 26 and 33. Efficacy was determined at 48 hours after treatment and subsequent re-infestations, based on live tick counts relative to placebo-treated dogs. There were no adverse reactions to treatment. A single dose of sarolaner provided 100% efficacy on Days 2, 7, 14 and 21, and ≥ 97.4% efficacy on Days 28 and 35. Considering only attached, live ticks, efficacy was 100% for the entire 35 days of the study. Geometric mean live tick counts for sarolaner were significantly lower than those for placebo on all days (11.62 ≤ t(df) ≤ 59.99, where 13.0 ≤ df ≤ 14.1, P < 0.0001). In this study, a single oral administration of sarolaner at 2 mg/kg provided 100% efficacy against an existing infestation of H. longicornis nymphs and ≥ 97.4% efficacy (100% against attached ticks) against weekly reinfestation for at least 35 days after treatment.

Ticks are a common ectoparasite of dogs globally. As the spectrum and importance of tick-borne diseases in dogs, humans and other animals has expanded, so also has the realization that effective tick control is necessary to help prevent the spread of these diseases, as well as to ameliorate the direct clinical impact of tick infestations. Haemaphysalis longicornis (Acari: Ixodidae), the Asian longhorned tick, bush tick or New Zealand cattle tick, is endemic to the East Asia/Pacific region, being found in Japan, Korea, eastern China, southeast Russia, Australia, New Zealand, and a number of Pacific islands. This three-host tick is thought to have been introduced to Australia in the 19th century on cattle from Japan, and from there to New Zealand and other Pacific islands [1]. Cattle are the primary host of H. longicornis [2], but adults and nymphs also infest humans and a wide variety of domestic and wild mammals such as dogs, cats, sheep, goats, horses, deer, foxes and rabbits, as well as birds [3, 4]. Larvae are found mainly on small mammals and birds [1]. Extensive infestations can occur, causing severe irritation which, combined with blood loss, can lead to weight and productivity losses and even death in cattle, sheep and deer [1, 5, 6]. Through much of its distribution (e.g. New Zealand [7], China [8], Korea [9, 10], and Japan [11]) H. longicornis is the most common and/or economically important tick species present. Recently established infestations of H. longicornis have been detected in several eastern states in the USA, including the heavily populated suburbs of New York City, and in Arkansas [12, 13].
Haemaphysalis longicornis is a potential vector for a wide range of pathogens that affect domestic animals and humans, including bovine theileriosis (Theileria spp.), bovine babesiosis (Babesia ovata), canine babesiosis (Babesia gibsoni), ehrlichiosis (Ehrlichia spp.), anaplasmosis (Anaplasma phagocytophilum), tick-borne encephalitis (flavivirus), Japanese spotted fever (Rickettsia japonica), and an emerging tick-borne zoonosis, severe fever with thrombocytopenia syndrome (phlebovirus) [1, 10, 11, 13–16]. Haemaphysalis longicornis is the most prevalent tick found on dogs through most of its Asian distribution. Effective control of this tick is necessary to minimize the adverse clinical effects of infestation as well as to reduce the risk of transmission of the numerous canine and zoonotic pathogens it can vector.

Sarolaner is a novel isoxazoline acaricide/insecticide with persistent efficacy following oral administration in dogs [17]. Sarolaner, in a chewable tablet formulation (Simparica®, Zoetis), provides control of fleas and ticks for at least a month after a single oral dose [18] and has been shown to be effective against many tick species globally [19–22]. This study was conducted to evaluate the efficacy of a single administration of sarolaner at the minimum label dose against an existing H. longicornis infestation and weekly re-infestations for a period of 5 weeks following treatment.

This negative-controlled, randomized, laboratory comparative efficacy study was conducted by the Research Institute for Animal Science in Biochemistry and Toxicology (RIAS) at the Narita Animal Science (NAS) Laboratory Co, Ltd, Chiba, Japan. Study procedures followed the World Association for the Advancement of Veterinary Parasitology (WAAVP) guidelines for evaluating the efficacy of parasiticides for the treatment, prevention and control of flea and tick infestation on dogs and cats [23]. Eight male and eight female purpose-bred Beagle dogs, approximately 22 months of age and weighing 7.8 to 12.1 kg, were included in the study. The dogs had not been treated with any acaricidal medications for at least 30 days prior to the study and had not been fitted with acaricidal collars for at least 14 days. Dogs were identified by uniquely numbered ear tattoos and housed individually in indoor enclosures (23 ± 3 °C) that conformed to accepted animal welfare guidelines and ensured no direct contact between dogs. Dogs were acclimatized to these conditions for at least 7 days prior to treatment. Dogs were fed an appropriate ration of a commercial, dry, laboratory-canine feed for the duration of the study. Water was available ad libitum. All dogs were given a physical examination to ensure that they were in good health at enrollment and suitable for inclusion in the study. General health observations of each dog were performed at least once daily throughout the study.

The study followed a randomized complete block design. The 16 dogs were ranked according to pre-treatment tick counts on Day −5 into blocks of 2, and within each block dogs were randomly allocated to treatment with either placebo or sarolaner, resulting in eight dogs in each treatment group. Dogs were infested with ticks prior to treatment and then re-infested weekly for 5 weeks. Tick counts were conducted 48 h after treatment or re-infestation.
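To make the allocation scheme concrete, here is a minimal Python sketch (ours, not the software used in the study) of a randomized complete block design with blocks of two: dogs are ranked by pre-treatment tick count, consecutive pairs form blocks, and the two treatments are assigned in random order within each block.

```python
import random

def allocate(dogs_by_rank, arms=("placebo", "sarolaner")):
    """Randomized complete block design with blocks of two: each
    consecutive pair of ranked dogs forms a block, and the two
    treatment arms are assigned in random order within the block."""
    allocation = {}
    for i in range(0, len(dogs_by_rank), 2):
        block = dogs_by_rank[i:i + 2]
        order = random.sample(arms, k=len(block))
        for dog, arm in zip(block, order):
            allocation[dog] = arm
    return allocation

# Hypothetical dog IDs, assumed already ranked by Day -5 tick counts.
dogs = [f"dog_{i:02d}" for i in range(1, 17)]
print(allocate(dogs))
```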
On Day 0, dogs were dosed with either placebo tablets or tablets containing sarolaner. Each dog received one or two tablets (5 mg, 20 mg) of sarolaner, or the equivalent number of placebo tablets, to provide the minimum recommended dose of 2 mg/kg (actual range 2.04–2.44 mg/kg) based on body weights taken on Day −2. Placebo and sarolaner tablet presentations were similar in appearance. Dogs were provided their daily food ration approximately 20 min prior to treatment. All doses were administered by hand pilling to ensure accurate dosing. Each dog was observed for several minutes after dosing for evidence that the dose was swallowed.

Tick infestation and assessment

Viable, unfed nymphal H. longicornis ticks from the NAS Laboratory colony were used for the study. The colony was established from a parthenogenetic strain originally isolated in the Aomori prefecture in May 2002. Nymphs were used for infestation as these attach and feed readily on dogs and are frequently found on dogs in Japan; the preferred hosts for adults are large wild and domestic ruminants [1, 2]. Each dog was infested with 50 H. longicornis nymphs at each infestation. Ticks were preferentially placed on the head around the base of the ears to mimic the natural preferred attachment sites. Dogs were sedated with medetomidine hydrochloride (Domitor®) to facilitate infestation and were fitted with Elizabethan collars to reduce grooming for the duration of each infestation period, with the exception of the period from Day 0 to Day 2. For tick counts, dogs were sedated and systematically inspected from head to tail, parting the hair by hand; as ticks were observed they were counted and removed. The dogs were then combed thoroughly with a fine-toothed comb. This procedure was conducted for a minimum of 10 min per dog. If ticks were found in the final minute, examination was continued in 1-min increments until no further ticks were found. Dogs were infested on Day −7 and live ticks were counted on Day −5 to ensure the dogs were acceptable hosts and for allocation. The dogs were infested on Day −2 (2 days before treatment) and assessed on Day 2 to determine efficacy against an existing infestation. Subsequent weekly infestations were conducted on Days 5, 12, 19, 26 and 33, and counts were performed 48 h after each infestation, on Days 7, 14, 21, 28 and 35, to confirm the duration of preventative efficacy. For all post-treatment counts, both live and dead ticks were classified as: unattached or free; attached but unengorged; or attached and engorged.

The individual dog was the experimental unit. Tick counts were transformed by the log_e(count + 1) transformation prior to analysis in order to stabilize the variance and normalize the data. Using the PROC MIXED procedure (SAS 9.2, Cary, NC), transformed counts were analyzed using a mixed linear model for repeated measures. The model included the fixed effects of treatment, time point and the interaction between treatment and time point. The random effects included block, animal and error. Two-sided testing was performed at the significance level α = 0.05.
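To illustrate the log transformation and the efficacy calculation (stated formally below), here is a small hedged Python sketch; the tick counts used are hypothetical and are not the study data.

```python
import math

def geometric_mean(counts):
    # log_e(count + 1) transformation, averaged and back-transformed,
    # with the +1 offset removed, as is common for parasite counts.
    logs = [math.log(c + 1) for c in counts]
    return math.exp(sum(logs) / len(logs)) - 1

placebo = [14, 9, 21, 11, 17, 8, 25, 13]   # hypothetical live tick counts
treated = [0, 0, 1, 0, 2, 0, 0, 4]         # hypothetical live tick counts

gm_p = geometric_mean(placebo)
gm_t = geometric_mean(treated)
efficacy = 100 * (gm_p - gm_t) / gm_p      # percent reduction vs placebo
print(f"GM placebo = {gm_p:.2f}, GM treated = {gm_t:.2f}, "
      f"efficacy = {efficacy:.1f}%")
```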
The assessment of efficacy was based on the percent reduction in the arithmetic and geometric mean live tick counts relative to placebo, using Abbott's formula:
$$\%\ \text{reduction} = 100 \times \frac{\text{mean count (placebo)} - \text{mean count (treated)}}{\text{mean count (placebo)}}$$

All dogs included in the study were demonstrated to be acceptable hosts on Day −5, with an arithmetic mean tick count of 19.5 (range 13–35). The placebo-treated dogs maintained H. longicornis infestations throughout the study, with 2 to 32 live ticks recovered from all eight dogs at each examination (Table 1). On Days 2 to 14, all of these ticks were attached. On Days 21 to 35, a small number (1–3) of unattached ticks were found on up to four dogs per time point. The relatively low tick counts for placebo dogs on Day 2 were likely due to the longer infestation period prior to counting (4 days vs 2 days for all other counts), removal of Elizabethan collars from Day 0 to Day 2 allowing for greater loss of ticks due to self-grooming by the dogs, and also natural attrition.

Table 1 Geometric (arithmetic) mean live H. longicornis counts and ranges for dogs treated once orally (Day 0) with placebo or sarolaner chewable tablets at 2 mg/kg, and percent efficacy relative to placebo at 48 h after treatment and subsequent weekly re-infestations

No live ticks were found on any sarolaner-treated dog from Day 2 to Day 21. On Day 28, single live, unattached ticks were recovered from two dogs, and on Day 35, three dogs had 1, 2, or 4 live, unattached ticks. Efficacy was 100% from treatment through Day 21 and, based on geometric means, was 98.9% on Day 28 and 97.4% on Day 35 (Table 1). All of the live ticks recovered from sarolaner-treated dogs were unattached, and as these were not feeding they would not have been exposed to sarolaner. Therefore, based on live, attached ticks, efficacy was 100% through the entire study.

There were no adverse reactions to treatment with sarolaner oral tablets. One dog that had been treated with sarolaner (2.23 mg/kg) was found dead on Day 22. This intact male Beagle dog had normal food consumption and was reported to be normal at all general health observations from acclimation until the dog was found dead. A post-mortem examination was conducted on the day after the dog's death (Day 23). The only finding was congestion of the abdominal and thoracic organs. Detailed review of the necropsy findings and histopathology indicated that the organ congestion was consistent with post-mortem autolysis. The cause of death could not be determined, but there was no clear evidence of convulsions. Sarolaner is a gamma-aminobutyric acid antagonist, and clinical signs of toxicity are central nervous system excitation. Since this dog showed no abnormal signs in the 3 weeks between administration of sarolaner and its death, treatment is considered unlikely to have a causal relationship with its death.

This study evaluated efficacy against the nymphal stage of H. longicornis as this stage is commonly found infesting dogs. A similarly high level of efficacy could be expected against adult H. longicornis, as sarolaner has demonstrated month-long efficacy against the adults of many tick species globally [19–22].
Another oral isoxazoline, afoxolaner, has been evaluated against adult H. longicornis and produced efficacy > 91.9% (based on live tick counts) up to 30 days after a single treatment [24]; more recently a third isoxazoline, lotilaner, provided efficacy of > 95% against adults for up to 35 days after a single oral dose [25], while a fourth, fluralaner, resulted in efficacy of 93.6% or more for 114 days after the recommended dosage [26]. It should be noted that efficacy in the latter two studies was based on counts of live attached ticks only; when sarolaner was assessed using attached ticks, efficacy was 100% for the entire 35 days. Although the four studies cannot be directly compared due to the differences in design (only assessing live attached ticks) and stage evaluated (nymphs in this study), the generally excellent acaricidal efficacy of the isoxazolines against H. longicornis was confirmed. The high relative potency of sarolaner compared to the other compounds was demonstrated, as efficacy assessed versus total live tick counts was always 97.4% or greater through Day 35 and was 100% for the entire study when determined for live attached ticks. Sarolaner has also shown a more rapid speed of kill and greater sustained efficacy than afoxolaner against several tick species [21, 27–29]. A rapid speed of kill is an advantage for an orally administered acaricide, as this limits the time that ticks feed before receiving an effective dose, thus reducing the chances of the transmission of tick-borne pathogens. Recently, a single treatment with oral sarolaner has been demonstrated to prevent the transmission of Babesia canis to dogs from infected Dermacentor reticulatus ticks [30]. Systemic medications for the prevention of tick infestations have advantages over topical formulations, including ease of accurate dosing, no exposure to topical pesticide residues in the hair coat, and no reduction in the duration of efficacy due to shampooing, rain or swimming. Haemaphysalis longicornis is the most common tick infesting dogs in East Asia and is the vector of numerous pathogens of dogs (including the agent of babesiosis) as well as zoonoses that cause serious, even fatal, diseases. The rapid and complete control of H. longicornis provided for up to 35 days after a single oral treatment in this study confirms that treatment of dogs with Simparica® would provide effective control of the important ticks afflicting dogs and potentially reduce the risk of transmission of tick-borne pathogens.

A single oral dose of sarolaner (Simparica®) at the minimum recommended dose of 2 mg/kg effectively treated an existing infestation of H. longicornis nymphs and prevented re-infestation for up to 35 days. This convenient chewable formulation of sarolaner is a valuable tool for the treatment and prevention of tick infestations and the reduction of the risk of transmission of tick-borne diseases in dogs.

The dataset supporting the conclusions of this article is included within the article.

References
1. Hoogstraal H, Roberts FHS, Kohls GM, Tipton VJ. Review of Haemaphysalis (Kaiseriana) longicornis Neumann (resurrected) of Australia, New Zealand, New Caledonia, Fiji, Japan, Korea, and northeastern China and USSR, and its parthenogenetic and bisexual populations (Ixodoidea, Ixodidae). J Parasitol. 1968;54:1197–213.
2. Roberts FHS. Australian ticks. Melbourne: CSIRO; 1970.
3. Heath AGC, Tenquist JD, Bishop DM. Goats, hares and rabbits as hosts for the New Zealand cattle tick, Haemaphysalis longicornis. NZ J Zool. 1987;14:549–55.
4. Heath AGC, Tenquist JD, Bishop DM. Bird hosts of the New Zealand cattle tick, Haemaphysalis longicornis. NZ J Zool. 1988;15:585–6.
5. Heath AGC, Pearce DM, Tenquist JD, Cole DJW. Some effects of a tick infestation (Haemaphysalis longicornis) on sheep. NZ J Ag Res. 1977;20:19–22.
6. Neilson FJA, Mossman DH. Anemia and deaths in red deer (Cervus elephas) fawns associated with heavy infestations of cattle tick (Haemaphysalis longicornis). NZ Vet J. 1981;30:125–6.
7. Dumbleton LJ. The ticks (Ixoidea) of the New Zealand sub-region. Wellington: DSIR; 1953.
8. Zhou Y, Zhang H, Cao J, Gong H, Zhou J. Epidemiology of toxoplasmosis: role of the tick Haemaphysalis longicornis. Infect Dis Poverty. 2016;5:14.
9. Baek J-K, Kim H, Won S, Kim H-C, Chong S-T, et al. Ticks collected from wild and domestic animals and natural habitats in the Republic of Korea. Korean J Parasitol. 2014;52:281–5.
10. Oh JY, Moon B-C, Bae BK, Shin E-H, Ko YH, Kim Y-J, et al. Genetic identification and phylogenetic analysis of Anaplasma and Ehrlichia species in Haemaphysalis longicornis collected from Jeju Island, Korea. J Bacteriol Virol. 2009;39:257–67.
11. Shimada Y, Beppu T, Inokuma H, Okuda M, Onishi T. Ixodid tick species recovered from domestic dogs in Japan. Med Vet Entomol. 2003;17:38–45.
12. Sleeman J. Haemaphysalis longicornis detected in the United States. USGS National Wildlife Health Center, Wildlife Health Bulletin 2018-03. https://www.nwhc.usgs.gov/publications/wildlife_health_bulletins/index.jsp. Accessed 13 Aug 2018.
13. Beard CB, Occi J, Bonilla DL, Egizi AM, Fonseca DM, Mertins JW, et al. Multistate infestation with the exotic disease-vector tick Haemaphysalis longicornis—United States, August 2017–September 2018. Morb Mortal Wkly Rep. 2018;67:1310–3.
14. Gaowa, Ohashi N, Aochi M, Wuritu, Wu D, Yoshikawa Y, et al. Rickettsiae in ticks, Japan, 2007–2011. Emerg Infect Dis. 2013;19:338–40.
15. Luo L-M, Zhao L, Wen H-L, Zhang Z-T, Liu J-W, Fang L-Z, et al. Haemaphysalis longicornis ticks as reservoir and vector of severe fever with thrombocytopenia syndrome virus in China. Emerg Infect Dis. 2015;21:1770–6.
16. Kim CM, Yi YH, Yu DH, Lee MJ, Cho MR, Desai AR, et al. Tick-borne rickettsial pathogens in ticks and small mammals in Korea. Appl Environ Microbiol. 2006;72:5766–76.
17. McTier TL, Chubb N, Curtis MP, Hedges L, Inskeep GA, Knauer CS, et al. Discovery of sarolaner: a novel, orally administered, broad-spectrum, isoxazoline ectoparasiticide for dogs. Vet Parasitol. 2016;222:3–11.
18. McTier TL, Six RH, Fourie JJ, Pullins A, Hedges L, Mahabir SP, et al. Determination of the effective dose of a novel oral formulation of sarolaner (Simparica) for the treatment and month-long control of fleas and ticks on dogs. Vet Parasitol. 2016;222:12–7.
19. Six RH, Everett WR, Young DR, Carter L, Mahabir SP, Honsberger NA, et al. Efficacy of a novel oral formulation of sarolaner (Simparica) against five common tick species infesting dogs in the United States. Vet Parasitol. 2016;222:28–32.
20. Geurden T, Becskei C, Grace S, Strube C, Doherty P, Liebenberg J, et al. Efficacy of a novel oral formulation of sarolaner (Simparica) against four common tick species infesting dogs in Europe. Vet Parasitol. 2016;222:33–6.
21. Packianathan R, Hodge A, Bruellke N, Davis K, Maeder S. Comparative speed of kill of sarolaner (Simparica®) and afoxolaner (NexGard®) against induced infestations of Ixodes holocyclus on dogs. Parasites Vectors. 2017;10:98.
22. Scott F, Franz L, Campos DR, Azevedo TRC, Cunha D, Six RH, et al. Efficacy of sarolaner (Simparic™) against induced infestations of Amblyomma cajennense on dogs. Parasites Vectors.
2017;10:390.
23. Marchiondo AA, Holdsworth PA, Fourie LJ, Rugg D, Hellmann K, Snyder DE, et al. World Association for the Advancement of Veterinary Parasitology (W.A.A.V.P.) second edition: guidelines for evaluating the efficacy of parasiticides for the treatment, prevention and control of flea and tick infestations on dogs and cats. Vet Parasitol. 2013;194:84–97.
24. Kondo Y, Kinoshita G, Drag M, Chester TS, Larsen D. Evaluation of the efficacy of afoxolaner against Haemaphysalis longicornis on dogs. Vet Parasitol. 2014;201:229–31.
25. Otaki H, Sonobe J, Murphy M, Cavalleri D, Seewald W, Drake J, et al. Laboratory evaluation of lotilaner (Credelio™) against Haemaphysalis longicornis infestations of dogs. Parasites Vectors. 2018;11:448.
26. Toyota M, Hirama K, Suzuki T, Armstrong R, Okinaga T. Efficacy of orally administered fluralaner in dogs against laboratory challenge with Haemaphysalis longicornis ticks. Parasites Vectors. 2019;12:43.
27. Six RH, Young DR, Myers MR, Mahabir SP. Comparative speed of kill of sarolaner (Simparica™) and afoxolaner (NexGard®) against induced infestations of Ixodes scapularis on dogs. Parasites Vectors. 2016;9:79.
28. Six RH, Everett WR, Chapin S, Mahabir SP. Comparative speed of kill of sarolaner (Simparica™) and afoxolaner (NexGard®) against induced infestations of Amblyomma americanum on dogs. Parasites Vectors. 2016;9:98.
29. Six RH, Young DR, Holzmer SJ, Mahabir SP. Comparative speed of kill of sarolaner (Simparica™) and afoxolaner (NexGard®) against induced infestations of Rhipicephalus sanguineus s.l. on dogs. Parasites Vectors. 2016;9:91.
30. Geurden T, Six R, Becskei C, Maeder S, Lloyd A, Mahabir S, et al. Evaluation of the efficacy of sarolaner (Simparica®) in the prevention of babesiosis in dogs. Parasites Vectors. 2017;10:415.

The authors would like to thank the support staff at RIAS/NAS for their assistance with the conduct of the study. This study was funded by Zoetis Japan KK.

Author affiliations:
1. Research Institute for Animal Science in Biochemistry and Toxicology, 3-7-11, Hashimotodai, Midori-ku, Sagamihara-shi, Kanagawa, 252-0132, Japan (Kenji Oda)
2. Zoetis Japan KK, 3-22-7, Yoyogi, Shibuya-ku, Tokyo, 151-0053, Japan (Wakako Yonetake, Takeshi Fujii)
3. Zoetis Australia Research and Manufacturing, Melbourne, Australia (Andrew Hodge)
4. Zoetis, Veterinary Medicine Research and Development, 333 Portage St., Kalamazoo, MI, 49007, USA (Robert H. Six, Steven Maeder, Douglas Rugg)

All authors were involved in protocol development, data interpretation and preparing the manuscript. AH was the biometrician responsible for the study design and statistical analysis. All authors read and approved the final manuscript. Correspondence to Steven Maeder.

The protocol was reviewed and approved by RIAS's Institutional Animal Care and Use Committee prior to implementation. This study was funded by Zoetis Japan KK. WY, TF, AH, RHS, SM and DR were employees of Zoetis. KO was the contracted study investigator.

Oda, K., Yonetake, W., Fujii, T. et al. Efficacy of sarolaner (Simparica®) against induced infestations of Haemaphysalis longicornis on dogs. Parasites Vectors 12, 509 (2019). doi:10.1186/s13071-019-3765-4. Accepted: 23 October 2019.

Keywords: Haemaphysalis longicornis, Isoxazoline, Sarolaner, Simparica®