On the coordinate functions of Peano curves

Authors: B. M. Makarov and A. N. Podkorytov. Translated by: N. Tsilevich.
Original publication: Algebra i Analiz, tom 28 (2016), nomer 1. Journal: St. Petersburg Math. J. 28 (2017), 115–125. MSC (2010): Primary 26A16; Secondary 28A12. DOI: https://doi.org/10.1090/spmj/1441. Published electronically: November 30, 2016. MathSciNet review: 3591069. ISSN 1547-7371 (online), ISSN 1061-0022 (print).

Abstract: A construction of "nonsymmetric" plane Peano curves is described whose coordinate functions satisfy the Lipschitz conditions of orders $\alpha$ and $1-\alpha$ for some $\alpha$. It is proved that these curves are metric isomorphisms between the interval $[0,1]$ and the square $[0,1]^2$. This fact is used to show that the graphs of their coordinate functions have the maximum possible Hausdorff dimension for a given smoothness.

Affiliation (B. M. Makarov): Department of Mathematics and Mechanics, St. Petersburg State University, St. Petersburg, Russia.
Keywords: Peano curve, Lipschitz condition, Hausdorff dimension. Received by editor(s): September 7, 2015. Article copyright: © 2016 American Mathematical Society.
A comprehensive analysis of trends and determinants of HIV/AIDS knowledge among the Bangladeshi women based on Bangladesh Demographic and Health Surveys, 2007–2014

Md. Tuhin Sheikh1 (ORCID: 0000-0002-8513-1899), Md. Nizam Uddin1 & Jahidur Rahman Khan1

South-Asian countries are considered to be a potential breeding ground for an HIV epidemic. Although the prevalence of this incurable disease is low in Bangladesh, women have been identified as a more vulnerable group. The aim of this study is to assess knowledge about HIV/AIDS, its trends, and its associated factors among women in Bangladesh. We analysed the nationally representative, repeatedly cross-sectional Bangladesh Demographic and Health Surveys (BDHSs) of 2007, 2011, and 2014. These data are clustered in nature owing to the sampling design, and the generalized mixed effects model is appropriate for examining the association between the outcome and the explanatory variables while adjusting for the cluster effect. Overall, women's knowledge about HIV/AIDS has been decreasing over the years. Education plays the leading role: women with secondary or higher education are 6.6 times more likely to have HIV/AIDS knowledge. The likelihood of knowledge is higher among women who had media exposure (OR: 1.6) and knowledge of family planning (OR: 2.3). A rural–urban gap is noticed in women's knowledge about HIV/AIDS, and significant improvement has been observed among rural and media-exposed women. Results reveal that age, region, religion, socio-economic status, education, and contraceptive use have significant (p < 0.01) effects on women's knowledge about HIV/AIDS. This study recommends placing more emphasis on women's education, media exposure, and family planning knowledge in strengthening women's knowledge about HIV/AIDS. In addition, residence-specific programs on HIV/AIDS awareness also need to be prioritized.

Among the incurable infectious diseases, acquired immune deficiency syndrome (AIDS), caused by infection with the human immunodeficiency virus (HIV), has become a major global health problem in recent years. According to the UNAIDS [1], there were 36.7 million people living with HIV in 2015, which is 3.4 million higher than in 2010. In the Asia and Pacific region, there were 5.1 million people living with HIV in 2015 [2], of which the South-Asian (SA) countries China, India, and Indonesia account for about 75% of the total number of people living with HIV in this region [3]. In Bangladesh, a SA country, the prevalence of HIV is low (less than 0.1%) [4] but has steadily increased since 1989 [5]. The reported number of people with HIV in Bangladesh increased by more than 300% (from 1207 in 2007 to 3674 in 2014) in seven years [6]. The most recent estimate of the number of people living with HIV in Bangladesh in 2015 is about 9600, of which about 3200 are women aged 15 years and above [7]. Thus, despite its low prevalence of HIV/AIDS, Bangladesh faces a high risk of a rapid spread of HIV/AIDS [5, 8–10]. Many potential factors are attributable to this increased risk of HIV infection and/or transmission: geographical and cultural proximity to India and Myanmar, two severely affected countries [8, 11]; poverty; gender inequity; high levels of transactional sex [12]; mobility of boatmen across the border area [13]; and, especially, the low level of knowledge about HIV/AIDS.
With the vision of reducing the risk of HIV infection and transmission, we should concentrate on these potential factors; however, many of them are linked with the country's health, demography, economy, politics, etc., and are not malleable enough to change or improve. Instead, major attention could be given to increasing the level of knowledge about HIV/AIDS, since the causes of HIV infection are known and can be avoided by being knowledgeable about HIV/AIDS. In the context of Bangladesh, the percentages of married women and married men with knowledge about HIV/AIDS were 67% and 87%, respectively, in 2007 [14]. These percentages increased by only 2% for married women and 1% for married men over the years 2007–2011 [15]. In 2014, the Bangladesh Demographic and Health Survey (BDHS) identified the female population as a more vulnerable group than the male population and observed that about 70% of married women are knowledgeable about HIV/AIDS, which is very similar to the figure documented in 2011 [6]. Accordingly, many studies [16, 17] reported that the level of knowledge among men is higher than among women in Bangladesh. Moreover, women bear the heavier onus of the consequences of the disease owing to their less advantaged socio-economic position and limited access to sexual and reproductive health care [18], and consequently women are considered to be more vulnerable to HIV infection and transmission [10]. In addition, the perception of HIV/AIDS among women in Bangladesh is often contaminated with myths and rumors [19], which further contribute to HIV infection and/or transmission. Under these critical conditions, preventive measures for women (e.g., increasing the level of knowledge) could be effective in controlling HIV infection and/or transmission, as recommended in earlier studies [16, 20, 21]. Since no effective vaccine that completely cures HIV/AIDS is available yet [22], spreading correct knowledge about HIV/AIDS should be the very first step in raising awareness. Lack of knowledge about HIV/AIDS is usually positively associated with misconceptions, confusion, social stigma, and risky sexual behavior [23], which contribute to the increase in HIV infection and transmission. Increasing women's knowledge about HIV/AIDS will facilitate long-term control of the HIV/AIDS epidemic [17] and will remain effective even where healthcare facilities are limited or poor. Assessing the current scenario of women's knowledge status in Bangladesh and identifying the associated factors will help government and non-government organizations develop more structured and targeted programs for HIV/AIDS prevention. In this regard, Khan [21] investigated adolescent married women (10–19 years) in Bangladesh and reported female education, media use, and condom use as potential predictors of women's knowledge about HIV/AIDS. Rahman, M.S. and Rahman, M.L. [16] studied married women of a wider age group (15–49 years) and identified the use of media as a strong tool for spreading HIV knowledge, and also reported socio-economic status as an important factor. Likewise, Yaya et al. [17] studied a sample of ever-married women in Bangladesh and demonstrated a positive association between women's knowledge and their husbands' increasing level of education.
Although notable research has been conducted to assess the knowledge status of married women in Bangladesh, most studies focused on a single study period. Only a few studies examined the trends and determinants of knowledge about HIV/AIDS among married women, e.g., in Kenya over the years 1993–2009 [24]. Studying the trends and determinants of women's knowledge sheds more light on the changing behavior of the associated factors and their varying effects over time. To the best of our knowledge, no earlier study in Bangladesh examined the trends and determinants associated with knowledge about HIV/AIDS among married women. The main goal of this study is to analyse the pooled data from the repeatedly cross-sectional BDHSs of the period 2007–2014 and to investigate the factors associated with married women's knowledge about HIV/AIDS in Bangladesh and their trends over time. A mixed effects model is used to analyse the BDHS data to gain greater insight into the data mechanism. This study will help the government and policy makers to evaluate the present scenario of knowledge about HIV/AIDS among women in Bangladesh, and the trend analysis will help to investigate the pattern of changes in its determinants. We expect this study to help in constructing programs that might contribute to controlling HIV infection and AIDS in Bangladesh.

Sampling design

The data for this study are extracted from the Bangladesh Demographic and Health Surveys (BDHSs) of 2007, 2011, and 2014. These nationally representative cross-sectional surveys are conducted in collaboration with the National Institute of Population Research and Training (NIPORT), ICF International, USA, and Mitra & Associates. The sampling design for these surveys is a two-stage stratified cluster sampling scheme. The sampling frame is a complete list of enumeration areas (EAs), each of which is a village, a part of a village, or a group of villages. In the first stage, EAs (clusters) are selected using probability proportional to size (PPS) sampling; 600 clusters were selected in each of the BDHS 2007, 2011, and 2014 surveys. In the second stage, an equal-probability systematic sampling method was used to draw on average 30 households from each cluster, as illustrated in the sketch below. In total, 10996, 17749, and 17863 households were selected in the BDHS 2007, 2011, and 2014, respectively.
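For concreteness, the following minimal Python sketch mimics this two-stage design with made-up EA sizes; the frame size, household counts, and the simple weighted draw used to approximate PPS without replacement are illustrative assumptions, not the DHS sampling implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
ea_sizes = rng.integers(100, 600, size=5000)   # hypothetical household counts per EA

# Stage 1: select 600 EAs with probability proportional to size (PPS).
# Successive weighted draws without replacement only approximate true PPS.
prob = ea_sizes / ea_sizes.sum()
selected_eas = rng.choice(len(ea_sizes), size=600, replace=False, p=prob)

# Stage 2: equal-probability systematic sample of ~30 households within one EA.
def systematic_sample(n_households, n_draw=30):
    step = n_households / n_draw          # sampling interval
    start = rng.uniform(0, step)          # random start within the first interval
    return (start + step * np.arange(n_draw)).astype(int)

households = systematic_sample(int(ea_sizes[selected_eas[0]]))
print(len(selected_eas), len(households))  # 600 clusters, 30 households each
```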
Data and variables

The Bangladesh Demographic and Health Survey (BDHS) collects data on demographics, fertility, mortality, nutrition, awareness of and attitudes toward HIV/AIDS, etc., although the data collected are subject to the objectives and vision of each survey year. To investigate the trends and determinants of knowledge about HIV/AIDS among women in Bangladesh, we pooled the last three BDHS surveys (2007–2014) and focused on the ever-married sample. The main variable of interest is the knowledge status of married women about HIV/AIDS, which is obtained by asking the women whether or not they have heard about HIV/AIDS. To determine the effects of associated factors on knowledge about HIV/AIDS, we analyse the relevant covariates reported in earlier studies [5, 19, 21, 25, 26], depending on their availability in the BDHS data: respondent's age ("15–19", "20–24", "25–29", "30–49"), respondent's and husband's highest level of education ("No education", "Primary", "Secondary or Higher"), division ("Barisal", "Chittagong", "Dhaka", "Khulna", "Rajshahi", "Sylhet"), residence ("Rural", "Urban"), religion ("Islam", "Others"), and socio-economic status ("Poor", "Middle", "Rich"). The variable on contraceptive use is categorized as "Yes" if any method (condom or others) is used and "No" for no method. Media use is categorized as "Yes" for respondents who watch television, listen to the radio, or read a newspaper. A variable on family planning knowledge is categorized as "Yes" for respondents who heard about family planning through media and "No" otherwise. For BDHS 2011 and 2014, we merged the Rangpur and Rajshahi divisions into the Rajshahi division to match BDHS 2007. Some other important variables, e.g., information on HIV incidence and women's knowledge about HIV programs and sources of information, could help explain women's knowledge about HIV/AIDS; however, owing to the unavailability of such information in the BDHS data, we could not investigate these variables. For details of the survey design and questions, we refer to the BDHS report of 2014 [6].

Let $Y_{ij}$ ($i=1,\ldots,N$; $j=1,\ldots,n_i$) be the binary response of knowledge about HIV/AIDS for the $j$th woman in the $i$th EA (cluster), which takes the value 1 if the woman has heard about HIV/AIDS and 0 otherwise. In the pooled BDHS data, women within the same cluster may share similar characteristics, facilities, and/or commodities, which may result in correlated responses within clusters. Thus the data have a hierarchical structure: women are nested within clusters. For clustered data with a hierarchical structure, regular statistical tools, e.g., simple logistic regression, are often unable to adjust for the variation due to clusters and fail to capture the association among responses within clusters. Disregarding the cluster-level variation and the cluster-specific association among responses yields incorrect standard errors and compromises the efficiency of the model parameters [27]. To model a clustered binary outcome, the generalized linear mixed effects model (GLMM) is often used, which comprises fixed and random effects components. The fixed effects describe the population-average response to the covariates, and the random effects describe the cluster-specific evolution. The GLMM with logit link is defined as

$$ \text{logit}(p_{ij})=\mathbf{x}'_{ij} \boldsymbol{\beta}+u_{i}, \qquad (1) $$

where $p_{ij}$ denotes the probability that a woman has heard about HIV/AIDS, i.e., $Pr(Y_{ij}=1)$, $\mathbf{x}'_{ij}$ is the vector of observed covariates, $\boldsymbol{\beta}$ is the vector of regression parameters, and the random effects (residuals at the cluster level) $u_i$ are assumed to follow a normal distribution with zero mean and constant variance $\sigma^2_u$. That is, the $\beta$s describe the effects of the covariates averaged over the clusters, and the random effects $u_i$ adjust for the cluster-specific evolution of the mean response. To determine the extent to which the respondents' knowledge status about HIV/AIDS is clustered, the variance estimates at the cluster level and the individual level are used to evaluate the intra-cluster correlation (ICC), defined as

$$ \rho=\frac{\sigma^{2}_{u}}{\sigma^{2}_{u}+\sigma^{2}_{e}}, $$

where $\sigma^2_e$ is the total variance at the individual level. In this hierarchical structure, the residuals at the individual level are assumed to follow a standard logistic distribution with zero mean and variance $\sigma^2_e=\pi^2/3$ [27]. The value of $\rho$ is close to zero for clustered data with no cluster variation; on the other hand, $\rho>0$ indicates the average correlation among the binary responses within clusters. The parameters of the model defined in (1) are estimated by maximizing the likelihood function. For more detail about the parameter estimation, we refer the reader to Fitzmaurice et al. [28] and Powers and Xie [27].
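To make Eq. (1) and the ICC concrete, the following minimal Python simulation draws a random-intercept logistic sample and evaluates $\rho$ on the latent scale; the cluster count and size echo the BDHS design, but $\beta_0$, $\beta_1$, and $\sigma_u$ are hypothetical values, not the fitted estimates from this study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters, n_per_cluster = 600, 30             # ~600 EAs with ~30 households each
sigma_u = 0.75                                  # hypothetical cluster-level SD
beta0, beta1 = -0.2, 0.9                        # hypothetical fixed effects

u = rng.normal(0.0, sigma_u, size=n_clusters)   # random intercepts u_i ~ N(0, sigma_u^2)
x = rng.binomial(1, 0.5, size=(n_clusters, n_per_cluster))  # one binary covariate

logit_p = beta0 + beta1 * x + u[:, None]        # logit(p_ij) = x'_ij beta + u_i
p = 1.0 / (1.0 + np.exp(-logit_p))
y = rng.binomial(1, p)                          # binary "heard of HIV/AIDS" outcome

# exp(beta) converts a fitted coefficient into a cluster-adjusted odds ratio.
print(np.exp(beta1))                            # ~2.46 for this hypothetical effect

# Latent-scale ICC: rho = sigma_u^2 / (sigma_u^2 + pi^2 / 3).
print(sigma_u**2 / (sigma_u**2 + np.pi**2 / 3)) # ~0.146; the paper reports 0.137
```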
Results

Table 1 presents the descriptive statistics of the variables considered in the analysis across the survey years 2007–2014. The descriptive measures show that the percentage of women with knowledge about HIV/AIDS slightly increased for women aged 25–49 years and decreased for women aged 15–24 years, with young married women (15–19 years) showing a steeper decline. The rationale behind this change is that young married women have already been identified as an at-risk group who lack maturity and often have limited access to health-related discussion, especially of sex-related health problems. We observe that the number of married women aged 15–19 years increased from 1051 in 2007 to 1462 in 2014, which has contributed to the decrease in the percentage of knowledgeable married women (15–19 years) over the three survey years. For all three survey years, the percentages of women with knowledge about HIV/AIDS are highest for the 20–24 years age group and lowest for the 30–49 years age group.

Table 1: Distribution of Bangladeshi married women with knowledge about HIV/AIDS by various demographic and socio-economic variables (Bangladesh Demographic and Health Surveys, 2007–2014)

In the extracted pooled data, the majority of the respondents (about 90%) were Muslim, of whom about 70% had knowledge about HIV/AIDS over the last three survey years. A higher percentage of educated women (primary or secondary/higher) have knowledge about HIV/AIDS compared with women with no education, for all three survey years. Similarly, a higher percentage of women have HIV/AIDS knowledge when their husbands have a higher level of educational attainment. Table 1 shows that the upper socio-economic class includes a higher percentage of women knowledgeable about HIV/AIDS. In particular, the percentage of rich women with HIV/AIDS knowledge has been uniform (about 88%), while for poor and middle-class women the percentage increased by 2% (48 to 50%) and 6% (66 to 72%), respectively, over the years 2007–2014. Women's knowledge about HIV/AIDS varies with working status; surprisingly, except in 2011, the percentage of working women with knowledge about HIV/AIDS was found to be lower (in 2007 and 2014). The percentage of knowledgeable women among those who use contraceptives during sexual intercourse is higher than among women who use no contraceptive, with the latter group showing a slightly increasing pattern over the years 2007–2014. Conversely, for women who use contraceptives, the percentage with knowledge about HIV/AIDS slightly decreased over the last three survey years. The percentage of knowledgeable women among those with media exposure increased by 7 percentage points (from 87 to 94%) over the years 2007–2014. Similarly, a higher percentage of women with family planning knowledge have knowledge about HIV/AIDS compared with women with no knowledge of family planning.
The descriptive measures show that the percentage of women knowledgeable about HIV/AIDS changed disproportionately in urban and rural areas over the years. Although the rural–urban gap is declining, the percentage of urban women with HIV/AIDS knowledge decreased, while the percentage increased for rural women. The Khulna and Sylhet divisions have the highest and lowest percentages of women with knowledge about HIV/AIDS, respectively, for all three survey years.

Table 2 shows the results derived from the generalized linear mixed effects model for assessing the association between knowledge about HIV/AIDS among Bangladeshi married women and different socio-demographic, socio-economic, and health-related factors. The results show that the age of women has a significant effect on their knowledge about HIV/AIDS. In particular, women aged 20–29 years are more likely, while women aged 30 years or above are less likely, to have knowledge about HIV/AIDS compared with women aged 15–19 years.

Table 2: Estimates and standard errors (SEs) of generalized linear mixed effects model parameters for the knowledge about HIV/AIDS among the women in Bangladesh (Bangladesh Demographic and Health Surveys, 2007–2014)

Table 2 also shows that the religion of women has a significant effect on their knowledge about HIV/AIDS. We found that Muslim women are 20% more likely to have knowledge about HIV/AIDS compared with women of other religions (Hinduism, Christianity, etc.). As expected, there is a positive association between women's highest level of education and their knowledge about HIV/AIDS. For instance, the odds for women with primary and secondary/higher education are 1.2 and 1.9 times, respectively, those for women with no education. We found similar results for husband's education, i.e., the likelihood of women's knowledge about HIV/AIDS increases with the increasing educational attainment of their husbands. Knowledge about HIV/AIDS among women is significantly related to their socio-economic status, with a higher socio-economic status indicating an increased likelihood of knowledge about HIV/AIDS. In particular, women from the middle and rich classes are 1.4 and 2.1 times more likely, respectively, to have knowledge about HIV/AIDS compared with poor women. Working women are 9% more likely to have knowledge about HIV/AIDS compared with non-working women. Contraceptive use during sex is one of the most convenient and easy ways to prevent HIV infection. The results show that women using contraceptives during sex are more likely to have knowledge about HIV/AIDS (OR: 1.28). The effect of media use on knowledge about HIV/AIDS is quantified through a composite index of watching television, listening to the radio, or reading a newspaper. The odds of women with media exposure having knowledge about HIV/AIDS are about 1.6 times higher than for women without media exposure. Similarly, for women who have family planning knowledge, the likelihood of having knowledge about HIV/AIDS is higher. Type of residence has a significant effect on the likelihood of women's knowledge about HIV/AIDS: women living in urban areas have a higher likelihood of having knowledge about HIV/AIDS than women living in rural areas. Knowledge about HIV/AIDS among women also varies significantly across the divisions of Bangladesh.
Compared with the Barisal division, respondents from the Dhaka and Khulna divisions are 1.3 and 1.8 times more likely, respectively, to have knowledge about HIV/AIDS, whereas women from the Chittagong, Rajshahi, and Sylhet divisions have a 25%, 30%, and 27% lower likelihood, respectively. To obtain a true picture of the effects of the determinants on women's knowledge about HIV/AIDS, we adjusted for possible interaction effects in the analysis. The results show that the effect of residence type differs across levels of survey year and contraceptive use. Rural women who use contraceptives have lower odds (OR: 0.88) of having knowledge about HIV/AIDS. Also, among rural women, the likelihood of having knowledge about HIV/AIDS shows an increasing trend over the years. A similar increasing trend is found over the years for women with media exposure. The effect of age on women's knowledge also varies significantly with contraceptive use and family planning knowledge status. The likelihood of having knowledge about HIV/AIDS is 1.20 times higher among women aged 30–49 years who use contraceptives, while family planning knowledge has a higher impact on knowledge about HIV/AIDS for all age groups except 30–49 years, compared with women aged 15–19 years. Finally, the intra-cluster correlation (ICC) is found to be 0.137, which implies that 13.7% of the variation in the response is due to variation between clusters.

Discussion

In this study, nationally representative pooled data from the Bangladesh Demographic and Health Surveys (BDHSs) of 2007, 2011, and 2014 are analysed to examine the trends and determinants of knowledge about HIV/AIDS among married women (15–49 years) in Bangladesh. This study serves two important purposes through mixed model analysis of the data. First, analysing the determinants as well as their changes in pattern provides more insight for evaluating the present scenario of HIV/AIDS knowledge among women compared with previous years and for identifying specific areas that need more attention to improve the situation. Second, mixed model analysis of the clustered data appropriately fits the model parameters while adjusting for the within-cluster association and provides valid inference. The mixed model analysis of the pooled BDHS data reveals that age has a significant effect on women's knowledge about HIV/AIDS. In the context of Bangladesh, women from different age groups differ in lifestyle, health practices, adaptability, maturity, accessibility, sexual behavior, etc., which could be the underlying reason behind the influence of women's age on their knowledge about HIV/AIDS. The mixed model analysis also shows that women aged 20–29 years are more likely to have knowledge about HIV/AIDS compared with young married women (15–19 years). Young married women often have limited exposure to sex-related issues and suffer from immaturity [21], which contributes to their vulnerability to HIV infection owing to their little knowledge about HIV/AIDS [29]. Our finding is in agreement with other studies reported earlier [10]. In addition, we found that women aged 30 years or above are less likely to have knowledge about HIV/AIDS. Conversely, Van Huy et al. [30] showed that women aged 30 years or above are more likely to have knowledge about HIV/AIDS; however, that study was in the context of Vietnam.
Earlier studies in the context of Bangladesh, e.g., Hossain et al. [10] and Yaya et al. [17], reported that knowledge about HIV/AIDS among women aged 30–49 years is lower (though insignificantly so) compared with the 15–19 years age group. In agreement with the direction of that association, this study demonstrates that the likelihood of knowledge among women aged 30–49 is significantly lower than among women aged 15–19 years. The rationale for this finding is that married women aged 30 years or more are less adaptive in absorbing information than young women, which reduces their likelihood of knowing about HIV/AIDS. Women's knowledge about HIV/AIDS is found to vary significantly across the divisions of Bangladesh. In particular, women from Khulna and Rajshahi have the highest and lowest likelihoods, respectively, of having knowledge about HIV/AIDS compared with the Barisal division. Although Khan [21] reported that women from the Dhaka division have the highest likelihood of HIV/AIDS knowledge, our finding is consistent with the recent findings of Hossain et al. [10]. The divisions of Bangladesh possess inherent variability in education, health, economy, media exposure, etc., which often contributes to division-level differences in women's knowledge about HIV/AIDS. In particular, women from the Khulna division have more access to media and are more likely to use contraceptive methods than women from the Barisal division [31]. This could be the underlying reason, as these factors are found to be important for knowledge about HIV/AIDS among women in Bangladesh. However, the low likelihood for women in Rajshahi cannot be easily interpreted. In BDHS 2011 and 2014, the Rangpur division was introduced as a separate division that was once a part of the Rajshahi division (in BDHS 2007); however, we merged the Rajshahi and Rangpur divisions as Rajshahi in this study. The Rangpur division has a low percentage of contraceptive users and a high percentage of poor women [6], which might have confounded the effect of the Rajshahi division compared with the Sylhet division. The type of residence is significantly associated with women's knowledge about HIV/AIDS. In particular, women living in urban areas are more likely to have knowledge about HIV/AIDS compared with rural women, which is in accordance with other studies [10, 21, 30, 32]. Contrary to our findings, a few studies [17] reported that the type of residence is unrelated to women's knowledge about HIV/AIDS. However, rural women are often abandoned, neglected, and deprived of better health care facilities [33], and some recent studies [34] reported rural women as vulnerable to HIV infection owing to their low level of knowledge about HIV/AIDS. On the other hand, urban women often enjoy better living conditions with easier access to health information, media, healthcare facilities, etc., which subsequently reduces the likelihood of HIV infection. This study also shows the effect of residence type on women's knowledge about HIV/AIDS over the years. We observe that over the survey years, the likelihood of having knowledge about HIV/AIDS has been decreasing. Although urban women show a higher likelihood of knowledge, over time rural women show significant improvement compared with urban women.
We expect this finding to help the government and policy makers plan residence-specific programs to spread HIV/AIDS knowledge and to create awareness among rural and urban women in Bangladesh. There is a significant association between women's knowledge about HIV/AIDS and their religion. Muslim women are found to be more knowledgeable about HIV/AIDS than non-Muslim women in Bangladesh. The rationale for this finding is that the codes of Islam strongly prohibit multiple sex partners, extramarital affairs, etc., and contribute to a lower prevalence of HIV among Muslims [35], which we expect to be an indirect reason behind the higher likelihood of knowledge about HIV/AIDS among Muslim women in Bangladesh. However, in the real world, sex-related risk behaviors are practiced by Muslims in disregard of the Islamic codes [36]. For instance, migrant workers, especially men, often practice unsafe sex with non-spousal partners in foreign countries and pose a great threat of HIV/AIDS infection/transmission when they return to Bangladesh and resume unprotected sex with their spouses [37]. This provides counter-evidence on the lower level of knowledge and awareness among Muslim women in Bangladesh. Likewise, there is insufficient literature in the context of Bangladesh to support the claim about the effect of women's religion on their knowledge about HIV/AIDS. Thus, we interpret our finding with caution and recommend in-depth analysis of the effect of religion on knowledge about HIV/AIDS among women in Bangladesh. The results of this study show that women of higher socio-economic status (middle and rich) have a higher likelihood of HIV/AIDS knowledge than poor women, which is consistent with other studies [30]. The justification for this result is that women from the upper socio-economic class have more access to modern amenities and health care facilities, which increases their likelihood of knowing about HIV/AIDS. The use of contraceptives during sex has proved beneficial both for birth regulation and for protection from HIV/AIDS [21], which also translates into awareness about sex-related disease and health protection. This study reports that women who use contraceptives during sex are more likely to have knowledge about HIV/AIDS than non-users. This finding is in agreement with other studies [21, 38]. Additionally, we found that the effect of contraceptive use on knowledge differs significantly across age groups and types of residence. In particular, women aged 30–49 years who use contraceptives have a higher likelihood of knowledge about HIV/AIDS. This is because younger women often face difficulty in obtaining contraceptives (e.g., condoms) and in learning about their benefits and usage. Also, rural women who use contraceptives are found to be less likely to have HIV/AIDS knowledge. This is because rural women often consider contraceptives a means of birth control while being ignorant of their use as protection from sex-related health problems, e.g., AIDS or HIV infection. Our finding indicates that rural women who use contraceptives require extra attention and need to be educated about the benefit of using contraceptives to prevent HIV/AIDS. Education plays a vital role in determining the social status of an individual and also translates into a better job and more access to information [16].
Earlier literature [39] has already identified education as an alternative vaccine for the incurable disease AIDS. Mwamwenda [39] reported that HIV/AIDS knowledge is positively associated with increasing levels of the respondent's and their husband's education. A similar finding is obtained in our study, which is in line with other relevant studies [10, 21, 30, 32, 33]. Exposure to media has also been reported as an important factor associated with married women's knowledge about HIV/AIDS. Television, radio, and newspapers are effective media for reaching the general public; they communicate important messages in the form of music, news reports, dramas, movies, advertisements, etc., which can profoundly influence public knowledge about HIV/AIDS. Our results show that women with media exposure have a higher likelihood of knowledge about HIV/AIDS. This finding is consistent with earlier studies [10, 16, 32, 34, 40]. Analysing the trend over the years, we found that, compared with the 2007 survey year, significant improvement is observed in the level of knowledge among women with media exposure in the 2014 survey year. This finding might help policy makers plan sustained media campaigns to disseminate knowledge about HIV/AIDS among women in Bangladesh. Moreover, family planning knowledge has proved effective for preventing HIV transmission [41]. Exposure to family planning through media often provides women with information and services about pregnancy and sexually transmitted diseases including AIDS, which contribute to the improvement of women's knowledge about HIV/AIDS. Accordingly, in this study, we found that women who heard about family planning through media are more likely to have knowledge about HIV/AIDS. We also found that the effect of family planning knowledge varies significantly across the respondents' age groups; family planning exposure is more effective among women aged 20–29 years. Current working status is another important indicator of women's knowledge about HIV/AIDS. Working women, compared with non-working women, often get more chances to discuss HIV/AIDS-related health issues with their co-workers. Additionally, workers are often provided with health care facilities (e.g., a medical unit) that educate them about HIV/AIDS. Again, in a community of workers, global issues like HIV/AIDS spread faster, which altogether increases the likelihood of their knowledge about HIV/AIDS. This finding is in accordance with other studies [16]. Despite the several strengths of this study, there are a few limitations, which we mention for future research. This study focuses on the knowledge status of married women, which is determined by whether the respondents have heard about HIV/AIDS. Although women who have heard about HIV/AIDS could be considered knowledgeable about HIV/AIDS, a causal relationship cannot necessarily be ensured. Awareness about HIV/AIDS could be more than only knowing about the disease, for instance, awareness of transmission or awareness of prevention, which is beyond the scope of this study. Although we have examined the influence of protective behaviours (e.g., condom use and family planning knowledge) and other covariates on knowledge about HIV/AIDS, the knowledge status of women could also influence their protective behaviour, which is beyond the scope of this study and not consistent with its specific objective.
There is a possibility that the knowledge status of husbands influences the knowledge status of the respondents; however, we could not analyse husbands' knowledge owing to its unavailability in the BDHS data. Analysing the effects of the determinants on knowledge about HIV/AIDS among women in Bangladesh would be more interesting if we could incorporate information on HIV incidence. However, owing to the unavailability of incidence information in the BDHS data, we could not elaborate our findings in relation to HIV incidence.

In conclusion, this study analysed the pooled data from the three most recent nationally representative surveys (BDHS 2007, 2011, and 2014) to determine the trends and determinants of women's knowledge about HIV/AIDS in Bangladesh. Using a mixed modeling approach, this study reveals that women's age, region, residence, religion, socio-economic status, women's and husbands' highest level of education, media use, contraceptive use, and family planning knowledge have a significant influence on women's knowledge about HIV/AIDS. Additionally, we found that rural women who use contraceptives are less likely to be knowledgeable about HIV/AIDS, which is a great concern, as these women possess little knowledge about HIV/AIDS despite using contraceptives. This study demonstrates that the effect of women's age on the likelihood of knowledge about HIV/AIDS is confounded by contraceptive use and family planning knowledge. Moreover, analysing the trend, we found that over the years the likelihood of women being knowledgeable about HIV/AIDS has been decreasing; however, the likelihood has been observed to be increasing for rural women and media users. This study recommends residence-specific programs to disseminate HIV/AIDS awareness effectively and also highlights the role of media exposure in creating awareness about HIV/AIDS. Moreover, this study recommends educating both women and their husbands to have a greater impact on women's knowledge about HIV/AIDS in Bangladesh.

References

UNAIDS. Global AIDS update 2016. Technical report; 2016. http://www.unaids.org/en/resources/documents/2016/Global-AIDS-update-2016.
UNAIDS. The prevention gap report. Technical report; 2016. http://www.unaids.org/sites/default/files/media_asset/2016-prevention-gap-report_en.pdf.
AVERT. HIV and AIDS in Asia & the Pacific: regional overview. Technical report; 2017. https://www.avert.org/professionals/hiv-around-world/asia-pacific/overview.
Nahar Q, Alam MS, Chowdhury EI, Azim T, Alam N, Saifi R, Khan SI, Oliveras E, Reza M. 20 years of HIV in Bangladesh: experiences and way forward. Technical report; 2009. http://citeweb.info/20090546364.
Islam MM, Conigrave KM. HIV and sexual risk behaviors among recognized high-risk groups in Bangladesh: need for a comprehensive prevention program. Int J Infect Dis. 2008;12(4):363–70.
NIPORT, Mitra and Associates, and ICF. Bangladesh Demographic and Health Survey 2014. Technical report. Dhaka, Bangladesh, and Rockville, Maryland, USA: NIPORT, Mitra and Associates, and ICF International; 2016. http://microdata.worldbank.org/index.php/catalog/2562.
UNAIDS. HIV and AIDS estimates. http://www.unaids.org/en/regionscountries/countries/bangladesh.
Gibney L, Choudhury P, Khawaja Z, Sarker M, Islam N, Vermund S. HIV/AIDS in Bangladesh: an assessment of biomedical risk factors for transmission. Int J STD AIDS. 1999;10(5):338–46.
Gibney L, Choudhury P, Khawaja Z, Sarker M, Vermund S. Behavioural risk factors for HIV/AIDS in a low-HIV-prevalence Muslim nation: Bangladesh. Int J STD AIDS. 1999;10(3):186–94.
Hossain M, Mani KK, Sidik SM, Shahar HK, Islam R. Knowledge and awareness about STDs among women in Bangladesh. BMC Public Health. 2014;14(1):775.
Mondal NI, Takaku H, Ohkusa Y, Sugawara T, Okabe N, et al. HIV/AIDS acquisition and transmission in Bangladesh: turning to the concentrated epidemic. Jpn J Infect Dis. 2009;62(2):111–9.
UNICEF. HIV and AIDS in Bangladesh. Technical report; 2010. https://www.unicef.org/bangladesh/HIV_AIDS(1).pdf.
Gazi R, Mercer A, Wansom T, Kabir H, Saha NC, Azim T. An assessment of vulnerability to HIV infection of boatmen in Teknaf, Bangladesh. Confl Health. 2008;2(1):5.
Rahman MS, Rahman ML. Media and education play a tremendous role in mounting AIDS awareness among married couples in Bangladesh. AIDS Res Ther. 2007;4(1):10.
Yaya S, Bishwajit G, Danhoundo G, Shah V, Ekholuenetale M. Trends and determinants of HIV/AIDS knowledge among women in Bangladesh. BMC Public Health. 2016;16(1):812.
Garai J. Gender and HIV/AIDS in Bangladesh: a review. J Health Soc Sci. 2016;1(3):181–98.
Rahman MM, Kabir M, Shahidullah M. Adolescent knowledge and awareness about AIDS/HIV and factors affecting them in Bangladesh. J Ayub Med Coll Abbottabad. 2009;21(3):3–6.
Islam MT, Mostafa G, Bhuiya AU, Hawkes S, De Francisco A. Knowledge on, and attitude toward, HIV/AIDS among staff of an international organization in Bangladesh. J Health Popul Nutr. 2002;20(3):271–8.
Khan MA. Knowledge on AIDS among female adolescents in Bangladesh: evidence from the Bangladesh Demographic and Health Survey data. J Health Popul Nutr. 2002;20(2):130–7.
Myhre SL, Flora JA. HIV/AIDS communication campaigns: progress and prospects. J Health Commun. 2000;5(sup1):29–45.
Varni SE, Miller CT, Solomon SE. Sexual behavior as a function of stigma and coping with stigma among people with HIV/AIDS in rural New England. AIDS Behav. 2012;16(8):2330–9.
Ochako R, Ulwodi D, Njagi P, Kimetu S, Onyango A. Trends and determinants of comprehensive HIV and AIDS knowledge among urban young women in Kenya. AIDS Res Ther. 2011;8(1):11.
Keating J, Meekers D, Adewuyi A. Assessing effects of a media campaign on HIV/AIDS awareness and prevention in Nigeria: results from the VISION Project. BMC Public Health. 2006;6(1):123.
Peltzer K, Matseke G, Mzolo T, Majaja M. Determinants of knowledge of HIV status in South Africa: results from a population-based HIV survey. BMC Public Health. 2009;9(1):174.
Powers D, Xie Y. Statistical methods for categorical data analysis. UK: Emerald Group Publishing Limited; 2008.
Fitzmaurice GM, Laird NM, Ware JH. Applied longitudinal analysis. Wiley Series in Probability and Statistics. Hoboken: Wiley; 2012.
Santhya K, Jejeebhoy SJ. Early marriage and HIV/AIDS: risk factors among young women in India. Econ Polit Wkly. 2007;42(14):1291–7.
Van Huy N, Lee HY, Nam YS, Van Tien N, Huong TTG, Hoat LN. Secular trends in HIV knowledge and attitudes among Vietnamese women based on the Multiple Indicator Cluster Surveys, 2000, 2006, and 2011: what do we know and what should we do to protect them? Glob Health Action. 2016;9:29247.
BBS and UNICEF. Bangladesh Multiple Indicator Cluster Survey 2012–2013, Progotir Pathey: final report. Technical report. Dhaka, Bangladesh: Bangladesh Bureau of Statistics (BBS) and UNICEF Bangladesh; 2014. https://www.unicef.org/bangladesh/MICS_Final_21062015_Low.pdf.
Mondal MNI, Rahman MM, Rahman MO, Akter MN. Level of awareness about HIV/AIDS among ever married women in Bangladesh. Food Public Health. 2012;2(3):73–8.
Mondal MNI, Hoque N, Chowdhury MRK, Hossain MS. Factors associated with misconceptions about HIV transmission among ever-married women in Bangladesh. Jpn J Infect Dis. 2015;68(1):13–9.
Asaduzzaman M, Higuchi M, Sarker MAB, Hamajima N. Awareness and knowledge of HIV/AIDS among married women in rural Bangladesh and exposure to media: a secondary data analysis of the 2011 Bangladesh Demographic and Health Survey. Nagoya J Med Sci. 2016;78(1):109.
Gray PB. HIV and Islam: is HIV prevalence lower among Muslims? Soc Sci Med. 2004;58(9):1751–6.
Ahmed S. AIDS and the Muslim world: a challenge. Asian J Soc Sci Humanit. 2013;2(3):451–9.
Urmi AZ, Leung DT, Wilkinson V, Miah MAA, Rahman M, Azim T. Profile of an HIV testing and counseling unit in Bangladesh: majority of new diagnoses among returning migrant workers and spouses. PLoS ONE. 2015;10(10):e0141483.
Mondal NI, Islam R, Rahman O, Rahman S, Hoque N. Determinants of HIV/AIDS awareness among garments workers in Dhaka city, Bangladesh. World J AIDS. 2012;2(4):312–8.
Mwamwenda TS. Education level and human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) knowledge in Kenya. J AIDS HIV Res. 2014;6(2):28–32.
Yadav SB, Makwana NR, Vadera BN, Dhaduk KM, Gandha KM. Awareness of HIV/AIDS among rural youth in India: a community-based cross-sectional study. J Infect Dev Ctries. 2011;5(10):711–6.
Akelo V, Girde S, Borkowf CB, Angira F, Achola K, Lando R, Mills LA, Thomas TK, Lecher SL. Attitudes toward family planning among HIV-positive pregnant women enrolled in a prevention of mother-to-child transmission study in Kisumu, Kenya. PLoS ONE. 2013;8(8):e66593.

Acknowledgements: The authors acknowledge the contributions of the BDHS, NIPORT, MEASURE DHS, and ICF International teams for their efforts to collect the data and to make the data set openly accessible. The authors acknowledge the editors and the reviewers for their critical comments and suggestions, which improved the organization and readability of the paper.
Funding: The authors received no specific funding for this study.
Availability of data: The datasets generated and/or analysed during the current study are freely available upon request from the DHS website at http://dhsprogram.com/data/available-datasets.cfm.
Affiliation: Institute of Statistical Research and Training, University of Dhaka, Shahbagh, Dhaka 1000, Bangladesh (Md. Tuhin Sheikh, Md. Nizam Uddin, Jahidur Rahman Khan).
Correspondence to Md. Tuhin Sheikh.
Ethics: The Bangladesh Demographic and Health Surveys were approved by the ICF Macro Institutional Review Board and the National Research Ethics Committee of the Bangladesh Medical Research Council. All participants gave written consent about the survey before being interviewed.
Author contributions: Conceptualization of the problem: MTS and MNU; formal analysis: MTS and JRK; preparation of the original draft: MTS; review & editing: MTS, JRK, and MNU. All authors have read and approved the final version.
Citation: Sheikh, M.T., Uddin, M.N. & Khan, J.R. A comprehensive analysis of trends and determinants of HIV/AIDS knowledge among the Bangladeshi women based on Bangladesh Demographic and Health Surveys, 2007–2014. Arch Public Health 75, 59 (2017). doi:10.1186/s13690-017-0228-2
Keywords: HIV/AIDS awareness; mixed model
npg asia materials

Exposed high-energy facets in ultradispersed sub-10 nm SnO2 nanocrystals anchored on graphene for pseudocapacitive sodium storage and high-performance quasi-solid-state sodium-ion capacitors

Panpan Zhang1,2, Xinne Zhao2, Zaichun Liu1,3, Faxing Wang1,3, Ying Huang4, Hongyan Li5, Yang Li2, Jinhui Wang2, Zhiqiang Su2, Gang Wei6, Yusong Zhu3, Lijun Fu3, Yuping Wu1,3 & Wei Huang3

NPG Asia Materials volume 10, pages 429–440 (2018)

The development of sodium (Na) ion capacitors marks the beginning of a new era in the field of electrochemical capacitors with high energy densities and low costs. However, most reported negative electrode materials for Na+ storage are based on slow diffusion-controlled intercalation/conversion/alloying processes, which are not favorable for application in electrochemical capacitors. Currently, it remains a significant challenge to develop suitable negative electrode materials that exhibit pseudocapacitive Na+ storage for Na ion capacitors. Herein, surface-controlled redox-reaction-based pseudocapacitance is demonstrated in ultradispersed sub-10 nm SnO2 nanocrystals anchored on graphene, and this material is further utilized as a fascinating negative electrode material in a quasi-solid-state Na ion capacitor. The SnO2 nanocrystals possess a small size of <10 nm with exposed highly reactive {221} facets and exhibit pseudocapacitive Na+ storage behavior. This work will enrich the methods for developing electrode materials with surface-dominated redox reactions (or pseudocapacitive Na+ storage).

Increasing interest in portable electronic devices, electric vehicles, and smart grids is creating significant demand for low-cost, environmentally friendly energy storage devices [1–4]. The electrochemical capacitor (also called a supercapacitor) is one of the most promising energy storage devices, as it offers fast power delivery and a long cycle life, although it still suffers from a moderate energy density compared with those of rechargeable batteries [5–7]. To improve the energy densities of electrochemical capacitors, lithium (Li) ion capacitors have been constructed since 2001 from a capacitor-type electrode (such as activated carbon, carbon nanotubes (CNTs), or graphene) and a battery-type electrode (such as Li4Ti5O12, TiO2, Fe3O4, or Li3VO4) in a Li-salt-containing electrolyte [2–16]. Li ion capacitors can achieve energy densities approaching those of Li ion batteries with power densities similar to those of electrochemical capacitors [17–23]. Currently, Li ion capacitors ranging from lab-scale demonstrations to commercial products are being rapidly developed. For example, Maxwell and CRRC Corporation Limited have cooperated in the development of commercial Li ion capacitors, which have been utilized in the brake-energy recovery system of a subway in China since 2016 (http://stock.10jqka.com.cn/20161129/c595244612.shtml). It is foreseeable that Li ion capacitors will become one of the most promising candidates for next-generation hybrid and pure electric vehicles. However, the enormous exploitation of Li resources driven by ever-growing demands will eventually cause Li to become scarce. Consequently, replacing Li with Na to build an analogous Na ion capacitor offers an opportunity to construct a sustainable energy storage system with a high energy density [24].
The technologies of Na ion capacitors based on liquid electrolytes have made significant advances in the past several years [24–30]. However, liquid electrolytes for Na ion capacitors suffer from safety issues owing to their intrinsic instability in terms of flammability, possibility of leakage, and internal short circuits. Solid electrolytes would therefore be a perfect choice to solve this crucial problem [31–33]. In addition to the electrolyte, another key part of a Na ion capacitor is the negative electrode, whose kinetics, based on a diffusion-controlled intercalation (or conversion or alloying) process, are slower than those of the positive electrode with its surface-controlled process (electric double-layer absorption/desorption). Several negative electrode materials for Na ion capacitors have been reported. For example, Simon's and Yamada's groups fabricated two Na ion capacitors based on MXenes (Ti2C and V2C) as negative electrodes, but these capacitors operated at relatively low voltages (<3.5 V) with poor cycling stability (only 300 cycles) [27, 28]. Two other Na ion capacitors, fabricated with V2O5 and TiO2 as negative electrodes in liquid electrolytes, could only be operated within low voltage ranges with low energy densities (both below 65 Wh kg−1 based on the total mass of the negative and positive electrodes) [25, 29], which are not comparable with the high energy densities of conventional Li ion capacitors. By applying hard carbon with a low working potential as the negative electrode for Na ion capacitors, the energy density can be remarkably enhanced owing to the expanded working voltage [31]. However, at a low potential (close to 0.1 V vs. Na+/Na), both Na plating and the growth of dendrites take place on the negative electrode, which exposes the device to a wide range of safety concerns. Very recently, NaTi2(PO4)3 grown on graphene nanosheets was demonstrated as an intercalation negative electrode for Na ion capacitors [30]. When trapped between graphene layers, the electrical conductivity of the NaTi2(PO4)3 nanoparticles (100 nm) greatly increased, thus enhancing the charge-transfer kinetics and reducing the interfacial resistance at high current rates [30]. However, the reaction kinetics at the NaTi2(PO4)3 electrode are still dominated by diffusion-controlled intercalation processes, which is inappropriate for electrochemical capacitors. Generally, the kinetics of a negative electrode based on a diffusion-controlled intercalation (or conversion or alloying) process are far slower than those of a positive electrode with a surface-controlled process. Fortunately, some traditional Li ion battery materials with diffusion-controlled processes in their bulk states can be utilized as extrinsic pseudocapacitive materials when they are rationally designed through physical control of the electrode materials (including the particle size, surface area, electrical conductivity, morphology, and facet orientation) [34–41]. Such electrode materials (e.g., ordered mesoporous MoO3 nanocrystals [34], mesoporous LixMn2O4 thin films [38], and oxygen-vacancy-enhanced MoO3−x [40] and TiO2−x [41]) display superior pseudocapacitive Li+ storage behaviors, which should make them perfect choices for Li ion capacitors. However, few negative electrode materials (only TiO2/graphene [42], SnS2 [43], and SnS nanosheets [44]) exhibit pseudocapacitive Na+ storage behavior.
The development of electrode materials with pseudocapacitive Na+ storage properties is at a preliminary stage. Therefore, it remains a significant challenge to develop suitable negative electrode materials with surface-dominated pseudocapacitive Na+ storage properties for Na ion capacitors. Herein, we demonstrate that pseudocapacitive Na+ storage, rather than a diffusion-controlled reaction, dominates the charge storage process in ultradispersed sub-10 nm SnO2 nanocrystals with exposed {221} facets anchored on graphene. This SnO2/graphene nanocomposite can be prepared on a large scale by a facile one-step hydrothermal reaction. It exhibits a high specific capacitance of 276 F g−1 at a current density of 0.5 A g−1 and excellent cycling stability for up to 6000 cycles. Even at a high current density of 1.8 A g−1, this nanocomposite shows a specific capacitance of 105 F g−1. A quasi-solid-state Na ion capacitor is then assembled using CNTs as the positive electrode and SnO2/graphene as the negative electrode in a Na ion-conducting gel polymer electrolyte. The obtained quasi-solid-state Na ion capacitor displays a maximum energy density of 86 Wh kg−1 and a maximum power density of 4.1 kW kg−1. After 900 cycles, this quasi-solid-state Na ion capacitor exhibits a stable capacitance retention of 100% at a current density of 0.5 A g−1, which is competitive with current Na ion capacitors in liquid electrolytes.

Synthesis of SnO2/graphene

Graphene oxide (GO) was first synthesized by a modified Hummers method according to our previous work45. For the preparation of SnO2 nanoparticles, 200 mg of SnCl2·2H2O was dissolved in 20 mL of deionized (DI) water and sonicated for 10 min to achieve a clear and homogeneous solution. After that, 200 mg of sodium citrate was added, and the mixture was further sonicated for 30 min. The solution was transferred to a 50 mL Teflon-lined autoclave and heated in an oven at 200 °C for 10 h with no intentional control of the ramping or cooling rate. To prepare SnO2/graphene, 200 mg of SnCl2·2H2O was added to aqueous GO (0.5 mg mL−1, 20 mL) and then sonicated for 10 min. After adding 200 mg of sodium citrate, the reaction solution was further sonicated for 30 min and then transferred to a 50 mL Teflon-lined autoclave. It was then heated at 200 °C for 10 h in an oven, again without intentional control of the ramping or cooling rate. The product was centrifuged at 4500 rpm for 10 min and washed with DI water and ethanol; each washing step was repeated at least three times. Finally, the product was redispersed in 3 mL of DI water and freeze-dried for several days.

Synthesis of the Na+ conducting gel polymer electrolyte (N-GPE)

The polymer matrix for the N-GPE was prepared by a phase separation process31. Poly(vinylidene fluoride-co-hexafluoropropylene) (P(VDF-HFP)) with a molecular weight of approximately 329,000 (Supplementary Fig. S1) was dissolved in a mixture of N,N-dimethylformamide (DMF) and DI water at 80 °C in a mass ratio of P(VDF-HFP):DMF:H2O = 15:85:3. The solution was cast onto a clean glass plate before being soaked in a water bath at 80 °C to form a homogeneous white membrane. After that, the white membrane was dried under vacuum at 100 °C for 10 h. The obtained dry membrane was then punched into circular pieces (d = 19 mm).
Finally, the circular pieces were immersed in an organic electrolyte (1 mol L−1 NaClO4 solution in ethylene carbonate (EC)/dimethyl carbonate (DMC)/diethyl carbonate (DEC), 1/1/1, w/w/w) for over 12 h in a glove box (water content: <1 ppm) to obtain the N-GPE. The amount of liquid electrolyte taken up (ΔW) was calculated according to the following equation:

$$\Delta W = \frac{W_{\mathrm{s}} - W_{\mathrm{o}}}{W_{\mathrm{o}}} \times 100\%$$

where Wo and Ws are the weights of the membrane before and after absorption of the organic electrolyte, respectively. In this work, the uptake of the liquid electrolyte is approximately 115 wt.%, which is enough to provide a high ionic conductivity.

Characterization

High-resolution transmission electron microscopy (HRTEM) was conducted on a JEM-2100F field emission transmission electron microscope (JEOL, Japan) at an acceleration voltage of 200 kV. The samples were prepared by dropping an aqueous SnO2/graphene dispersion onto a copper grid and drying at room temperature. N2 adsorption–desorption isotherms were obtained by the static volume method on a 3H-2000PS1 specific surface and pore analysis instrument (BeiShiDe, Beijing). Scanning electron microscopy (SEM, JSM-6700F, JEOL, Japan), X-ray diffraction (XRD, Rigaku D/max-2500 VB+/PC), X-ray photoelectron spectroscopy (XPS, ThermoVG ESCALAB 250), Fourier transform infrared (FTIR) spectroscopy (Nicolet 6700, Thermo-Fisher), and Raman spectroscopy (LabRAM, HORIBA JY, Edison, NJ) were used to confirm the structures and morphologies of the SnO2 nanoparticles, GO, and SnO2/graphene.

Electrochemical measurement

Standard coin-type model cells were used to construct half cells, in which Na metal served as the counter and reference electrode and the N-GPE functioned as the electrolyte. The specific capacitance (F g−1) of the electrode material was obtained from the charge and discharge curves and calculated via the formula C = (I Δt)/(m ΔU), where I is the current (A) used for charge/discharge cycling, Δt is the charge time (s), m is the mass (g) of active material in the working electrode, and ΔU is the operating potential range (V) during the charge process. Electrochemical measurements of the quasi-solid-state Na ion capacitor were conducted in two-electrode cells at room temperature. The active material (SnO2/graphene or CNTs), carbon black, and polymer binder (polyvinylidene difluoride) were mixed in a weight ratio of 80:10:10 to prepare the electrodes. The mass proportion of the two electrode materials was optimized by balancing the charge stored in each electrode, as determined from single-electrode measurements. Generally, through the equation q = C × ΔE × m (where C is the specific capacitance in a half-cell test, ΔE is the voltage range, and m is the mass of the active material), the charge stored by each electrode can be determined. The N-GPE was utilized as both the electrolyte and the separator. All cells were assembled in an Ar-filled glove box with oxygen and water concentrations below 1 ppm.
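Both bookkeeping formulas above are one-liners in code. A minimal Python sketch of the electrolyte uptake and the specific-capacitance calculation; the numerical inputs are illustrative placeholders, not values taken from the measured data:

```python
def electrolyte_uptake_wt_percent(w_dry_g, w_soaked_g):
    """Uptake = 100 * (Ws - Wo) / Wo, in wt.%."""
    return 100.0 * (w_soaked_g - w_dry_g) / w_dry_g

def specific_capacitance_f_per_g(current_a, dt_s, mass_g, du_v):
    """C = I * dt / (m * dU), in F/g, from one galvanostatic charge segment."""
    return current_a * dt_s / (mass_g * du_v)

# Illustrative: a membrane going from 100 mg dry to 215 mg soaked gives the ~115 wt.% uptake
print(electrolyte_uptake_wt_percent(0.100, 0.215))               # -> 115.0
# Illustrative: 1 mg of active material, 0.5 mA, 552 s over a 2.4 V window -> 115 F/g
print(specific_capacitance_f_per_g(0.5e-3, 552.0, 1.0e-3, 2.4))  # -> 115.0
```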
Density functional theory (DFT) calculations

The DFT-based first-principles calculations reported herein were performed with the CASTEP package of Materials Studio 6.0 within the generalized gradient approximation, as formulated by the Perdew–Burke–Ernzerhof functional46,47. The final set of energies for all calculations was computed with an energy cutoff of 370 eV. Adsorption-optimized structural modeling was performed using a 3 × 3 × 1 supercell on the {110}, {001}, and {221} surfaces of SnO2. All atoms were fully relaxed until the total energy converged to within 10⁻⁵ eV and the residual force on each atom fell below 0.03 eV Å⁻¹. The linear synchronous-transit/quadratic synchronous-transit (LST/QST) method was used to calculate migration energy barriers within a large 2 × 2 × 2 supercell48. The Brillouin zone integration was performed with 1 × 1 × 1 and 2 × 2 × 4 Γ-centered Monkhorst-Pack k-point meshes in the geometry optimizations and LST/QST calculations, respectively49.

Results and discussion

The SnO2/graphene nanocomposite was synthesized through a one-step hydrothermal reaction of few-layered GO (2–3 layers, 2–10 μm, Supplementary Fig. S2) and SnCl2·2H2O in aqueous solution at 200 °C. In the starting material, the sharp diffraction peak observed at 9.68° in the XRD pattern is consistent with the characteristic (001) diffraction peak of GO. The diffraction peak of graphene exhibits a dramatic shift to a higher 2θ angle of ~28°, confirming that GO was successfully reduced to graphene (Supplementary Fig. S3a). In addition, the increased intensity ratio of the D- and G-bands (ID/IG) of graphene relative to that of GO in the Raman spectra indicates that most of the oxygenated groups were removed during the reduction process (Supplementary Fig. S3b). Meanwhile, owing to the strong electrostatic interactions between Sn2+ and GO, the SnO2 nanocrystals are uniformly anchored on the surface of the graphene nanosheets. These SnO2 nanocrystals enable independent dispersion of the graphene nanosheets without evident agglomeration (Fig. 1a and Supplementary Fig. S4), while the graphene effectively restricts the further growth of the SnO2 nanocrystals (Fig. 1b, c). In comparison, without graphene, pure SnO2 crystals prepared under the same hydrothermal conditions exhibit sizes >100 nm (Supplementary Fig. S5). The energy dispersive X-ray spectroscopy (EDX) mapping results show that Sn is uniformly dispersed on the graphene nanosheets (Supplementary Fig. S6), verifying the ultradispersed distribution of SnO2 nanocrystals on the graphene surface.

Fig. 1: Morphological analysis of SnO2 nanocrystals with exposed {221} facets anchored on graphene. a SEM image. b, c Low-magnification TEM images. d–i HRTEM images showing the octahedral SnO2 model enclosed by {221} facets along the [1̄1̄1] (d–f) and [1̄01] (g–i) zone axes.

An HRTEM image of the SnO2 nanocrystals is presented in Fig. 1d. The apex angle of the SnO2 nanocrystals between two side surfaces is approximately 88° when viewed along the [1̄1̄1] zone axis, in good agreement with the octahedral SnO2 model enclosed by {221} facets projected along the same direction (Fig. 1e). The two sets of lattice fringes with spacings of 0.26 and 0.33 nm are in accordance with the (101) and (110) planes, respectively, of rutile SnO2 (Fig. 1f). When some of the SnO2 nanocrystals were rotated from the [1̄1̄1] zone axis to the [1̄01] zone axis, as indexed by the selected-area electron diffraction pattern shown in Fig. 1g, an apex angle of approximately 65° between the two side surfaces also agrees well with the model of the octahedral SnO2 enclosed by {221} facets projected along the same direction (Fig. 1h). The HRTEM image (Fig. 1i) shows that the lattice distances are 0.33 and 0.31 nm, corresponding to the (110) and (001) planes, respectively, of SnO2.
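The XRD peak positions and the HRTEM lattice spacings above are linked through Bragg's law, λ = 2d·sin θ. A small sketch converting the quoted 2θ positions into interlayer spacings; Cu Kα radiation (λ = 1.5406 Å) is an assumption here, since the text does not state the X-ray wavelength:

```python
import math

WAVELENGTH_A = 1.5406  # Cu K-alpha, in angstroms (assumed; not stated in the text)

def d_spacing_angstrom(two_theta_deg, wavelength_a=WAVELENGTH_A):
    # Bragg's law: lambda = 2 d sin(theta)  =>  d = lambda / (2 sin(theta))
    return wavelength_a / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

# GO (001) peak at 9.68 deg -> ~9.1 A interlayer spacing
print(f"GO:       d = {d_spacing_angstrom(9.68):.2f} A")
# Reduced graphene peak near 28 deg -> ~3.2 A, close to restacked graphitic layers
print(f"graphene: d = {d_spacing_angstrom(28.0):.2f} A")
```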
Based on the above TEM observations of different regions along different zone axes, it is concluded that the {221} facets are exposed in these SnO2 nanocrystals. More micrographs showing the morphology of SnO2/graphene can be found in Supplementary Fig. S7. The XRD pattern of SnO2 in the nanocomposite can be indexed to the tetragonal rutile phase of SnO2 (JCPDS card no. 41-1445)50, and no other Sn oxide phases are detected (Fig. 2a). Moreover, the intensity ratio of the (221) peak to the (110) peak of SnO2/graphene is larger than that of the sample obtained without sodium citrate during the reaction, indicating that more exposed {221} facets are present in the SnO2 nanocrystals of the nanocomposite. This suggests that environmentally friendly sodium citrate can be utilized as a capping agent to obtain SnO2 nanocrystals with exposed {221} facets. Actually, the surface energies of the SnO2 facets follow the trend {110} < {100} < {101} < {001}51. This means that the {001} facet, with the largest surface energy, possesses the highest reactivity among these crystal facets, thus promoting growth in the [001] direction. In this case, SnO2 nanocrystals tend to form surfaces enclosed by the thermodynamically most stable {110} facet in order to maintain the lowest surface energy. However, the surface energy can be strongly affected by the adsorption of foreign ions on the crystal facets. Here, DFT calculations show that citrate prefers to adsorb on the {221} facets, which have the most negative adsorption energy (Fig. 2b) compared with the {110} and {001} facets, thus hindering crystal growth along the [221] direction and resulting in exposed {221} facets. In the parallel experiment without sodium citrate, anisotropic growth along the [221] direction is no longer suppressed, leading to the formation of irregular SnO2 polycrystals that still have a sub-10 nm size (Supplementary Fig. S8). For another control sample, prepared without GO, the obtained SnO2 consists of bulk crystals with sizes up to several micrometers (Supplementary Fig. S5). Moreover, this bulk SnO2 has a lower intensity ratio of the (221) peak to the (110) peak (Supplementary Fig. S9). Therefore, both GO and sodium citrate are indispensable for obtaining SnO2 nanocrystals with exposed {221} facets. More specifically, the role of GO is to restrict further growth of the SnO2 nanocrystals, while sodium citrate acts as a capping agent that exposes the {221} facets of the sub-10 nm nanocrystals. Furthermore, to confirm the (physical or chemical) interaction between the SnO2 nanocrystals and graphene, the obtained SnO2/graphene nanocomposite was examined by XPS and FTIR spectroscopy. The overall XPS spectra reveal the presence of C1s, O1s, and Sn3d peaks (Supplementary Fig. S10a). The C1s peaks located at 284.5, 286.6, and 288.4 eV can be assigned to graphitic carbon, residual C–OH, and O–C=O species, respectively (Fig. 2c). In the Sn3d spectrum, as shown in Supplementary Fig. S10b, the two symmetrical peaks observed at 486.6 and 495.5 eV are attributable to Sn 3d5/2 and Sn 3d3/2, respectively, indicating that the Sn atoms exist as SnO2 (ref. 50). The observation of O1s peaks at 533.5 and 531.8 eV (Supplementary Fig. S10c) confirms the presence of carbonyl or hydroxyl groups and O2− species in the SnO2/graphene nanocomposite. A small amount of oxygenated groups is beneficial for achieving a high dispersion of SnO2 nanocrystals on the graphene sheets through electrostatic interactions.
In addition, strong Sn–O–Sn antisymmetric vibration peaks are observed at 604 and 563 cm−1 in the FTIR spectrum and are assigned to the Eu mode of SnO2 (Supplementary Fig. S10d). The absence of Sn–C, Sn–OOC, and Sn–O–C bonds in SnO2/graphene indicates that the incorporation of the SnO2 nanocrystals on the graphene nanosheets is mainly based on physical adsorption. Meanwhile, the spectrum shows an absorption band at 1575 cm−1 corresponding to C=C stretching, indicating the restoration of the graphene network upon reduction. As indicated by the nitrogen adsorption and desorption isotherms (Supplementary Fig. S11), the Brunauer-Emmett-Teller specific surface area of the SnO2/graphene nanocomposite is 215 m2 g−1, which is much higher than those of the graphene nanosheets (87 m2 g−1) and the SnO2 nanoparticles (52 m2 g−1). The higher specific surface area of SnO2/graphene compared with that of the graphene nanosheets proves again that the SnO2 nanocrystals enable independent dispersion of the graphene nanosheets without evident agglomeration. The thermogravimetric curve indicates that the weight fractions of graphene and SnO2 nanocrystals in the composite are approximately 19 wt.% and 81 wt.%, respectively (Fig. 2d). The morphology of the CNTs, which serve as the positive electrode material, was characterized by SEM and HRTEM. The SEM images show that the length of the CNTs is in the range of several micrometers (Supplementary Fig. S12a–c). The absence of detectable O, combined with the clear interlayer distance of approximately 0.34 nm (Supplementary Fig. S12d–f), suggests a low degree of oxidation and a high degree of graphitization of the CNTs.

Fig. 2: The role of citrate as a facet-controlling agent for the growth of SnO2 nanocrystals. a XRD patterns of SnO2 samples obtained with and without sodium citrate. b The calculated adsorption energies of citrate on different SnO2 facets. c C1s XPS spectrum and d thermogravimetric curve of the SnO2/graphene nanocomposite.

Before assembling the Na ion capacitor, the electrochemical behaviors of the CNTs (positive electrode) and the SnO2 nanocrystals with exposed {221} facets anchored on graphene (negative electrode) were first investigated in half-cells using Na metal as both the counter and reference electrode in the N-GPE. The galvanostatic charge–discharge (GCD) profiles of the CNTs exhibit a typical symmetric triangular shape and almost straight lines at different current densities (Supplementary Fig. S13a), illustrating the characteristics of an electrochemical double-layer capacitor. At various sweep rates, the cyclic voltammetric (CV) curves of the CNTs show relatively flat rectangular shapes between 3.0 and 4.0 V (vs. Na+/Na) with no Faradaic reactions (Supplementary Fig. S13b). The specific capacitance of the CNTs (Supplementary Fig. S14) at 0.5 A g−1 is 173 F g−1, and it remains at 162, 144, 115, and 72 F g−1 when the current density is increased to 0.6, 0.8, 1.2, and 1.8 A g−1, respectively. The GCD curves of the SnO2/graphene nanocomposite present a sloping voltage profile without a clear voltage plateau, even at a low rate of 0.2 C (Fig. 3a). At a current density of 0.5 A g−1, the specific capacitance of the SnO2/graphene nanocomposite is as high as 276 F g−1, and it is maintained between 253 and 174 F g−1 as the charge–discharge current density increases from 0.6 to 1.2 A g−1 (Fig. 3b). Even at a high current density of 1.8 A g−1, a specific capacitance of 105 F g−1 can still be maintained in the N-GPE.
For comparison, the specific capacitances of the SnO2 nanoparticles without graphene are obviously lower than those of SnO2/graphene, lying in the range of 150–75 F g−1 at current densities from 0.6 to 1.2 A g−1 (Supplementary Fig. S15). Furthermore, the cycling behavior of the SnO2/graphene electrode remains stable for 6000 cycles (Supplementary Fig. S16). To the best of our knowledge, this is one of the longest cycling lives observed to date among all reported metal oxide-based negative electrode materials for Na+ storage24,25,26,27,28,29,30,31,32,33.

Fig. 3: Electrochemical performance of the SnO2 nanocrystals with exposed {221} facets anchored on graphene (negative electrode) in half-cells. a GCD curves. b The corresponding specific capacitances at various current densities. c CV curves and d log(anodic current) vs. log(sweep rate). e Separation of the capacitive (green) and diffusion-controlled currents in SnO2/graphene at a sweep rate of 10 mV s−1. f Normalized contribution ratios of the capacitive (green) and diffusion-controlled (gray) capacitances at different sweep rates.

Kinetic analysis based on CV tests from 1 to 200 mV s−1 was carried out to determine the pseudocapacitive contribution to the total capacitance of the SnO2/graphene nanocomposite. Unlike the reported SnO2 nanoparticles and nanosheets with distinct redox peaks52,53,54,55, the CV curves of the SnO2/graphene nanocomposite at various sweep rates exhibit a broad anodic peak ranging from 0.1 to 3 V (vs. Na+/Na) without visible cathodic peaks (Fig. 3c and Supplementary Fig. S17). In the voltage region from 0.9 to 0.1 V during the cathodic scan, Na+ ions react with SnO2 through conversion reactions (producing Sn and Na2O) and alloying reactions (producing NaxSn alloy)55. In the anodic scan, a dealloying reaction occurs, and the SnO2 nanocrystals can revert to the original phase. These reversible reactions were previously revealed by TEM studies of SnO2 nanocrystals52,53. Before the conversion reaction, Na+ adsorption within the SnO2 nanocrystals takes place; the migration path of the Na ions was therefore optimized, and the calculated diffusion barrier (Ea) is only 0.19 eV (Supplementary Fig. S18). The change in current with sweep rate can generally be expressed as i = av^b, where i is the current (A), v is the sweep rate (mV s−1) of the cyclic voltammetric experiment, and a and b are constants. A b value of 0.5 generally indicates a diffusion-controlled process, while a b value of 1.0 suggests that the reaction is surface-limited34. Here, the b value is approximately 0.8 even at the peak voltage (Fig. 3d), indicating that the kinetics reflect an interplay between surface- and diffusion-controlled reactions but are closer to surface-controlled behavior. Additionally, the total current (i), resulting from capacitive effects (k1v) and diffusion-controlled Faradaic processes (k2v^{1/2}), can be represented as:

$$i = k_1 v + k_2 v^{1/2}$$

$$i/v^{1/2} = k_1 v^{1/2} + k_2$$

After plotting i/v^{1/2} versus v^{1/2}, one can determine the values of k1 and k2, and hence the capacitive contribution to the current at a given voltage. Based on this quantification, 86% of the total charge (and therefore of the capacitance) is capacitive at a scan rate of 10 mV s−1 (Fig. 3e). The diffusion-controlled charge is mainly generated near the peak voltage. The contribution ratios of pseudocapacitive storage and diffusion-controlled storage at other scan rates are illustrated in Fig. 3f.
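Both steps of this kinetic analysis reduce to linear least-squares fits: b is the slope of log i versus log v, and k1 and k2 fall out of fitting i/v^{1/2} against v^{1/2}. A short numpy sketch on synthetic CV currents (not the measured data) to make the procedure concrete:

```python
import numpy as np

v = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])  # sweep rates, mV/s
# Synthetic peak currents with a capacitive part (k1*v) and a diffusive part (k2*sqrt(v))
k1_true, k2_true = 0.8, 1.5
i = k1_true * v + k2_true * np.sqrt(v)

# b-value: slope of log(i) vs log(v); b ~ 1.0 is surface-controlled, b ~ 0.5 diffusion-controlled
b, _ = np.polyfit(np.log10(v), np.log10(i), 1)
print(f"b = {b:.2f}")

# Capacitive/diffusion separation: i/v^(1/2) = k1*v^(1/2) + k2 is linear in v^(1/2)
k1, k2 = np.polyfit(np.sqrt(v), i / np.sqrt(v), 1)
capacitive_fraction = (k1 * v) / i      # share of the total current that is capacitive
print(f"k1 = {k1:.2f}, k2 = {k2:.2f}")
print(f"capacitive share at 10 mV/s: {capacitive_fraction[3]:.0%}")
```

In practice, the same fit is repeated at each voltage along the CV curve; the per-voltage k1·v term is what produces the shaded capacitive region in Fig. 3e.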
With increasing scan rate, the capacitance from the diffusion-controlled reactions decreases faster than that from the capacitive effects, leading to an increase in the capacitive contribution to the total stored charge. The capacitive contribution finally reaches a maximum value (92%) at the highest sweep rate of 100 mV s−1. The pseudocapacitive contribution mainly originates from the unique hybrid structure of SnO2 nanocrystals with exposed high-energy facets anchored on graphene. First, the SnO2 nanocrystals have ultrasmall, sub-10 nm particle sizes. Indeed, traditional battery materials have been reported to exhibit capacitor-like properties at small dimensions (<10 nm)56. Second, the exposed highly reactive {221} facets of the SnO2 nanocrystals may enhance the pseudocapacitive Na+ storage behavior. First-principles calculations were performed to gain further insight into the sodiation dynamics of SnO2 with different exposed facets. The optimized configurations of solvated Na+ ions in diethylene glycol dimethyl ether (DEGDME) electrolyte are presented in Fig. 4a. In the most stable configuration, each Na+ ion is coordinated by two DEGDME molecules in its primary solvation sheath, forming the complex cation Na+(DEGDME)2. The interactions of three typical SnO2 facets, {110} (Fig. 4b), {001} (Fig. 4c), and {221} (Fig. 4d), with the solvated Na+ ions were then considered. The {221} facet has the most negative surface adsorption energy (−10.3 eV) compared with the {110} (−6.5 eV) and {001} (−8.4 eV) facets. This means that the {221} facet enables the fastest and most stable adsorption of Na+. During the conversion reaction with Na+, part of the SnO2 in the nanocrystals is converted to Sn and Na2O, and the nearby Na+ (adsorbed on the {221} facets of unreacted SnO2 nanocrystals) can then easily react with the Sn to form the NaxSn alloy, which greatly promotes the kinetics of the two-step conversion/alloying reaction. The better overlap of the electron orbitals of Sn, Na, and O at the Fermi level for the {221} facets compared with the {110} and {001} facets, as shown in Supplementary Fig. S19, clearly demonstrates the interactions among the Sn, Na, and O atoms57. Third, the ultradispersed SnO2 nanocrystals anchored on graphene provide abundant interfacial sites for Na+ adsorption. Additional interfacial Na+ storage usually occurs at the sub-10 nm SnO2 nanocrystal–electrolyte, graphene–electrolyte, and SnO2 nanocrystal–graphene interfaces49,58. Fourth, graphene contributes a non-Faradaic component through the electrical double-layer effect. The pure graphene electrode delivers a reversible capacitance of 45 F g−1 at 0.5 A g−1 (Supplementary Fig. S20), which is approximately 16% of the specific capacitance of the nanocomposite. Additionally, it should be noted that there are some essential electrochemical differences in the behavior of the SnO2 electrode between Li+ storage and Na+ storage, mainly involving different ion diffusivities, concentration-dependent moduli, and strengths during the lithiation and sodiation processes55. For example, the volume expansion for sodiation (+9.6 Å3 per Na per 8 SnO2) is much larger than that for lithiation (+6.7 Å3 per Li per 8 SnO2)55. This essential electrochemical difference highlights another important role of graphene: eliminating the fragmentation and detachment of the SnO2 electrode caused by volume expansion and contraction during Na+ storage.
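The facet-preference argument is a comparison of adsorption energies, E_ads = E(facet + Na+) − E(facet) − E(Na+), where the facet with the most negative value binds the solvated ion most strongly. A minimal sketch of that bookkeeping, using only the three adsorption energies quoted in the text (the underlying DFT total energies are not reproduced here):

```python
# Surface adsorption energies of solvated Na+ on SnO2 facets (eV), as quoted in the text
e_ads = {"{110}": -6.5, "{001}": -8.4, "{221}": -10.3}

# More negative adsorption energy = stronger, more stable binding of the solvated ion
preferred = min(e_ads, key=e_ads.get)
for facet, e in sorted(e_ads.items(), key=lambda kv: kv[1]):
    print(f"{facet}: E_ads = {e:.1f} eV")
print(f"-> fastest and most stable Na+ adsorption on the {preferred} facet")
```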
Fig. 4: First-principles calculations for the interaction of different exposed facets of SnO2 with solvated Na+ ions. a The optimized configuration of solvated Na+ ions. b–d The three typical facets considered: {110}, {001}, and {221}, respectively.

To assemble the quasi-solid-state Na ion capacitor (Fig. 5a), the gel polymer electrolyte (N-GPE) was utilized as both the separator and the electrolyte between the positive electrode (CNTs) and the negative electrode (SnO2/graphene). Microsized pores ranging from 2 to 5 μm are uniformly distributed in the polymer matrix of the N-GPE (Supplementary Fig. S21). Based on the equation q = C+ × ΔE+ × m+ = C− × ΔE− × m− (where C is the specific capacitance in the half-cell test, ΔE is the voltage range, and m is the mass of the active material), the optimal mass ratio of the two electrodes (m+/m−) should be in the range of 3.6–3.8 (Fig. 5b). The voltage of the quasi-solid-state Na ion capacitor reaches 3.8 V, and its charge–discharge process is complete within 70 s at a high current density of 1.8 A g−1 (Fig. 5c). As the current density increases from 0.5 to 1.2 A g−1, the specific capacitance decreases from 41 to 27 F g−1, based on the total weight of the negative and positive electrode materials (Fig. 5d). In addition, the quasi-solid-state Na ion capacitor shows a maximum energy density of 86 Wh kg−1 (at a power density of 955 W kg−1) and a maximum power density of 4.1 kW kg−1 (Fig. 5e). The specific capacitance retains 100% of its initial value after 900 charge–discharge cycles at a current density of 0.5 A g−1 (Fig. 5f and Supplementary Fig. S22); in fact, the specific capacitance increases over the 900 cycles. Similarly, the measured capacitance also increases for up to 2000 cycles in the half-cell (Supplementary Fig. S16a). To reveal the underlying reasons, post-cycling analysis of the SnO2/graphene electrodes was carried out, and the results are shown in Supplementary Fig. S23. One identifiable reason is associated with the formation of a solid electrolyte interface (SEI) film on the surface of the SnO2/graphene electrode and the "self-activation" of this SEI film during cycling. On the one hand, the irreversible decomposition of the electrolyte (producing NaF, Na2CO3, etc.) cannot be prevented in this voltage range (Supplementary Fig. S23a–c), and the initial SEI film exhibits a relatively large charge transfer resistance (Supplementary Fig. S23d). On the other hand, the formation of an SEI film on the anode surface is necessary for maintaining its stability. The stable SEI film formed after the first few hundred cycles could suppress further decomposition of the electrolyte while still enabling the transport of Na+ ions59,60. Compared with the initial SEI film formed during the first cycles, the stable SEI films formed after 300 and 500 cycles have lower charge transfer resistances (Supplementary Fig. S23e–h). Such "self-activation" of the SEI film formed on the SnO2/graphene electrode could be the origin of the observed continuous increase in capacitance during the initial cycles in both the half-cell (Supplementary Fig. S16a) and the full Na ion capacitor (Fig. 5f). In addition, the gradual electrolyte wetting of the SnO2/graphene electrode and the CNT electrode may also lead to an increase in the specific capacitance61. After being charged, the quasi-solid-state Na ion capacitor can readily power a commercial white light-emitting diode (Supplementary Fig. S24).
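The electrode balancing and the headline energy figure both follow from the formulas in this section: charge balance gives m+/m− = (C−·ΔE−)/(C+·ΔE+), and the cell energy is E = ½·C_cell·ΔV², with a factor of 3.6 converting J/g to Wh/kg. A sketch using the half-cell numbers reported above; the negative-electrode window ΔE− below is an assumed illustrative value, since the text does not state it explicitly:

```python
def mass_ratio(c_pos, de_pos, c_neg, de_neg):
    # Charge balance C+ * dE+ * m+ = C- * dE- * m-  =>  m+/m- = (C- * dE-) / (C+ * dE+)
    return (c_neg * de_neg) / (c_pos * de_pos)

def energy_density_wh_kg(c_cell_f_g, dv_v):
    # E = 0.5 * C * dV^2 in J/g; divide by 3.6 to convert J/g to Wh/kg
    return 0.5 * c_cell_f_g * dv_v**2 / 3.6

# CNTs: 173 F/g over 3.0-4.0 V (dE+ = 1.0 V); SnO2/graphene: 276 F/g.
# dE- = 2.3 V is an assumption chosen only to illustrate the calculation.
print(f"m+/m- ~ {mass_ratio(173, 1.0, 276, 2.3):.1f}")   # ~3.7, inside the 3.6-3.8 range
# Device-level check: 41 F/g over 3.8 V -> ~82 Wh/kg, near the reported maximum of 86 Wh/kg
print(f"E ~ {energy_density_wh_kg(41, 3.8):.0f} Wh/kg")
```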
Fig. 5: Electrochemical performance of the quasi-solid-state Na ion capacitor. a Schematic illustration. b Adjustment of the mass ratios of the positive and negative electrodes. c GCD curves and d the corresponding specific capacitances at various current densities. e Ragone plots. f Cycling behavior at a current density of 0.5 A g−1.

Generally, the quasi-solid-state Na ion capacitor described here represents significant progress in at least the following three respects. First, unlike reported oxides such as Nb2O5 and RuO2 with intrinsic pseudocapacitance18,62, the reported SnO2/graphene presents extrinsic pseudocapacitance when SnO2 is made into ultrafine nanocrystals (sub-10 nm) with exposed {221} facets, which maximizes the reaction process on the surface. During the past several years, through the design of electrode materials with ordered mesopores or oxygen vacancies, researchers have discovered a pseudocapacitive contribution in some electrode materials for Li ion batteries34,35,36,37,38,39,40. Here, it is demonstrated for the first time that nanocrystals (sub-10 nm) with exposed high-energy facets grown on graphene can induce pseudocapacitive Na+ storage behavior. Thus, this work will enrich the methods for developing electrode materials with pseudocapacitive Na+ storage behavior. Second, most previously reported Na ion capacitors are based on organic liquid electrolytes24,25,26,27,28,29,30, which easily result in leakage of the electrolyte and serious safety issues. Electrochemical capacitors using aqueous or ionic liquid electrolytes are well known to be much more stable than Na ion batteries and not as susceptible to thermal runaway. However, this is not true for metal (Li or Na) ion capacitors because of the flammability of their organic electrolytes31. The matrix of our N-GPE shows good flame-retarding ability: owing to the presence of nonflammable functional groups (–C–F), the matrix does not catch fire when placed in a flame, although it shrinks (Supplementary Fig. S25)63. Moreover, replacing the liquid electrolyte with a gel polymer electrolyte does not lead to much of a decrease in the electrochemical performance of the Na ion capacitor and may improve the safety of electronic devices. Third, some high-index-faceted SnO2 samples for gas-sensing and catalytic CO oxidation applications are currently available51,64,65. However, these SnO2 samples with exposed high-energy facets have very large particle sizes (>100 nm)51,64,65, which severely impedes their potential application in electrochemical capacitors. The work described here is the first report of ultrasmall (sub-10 nm) SnO2 nanocrystals enclosed by {221} high-index facets. In addition, the synthetic process for achieving ultradispersed sub-10 nm SnO2 nanocrystals anchored on graphene is simple and can easily be scaled up. Such a highly efficient and low-cost preparation technology, together with the nonflammable electrolyte, provides great potential for the scale-up of quasi-solid-state Na ion capacitors for practical energy storage devices in the near future.

In summary, SnO2 nanocrystals with a sub-10 nm size and exposed {221} facets were highly dispersed on graphene through a facile one-step reaction, and the resulting composite was further utilized as a negative electrode in a quasi-solid-state Na ion capacitor.
We demonstrate that designing sub-10 nm SnO2 nanocrystals with exposed high-energy facets anchored on graphene induces a surface-dominated redox reaction and a pseudocapacitive contribution to Na+ storage. This pseudocapacitive ion storage behavior is highly beneficial for fast charge storage and excellent cyclability in a quasi-solid-state Na ion capacitor. In addition to SnO2, the reported strategy may also work for other metal oxides for pseudocapacitive sodium-ion storage.

References

Chodankar, N., Dubal, D., Kwon, Y. & Kim, D. Direct growth of FeCo2O4 nanowire arrays on flexible stainless steel mesh for high-performance asymmetric supercapacitor. NPG Asia Mater. 9, e419 (2017).
Aravindan, V., Gnanaraj, J., Lee, Y. S. & Madhavi, S. Insertion-type electrodes for nonaqueous Li-ion capacitors. Chem. Rev. 114, 11619–11635 (2014).
Zhang, F. et al. A high-performance supercapacitor-battery hybrid energy storage device based on graphene-enhanced electrode materials with ultrahigh energy density. Energy Environ. Sci. 6, 1623–1632 (2013).
Chen, Z. et al. 3D nanocomposite architectures from carbon-nanotube-threaded nanocrystals for high-performance electrochemical energy storage. Adv. Mater. 26, 339–345 (2014).
Stoller, M. et al. Activated graphene as a cathode material for Li-ion hybrid supercapacitors. Phys. Chem. Chem. Phys. 14, 3388–3391 (2012).
Zhang, T. et al. High energy density Li-ion capacitor assembled with all graphene-based electrodes. Carbon N. Y. 92, 106–118 (2015).
Wang, F. et al. A quasi-solid-state Li-ion capacitor with high energy density based on Li3VO4/carbon nanofibers and electrochemically-exfoliated graphene sheets. J. Mater. Chem. A 5, 14922–14929 (2017).
Ma, Y., Chang, H., Zhang, M. & Chen, Y. Graphene-based materials for lithium-ion hybrid supercapacitors. Adv. Mater. 27, 5296–5308 (2015).
Ye, L. et al. A high performance Li-ion capacitor constructed with Li4Ti5O12/C hybrid and porous graphene macroform. J. Power Sources 282, 174–178 (2015).
Wang, Q., Wen, Z. H. & Li, J. H. A hybrid supercapacitor fabricated with a carbon nanotube cathode and a TiO2-B nanowire anode. Adv. Funct. Mater. 16, 2141–2146 (2006).
Puthusseri, D. et al. From waste paper basket to solid state and Li-HEC ultracapacitor electrodes: a value added journey for shredded office paper. Small 10, 4395–4402 (2014).
Wang, F. et al. A quasi-solid-state Li ion capacitor based on porous TiO2 hollow microspheres wrapped with graphene nanosheets. Small 12, 6207–6213 (2016).
Leng, K. et al. Graphene-based Li-ion hybrid supercapacitors with ultrahigh performance. Nano Res. 6, 581–592 (2013).
Wang, H., Guan, C., Wang, X. & Fan, H. A high energy and power Li-ion capacitor based on a TiO2 nanobelt array anode and a graphene hydrogel cathode. Small 11, 1470–1477 (2015).
Gokhale, R. et al. Oligomer-salt derived 3D, heavily nitrogen doped, porous carbon for Li-ion hybrid electrochemical capacitors application. Carbon N. Y. 80, 462–471 (2014).
Yu, X. et al. Ultrahigh-rate and high-density lithium-ion capacitors through hybriding nitrogen-enriched hierarchical porous carbon cathode with prelithiated microcrystalline graphite anode. Nano Energy 15, 43–53 (2015).
Wang, F. et al. Nanoporous LiMn2O4 spinel prepared at low temperature as cathode material for aqueous supercapacitors. J. Power Sources 242, 560–565 (2013).
Augustyn, V., Simon, P. & Dunn, B. Pseudocapacitive oxide materials for high-rate electrochemical energy storage. Energy Environ. Sci. 7, 1597–1614 (2014).
Dsoke, S., Fuchs, B., Gucciardi, E. & Wohlfahrt-Mehrens, M. The importance of the electrode mass ratio in a Li-ion capacitor based on activated carbon and Li4Ti5O12. J. Power Sources 282, 385–393 (2015).
Banerjee, A. et al. MOF-derived crumpled-sheet-assembled perforated carbon cuboids as highly effective cathode active materials for ultra-high energy density Li-ion hybrid electrochemical capacitors (Li-HECs). Nanoscale 6, 4387–4394 (2014).
Qu, W. et al. Combination of a SnO2-C hybrid anode and a tubular mesoporous carbon cathode in a high energy density non-aqueous lithium ion capacitor: preparation and characterisation. J. Mater. Chem. A 2, 6549–6557 (2014).
Wang, X. & Shen, G. Intercalation pseudo-capacitive TiNb2O7@carbon electrode for high-performance lithium ion hybrid electrochemical supercapacitors with ultrahigh energy density. Nano Energy 15, 104–115 (2015).
Jain, A. et al. Activated carbons derived from coconut shells as high energy density cathode material for Li-ion capacitors. Sci. Rep. 3, 3002 (2013).
Aravindan, V., Ulaganathana, M. & Madhavi, S. Research progress in Na-ion capacitors. J. Mater. Chem. A 4, 7538–7548 (2016).
Chen, Z. et al. High-performance sodium-ion pseudocapacitors based on hierarchically porous nanowire composites. ACS Nano 6, 4319–4327 (2012).
Ding, J. et al. Peanut shell hybrid sodium ion capacitor with extreme energy-power rivals lithium ion capacitors. Energy Environ. Sci. 8, 941–955 (2015).
Wang, X. et al. Pseudocapacitance of MXene nanosheets for high-power sodium-ion hybrid capacitors. Nat. Commun. 6, 6544–6549 (2015).
Dall'Agnese, Y., Taberna, P. L., Gogotsi, Y. & Simon, P. Two-dimensional vanadium carbide (MXene) as positive electrode for sodium-ion capacitors. J. Phys. Chem. Lett. 6, 2305–2309 (2015).
Le, Z. et al. Pseudocapacitive sodium storage in mesoporous single-crystal-like TiO2-graphene nanocomposite enables high-performance sodium-ion capacitors. ACS Nano 11, 2952–2960 (2017).
Thangavel, R., Moorthy, B., Kim, D. & Lee, Y. Pushing the energy output and cyclability of sodium hybrid capacitors at high power to new limits. Adv. Energy Mater. 7, 1602654 (2017).
Wang, F. et al. A quasi-solid-state sodium-ion capacitor with high energy density. Adv. Mater. 27, 6962–6968 (2015).
Li, H., Peng, L., Zhu, Y., Zhang, X. & Yu, G. Achieving high-energy high-power density in a flexible quasi-solid-state sodium ion capacitor. Nano Lett. 16, 5938–5943 (2016).
Dong, S. et al. Self-supported electrodes of Na2Ti3O7 nanoribbon array/graphene foam and graphene foam for quasi-solid-state Na-ion capacitors. J. Mater. Chem. A 5, 5806–5812 (2017).
Brezesinski, T., Wang, J., Tolbert, S. & Dunn, B. Ordered mesoporous α-MoO3 with iso-oriented nanocrystalline walls for thin-film pseudocapacitors. Nat. Mater. 9, 146–151 (2010).
Brezesinski, K. et al. Pseudocapacitive contributions to charge storage in highly ordered mesoporous group V transition metal oxides with iso-oriented layered nanocrystalline domains. J. Am. Chem. Soc. 132, 6982–6990 (2010).
Sathiya, M., Prakash, A., Ramesha, K., Tarascon, J. & Shukla, A. V2O5-anchored carbon nanotubes for enhanced electrochemical energy storage. J. Am. Chem. Soc. 133, 16291–16299 (2011).
Augustyn, V. et al. Lithium-ion storage properties of titanium oxide nanosheets. Mater. Horiz. 1, 219–223 (2014).
Lesel, B., Ko, J., Dunn, B. & Tolbert, S. Mesoporous LixMn2O4 thin film cathodes for lithium-ion pseudocapacitors. ACS Nano 10, 7572–7581 (2016).
Kim, H. et al. Oxygen vacancies enhance pseudocapacitive charge storage properties of MoO3-x. Nat. Mater. 16, 454–460 (2017).
Huang, S. et al. Tunable pseudocapacitance in 3D TiO2−δ nanomembranes enabling superior lithium storage performance. ACS Nano 11, 821–830 (2016).
Wang, F. et al. Nanoscale Horiz. 1, 272–289 (2016).
Chen, C. et al. Na+ intercalation pseudocapacitance in graphene-coupled titanium oxide enabling ultra-fast sodium storage and long-term cycling. Nat. Commun. 6, 6929–6936 (2015).
Chao, D. et al. Pseudocapacitive Na-ion storage boosts high rate and areal capacity of self-branched 2D layered metal chalcogenide nanoarrays. ACS Nano 10, 10211–10219 (2016).
Chao, D. et al. Array of nanosheets render ultrafast and high-capacity Na-ion storage by tunable pseudocapacitance. Nat. Commun. 7, 12122–12129 (2016).
Zhang, P. et al. One-step synthesis of large-scale graphene film doped with gold nanoparticles at liquid-air interface for electrochemistry and Raman detection applications. Langmuir 30, 8980–8989 (2014).
Delley, B. An all-electron numerical method for solving the local density functional for polyatomic molecules. J. Chem. Phys. 92, 508–517 (1990).
Blöchl, P. E. Projector augmented-wave method. Phys. Rev. B 50, 17953–17979 (1994).
Halgren, T. & Lipscomb, W. The synchronous-transit method for determining reaction pathways and locating molecular transition states. Chem. Phys. Lett. 49, 225–232 (1977).
Sun, Y. et al. Direct atomic-scale confirmation of three-phase storage mechanism in Li4Ti5O12 anodes for room-temperature sodium-ion batteries. Nat. Commun. 4, 1870–1880 (2013).
Li, Z. et al. High rate SnO2-graphene dual aerogel anodes and their kinetics of lithiation and sodiation. Nano Energy 15, 369–378 (2015).
Han, X. et al. Synthesis of tin dioxide octahedral nanoparticles with exposed high-energy {221} facets and enhanced gas-sensing properties. Angew. Chem. 121, 9344–9347 (2009).
Su, D., Ahn, H. & Wang, G. SnO2@graphene nanocomposites as anode materials for Na-ion batteries with superior electrochemical performance. Chem. Commun. 49, 3131–3133 (2013).
Su, D., Wang, C., Ahn, H. & Wang, G. Octahedral tin dioxide nanocrystals as high capacity anode materials for Na-ion batteries. Phys. Chem. Chem. Phys. 15, 12543–12550 (2013).
Ding, J. et al. Sodiation vs. lithiation phase transformations in a high rate-high stability SnO2 in carbon nanocomposite. J. Mater. Chem. A 3, 7100–7111 (2015).
Gu, M. et al. Probing the failure mechanism of SnO2 nanowires for sodium-ion batteries. Nano Lett. 13, 5203–5211 (2013).
Simon, P., Gogotsi, Y. & Dunn, B. Where do batteries end and supercapacitors begin? Science 343, 1210–1211 (2014).
Su, D., Dou, S. & Wang, G. Gold nanocrystals with variable index facets as highly effective cathode catalysts for lithium-oxygen batteries. NPG Asia Mater. 7, e155 (2015).
Chen, C. et al. Integrated intercalation-based and interfacial sodium storage in graphene-wrapped porous Li4Ti5O12 nanofibers composite aerogel. Adv. Energy Mater. 6, 1600322 (2016).
Yu, F. et al. The pseudocapacitance contribution in boron-doped graphite sheets for anion storage enables high-performance sodium-ion capacitors. Mater. Horiz. 3, https://doi.org/10.1039/c8mh00156a (2018).
Wang, F. et al. Dual-graphene rechargeable sodium battery. Small 13, 1702449 (2017).
Wang, J. et al. Graphene microsheets from natural microcrystalline graphite minerals: scalable synthesis and unusual energy storage. J. Mater. Chem. A 3, 3144–3150 (2015).
Augustyn, V. et al. High-rate electrochemical energy storage through Li+ intercalation pseudocapacitance. Nat. Mater. 12, 518–522 (2013).
Zhu, Y. et al. Composite of a nonwoven fabric with poly(vinylidene fluoride) as a gel membrane of high safety for lithium ion battery. Energy Environ. Sci. 6, 618–624 (2013).
Vayssieres, L. & Graetzel, M. Highly ordered SnO2 nanorod arrays from controlled aqueous growth. Angew. Chem. Int. Ed. 43, 3666–3670 (2004).
Sun, Y. et al. Atomically thin tin dioxide sheets for efficient catalytic oxidation of carbon monoxide. Angew. Chem. Int. Ed. 52, 10569–10572 (2013).

Acknowledgments

We gratefully acknowledge financial support from the National Materials Genome Project (2016YFB0700600), the National Natural Science Foundation Committee of China (Distinguished Youth Scientists Projects 51425301, U1601214, 51502137, and 51573013), and the Jiangsu Distinguished Professorship Program (2016). G. Wei thanks the Deutsche Forschungsgemeinschaft (DFG) for financial support under grant WE 5837/1-1. The computational resources generously provided by the High Performance Computing Center of Nanjing Tech University are greatly appreciated.

These authors contributed equally: Panpan Zhang, Xinne Zhao, Zaichun Liu.

Affiliations: State Key Laboratory of Materials-Oriented Chemical Engineering, Nanjing Tech University, Nanjing, 210009, Jiangsu Province, China (Panpan Zhang, Zaichun Liu, Faxing Wang & Yuping Wu); State Key Laboratory of Chemical Resource Engineering, Beijing University of Chemical Technology, Beijing, 100029, China (Panpan Zhang, Xinne Zhao, Yang Li, Jinhui Wang & Zhiqiang Su); Institute for Advanced Materials, School of Energy Science and Engineering, Nanjing Tech University, Nanjing, 211816, Jiangsu Province, China (Zaichun Liu, Faxing Wang, Yusong Zhu, Lijun Fu, Yuping Wu & Wei Huang); College of Mechanical and Electrical Engineering, Beijing University of Chemical Technology, Beijing, 100029, China (Ying Huang); State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, 130022, Jilin, China (Hongyan Li); Faculty of Production Engineering, University of Bremen, 28359, Bremen, Germany (Gang Wei).

Correspondence to Faxing Wang, Zhiqiang Su or Yuping Wu.
https://doi.org/10.1038/s41427-018-0049-y

Revised: 26 March 2018. Issue date: May 2018.

Editorial summary. Capacitors: tin dioxide nanosurfaces store sodium ions. Graphene studded with tin dioxide nanocrystals provides a novel electrode material for sodium-ion capacitors with fast storage and excellent cyclability (the number of times they can be effectively recharged). Lithium-ion batteries are the dominant choice in modern portable electronics. However, sodium-ion technology offers the advantage of sodium's abundance and low cost. Much research is required before the electrode materials suitable for sodium-ion batteries and capacitors match the performance of those used in lithium-ion devices. Yuping Wu from Nanjing Tech University, Panpan Zhang from Beijing University of Chemical Technology, China, and colleagues in China and Germany anchored tin dioxide nanocrystals to a sheet of graphene and demonstrated its application as a negative electrode in a sodium-ion capacitor. The team propose that the exposed surfaces of the highly reactive tin dioxide crystals give the electrode its sodium-ion storage behavior.
Curium

Curium is a transuranic radioactive chemical element with the symbol Cm and atomic number 96. This element of the actinide series was named after Marie and Pierre Curie, both known for their research on radioactivity. Curium was first intentionally produced and identified in July 1944 by the group of Glenn T. Seaborg at the University of California, Berkeley. The discovery was kept secret and only released to the public in November 1947. Most curium is produced by bombarding uranium or plutonium with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains about 20 grams of curium.

Curium, 96Cm
Pronunciation: /ˈkjʊəriəm/ (KEWR-ee-əm)
Appearance: silvery metallic, glows purple in the dark
Mass number: 247 (most stable isotope)
In the periodic table: americium ← curium → berkelium
Atomic number (Z): 96
Block: f-block
Element category: actinide
Electron configuration: [Rn] 5f7 6d1 7s2
Electrons per shell: 2, 8, 18, 32, 25, 9, 2
Melting point: 1613 K (1340 °C, 2444 °F)
Density (near r.t.): 13.51 g/cm3
Heat of fusion: 13.85 kJ/mol
Oxidation states: +2, +3, +4, +5,[1] +6[2] (an amphoteric oxide)
Electronegativity: 1.3 (Pauling scale)
Ionization energies: 1st: 581 kJ/mol
Atomic radius: empirical: 174 pm
Covalent radius: 169±3 pm
Natural occurrence: synthetic
Crystal structure: double hexagonal close-packed (dhcp)
Electrical resistivity: 1.25 µΩ·m[3]
Magnetic ordering: antiferromagnetic-paramagnetic transition at 52 K[3]
Named after: Marie Skłodowska-Curie and Pierre Curie
Discovery: Glenn T. Seaborg, Ralph A. James, Albert Ghiorso (1944)

Main isotopes of curium (all synthetic):
242Cm: 160 d; SF; α → 238Pu
243Cm: 29.1 y; α → 239Pu; ε → 243Am; SF
244Cm: 18.1 y; SF; α → 240Pu
245Cm: 8500 y; SF; α → 241Pu
246Cm: 4730 y; SF; α → 242Pu
247Cm: 1.56×10^7 y; α → 243Pu
250Cm: predominantly SF (see text); β− → 250Bk

Curium is a hard, dense, silvery metal with a relatively high melting point and boiling point for an actinide. Whereas it is paramagnetic at ambient conditions, it becomes antiferromagnetic upon cooling, and other magnetic transitions are also observed for many curium compounds. In compounds, curium usually exhibits valence +3 and sometimes +4, and the +3 valence is predominant in solutions. Curium readily oxidizes, and its oxides are a dominant form of this element.
It forms strongly fluorescent complexes with various organic compounds, but there is no evidence of its incorporation into bacteria and archaea. When introduced into the human body, curium accumulates in the bones, lungs, and liver, where it promotes cancer.

All known isotopes of curium are radioactive and have a small critical mass for a sustained nuclear chain reaction. They predominantly emit α-particles, and the heat released in this process can serve as a heat source in radioisotope thermoelectric generators, but this application is hindered by the scarcity and high cost of curium isotopes. Curium is used in the production of heavier actinides and of the 238Pu radionuclide for power sources in artificial pacemakers. It served as the α-source in the alpha particle X-ray spectrometers installed on several space probes, including the Sojourner, Spirit, Opportunity, and Curiosity Mars rovers and the Philae lander on comet 67P/Churyumov–Gerasimenko, to analyze the composition and structure of the surface.

History

The 60-inch (150 cm) cyclotron at the Lawrence Radiation Laboratory, University of California, Berkeley, in August 1939.

Although curium had likely been produced in previous nuclear experiments, it was first intentionally synthesized, isolated, and identified in 1944, at the University of California, Berkeley, by Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso. In their experiments, they used a 60-inch (150 cm) cyclotron.[5] Curium was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory) at the University of Chicago. It was the third transuranium element to be discovered even though it is the fourth in the series – the lighter element americium was unknown at the time.[6][7]

The sample was prepared as follows: first, a plutonium nitrate solution was coated on a platinum foil of about 0.5 cm2 area, the solution was evaporated, and the residue was converted into plutonium(IV) oxide (PuO2) by annealing. Following cyclotron irradiation of the oxide, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid, and further separation was carried out by ion exchange to yield a certain isotope of curium. The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from the Greek for "all demons" or "hell") and delirium (from the Latin for "madness").[8][9][10][11]

The curium-242 isotope was produced in July–August 1944 by bombarding 239Pu with α-particles, producing curium with the release of a neutron:

$$^{239}_{94}\mathrm{Pu} + {}^{4}_{2}\mathrm{He} \longrightarrow {}^{242}_{96}\mathrm{Cm} + {}^{1}_{0}\mathrm{n}$$

Curium-242 was unambiguously identified by the characteristic energy of the α-particles emitted during the decay:

$$^{242}_{96}\mathrm{Cm} \longrightarrow {}^{238}_{94}\mathrm{Pu} + {}^{4}_{2}\mathrm{He}$$

The half-life of this alpha decay was first measured as 150 days and then corrected to 162.8 days.[12]

Another isotope, 240Cm, was produced in a similar reaction in March 1945:

$$^{239}_{94}\mathrm{Pu} + {}^{4}_{2}\mathrm{He} \longrightarrow {}^{240}_{96}\mathrm{Cm} + 3\,{}^{1}_{0}\mathrm{n}$$

The half-life of the 240Cm α-decay was correctly determined as 26.7 days.[12]

The discovery of curium, as well as americium, in 1944 was closely related to the Manhattan Project, so the results were confidential and declassified only in 1945.
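The half-lives quoted above convert directly into decay fractions and specific activities via N(t) = N0·exp(−λt) with λ = ln 2 / t½ and A = λ·N_A/M. A quick Python check for 242Cm (t½ = 162.8 d; molar mass taken as ≈242 g/mol):

```python
import math

t_half_s = 162.8 * 86400            # half-life of 242Cm in seconds
avogadro = 6.02214076e23            # atoms per mole
molar_mass = 242.0                  # g/mol for 242Cm (approximate)

lam = math.log(2) / t_half_s        # decay constant, 1/s
activity_bq_per_g = lam * avogadro / molar_mass
print(f"specific activity of 242Cm ~ {activity_bq_per_g:.2e} Bq/g")   # ~1.2e14 Bq/g

one_year_s = 365.25 * 86400
print(f"fraction remaining after 1 y: {math.exp(-lam * one_year_s):.3f}")  # ~0.21
```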
Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children, the Quiz Kids, five days before the official presentation at an American Chemical Society meeting on November 11, 1945, when one of the listeners asked whether any new transuranium element besides plutonium and neptunium had been discovered during the war.[8] The discovery of curium (242Cm and 240Cm), their production, and their compounds were later patented, listing only Seaborg as the inventor.[13]

Marie and Pierre Curie

The new element was named after Marie Skłodowska-Curie and her husband Pierre Curie, who are noted for discovering radium and for their work in radioactivity. It followed the example of gadolinium, a lanthanide element above curium in the periodic table, which was named after Johan Gadolin, an explorer of the rare earth elements:[14] "As the name for the element of atomic number 96 we should like to propose "curium", with symbol Cm. The evidence indicates that element 96 contains seven 5f electrons and is thus analogous to the element gadolinium with its seven 4f electrons in the regular rare earth series. On this base element 96 is named after the Curies in a manner analogous to the naming of gadolinium, in which the chemist Gadolin was honored."[6]

The first curium samples were barely visible and were identified by their radioactivity. Louis Werner and Isadore Perlman created the first substantial sample, 30 µg of curium-242 hydroxide, at the University of California in 1947 by bombarding americium-241 with neutrons.[15][16][17] Macroscopic amounts of curium(III) fluoride were obtained in 1950 by W. W. T. Crane, J. C. Wallmann, and B. B. Cunningham. Its magnetic susceptibility was very close to that of GdF3, providing the first experimental evidence for the +3 valence of curium in its compounds.[15] Curium metal was produced only in 1951 by reduction of CmF3 with barium.[18][19]

Characteristics

Physical

Double-hexagonal close packing with the layer sequence ABAC in the crystal structure of α-curium (A: green, B: blue, C: red).

Orange fluorescence of Cm3+ ions in a solution of tris(hydrotris)pyrazolylborato-Cm(III) complex, excited at 396.6 nm.

A synthetic, radioactive element, curium is a hard, dense metal with a silvery-white appearance and physical and chemical properties resembling those of gadolinium. Its melting point of 1340 °C is significantly higher than those of the preceding transuranic elements neptunium (637 °C), plutonium (639 °C), and americium (1173 °C). In comparison, gadolinium melts at 1312 °C. The boiling point of curium is 3110 °C. With a density of 13.52 g/cm3, curium is significantly lighter than neptunium (20.45 g/cm3) and plutonium (19.8 g/cm3), but heavier than most other metals. Of the two crystalline forms of curium, α-Cm is the more stable at ambient conditions. It has hexagonal symmetry, space group P63/mmc, lattice parameters a = 365 pm and c = 1182 pm, and four formula units per unit cell.[20] The crystal consists of a double-hexagonal close packing with the layer sequence ABAC and is therefore isotypic with α-lanthanum. At pressures above 23 GPa, at room temperature, α-Cm transforms into β-Cm, which has face-centered cubic symmetry, space group Fm3̄m, and a lattice constant a = 493 pm.[20] Upon further compression to 43 GPa, curium transforms to an orthorhombic γ-Cm structure similar to that of α-uranium, with no further transitions observed up to 52 GPa.
These three curium phases are also referred to as Cm I, II and III.[21][22]

Curium has peculiar magnetic properties. Whereas its neighbor element americium shows no deviation from Curie-Weiss paramagnetism over the entire temperature range, α-Cm transforms to an antiferromagnetic state upon cooling to 65–52 K,[23][24] and β-Cm exhibits a ferrimagnetic transition at about 205 K. Meanwhile, curium pnictides show ferromagnetic transitions upon cooling: 244CmN and 244CmAs at 109 K, 248CmP at 73 K and 248CmSb at 162 K. The lanthanide analogue of curium, gadolinium, as well as its pnictides, also shows magnetic transitions upon cooling, but the transition character is somewhat different: Gd and GdN become ferromagnetic, while GdP, GdAs and GdSb show antiferromagnetic ordering.[25]

In accordance with the magnetic data, the electrical resistivity of curium increases with temperature, roughly doubling between 4 and 60 K, and then remains nearly constant up to room temperature. There is a significant increase in resistivity over time (about 10 µΩ·cm/h) due to self-damage of the crystal lattice by alpha radiation. This makes the absolute resistivity value for curium (about 125 µΩ·cm) uncertain. The resistivity of curium is similar to that of gadolinium and of the actinides plutonium and neptunium, but is significantly higher than that of americium, uranium, polonium and thorium.[3][26]

Under ultraviolet illumination, curium(III) ions exhibit strong and stable yellow-orange fluorescence, with a maximum in the range of about 590–640 nm depending on their environment.[27] The fluorescence originates from transitions from the first excited state 6D7/2 to the ground state 8S7/2. Analysis of this fluorescence allows monitoring of interactions between Cm(III) ions in organic and inorganic complexes.[28]

Chemical

Curium ions in solution almost exclusively assume the oxidation state +3, which is the most stable oxidation state for curium.[29] The +4 oxidation state is observed mainly in a few solid phases, such as CmO2 and CmF4.[30][31] Aqueous curium(IV) is only known in the presence of strong oxidizers such as potassium persulfate, and is easily reduced to curium(III) by radiolysis and even by water itself.[32] The chemical behavior of curium is different from that of the actinides thorium and uranium, and is similar to that of americium and many lanthanides.
In aqueous solution, the Cm3+ ion is colorless to pale green,[33] and the Cm4+ ion is pale yellow.[34] The optical absorption spectrum of Cm3+ ions contains three sharp peaks, at 375.4, 381.2 and 396.5 nanometers, and their strength can be directly converted into the concentration of the ions.[35] The +6 oxidation state has only been reported once in solution, in 1978, as the curyl ion CmO₂²⁺: this was prepared from the beta decay of americium-242 in the americium(V) ion ²⁴²AmO₂⁺.[2] Failure to obtain Cm(VI) from oxidation of Cm(III) and Cm(IV) may be due to the high Cm4+/Cm3+ ionization potential and the instability of Cm(V).[32]

Curium ions are hard Lewis acids and thus form their most stable complexes with hard bases.[36] The bonding is mostly ionic, with a small covalent component.[37] Curium in its complexes commonly exhibits a 9-fold coordination environment, with a tricapped trigonal prismatic geometry.[38]

Isotopes

Thermal neutron cross sections (barns)[39]:

            242Cm   243Cm   244Cm   245Cm   246Cm   247Cm
Fission         5     617    1.04    2145    0.14   81.90
Capture        16     130   15.20     369    1.22      57
C/F ratio    3.20    0.21   14.62    0.17    8.71    0.70

LEU spent fuel 20 years after 53 MWd/kg burnup,[40] three common isotopes: 51, 3700, 390. Fast reactor MOX fuel (average of 5 samples, burnup 66–120 GWd/t),[41] total curium 3.09×10−3%: 27.64%, 70.16%, 2.166%, 0.0376%, 0.000928%.

Critical mass, kg:

   242Cm   243Cm   244Cm   245Cm   246Cm   247Cm   248Cm   250Cm
      25     7.5      33     6.8      39       7    40.4    23.5

See also: Isotopes of curium

About 20 radioisotopes and 7 nuclear isomers between 233Cm and 252Cm are known for curium; none are stable. The longest half-lives have been reported for 247Cm (15.6 million years) and 248Cm (348,000 years). Other long-lived isotopes are 245Cm (half-life 8,500 years), 250Cm (8,300 years) and 246Cm (4,760 years). Curium-250 is unusual in that it predominantly (about 86%) decays via spontaneous fission. The most commonly used curium isotopes are 242Cm and 244Cm, with half-lives of 162.8 days and 18.1 years, respectively.[12]

[Image: Transmutation flow between 238Pu and 244Cm in LWR.[42] Fission percentage is 100 minus the shown percentages. The total rate of transmutation varies greatly by nuclide. 245Cm–248Cm are long-lived with negligible decay.]

All isotopes between 242Cm and 248Cm, as well as 250Cm, undergo a self-sustaining nuclear chain reaction and thus in principle can act as a nuclear fuel in a reactor. As in most transuranic elements, the nuclear fission cross section is especially high for the odd-mass curium isotopes 243Cm, 245Cm and 247Cm. These can be used in thermal-neutron reactors, whereas a mixture of curium isotopes is only suitable for fast breeder reactors, since the even-mass isotopes are not fissile in a thermal reactor and accumulate as burn-up increases.[43] Mixed-oxide (MOX) fuel, which is to be used in power reactors, should contain little or no curium, because neutron activation of 248Cm will create californium. Californium is a strong neutron emitter, and would pollute the back end of the fuel cycle and increase the dose to reactor personnel. Hence, if the minor actinides are to be used as fuel in a thermal neutron reactor, the curium should be excluded from the fuel or placed in special fuel rods where it is the only actinide present.[44]

The table above lists the critical masses of curium isotopes for a sphere, without moderator or reflector. With a metal reflector (30 cm of steel), the critical masses of the odd isotopes are about 3–4 kg.
When using water (thickness ~20–30 cm) as the reflector, the critical mass can be as small as 59 grams for 245Cm, 155 grams for 243Cm and 1550 grams for 247Cm. There is significant uncertainty in these critical mass values: whereas it is usually on the order of 20%, the values for 242Cm and 246Cm have been listed as large as 371 kg and 70.1 kg, respectively, by some research groups.[43][45] Currently, curium is not used as a nuclear fuel owing to its low availability and high price.[46] 245Cm and 247Cm have very small critical masses and therefore could be used in portable nuclear weapons, but none have been reported thus far. Curium-243 is not suitable for this purpose because of its short half-life and strong α emission, which would result in excessive heat.[47] Curium-247 would be highly suitable, having a half-life 647 times that of plutonium-239.

Occurrence

[Image: Several isotopes of curium were detected in the fallout from the Ivy Mike nuclear test.]

The longest-lived isotope of curium, 247Cm, has a half-life of 15.6 million years. Therefore, any primordial curium, that is, curium present on the Earth during its formation, should have decayed by now, although some of it would be detectable as an extinct radionuclide through an excess of its primordial, long-lived daughter 235U. Curium is produced artificially, in small quantities, for research purposes. Furthermore, it occurs in spent nuclear fuel. Curium is present in nature in certain areas used for atmospheric nuclear weapons tests, which were conducted between 1945 and 1980.[48] Thus, analysis of the debris at the testing site of the first U.S. hydrogen bomb, Ivy Mike (1 November 1952, Enewetak Atoll), revealed, besides einsteinium, fermium, plutonium and americium, isotopes of berkelium, californium and curium, in particular 245Cm, 246Cm and smaller quantities of 247Cm, 248Cm and 249Cm.[49]

Atmospheric curium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed a concentration of curium about 4,000 times higher on sandy soil particles than in the water present in the soil pores; an even higher ratio, of about 18,000, was measured in loam soils.[50]

The transuranic elements from americium to fermium, including curium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.[51]

Synthesis

Isotope preparation

Curium is produced in small quantities in nuclear reactors, and by now only kilograms of it have been accumulated for 242Cm and 244Cm, and grams or even milligrams for the heavier isotopes. This explains the high price of curium, which has been quoted at 160–185 USD per milligram,[15] with a more recent estimate at US$2,000/g for 242Cm and US$170/g for 244Cm.[52] In nuclear reactors, curium is formed from 238U in a series of nuclear reactions. In the first chain, 238U captures a neutron and converts into 239U, which via β− decay transforms into 239Np and 239Pu:

$\ce{^{238}_{92}U ->[(n,\gamma)] ^{239}_{92}U ->[\beta^-][23.5 min] ^{239}_{93}Np ->[\beta^-][2.3565 d] ^{239}_{94}Pu}$

(the times are half-lives).
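The two-step β− chain above can be made quantitative with the classic Bateman equations. Below is a rough Python sketch using the half-lives stated in the text (23.5 min for 239U, 2.3565 d for 239Np); the function name is ours, and 239Pu (half-life 24,100 years) is treated as stable on this timescale:

```python
import math

LN2 = math.log(2)

def chain_amounts(t, t_half_a, t_half_b, n0=1.0):
    """Two-member Bateman solution for the chain A -> B -> C,
    starting from n0 atoms of A, with C treated as stable."""
    la, lb = LN2 / t_half_a, LN2 / t_half_b
    a = n0 * math.exp(-la * t)
    b = n0 * la / (lb - la) * (math.exp(-la * t) - math.exp(-lb * t))
    return a, b, n0 - a - b  # amounts of A, B, C at time t

T_U239 = 23.5 / (24 * 60)  # 23.5 min, expressed in days
T_NP239 = 2.3565           # days

for t in (0.1, 1.0, 7.0, 30.0):
    u, np239, pu = chain_amounts(t, T_U239, T_NP239)
    print(f"t = {t:4.1f} d:  239U {u:.2e}   239Np {np239:.3f}   239Pu {pu:.3f}")
```

Within about a month, essentially the entire capture product has become 239Pu, which is why the subsequent captures described next start from plutonium.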
Further neutron capture followed by β−-decay produces the 241Am isotope of americium, which further converts into 242Cm:

$\ce{^{239}_{94}Pu ->[2(n,\gamma)] ^{241}_{94}Pu ->[\beta^-][14.35 yr] ^{241}_{95}Am ->[(n,\gamma)] ^{242}_{95}Am ->[\beta^-][16.02 h] ^{242}_{96}Cm}$

For research purposes, curium is obtained by irradiating not uranium but plutonium, which is available in large amounts from spent nuclear fuel. A much higher neutron flux is used for the irradiation, which results in a different reaction chain and the formation of 244Cm:[7]

$\ce{^{239}_{94}Pu ->[4(n,\gamma)] ^{243}_{94}Pu ->[\beta^-][4.956 h] ^{243}_{95}Am ->[(n,\gamma)] ^{244}_{95}Am ->[\beta^-][10.1 h] ^{244}_{96}Cm ->[\alpha][18.11 yr] ^{240}_{94}Pu}$

Curium-244 decays into 240Pu by emission of an alpha particle, but it also absorbs neutrons, resulting in small amounts of heavier curium isotopes. Among those, 247Cm and 248Cm are popular in scientific research because of their long half-lives. However, the production rate of 247Cm in thermal neutron reactors is relatively low because it is prone to undergo fission induced by thermal neutrons.[53] Synthesis of 250Cm via neutron absorption is also rather unlikely because of the short half-life of the intermediate product 249Cm (64 min), which converts by β− decay to the berkelium isotope 249Bk.[53]

$\ce{^{A}_{96}Cm + ^{1}_{0}n -> ^{A+1}_{96}Cm + \gamma}$ (for 244 ≤ A ≤ 248)

The above cascade of (n,γ) reactions produces a mixture of different curium isotopes. Their post-synthesis separation is cumbersome, and therefore a selective synthesis is desired. Curium-248 is favored for research purposes because of its long half-life. The most efficient preparation method for this isotope is via α-decay of the californium isotope 252Cf, which is available in relatively large quantities due to its long half-life (2.65 years). About 35–50 mg of 248Cm is produced by this method every year, and the reaction yields 248Cm with an isotopic purity of 97%:[53]

$\ce{^{252}_{98}Cf ->[\alpha][2.645 yr] ^{248}_{96}Cm}$

Another isotope of interest for research, 245Cm, can be obtained from the α-decay of 249Cf; the latter isotope is produced in minute quantities from the β−-decay of the berkelium isotope 249Bk:

$\ce{^{249}_{97}Bk ->[\beta^-][330 d] ^{249}_{98}Cf ->[\alpha][351 yr] ^{245}_{96}Cm}$

Metal preparation

[Image: Chromatographic elution curves revealing the similarity between the lanthanides Tb, Gd, Eu and the corresponding actinides Bk, Cm, Am.]

Most synthesis routines yield a mixture of different actinide isotopes as oxides, from which a given isotope of curium needs to be separated. An example procedure is to dissolve spent reactor fuel (e.g. MOX fuel) in nitric acid and remove the bulk of the uranium and plutonium using a PUREX (Plutonium–URanium EXtraction) type extraction with tributyl phosphate in a hydrocarbon.
The lanthanides and the remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. A curium compound is then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent.[54] A bis-triazinyl bipyridine complex has recently been proposed as such a reagent, which is highly selective for curium.[55] Separation of curium from the very similar americium can also be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at an elevated temperature. Both americium and curium are present in solutions mostly in the +3 valence state; whereas americium oxidizes to soluble Am(IV) complexes, curium remains unchanged and can thus be isolated by repeated centrifugation.[56]

Metallic curium is obtained by reduction of its compounds. Initially, curium(III) fluoride was used for this purpose. The reaction was conducted in an environment free of water and oxygen, in apparatus made of tantalum and tungsten, using elemental barium or lithium as the reducing agent:[7][18][57][58][59]

$\ce{CmF3 + 3Li -> Cm + 3LiF}$

Another possibility is the reduction of curium(IV) oxide using a magnesium-zinc alloy in a melt of magnesium chloride and magnesium fluoride.[60]

Compounds and reactions

See also: Category:Curium compounds

Oxides

Curium readily reacts with oxygen, forming mostly Cm2O3 and CmO2 oxides,[48] but the divalent oxide CmO is also known.[61] Black CmO2 can be obtained by burning curium oxalate (Cm2(C2O4)3), nitrate (Cm(NO3)3) or hydroxide in pure oxygen.[31][62] Upon heating to 600–650 °C in vacuum (about 0.01 Pa), it transforms into the whitish Cm2O3:[31][63]

$\ce{4CmO2 ->[\Delta T] 2Cm2O3 + O2}$

Alternatively, Cm2O3 can be obtained by reducing CmO2 with molecular hydrogen:[64]

$\ce{2CmO2 + H2 -> Cm2O3 + H2O}$

Furthermore, a number of ternary oxides of the type M(II)CmO3 are known, where M stands for a divalent metal such as barium.[65]

Thermal oxidation of trace quantities of curium hydride (CmH2–3) has been reported to produce a volatile form of CmO2 and the volatile trioxide CmO3, one of the two known examples of the very rare +6 state for curium.[2] Another observed species was reported to behave similarly to a supposed plutonium tetroxide and was tentatively characterized as CmO4, with curium in the extremely rare +8 state;[66] however, new experiments seem to indicate that CmO4 does not exist, and they have cast doubt on the existence of PuO4 as well.[67]

Halides

The colorless curium(III) fluoride (CmF3) can be produced by introducing fluoride ions into curium(III)-containing solutions. The brown tetravalent curium(IV) fluoride (CmF4), on the other hand, is only obtained by reacting curium(III) fluoride with molecular fluorine:[7]

$\ce{2CmF3 + F2 -> 2CmF4}$

A series of ternary fluorides of the form A7Cm6F31 are known, where A stands for an alkali metal.[68]

The colorless curium(III) chloride (CmCl3) is produced by the reaction of curium(III) hydroxide (Cm(OH)3) with anhydrous hydrogen chloride gas.
It can further be converted into other halides, such as curium(III) bromide (colorless to light green) and curium(III) iodide (colorless), by reacting it with the ammonium salt of the corresponding halide at an elevated temperature of about 400–450 °C:[69]

$\ce{CmCl3 + 3NH4I -> CmI3 + 3NH4Cl}$

An alternative procedure is heating curium oxide to about 600 °C with the corresponding acid (such as hydrobromic acid for curium bromide).[70][71] Vapor-phase hydrolysis of curium(III) chloride results in curium oxychloride:[72]

$\ce{CmCl3 + H2O -> CmOCl + 2HCl}$

Chalcogenides and pnictides

Sulfides, selenides and tellurides of curium have been obtained by treating curium with gaseous sulfur, selenium or tellurium in vacuum at elevated temperature.[73][74] The pnictides of curium of the type CmX are known for the elements nitrogen, phosphorus, arsenic and antimony.[7] They can be prepared by reacting either curium(III) hydride (CmH3) or metallic curium with these elements at elevated temperatures.[75]

Organocurium compounds and biological aspects

[Image: Predicted curocene structure]

Organometallic complexes analogous to uranocene are known also for other actinides, such as thorium, protactinium, neptunium, plutonium and americium. Molecular orbital theory predicts a stable "curocene" complex (η8-C8H8)2Cm, but it has not yet been reported experimentally.[76][77]

Formation of complexes of the type Cm(n-C3H7-BTP)3, where BTP stands for 2,6-di(1,2,4-triazin-3-yl)pyridine, in solutions containing n-C3H7-BTP and Cm3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with curium and are therefore useful for its selective separation from lanthanides and other actinides.[27][78] Dissolved Cm3+ ions bind with many organic compounds, such as hydroxamic acid,[79] urea,[80] fluorescein[81] and adenosine triphosphate.[82] Many of these compounds are related to the biological activity of various microorganisms. The resulting complexes exhibit strong yellow-orange emission under UV light excitation, which is convenient not only for their detection, but also for studying the interactions between the Cm3+ ion and the ligands via changes in the half-life (on the order of ~0.1 ms) and spectrum of the fluorescence.[28][79][80][81][82]

Curium has no biological significance.[83] There are a few reports on biosorption of Cm3+ by bacteria and archaea, but no evidence that curium is incorporated into them.[84][85]

Applications

Radionuclides

[Image: The radiation from curium is so strong that the metal glows purple in the dark.]

Curium is one of the most radioactive isolable elements. Its two most common isotopes, 242Cm and 244Cm, are strong alpha emitters (energy 6 MeV); they have relatively short half-lives of 162.8 days and 18.1 years, and produce as much as 120 W/g and 3 W/g of thermal energy, respectively.[15][86][87] Therefore, curium can be used in its common oxide form in radioisotope thermoelectric generators like those in spacecraft. This application has been studied for the 244Cm isotope, while 242Cm was abandoned due to its prohibitive price of around 2000 USD/g. 243Cm, with a ~30-year half-life and a good energy yield of ~1.6 W/g, could make a suitable fuel, but it produces significant amounts of harmful gamma and beta radiation from radioactive decay products.
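The quoted power densities can be sanity-checked from first principles: specific thermal power is the decay constant times the number of atoms per gram times the energy per decay. A rough Python check; the per-decay energies of 6.1 and 5.8 MeV are approximate assumptions consistent with the ~6 MeV figure in the text:

```python
import math

N_A = 6.022e23                # Avogadro's number, atoms/mol
MEV_TO_J = 1.602e-13          # joules per MeV
DAY, YEAR = 86400.0, 3.156e7  # seconds

def specific_power(half_life_s, molar_mass, e_mev):
    """Thermal power per gram: P = lambda * (N_A / M) * E_decay."""
    lam = math.log(2) / half_life_s
    return lam * (N_A / molar_mass) * e_mev * MEV_TO_J  # W/g

print(f"242Cm: {specific_power(162.8 * DAY, 242, 6.1):.0f} W/g")  # ~120 W/g
print(f"244Cm: {specific_power(18.1 * YEAR, 244, 5.8):.1f} W/g")  # ~2.8 W/g
```

Both results land close to the 120 W/g and 3 W/g figures quoted above.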
Though 244Cm, as an α-emitter, requires much thinner radiation protection shielding, it has a high spontaneous fission rate, and thus its neutron and gamma emission rates are relatively high. Compared to a competing thermoelectric generator isotope such as 238Pu, 244Cm emits a 500-fold greater fluence of neutrons, and its higher gamma emission requires a shield that is 20 times thicker: about 2 inches of lead for a 1 kW source, as compared to 0.1 in for 238Pu. Therefore, this application of curium is currently considered impractical.[52]

A more promising application of 242Cm is the production of 238Pu, a more suitable radioisotope for thermoelectric generators such as those in cardiac pacemakers. The alternative routes to 238Pu use the (n,γ) reaction of 237Np, or the deuteron bombardment of uranium, both of which always produce 236Pu as an undesired by-product, since the latter decays to 232U with strong gamma emission.[88]

Curium is also a common starting material for the production of higher transuranic elements and transactinides. Thus, bombardment of 248Cm with neon (22Ne), magnesium (26Mg), or calcium (48Ca) yielded certain isotopes of seaborgium (265Sg), hassium (269Hs and 270Hs), and livermorium (292Lv, 293Lv, and possibly 294Lv).[89] Californium was discovered when a microgram-sized target of curium-242 was irradiated with 35 MeV alpha particles using the 60-inch (150 cm) cyclotron at Berkeley:

$\ce{^{242}_{96}Cm + ^{4}_{2}He -> ^{245}_{98}Cf + ^{1}_{0}n}$

Only about 5,000 atoms of californium were produced in this experiment.[90]

X-ray spectrometer

[Image: Alpha-particle X-ray spectrometer of a Mars exploration rover]

The most practical application of 244Cm, though rather limited in total volume, is as an α-particle source in alpha particle X-ray spectrometers (APXS). These instruments were installed on the Sojourner rover, Mars 96, the Mars Exploration Rovers and the Philae comet lander,[91] as well as the Mars Science Laboratory, to analyze the composition and structure of the rocks on the surface of the planet Mars.[92] APXS was also used in the Surveyor 5–7 moon probes, but with a 242Cm source.[50][93][94] An elaborated APXS setup is equipped with a sensor head containing six curium sources with a total radioactive decay rate of several tens of millicuries (roughly a gigabecquerel). The sources are collimated on the sample, and the energy spectra of the alpha particles and protons scattered from the sample are analyzed (the proton analysis is implemented only in some spectrometers). These spectra contain quantitative information on all major elements in the samples except for hydrogen, helium and lithium.[95]

Safety

Owing to its high radioactivity, curium and its compounds must be handled in appropriate laboratories under special arrangements. Whereas curium itself mostly emits α-particles, which are absorbed by thin layers of common materials, some of its decay products emit significant fractions of beta and gamma radiation, which require more elaborate protection.[48] If consumed, curium is excreted within a few days and only 0.05% is absorbed into the blood. From there, about 45% goes to the liver, 45% to the bones, and the remaining 10% is excreted. In bone, curium accumulates on the inside of the interfaces to the bone marrow and does not significantly redistribute with time; its radiation destroys bone marrow and thus stops red blood cell creation.
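Retention of this kind is commonly summarized by an effective half-life that combines physical decay with biological clearance. A small illustrative sketch; the biological retention values are discussed in the next paragraph, and the pairing below is an example, not a dosimetric calculation:

```python
def effective_half_life(t_physical, t_biological):
    """Effective half-life: 1/T_eff = 1/T_phys + 1/T_bio (same units)."""
    return 1.0 / (1.0 / t_physical + 1.0 / t_biological)

# 244Cm physical half-life ~18.1 y, combined with a 50 y biological
# retention half-time in bone (illustrative pairing only).
print(f"T_eff = {effective_half_life(18.1, 50.0):.1f} years")  # ~13.3 y
```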
The biological half-life of curium is about 20 years in the liver and 50 years in the bones.[48][50] Curium is absorbed into the body much more strongly via inhalation, and the allowed total body burden of 244Cm in soluble form is 0.3 μCi.[15] Intravenous injection of solutions containing 242Cm and 244Cm into rats increased the incidence of bone tumors, and inhalation promoted pulmonary and liver cancer.[48]

Curium isotopes are inevitably present in spent nuclear fuel, at a concentration of about 20 g/tonne.[96] Among them, the 245Cm–248Cm isotopes have decay times of thousands of years and need to be removed to neutralize the fuel for disposal.[97] The associated procedure involves several steps, in which curium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure, nuclear transmutation, while well documented for other elements, is still being developed for curium.[27]

References

^ Kovács, Attila; Dau, Phuong D.; Marçalo, Joaquim; Gibson, John K. (2018). "Pentavalent Curium, Berkelium, and Californium in Nitrate Complexes: Extending Actinide Chemistry and Oxidation States". Inorg. Chem. American Chemical Society. 57 (15): 9453–9467. doi:10.1021/acs.inorgchem.8b01450. PMID 30040397. ^ a b c d Domanov, V. P.; Lobanov, Yu. V. (October 2011). "Formation of volatile curium(VI) trioxide CmO3". Radiochemistry. SP MAIK Nauka/Interperiodica. 53 (5): 453–6. doi:10.1134/S1066362211050018. ^ a b c Schenkel, R. (1977). "The electrical resistivity of 244Cm metal". Solid State Communications. 23 (6): 389. Bibcode:1977SSCom..23..389S. doi:10.1016/0038-1098(77)90239-3. ^ Kovács, Attila; Dau, Phuong D.; Marçalo, Joaquim; Gibson, John K. (2018). "Pentavalent Curium, Berkelium, and Californium in Nitrate Complexes: Extending Actinide Chemistry and Oxidation States". Inorg. Chem. American Chemical Society. 57 (15): 9453–9467. doi:10.1021/acs.inorgchem.8b01450. ^ Hall, Nina (2000). The New Chemistry: A Showcase for Modern Chemistry and Its Applications. Cambridge University Press. pp. 8–9. ISBN 978-0-521-45224-3. ^ a b Seaborg, Glenn T.; James, R. A.; Ghiorso, A. (1949). "The New Element Curium (Atomic Number 96)" (PDF). NNES PPR (National Nuclear Energy Series, Plutonium Project Record). The Transuranium Elements: Research Papers, Paper No. 22.2. 14 B. OSTI http://www.osti.gov/cgi-bin/rd_accomplishments/display_biblio.cgi?id=ACC0049&numPages=13&fp=N. ^ a b c d e Morss, L. R.; Edelstein, N. M. and Fugere, J. (eds): The Chemistry of the Actinide Elements and transactinides, volume 3, Springer-Verlag, Dordrecht 2006, ISBN 1-4020-3555-1. ^ a b Pepling, Rachel Sheremeta (2003). "Chemical & Engineering News: It's Elemental: The Periodic Table – Americium". Retrieved 2008-12-07. ^ Krebs, Robert E. The history and use of our earth's chemical elements: a reference guide, Greenwood Publishing Group, 2006, ISBN 0-313-33438-2 p. 322 ^ Harper, Douglas. "pandemonium". Online Etymology Dictionary. ^ Harper, Douglas. "delirium". Online Etymology Dictionary. ^ a b c Audi, Georges; Bersillon, Olivier; Blachot, Jean; Wapstra, Aaldert Hendrik (1997). "The NUBASE evaluation of nuclear and decay properties" (PDF). Nuclear Physics A. 624 (1): 1–124. Bibcode:1997NuPhA.624....1A. doi:10.1016/S0375-9474(97)00482-X. Archived from the original (PDF) on 2008-09-23. ^ Seaborg, G. T. U.S. Patent 3,161,462 "Element", Filing date: 7 February 1949, Issue date: December 1964 ^ Greenwood, p. 1252 ^ a b c d e Hammond C. R. "The elements" in Lide, D. R., ed. (2005).
CRC Handbook of Chemistry and Physics (86th ed.). Boca Raton (FL): CRC Press. ISBN 0-8493-0486-5. ^ L. B. Werner, I. Perlman: "Isolation of Curium", NNES PPR (National Nuclear Energy Series, Plutonium Project Record), Vol. 14 B, The Transuranium Elements: Research Papers, Paper No. 22.5, McGraw-Hill Book Co., Inc., New York, 1949. ^ "National Academy of Sciences. Isadore Perlman 1915–1991". Nap.edu. Retrieved 2011-03-25. ^ a b Wallmann, J. C.; Crane, W. W. T.; Cunningham, B. B. (1951). "The Preparation and Some Properties of Curium Metal". Journal of the American Chemical Society. 73 (1): 493–494. doi:10.1021/ja01145a537. hdl:2027/mdp.39015086479790. ^ Werner, L. B.; Perlman, I. (1951). "First Isolation of Curium". Journal of the American Chemical Society. 73 (1): 5215–5217. doi:10.1021/ja01155a063. ^ a b Milman, V.; Winkler, B.; Pickard, C. J. (2003). "Crystal structures of curium compounds: an ab initio study". Journal of Nuclear Materials. 322 (2–3): 165. Bibcode:2003JNuM..322..165M. doi:10.1016/S0022-3115(03)00321-0. ^ Young, D. A. Phase diagrams of the elements, University of California Press, 1991, ISBN 0-520-07483-1, p. 227 ^ Haire, R.; Peterson, J.; Benedict, U.; Dufour, C.; Itie, J. (1985). "X-ray diffraction of curium-248 metal under pressures of up to 52 GPa". Journal of the Less Common Metals. 109 (1): 71. doi:10.1016/0022-5088(85)90108-0. ^ Kanellakopulos, B.; Blaise, A.; Fournier, J. M.; Müller, W. (1975). "The magnetic susceptibility of Americium and curium metal". Solid State Communications. 17 (6): 713. Bibcode:1975SSCom..17..713K. doi:10.1016/0038-1098(75)90392-0. ^ Fournier, J.; Blaise, A.; Muller, W.; Spirlet, J.-C. (1977). "Curium: A new magnetic element". Physica B+C. 86–88: 30. Bibcode:1977PhyBC..86...30F. doi:10.1016/0378-4363(77)90214-5. ^ Nave, S. E.; Huray, P. G.; Peterson, J. R. and Damien, D. A. Magnetic susceptibility of curium pnictides, Oak Ridge National Laboratory ^ Schenkel, R. (1977). "The electrical resistivity of 244Cm metal". Solid State Communications. 23 (6): 389. Bibcode:1977SSCom..23..389S. doi:10.1016/0038-1098(77)90239-3. ^ a b c Denecke, Melissa A.; Rossberg, André; Panak, Petra J.; Weigl, Michael; Schimmelpfennig, Bernd; Geist, Andreas (2005). "Characterization and Comparison of Cm(III) and Eu(III) Complexed with 2,6-Di(5,6-dipropyl-1,2,4-triazin-3-yl)pyridine Using EXAFS, TRFLS, and Quantum-Chemical Methods". Inorganic Chemistry. 44 (23): 8418–25. doi:10.1021/ic0511726. PMID 16270980. ^ a b Bünzli, J.-C. G. and Choppin, G. R. Lanthanide probes in life, chemical, and earth sciences: theory and practice, Elsevier, Amsterdam, 1989 ISBN 0-444-88199-9 ^ Penneman, p. 24 ^ Keenan, Thomas K. (1961). "FIRST OBSERVATION OF AQUEOUS TETRAVALENT CURIUM". Journal of the American Chemical Society. 83 (17): 3719. doi:10.1021/ja01478a039. ^ a b c Asprey, L. B.; Ellinger, F. H.; Fried, S.; Zachariasen, W. H. (1955). "EVIDENCE FOR QUADRIVALENT CURIUM: X-RAY DATA ON CURIUM OXIDES". Journal of the American Chemical Society. 77 (6): 1707. doi:10.1021/ja01611a108. ^ a b Gregg J., Lumetta; Thompson, Major C.; Penneman, Robert A.; Eller, P. Gary (2006). "Curium". In Morss, Lester R.; Edelstein, Norman M.; Fuger, Jean (eds.). The Chemistry of the Actinide and Transactinide Elements (PDF). 3 (3rd ed.). Dordrecht, the Netherlands: Springer. pp. 1397–1443. doi:10.1007/1-4020-3598-5_9. ISBN 978-1-4020-3555-5. ^ Holleman, p. 1956 ^ Penneman, pp. 25–26 ^ Jensen, Mark P.; Bond, Andrew H. (2002). 
"Comparison of Covalency in the Complexes of Trivalent Actinide and Lanthanide Cations". Journal of the American Chemical Society. 124 (33): 9870–7. doi:10.1021/ja0178620. PMID 12175247. ^ Seaborg, Glenn T. (1993). "Overview of the Actinide and Lanthanide (the f) Elements". Radiochimica Acta. 61: 115–122. ^ Pfennig, G.; Klewe-Nebenius, H. and Seelmann Eggebert, W. (Eds.): Karlsruhe nuclide, 6th Ed. 1998 ^ Kang, Jungmin; Von Hippel, Frank (2005). "Limited Proliferation-Resistance Benefits from Recycling Unseparated Transuranics and Lanthanides from Light-Water Reactor Spent Fuel" (PDF). Science and Global Security. 13 (3): 169. doi:10.1080/08929880500357682. ^ Osaka, M.; et al. (2001). "Analysis of Curium Isotopes in Mixed Oxide Fuel Irradiated in Fast Reactor" (PDF). Journal of Nuclear Science and Technology. 38 (10): 912–914. doi:10.3327/jnst.38.912. Archived from the original (PDF) on 3 July 2007. ^ Sasahara, Akihiro; Matsumura, Tetsuo; Nicolaou, Giorgos; Papaioannou, Dimitri (2004). "Neutron and Gamma Ray Source Evaluation of LWR High Burn-up UO2 and MOX Spent Fuels" (PDF). Journal of Nuclear Science and Technology. 41 (4): 448–456. doi:10.3327/jnst.41.448. ^ a b Institut de Radioprotection et de Sûreté Nucléaire: "Evaluation of nuclear criticality safety. data and limits for actinides in transport" Archived May 19, 2011, at the Wayback Machine, p. 16 ^ National Research Council (U.S.). Committee on Separations Technology and Transmutation Systems (1996). Nuclear wastes: technologies for separations and transmutation. National Academies Press. pp. 231–. ISBN 978-0-309-05226-9. Retrieved 19 April 2011. ^ Okundo, H. & Kawasaki, H. (2002). "Critical and Subcritical Mass Calculations of Curium-243 to −247 Based on JENDL-3.2 for Revision of ANSI/ANS-8.15". Journal of Nuclear Science and Technology. 39 (10): 1072–1085. doi:10.3327/jnst.39.1072. ^ § 2 Begriffsbestimmungen (Atomic Energy Act) (in German) ^ Jukka Lehto; Xiaolin Hou (2 February 2011). Chemistry and Analysis of Radionuclides: Laboratory Techniques and Methodology. Wiley-VCH. pp. 303–. ISBN 978-3-527-32658-7. Retrieved 19 April 2011. ^ a b c d e Curium (in German) ^ Fields, P. R.; Studier, M. H.; Diamond, H.; et al. (1956). "Transplutonium Elements in Thermonuclear Test Debris". Physical Review. 102 (1): 180–182. Bibcode:1956PhRv..102..180F. doi:10.1103/PhysRev.102.180. ^ a b c Human Health Fact Sheet on Curium Archived 2006-02-18 at the Wayback Machine, Los Alamos National Laboratory ^ Emsley, John (2011). Nature's Building Blocks: An A-Z Guide to the Elements (New ed.). New York, NY: Oxford University Press. ISBN 978-0-19-960563-7. ^ a b Basic elements of static RTGs, G.L. Kulcinski, NEEP 602 Course Notes (Spring 2000), Nuclear Power in Space, University of Wisconsin Fusion Technology Institute (see last page) ^ a b c Lumetta, Gregg J.; Thompson, Major C.; Penneman, Robert A.; Eller, P. Gary (2006). "Curium" (PDF). In Morss; Edelstein, Norman M.; Fuger, Jean (eds.). The Chemistry of the Actinide and Transactinide Elements (3rd ed.). Dordrecht, The Netherlands: Springer Science+Business Media. p. 1401. ISBN 978-1-4020-3555-5. Archived from the original (PDF) on 2010-07-17. ^ Magnusson D; Christiansen B; Foreman MRS; Geist A; Glatz JP; Malmbeck R; Modolo G; Serrano-Purroy D & Sorel C (2009). "Demonstration of a SANEX Process in Centrifugal Contactors using the CyMe4-BTBP Molecule on a Genuine Fuel Solution". Solvent Extraction and Ion Exchange. 27 (2): 97. doi:10.1080/07366290802672204. ^ Cunningham, B. B.; Wallmann, J. C. (1964). 
"Crystal structure and melting point of curium metal". Journal of Inorganic and Nuclear Chemistry. 26 (2): 271. doi:10.1016/0022-1902(64)80069-5. ^ Stevenson, J.; Peterson, J. (1979). "Preparation and structural studies of elemental curium-248 and the nitrides of curium-248 and berkelium-249". Journal of the Less Common Metals. 66 (2): 201. doi:10.1016/0022-5088(79)90229-7. ^ Gmelin Handbook of Inorganic Chemistry, System No. 71, Volume 7 a, transuranics, Part B 1, pp. 67–68. ^ Eubanks, I.; Thompson, M. C. (1969). "Preparation of curium metal". Inorganic and Nuclear Chemistry Letters. 5 (3): 187. doi:10.1016/0020-1650(69)80221-7. ^ Noe, M.; Fuger, J. (1971). "Self-radiation effects on the lattice parameter of 244CmO2". Inorganic and Nuclear Chemistry Letters. 7 (5): 421. doi:10.1016/0020-1650(71)80177-0. ^ Haug, H. (1967). "Curium sesquioxide Cm2O3". Journal of Inorganic and Nuclear Chemistry. 29 (11): 2753. doi:10.1016/0022-1902(67)80014-9. ^ Fuger, J.; Haire, R.; Peterson, J. (1993). "Molar enthalpies of formation of BaCmO3 and BaCfO3". Journal of Alloys and Compounds. 200 (1–2): 181. doi:10.1016/0925-8388(93)90491-5. ^ Domanov, V. P. (January 2013). "Possibility of generation of octavalent curium in the gas phase in the form of volatile tetraoxide CmO4". Radiochemistry. 55 (1): 46–51. doi:10.1134/S1066362213010098. ^ Zaitsevskii, Andréi; Schwarz, W. H. Eugen (April 2014). "Structures and stability of AnO4 isomers, An = Pu, Am, and Cm: a relativistic density functional study". Physical Chemistry Chemical Physics. 2014 (16): 8997–9001. Bibcode:2014PCCP...16.8997Z. doi:10.1039/c4cp00235k. PMID 24695756. ^ Keenan, T. (1967). "Lattice constants of K7Cm6F31 trends in the 1:1 and 7:6 alkali metal-actinide(IV) series". Inorganic and Nuclear Chemistry Letters. 3 (10): 391. doi:10.1016/0020-1650(67)80092-8. ^ Asprey, L. B.; Keenan, T. K.; Kruse, F. H. (1965). "Crystal Structures of the Trifluorides, Trichlorides, Tribromides, and Triiodides of Americium and Curium". Inorganic Chemistry. 4 (7): 985. doi:10.1021/ic50029a013. ^ Burns, J.; Peterson, J. R.; Stevenson, J. N. (1975). "Crystallographic studies of some transuranic trihalides: 239PuCl3, 244CmBr3, 249BkBr3 and 249CfBr3". Journal of Inorganic and Nuclear Chemistry. 37 (3): 743. doi:10.1016/0022-1902(75)80532-X. ^ Wallmann, J.; Fuger, J.; Peterson, J. R.; Green, J. L. (1967). "Crystal structure and lattice parameters of curium trichloride". Journal of Inorganic and Nuclear Chemistry. 29 (11): 2745. doi:10.1016/0022-1902(67)80013-7. ^ Weigel, F.; Wishnevsky, V.; Hauske, H. (1977). "The vapor phase hydrolysis of PuCl3 and CmCl3: heats of formation of PuOC1 and CmOCl". Journal of the Less Common Metals. 56 (1): 113. doi:10.1016/0022-5088(77)90224-7. ^ Troc, R. Actinide Monochalcogenides, Volume 27, Springer, 2009 ISBN 3-540-29177-6, p. 4 ^ Damien, D.; Charvillat, J. P.; Müller, W. (1975). "Preparation and lattice parameters of curium sulfides and selenides". Inorganic and Nuclear Chemistry Letters. 11 (7–8): 451. doi:10.1016/0020-1650(75)80017-1. ^ Lumetta, G. J.; Thompson, M. C.; Penneman, R. A.; Eller, P. G. Curium Archived 2010-07-17 at the Wayback Machine, Chapter Nine in Radioanalytical Chemistry, Springer, 2004, pp. 1420-1421. ISBN 0387341226, ISBN 978-0387 341224 ^ Elschenbroich, Ch. Organometallic Chemistry, 6th edition, Wiesbaden 2008, ISBN 978-3-8351-0167-8, p. 589 ^ Kerridge, Andrew; Kaltsoyannis, Nikolas (2009). "Are the Ground States of the Later Actinocenes Multiconfigurational? 
All-Electron Spin−Orbit Coupled CASPT2 Calculations on An(η8-C8H8)2(An = Th, U, Pu, Cm)". The Journal of Physical Chemistry A. 113 (30): 8737–45. Bibcode:2009JPCA..113.8737K. doi:10.1021/jp903912q. PMID 19719318. ^ Girnt, Denise; Roesky, Peter W.; Geist, Andreas; Ruff, Christian M.; Panak, Petra J.; Denecke, Melissa A. (2010). "6-(3,5-Dimethyl-1H-pyrazol-1-yl)-2,2′-bipyridine as Ligand for Actinide(III)/Lanthanide(III) Separation". Inorganic Chemistry. 49 (20): 9627–35. doi:10.1021/ic101309j. PMID 20849125. ^ a b Glorius, M.; Moll, H.; Bernhard, G. (2008). "Complexation of curium(III) with hydroxamic acids investigated by time-resolved laser-induced fluorescence spectroscopy". Polyhedron. 27 (9–10): 2113. doi:10.1016/j.poly.2008.04.002. ^ a b Heller, Anne; Barkleit, Astrid; Bernhard, Gert; Ackermann, Jörg-Uwe (2009). "Complexation study of europium(III) and curium(III) with urea in aqueous solution investigated by time-resolved laser-induced fluorescence spectroscopy". Inorganica Chimica Acta. 362 (4): 1215. doi:10.1016/j.ica.2008.06.016. ^ a b Moll, Henry; Johnsson, Anna; Schäfer, Mathias; Pedersen, Karsten; Budzikiewicz, Herbert; Bernhard, Gert (2007). "Curium(III) complexation with pyoverdins secreted by a groundwater strain of Pseudomonas fluorescens". BioMetals. 21 (2): 219–28. doi:10.1007/s10534-007-9111-x. PMID 17653625. ^ a b Moll, Henry; Geipel, Gerhard; Bernhard, Gert (2005). "Complexation of curium(III) by adenosine 5′-triphosphate (ATP): A time-resolved laser-induced fluorescence spectroscopy (TRLFS) study". Inorganica Chimica Acta. 358 (7): 2275. doi:10.1016/j.ica.2004.12.055. ^ "Biochemical Periodic Table – Curium". UMBBD. 2007-06-08. Retrieved 2011-03-25. ^ Moll, H.; Stumpf, T.; Merroun, M.; Rossberg, A.; Selenska-Pobell, S.; Bernhard, G. (2004). "Time-resolved laser fluorescence spectroscopy study on the interaction of curium(III) with Desulfovibrio äspöensis DSM 10631T". Environmental Science & Technology. 38 (5): 1455–9. Bibcode:2004EnST...38.1455M. doi:10.1021/es0301166. PMID 15046347. ^ Ozaki, T.; et al. (2002). "Association of Eu(III) and Cm(III) with Bacillus subtilis and Halobacterium salinarium". Journal of Nuclear Science and Technology. Suppl. 3: 950–953. Archived from the original on 2009-02-25. ^ Binder, Harry H.: Lexikon der chemischen Elemente, S. Hirzel Verlag, Stuttgart 1999, ISBN 3-7776-0736-3, pp. 174–178. ^ Gmelin Handbook of Inorganic Chemistry, System No. 71, Volume 7a, transuranics, Part A2, p. 289 ^ Kronenberg, Andreas, Plutonium-Batterien Archived 2013-12-26 at the Wayback Machine (in German) "Archived copy". Archived from the original on February 21, 2011. Retrieved April 28, 2011. CS1 maint: Archived copy as title (link) CS1 maint: BOT: original-url status unknown (link) ^ Holleman, pp. 1980–1981. ^ Seaborg, Glenn T. (1996). Adloff, J. P. (ed.). One Hundred Years after the Discovery of Radioactivity. Oldenbourg Wissenschaftsverlag. p. 82. ISBN 978-3-486-64252-0. ^ "Der Rosetta Lander Philae". Bernd-leitenberger.de. 2003-07-01. Retrieved 2011-03-25. ^ Rieder, R.; Wanke, H.; Economou, T. (September 1996). "An Alpha Proton X-Ray Spectrometer for Mars-96 and Mars Pathfinder". Bulletin of the American Astronomical Society. 28: 1062. Bibcode:1996DPS....28.0221R. ^ Leitenberger, Bernd Die Surveyor Raumsonden (in German) ^ Nicks, Oran (1985). "Ch. 9. Essentials for Surveyor". SP-480 Far Travelers: The Exploring Machines. NASA. ^ Alpha Particle X-Ray Spectrometer (APXS), Cornell University ^ Hoffmann, K. Kann man Gold machen? Gauner, Gaukler und Gelehrte. 
Aus der Geschichte der chemischen Elemente (Can you make gold? Crooks, clowns and scholars. From the history of the chemical elements), Urania-Verlag, Leipzig, Jena, Berlin 1979, no ISBN, p. 233 ^ Baetslé, L. H. Application of Partitioning/Transmutation of Radioactive Materials in Radioactive Waste Management Archived 2005-04-26 at the Wayback Machine, Nuclear Research Centre of Belgium Sck/Cen, Mol, Belgium, September 2001.

Bibliography

Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. ISBN 978-0-08-037941-8.
Holleman, Arnold F. and Wiberg, Nils. Lehrbuch der Anorganischen Chemie, 102nd Edition, de Gruyter, Berlin 2007, ISBN 978-3-11-017770-1.
Penneman, R. A. and Keenan, T. K. The radiochemistry of americium and curium, University of California, Los Alamos, California, 1960.

External links

Curium at The Periodic Table of Videos (University of Nottingham)
NLM Hazardous Substances Databank – Curium, Radioactive
Centre-valued Index for Toeplitz Operators with Noncommuting Symbols
John Phillips, Iain Raeburn
Journal: Canadian Journal of Mathematics / Volume 68 / Issue 5 / 01 October 2016
Published online by Cambridge University Press: 20 November 2018, pp. 1023-1066

We formulate and prove a "winding number" index theorem for certain "Toeplitz" operators in the same spirit as Gohberg–Krein, Lesch and others. The "number" is replaced by a self-adjoint operator in a subalgebra $Z \subseteq Z(A)$ of a unital $C^*$-algebra $A$. We assume a faithful $Z$-valued trace $\tau$ on $A$ left invariant under an action $\alpha: \mathbf{R} \to \mathrm{Aut}(A)$ leaving $Z$ pointwise fixed. If $\delta$ is the infinitesimal generator of $\alpha$ and $u$ is invertible in $\mathrm{dom}(\delta)$, then the "winding operator" of $u$ is $\frac{1}{2\pi i}\tau(\delta(u)u^{-1}) \in Z_{sa}$. By a careful choice of representations we extend $(A, Z, \tau, \alpha)$ to a von Neumann setting $(\mathfrak{A}, \mathfrak{Z}, \bar{\tau}, \bar{\alpha})$, where $\mathfrak{A} = A''$ and $\mathfrak{Z} = Z''$. Then $A \subset \mathfrak{A} \subset \mathfrak{A} \rtimes \mathbf{R}$, the von Neumann crossed product, and there is a faithful, dual $\mathfrak{Z}$-trace on $\mathfrak{A} \rtimes \mathbf{R}$. If $P$ is the projection in $\mathfrak{A} \rtimes \mathbf{R}$ corresponding to the non-negative spectrum of the generator of $\mathbf{R}$ inside $\mathfrak{A} \rtimes \mathbf{R}$, and $\tilde{\pi}: A \to \mathfrak{A} \rtimes \mathbf{R}$ is the embedding, then we define $T_u = P\tilde{\pi}(u)P$ for $u \in A^{-1}$ and show it is Fredholm in an appropriate sense, and the $\mathfrak{Z}$-valued index of $T_u$ is the negative of the winding operator. In outline the proof follows that of the scalar case done previously by the authors. The main difficulty is making sense of the constructions with the scalars replaced by $\mathfrak{Z}$ in the von Neumann setting. The construction of the dual $\mathfrak{Z}$-trace on $\mathfrak{A} \rtimes \mathbf{R}$ requires the nontrivial development of a $\mathfrak{Z}$-Hilbert algebra theory. We show that certain of these Fredholm operators fiber as a "section" of Fredholm operators with scalar-valued index, and the centre-valued index fibers as a section of the scalar-valued indices.

Chapter 2 - Biography
from Biography and Life
By John Raeburn

Naturality and induced representations
Siegfried Echterhoff, S. Kaliszewski, John Quigg, Iain Raeburn
Journal: Bulletin of the Australian Mathematical Society / Volume 61 / Issue 3 / June 2000

We show that induction of covariant representations for C*-dynamical systems is natural in the sense that it gives a natural transformation between certain crossed-product functors. This involves setting up suitable categories of C*-algebras and dynamical systems, and extending the usual constructions of crossed products to define the appropriate functors. From this point of view, Green's Imprimitivity Theorem identifies the functors for which induction is a natural equivalence. Various special cases of these results have previously been obtained on an ad hoc basis.
Twisted crossed products by coactions
MSC 2010: Selfadjoint operator algebras
Journal: Journal of the Australian Mathematical Society / Volume 56 / Issue 3 / June 1994

We consider coactions of a locally compact group G on a C*-algebra A, and the associated crossed product C*-algebra $A \times G$. Given a normal subgroup N of G, we seek to decompose $A \times G$ as an iterated crossed product $(A \times G/N) \times N$, and introduce notions of twisted coaction and twisted crossed product which make this possible. We then prove a duality theorem for these twisted crossed products, and discuss how our results might be used, especially when N is abelian.

"Culture Morphology" and Cultural History in Berenice Abbott's Changing New York
John Raeburn
Journal: Prospects / Volume 9 / October 1984

Berenice Abbott's photographs of New York City in the 1930s, made under the aegis of the Federal Arts Project of the WPA, have never enjoyed the acclaim that the work of photographers for the Farm Security Administration (FSA) received from the 1930s onward, despite the fact that her work is at least the equal of theirs in both aesthetic and documentary interest. Her photographs have not exactly been neglected (she is dutifully mentioned in most histories of twentieth-century photography), but neither have they been seen as at least equally central to our understanding of the culture of 1930s America as the work of Rothstein, Mydans, Lee, and even Lange and Evans. Changing New York (1939), a collection of nearly 100 of her photographs taken between 1935 and 1938, is a major document of the Depression, one that has heretofore been slighted in evaluations of the decade's achievements.

Talking to children: the effects of rate, intonation, and length on children's sentence imitation
John D. Bonvillian, Vicki P. Raeburn, Elizabeth A. Horan
Journal: Journal of Child Language / Volume 6 / Issue 3 / October 1979

Twelve nursery school children (mean age = 3;9) attempted to imitate sentences which varied systematically in rate of presentation, intonation and length. The children more successfully imitated shorter sentences than longer ones, and sentences spoken at a rate nearer their own than sentences spoken at slower or faster rates. The imitation of long sentences read with normal intonation was superior to the imitation of long sentences read with flat intonation. Since adults frequently address children in short sentences, spoken slowly and with exaggerated intonation, these results indicate that these speech modifications may enhance the children's language comprehension.

Perturbations of AF-Algebras

Let A and B be C*-algebras acting on a Hilbert space H, and let

$\|A - B\| = \max\Big\{ \sup_{a \in A_1} d(a, B_1),\ \sup_{b \in B_1} d(b, A_1) \Big\},$

where $A_1$ is the unit ball in A and $d(a, B_1)$ denotes the distance of a from $B_1$. We shall consider the following problem: if $\|A - B\|$ is sufficiently small, does it follow that there is a unitary operator u such that uAu* = B? Such questions were first considered by Kadison and Kastler in [9], and have received considerable attention. In particular, in the case where A is an approximately finite-dimensional (or hyperfinite) von Neumann algebra, the question has an affirmative answer (cf. [3], [8], [12]). We shall show that in the case where A and B are approximately finite-dimensional C*-algebras (AF-algebras) the problem also has a positive answer.
Patch-use dynamics by a large herbivore
Dana P. Seidel & Mark S. Boyce

Abstract

An adaptation of optimal foraging theory suggests that herbivores deplete, depart, and finally return to foraging patches, leaving time for regrowth [van Moorter et al., Oikos 118:641–652, 2009]. Inter-patch movement and memory of patches then produce a periodic pattern of use that may define the bounds of a home range. The objective of this work was to evaluate the underlying movements within home ranges of elk (Cervus elaphus) according to the predictions of this theory. Using a spatial-temporal permutation scan statistic to identify foraging patches from GPS relocations of cow elk, we evaluated return patterns to foraging patches during the 2012 growing season. Subsequently, we used negative binomial regression to assess environmental characteristics that affect the frequency of returns, and thereby characterize the most successful patches. We found that elk return to known patches regularly over a season, on average after 15.4 (±5.4 SD) days. Patches in less-rugged terrain, farther from roads, and with high productivity were returned to most often when controlling for the time each patch was known to each elk. Instead of the diffusion processes often used to describe animal movement, our research demonstrates that elk make directed return movements to valuable foraging sites and, as support for Van Moorter et al.'s [Oikos 118:641–652, 2009] model, we submit that these movements could be an integral part of home-range development in wild ungulates.

Background

Home-range development and range-use dynamics are key components of foraging behaviour, with implications for animal movement, habitat selection, and fitness [1,2]. The home range often is defined to be the area known by the animal and remembered or maintained because of its value, presumably in resources required by the animal for survival and reproduction [1,3,4]. However, simulations of memory processes alone have failed to yield stable home ranges [5,6], and the biological mechanisms underlying the development and maintenance of home ranges in non-territorial animals are still missing. There is a growing body of literature on mechanistic home range models hypothesizing the underlying rules for movement or landscape structure that may define or result in the development of stable home ranges [5,7-10]. Compared to traditional techniques that describe home ranges, mechanistic models are more comprehensive attempts to unveil the processes that result in home-range behaviour. Because these models are based not only on the movements of animals but upon the underlying rules for movement, they have the ability to predict an individual's spatial use, not only describe it [9,11]. As such these models, when validated, are especially powerful tools for predicting responses to changes in habitat [9,12], whether from human land-use change or natural perturbations to the environment. Because of their potential predictive power, numerous mechanistic home range models have been developed recently. Unfortunately, these works focus primarily on the development of defended ranges or territories of central-place foragers [3,13,14], not the ranges of more diffuse foragers (e.g. most cervids) without a central place or a discrete and defended territory. In an attempt to address this gap, a model by Van Moorter et al. [5] simulates home-range development combining the rules of optimal foraging theory and a two-part memory system.
Foragers move between dynamically valued patches distributed across the landscape, removing food from a patch until depletion stimulates departure according to the marginal value theorem [15]. Their movement is biased by the utility of surrounding patches and by both short-term and long-term memory, which prevent backtracking over depleted patches while maintaining knowledge of successful patches and allowing time for forage regrowth prior to return. Seidel and Boyce [Seidel DP, Boyce MS: Varied tastes: home range implications of foraging patch selection, forthcoming] evaluated four formative assumptions of Van Moorter et al.'s model in two populations of elk in SW Alberta. Their work formed the first empirical support for this model, but they did not investigate the predicted movement patterns or returns to foraging sites. Although directed movements between areas of resource abundance where animals linger to forage have been demonstrated [16-18], few studies have shown returns or recursive movement patterns in ungulate populations, and none exhibit returns directly to identified foraging patches [19,20]. As such, our objective was to evaluate movement within home ranges according to the predictions of a proposed mechanistic home range model for foragers. We used a flexible space-time permutation scan statistic (STPSS) to identify and approximate the scale of discrete elk foraging patches in space and time. We first sought to establish whether and how frequently elk return to these patches. Secondly, our goal was to identify the characteristics of a patch that increased the likelihood of reuse. Connecting patch-return likelihood to attributes of these patches and the surrounding landscape lays the groundwork for understanding why and how animals use various areas within their home range, and allows us to evaluate the expectation that patches that are revisited should be of higher quality than other available patches.

Results

SaTScan clustering

Using the STPSS procedure, 815 clusters were identified over the summer season, with a total of 2,112 returns overall. Clusters with radii less than 15 m were removed (47, or 5.8%, of qualifying clusters), leaving 768 clusters for analysis. The average number of clusters identified in total each week was 54.86 (±8.24 SD) (minimum: 42, maximum: 63). An average of 109.7 (±8.36 SD) clusters per individual was identified over the 3-month season. The average radius of the analysed clusters was 92.4 m (±39.1 SD), and each cluster included an average of 2.63 (±1.21 SD) fixes. SaTScan output also provides the number of observed fixes within the cluster. This value is often larger (but never smaller) than the number of fixes included in the cluster, and represents the total number of fixes within the spatial boundaries of the cluster over the entire analysed temporal period, e.g. 7 days. The average number of observed fixes in each cluster was 2.77 (±1.47 SD), indicating that animals frequently revisited the cluster within the same week but not within the chosen temporal window.

Investigating returns

Our calculations suggested that across all animals, clusters were returned to an average of 2.75 (±2.37 SD) times over the 3-month season (including single-fix returns). Animals returned to each cluster after an average of 15.38 days (±5.39 SD), and the average rate of return (returns/time known) was 0.034 (±0.027 SD) returns per day (or 3.34 returns per patch over the study period). Some clusters (17.1%) did not experience a return foraging event.
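The per-cluster statistics reported here can be computed mechanically once each GPS fix carries a cluster label. A minimal pandas sketch; the file name, column names, and the one-day gap rule are illustrative assumptions about how such data might be organized, not the processing actually used in this study:

```python
import pandas as pd

# One row per GPS fix assigned to a cluster (hypothetical file).
fixes = pd.read_csv("clustered_fixes.csv", parse_dates=["timestamp"])

def count_returns(group, gap_days=1):
    """Visits to the same cluster separated by more than gap_days
    count as separate return events."""
    gaps = group["timestamp"].sort_values().diff().dt.days
    return int((gaps > gap_days).sum())

per_cluster = fixes.groupby(["animal_id", "cluster_id"]).apply(count_returns)
print("mean returns per cluster:", per_cluster.mean())       # cf. 2.75 above
print("share never revisited:", (per_cluster == 0).mean())   # cf. 17.1% above
```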
See Table 1 for additional summary statistics on returns.

Table 1. Summary statistics on returns for cow elk, summer 2012.

A high frequency of zeroes is often best explained by the length of time that the patch was known to the elk, which is especially evident in Livingstone animals. For example, because E144 moved to a new area of her home range just 3 weeks before the end of the sampling period, her late-season clusters had a much shorter period of time for revisitation and account for 69.6% of her non-returned patches over only 21% of the study period. This phenomenon is explored using a Kaplan-Meier curve, which demonstrates that until a patch has been known for about 20 days it has nearly a hundred percent chance of not being returned to, but after 100 days a return is a near certainty (Figure 1).

Figure 1. Kaplan-Meier curve examining the influence of TmKnown on cluster visits. TmKnown, the number of days between an individual's first visit to a patch and the end of the study period, has a noteworthy effect on the likelihood that an identified patch will be revisited. Revisited patches have, on average, been known for 85 days, suggesting that many clusters not returned to were potentially not known long enough to be returned to within the sampled season.

Returns were overdispersed (mean = 2.75, variance = 5.63), so a negative binomial distribution was examined for better fit. As expected, a fixed negative binomial model outperformed a fixed Poisson model by 14 AIC (Akaike Information Criterion) units and reduced the Pearson χ2 dispersion coefficient from 1.34 to 1.10. When mixed-effects models were estimated with Poisson and negative binomial families, fit was improved compared to the fixed models. Unexpectedly, the mixed-effects models differed by only 0.14 AIC units (mixed Poisson 2857.92, mixed NB 2857.78), but again the Pearson χ2 coefficient indicated less overdispersion with the negative binomial (1.21 versus 1.13, respectively).

To explain variation in the pattern of returns, we fit biologically plausible alternative models and identified the model with the smallest AIC (see Table 2).

Table 2. Candidate models and Akaike weights.

The best-fit model by AIC indicates that time known, ruggedness, distance to road, and productivity at the site most significantly influenced the likelihood of return across all patches. The AIC-selected model explained approximately 13% of the deviance when compared with the null model. The dispersion parameter for the top reported model was 34.716 (25.61 SE). The width of this standard error and the magnitude of the corrective parameter were large, but the parameter estimates were stable across mixed Poisson and negative binomial approaches, and the Pearson χ2 dispersion parameter, 1.13, indicates that the remaining 13% overcorrelation is within suitable bounds for use of the negative binomial distribution [21].

Higher relative productivity of a patch (NDVI) increased the likelihood of return (Table 3). Elk preferred to return to patches farther from roads, and the interaction parameter between herd and distance to road was included in the top model, indicating that the road effect on returns was magnified in the Waterton herd. Additionally, our model shows that Waterton animals return less often than Livingstone animals overall. The censorship parameter, TmKnown, proved to have the largest effect size, positively impacting return likelihood almost twice as much as any other variable. The longer a patch is known by, i.e. available to, an animal, the more likely it is to receive a return visit.
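A fixed-effects version of the count model described above can be sketched with standard tools. Note that this is only an approximation of the analysis: statsmodels' GLM takes the negative binomial dispersion parameter alpha as fixed rather than estimating it, the random effects are omitted, and the file and variable names are assumptions:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

patches = pd.read_csv("patch_returns.csv")  # hypothetical: one row per cluster

model = smf.glm(
    "returns ~ tm_known + ruggedness + dist_road + ndvi + herd + dist_road:herd",
    data=patches,
    family=sm.families.NegativeBinomial(alpha=1.0),  # alpha fixed, not estimated
).fit()
print(model.summary())
# Overdispersion check analogous to the Pearson chi-squared coefficient above:
print("Pearson chi2 / df:", model.pearson_chi2 / model.df_resid)
```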
Examination of TmKnown using a Kaplan-Meier survival curve further emphasizes its importance in the revisitation of patches. According to our data, patches known to an elk for less than 60 days have roughly only a 25% chance of being revisited; this chance doubles once patches have been known to the elk for at least 100 days (Figure 1).

Table 3 Coefficient estimates for the top model, Model M

Our results confirm that individual elk make repeated foraging visits to patches within a growing season. Furthermore, we demonstrate that distance from roads, landscape ruggedness, and green herbaceous productivity all contribute to increased returns to foraging patches, indicating that patch value influenced the likelihood of return, just as proposed by Van Moorter et al.'s [5] home-range model. Return behaviours have been shown before in wild ungulates, but to our knowledge this is the first empirical demonstration of recursive movements specifically to identified foraging sites. Wolf et al. [20] and Bar-David et al. [19] both identified recursion events to previously used or "known" locations related to resources or foraging behaviours, though neither estimated returns directly to identified foraging areas. By analysing patterns of return to specific locations of known use, we were able to explore how foraging selection might drive movement patterns. Differences across the return distributions of individuals and of herds were noted (Figure 2A and B), with Waterton animals returning less often overall. These distributions are likely influenced by subtle range shifts over the season and by individual movement behaviours. Larger home ranges lead to fewer returns and longer times between returns at individual patches [van Moorter B: unpublished manuscript]. This is logical: when there is more space to cover and more patches to visit, the time between returns will be longer, leading to fewer returns over a single season. Movement between (and thus return rates to) patches could be influenced by other environmental features, such as the ruggedness of the terrain or the overall extent of the home range, although we did not explore these explicitly in this analysis. We observed that Waterton cows expanded their home ranges over the course of the season but maintained returns to the entire area, even as it expanded late in the summer down into the aspen forests and wetlands on the east shore of Lower Waterton Lake, where bull elk typically concentrated their summer movements. Maintenance of larger home ranges may explain a portion of the reduced return likelihood of Waterton patches.

Figure 2. Distribution of return frequency to clusters by (A) individual and (B) herd across the summer season. Histograms depicting the frequency of returns to identified foraging patches are presented for each individual cow and for each herd cumulatively. These histograms demonstrate the wide variation present across individual and herd return frequencies, potentially influenced both by differences in habitat and by behaviour across the season.

Our top model demonstrates that, at the population level, TmKnown, ruggedness, productivity, distance to road, and the interactions between distance to road and herd and between TmKnown and productivity were the most influential covariates determining return counts at patches across the season. The importance of productivity in the return models supports the underlying thesis of Van Moorter et al.'s [5] model, which values patches based on the replenishment of resources.
As expected, our results demonstrate that productive patches are returned to more often than less productive patches. An attraction to productive forage is consistent with previous work demonstrating that elk migration often follows the start of spring photosynthetic activity, or green-up; as new growth extends into higher elevations over the summer, so do elk [22]. Forage research on elk also shows attraction to intermediate levels of biomass, which are often more digestible and productive than tall late-season stands, and forage abundance has been shown to encourage site fidelity over short time intervals in nonmigratory elk populations, supporting our finding that productivity may strongly influence returns [23-25]. Distance to the nearest road and its interaction with the Herd variable appeared in the top model, with Waterton animals being more sensitive to road proximity. Animals in national parks often seem undisturbed by roads, habituated to traffic and people, and attracted by the roadside vegetation and the protection from predators that roads and human settlements offer [26]; in other populations, however, especially those facing hunting pressure, roads and high traffic have been shown to alter movement [27,28]. From the perspective of foraging, human disturbance has been shown to increase vigilance, reducing time spent foraging, foraging efficiency, and intake [29-31] and, recently, to deter foraging patch selection in elk [Seidel DP, Boyce MS: Varied tastes: home range implications of foraging patch selection, forthcoming]. Our analysis demonstrates that disturbance might also affect whether an animal returns to patches over time. Inclusion of the TmKnown variable markedly improved the fit of our model to the data and emphasizes the temporal dynamics driving returns. TmKnown was the strongest indicator of return likelihood, with an effect size nearly twice that of any other predictor; this is a logical result, as patches visited earlier in the season have a longer period of time during which they can be returned to. The Kaplan-Meier estimation demonstrates clearly that patches must be known for roughly 20 days before attracting a return (Figure 1). Given the time needed for regrowth, revisits before 20 days would likely be disadvantageous, giving further support to the Van Moorter et al. model [5]. Additionally, this figure demonstrates that nearly all patches known for at least 115 days were revisited, with a sharp uptick in revisits once a patch had been known for 90 days or more. The exhibition of return behaviour overall indicates that animals are not avoiding previous locations and that previous use may increase subsequent use, just as demonstrated by Wolf et al. [20]. If this coefficient had been small or even negative, we would expect that animals were moving into novel environments rather than cycling back over the season, whether because of range drift, resource depletion, or predator avoidance. In future research, it would be useful to explicitly evaluate how the demonstrated increase in return probability over time compares with probabilities extracted from simple biased random walk models (i.e. walks biased toward a central location, considering both mono- and multi-nuclear models) or from more advanced multi-phasic movement models. Such a comparison of models, using empirical data for parameterization, could be very informative and offer a unique evaluation of currently proposed models for understanding the movement and space use of large mammals.
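As a gesture toward the comparison proposed above, a centrally biased random walk (the mono-nuclear case) is simple to simulate. The sketch below is purely illustrative, with arbitrary parameter values and no claim to match any published model.

```r
set.seed(1)

# Centrally biased random walk: each step mixes a random heading with
# attraction back toward a single home-range centre (mono-nuclear case).
n_steps <- 1000; bias <- 0.3; step_len <- 100  # arbitrary illustrative values
centre  <- c(0, 0)
pos     <- matrix(NA_real_, n_steps, 2); pos[1, ] <- c(500, 500)

for (t in 2:n_steps) {
  to_centre <- centre - pos[t - 1, ]
  home_dir  <- to_centre / sqrt(sum(to_centre^2))      # unit vector to centre
  theta     <- runif(1, 0, 2 * pi)                     # random heading
  step_dir  <- (1 - bias) * c(cos(theta), sin(theta)) + bias * home_dir
  pos[t, ]  <- pos[t - 1, ] + step_len * step_dir / sqrt(sum(step_dir^2))
}

# Count "returns" to a hypothetical patch, analogous to the empirical analysis
patch <- c(200, 200)
in_patch <- sqrt(rowSums((pos - matrix(patch, n_steps, 2, byrow = TRUE))^2)) < 150
sum(diff(in_patch) == 1)  # number of separate entries into the patch
```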
Traditionally, simple random-walk or diffusion models have been widely used to model animal movement and, depending on the time scale in question, can provide a realistic approximation of movement for many species [32]. Diffusion alone, however, does not result in emergent home-range behaviour: under a pure diffusion approach, the paths of an animal will eventually expand to fill any available extent. Diffusion models with an attraction vector toward a central place (e.g. a den or a nest) can result in circular, unimodal home ranges, but empirical observation shows that animals' real home ranges generally exhibit multimodal use with non-circular edges [32]. Mechanistic home range models have evolved in an attempt to identify and model the movement processes that can simulate emergent multimodal utilization distributions and realistic home-range boundaries (see [3,32] for further review of recent movement and home-range modelling). The Van Moorter et al. [5] model, which couples foraging with memory, provides a realistic model of intra-home-range movement in wild ungulates without requiring presupposition of home-range centres or a single attractive nucleus. Our field observations have demonstrated repeated movements among multiple nodes of attraction, which are indicative of memory processes and argue against simple diffusion or central-place models of ungulate home-range development. We have demonstrated that elk return to foraging patches repeatedly over the season. Return behaviour should be driven in part by patch value, and indeed, we show that productivity, terrain ruggedness, and proximity to roads all influenced the likelihood that elk would return to foraging patches. These results demonstrate that the Van Moorter et al. [5] model for home-range development appropriately characterizes key aspects of elk foraging and movement behaviour, and they further our understanding of within-home-range movement of free-ranging elk. Increased research into the mechanisms driving space use, together with empirical evaluation of theoretical home range models, will improve our understanding of the dynamic nature of animal space use and movements, especially in response to human land-use change.

Study area & animals

Elk in this study ranged freely within the montane ecosystem of SW Alberta. The study area is characterized by steep mountainous terrain to the west, abruptly transitioning in the east to rolling grasslands and agricultural land. Seven cow elk from two herds (Waterton and Livingstone) were included in these analyses. The three Waterton animals ranged within the boundaries of Waterton Lakes National Park and were predominantly associated with the Park's northwestern hills and the aspen forests and wetlands southeast of Lower Waterton Lake. Tourism to the national park during summer is a unique disturbance for animals in this herd. The four radiocollared Livingstone animals ranged on both sides of the Livingstone Range, an eastern ridge of the Rocky Mountains, where they encountered timber cut blocks of varying age and dense forests dominated by lodgepole pine (Pinus contorta) to the west, and rolling agricultural and range lands to the east.

To identify patches used for foraging, a retrospective space–time permutation scan statistic (STPSS) was applied to the relocation data for each individual elk using SaTScan® [33] to detect clusters. The scan statistic is defined by a moving cylindrical window with a base in geographic space and a height defined by time.
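To make the cylinder concrete, a minimal membership test might look as follows. This is entirely illustrative (SaTScan's internals are not exposed here, and all names are ours): a cylinder is a centre fix, a radius r in metres, and a day window of length h.

```r
# Illustrative only: does a fix (x, y, day) fall inside one space-time
# cylinder centred at (cx, cy) with radius r over days [d0, d0 + h]?
in_cylinder <- function(x, y, day, cx, cy, r, d0, h) {
  sqrt((x - cx)^2 + (y - cy)^2) <= r & day >= d0 & day <= d0 + h
}

# e.g. fixes within a 150 m radius over a 2-day window starting on day 10:
# idx <- in_cylinder(fixes$x, fixes$y, fixes$day, cx, cy, 150, 10, 2)
```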
Using this method, each relocation was considered to be the center of a possible cluster (containing a minimum of 2 fixes) across multiple spatial windows and at each available time window (i.e., over 1 day, 2 days, or 3 days). The analysis considers all relocations within a wide range of cylinders when evaluating for clusters: everything from tall poles (small spatial windows spanning many days) to wide, flat discs (large spatial windows during a single day) [34]. For detailed information on the probability function underlying this clustering method, see Kulldorff et al. [33]. Following the adaptations explained by Webb et al. [34] for using this method with GPS relocation data, we let $c_{zd}$ denote the number of locations at geographic coordinate z during day d, and defined C, the total number of observed GPS elk locations, as
$$ C = \sum_z \sum_d c_{zd} $$
On day d at location z, the expected number of GPS locations, $U_{zd}$, is
$$ U_{zd} = \frac{1}{C}\left(\sum_z c_{zd}\right)\left(\sum_d c_{zd}\right) $$
Because each relocation point in a GPS dataset is spatially unique, the number of GPS locations at a coordinate z summed across all days is one ($\sum_d c_{zd} = 1$) and, subsequently, $U_{zd} = \frac{1}{C}\sum_z c_{zd}$. The expected number of locations $U_A$ in a cylinder A is the summation of these expectations within that cylinder:
$$ U_A = \sum_{(z,d)\in A} U_{zd} $$
When there is no space–time interaction, $c_A$, the observed number of locations within the cylinder, follows a hypergeometric distribution with mean $U_A$ and probability function
$$ P(c_A) = \frac{\dbinom{\sum_{z\in A} c_{zd}}{c_A}\dbinom{C - \sum_{z\in A} c_{zd}}{\sum_{d\in A} c_{zd} - c_A}}{\dbinom{C}{\sum_{d\in A} c_{zd}}} $$
When both the number of geographic locations and the number of days within a cylinder are small compared with C, $c_A$ is approximately Poisson distributed with mean and variance $U_A$. As such, the evidence that a given cylinder contains a cluster can be measured by a Poisson generalized likelihood ratio. Elk forage most actively during crepuscular periods [35-37]; thus, to help ensure that the clustering identified patches used primarily for foraging rather than for other activities (e.g., grooming or bedding), data from the peak hours of day and night (10:00–14:00 and 22:00–02:00) were removed prior to clustering. In addition, all resulting clusters with a radius ≤ 15 m were removed because these likely represent GPS error on resting or bedded animals [23]. Three decision rules had to be set prior to running the scan statistic: the maximum spatial window, the maximum temporal window, and permission for geographic overlap of clusters. Frair et al. [23] used a first-passage time analysis, assessing how long an animal spends in an area of a given size, to identify the scales at which three separate movement processes (resting, foraging, and travelling) occurred in 2-h fix data. When foraging, female elk travelled an average of 265.7 m (42.5 m SD) between fixes; accounting for this previous work, and given the logistical constraints of our field sampling, a maximum diameter of 300 m was chosen as the upper spatial bound for analysis. The maximum number of sequential days evaluated for clusters of points, i.e., the maximum temporal window, was left broad, including up to 3 days of points.
Finally, within an individual scan (over the data of one elk for a single week), no geographic overlap was allowed between reported clusters, a constraint imposed to ensure that we captured unique patches in space.

Counting returns

After identifying the boundaries of foraging patches, we recorded all revisits by an elk to its known patches during the summer season. Patches were identified weekly from telemetry data for each animal and were aggregated from June–August 2012 for the return analyses. Returns to each patch were calculated for the entire duration of the summer season. Sampling began the first week of June to reduce the likelihood of including patches encountered on spring migration to the summer range, as these patches are unlikely to be used again within the season. For the purposes of our analysis, a return was defined as a series of 2 or more sequential fixes within 300 m of the cluster point, separated by more than 3 days (i.e., 36 fixes) from the previous visit. This mirrored the spatial rule used for defining clusters by the STPSS (maximum 300 m diameter) and imposed a temporal window that would help to ensure that animals left the general area and subsequently returned in a separate event. Elk often spend several days encamped in one area and then relocate to another, distant area of their home range [17]; we expected these rapid relocation events to occur within our 3-day buffer and to separate one series of cluster visits from another. Single-fix events meeting the spatial and temporal definition were denoted as "singles" but were not assumed to represent a foraging event. Biologically, we hypothesized that these single-fix events could represent exploratory visits to assess biomass regeneration in the presence of competing herbivores (e.g., cattle), but given their short duration they were not considered foraging returns in this analysis. To count returns to each patch, we first imposed the spatial boundary of the patch and then tallied return events. Distances between each relocation for an animal and each cluster for that animal over the study period were calculated using the Geospatial Modelling Environment (GME) [38]. In Program R [39], we identified the subset of fixes within 300 m of a cluster point. This subset contained all returns to the 300-m buffer, including the foraging event originally clustered, but at this point they are undifferentiated events (see Figure 3). To accurately count the number of returns to a site, we used the sequential fix numbers (adjusted for missing fixes) included in the subset table to isolate clusters in time. Using the diff function, the table was read so as to separate events of sequential fixes. In this way, nonsequential points outside the 3-day buffer represented the start points of events, which were isolated and tallied, separating single-fix events from multi-fix events, or returns. Based on this method, the number of returns to an area equals (number of events in the area) – 1, accounting for the originally clustered foraging event. A correction to the returns count was needed when the final record was a single-fix return: in this case, returns equal (number of events in the area) – 2, accounting for both the last single event and the original cluster point.

Figure 3. Example subset table for differentiating return events. This example patch received 2 returns and 1 single-fix event over the season. Note that a return can occur prior to the event clustered by the space-time permutation scan statistic.
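A compressed sketch of this event-splitting logic in R follows; the column names are hypothetical, and the fix numbers are assumed to be already adjusted for missing fixes.

```r
# Fixes within 300 m of one cluster point, ordered in time; 'fix_no' is the
# sequential fix number (2-h fixes, so a 3-day gap is > 36 fixes).
subset_fixes <- subset(all_fixes, dist_to_cluster <= 300)
subset_fixes <- subset_fixes[order(subset_fixes$fix_no), ]

gaps  <- diff(subset_fixes$fix_no) > 36   # TRUE marks the start of a new event
event <- cumsum(c(TRUE, gaps))            # event id for every fix
n_fix <- tabulate(event)                  # number of fixes per event

# Events with >= 2 fixes are candidate foraging visits; 1-fix events are
# "singles". Returns = (number of multi-fix events) - 1, subtracting the
# originally clustered foraging event.
returns <- sum(n_fix >= 2) - 1
singles <- sum(n_fix == 1)
```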
Return analysis

Using counts of returns to a patch as our response variable, we sought to model how environmental covariates might influence an elk's decision to return to patches later in the season, using an information-theoretic approach for model selection [40]. All covariates were standardized to mean = 0 and SD = 1 and, using mixed negative binomial regression through the glmmADMB package in Program R [39], we investigated which environmental covariates influenced the incidence of return counts at the 768 clusters. Our model set included 13 biologically relevant candidate models to explore the influence of environmental and anthropogenic factors on the number of times a patch was revisited (Table 2). Ungulates move to maximize forage intake and typically seek out areas of intermediate biomass with the highest quality and quantity of available forage plants [41]. As such, productivity and vegetation models were included to explain the variation in the number of returns to a patch. Model A tests the idea that returns are solely related to the relative productivity of the patch. Higher productivity is expected to shorten regrowth times and provide more available biomass over the season, potentially increasing the number of returns occurring over the time window by decreasing the number of days between returns. The normalized difference vegetation index (NDVI), an index of above-ground primary productivity, was compiled from images collected by MODIS remote-sensing satellites during May through September 2012, at a 250 m resolution every 16 days. The mean NDVI value of all clusters in each reporting period demonstrates the typical parabolic trend in productivity values over the summer (see Figure 4). Extracting the NDVI value at each cluster during peak productivity (early July) allowed us to include a covariate indicating the relative productivity of each patch during the summer season. Elk respond to physiogeographic features that determine where forage is most available (e.g. ruggedness, slope, elevation, aspect). Differences in elevation, slope, and aspect can create microclimates that affect localized productivity and available forage [23] and subsequently may affect elk movement [42].

Figure 4. Boxplot demonstrating mean NDVI and its variance throughout summer. MODIS satellites retrieve imagery from the study site every 16 days, twice each month. The six boxplots present the average and variance of normalized difference vegetation index (NDVI) values for each reporting period, calculated from the NDVI values reported at all identified clusters. The first reporting period of July (July1) has the highest mean and the lowest variance, making it the best choice for a parameter representing the relative productivity of each cluster. The higher variance early and late in the season is likely due to variation in the timing of snow melt, growth, and die-off along elevation and cover gradients, all of which influence NDVI values.

As a secondary variable influencing productivity, northness, or cos(Aspect), was used for interpretation of the circular variable aspect in models. Model B includes this second productivity-related parameter, Aspect, to assess how hillshade may play a role in return likelihood in addition to relative productivity (NDVI).
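The peak-productivity covariate could be assembled along these lines with the raster package (a common choice, though the text does not name the extraction toolchain; file and object names are illustrative).

```r
library(raster)  # or terra; the original toolchain is not specified

ndvi_july1 <- raster("MOD13Q1_2012_july1_ndvi.tif")  # hypothetical file name

# 'cluster_pts' is a SpatialPoints object holding the 768 cluster centres
clusters$NDVI <- extract(ndvi_july1, cluster_pts)

# Standardize covariates to mean = 0, SD = 1, as described above
clusters$NDVI_z <- as.numeric(scale(clusters$NDVI))
```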
In addition to seeking out forage, research has shown that elk movement can be driven by predator avoidance [23,42]. Remaining close to or within cover is an important predator-avoidance strategy for elk [29]. To evaluate the influence of cover on return frequency, CanopyClosure was extracted from a 2005 map created by the Foothills Research Institute [43]. This cover map is a composite of remotely sensed Landsat data with 30-m resolution on land cover and crown closure, as well as species composition and agricultural and regeneration masks. Model C includes Canopy, Aspect, and NDVI for a full vegetation model, accounting for the importance of cover for predator avoidance [29] and the attraction of productive forage [41]. Human disturbance from road networks potentially acts as a deterrent to returning elk. The road network described in a traffic model developed for our study area [44] was used to obtain estimates of the distance to the nearest road, DistRd, and the average summer daily traffic on the nearest road, Traffic. In Waterton Lakes National Park, high levels of tourist traffic push through the park's few roads daily. In Livingstone, the landscape contains small, seldom-travelled roads. Because of the large difference in road density and traffic between the two herds, a binary categorical covariate, Herd, was included in the models and allowed to interact with the Traffic and DistRd variables. The Herd variable specifies whether a patch occurs within the boundaries of the Livingstone or Waterton herd and was included to account for differences in the impact of roads and traffic on return likelihood across the two herds. Model D incorporates the effect of these human disturbances as well as the baseline productivity of a patch (NDVI) that we hypothesized attracted returns. The TmKnown covariate refers to the number of days elapsed between the first visit to a patch by an elk and the end of the sampling period at the end of August. This variable accounts for the increased likelihood of revisitation that some patches have over others in the dataset based simply on when they were first encountered in the season and the length of our sampling period. Additionally, this covariate has some simple biological relevance, accounting for animal learning and memory: the longer a patch is known to an animal, and the longer we monitored returns to it, the more returns that patch is likely to accrue. Model E added TmKnown to the baseline productivity model (Model A). Similarly, Model F added TmKnown to the complete vegetation model (Model C), and Model G considers TmKnown within the productivity and disturbance model (Model D). These function as direct comparisons for the effect of TmKnown on return likelihood. Interactions between TmKnown and the productivity parameters (NDVI and Aspect) were included to test for potential temporal variation in the attraction of patches; it is possible that patches might be returned to more or less over the time they are known, based on their productivity across the summer (Models H and I). Movement by elk is restricted by rugged terrain, and we hypothesized that returns would be more frequent at less-rugged patches because such patches likely require less energy to travel to and within [29,44,45]. Terrain ruggedness, Ruggedness, was included in models to reflect this predicted influence on movement [29]. Model J includes just Ruggedness and TmKnown, representing the hypothesis that returns are explained only by the accessibility of the patch and how long it has been known. In a model representing landscape terrain, Model K includes both the road networks and the Ruggedness of the terrain, as well as the TmKnown variable. Models L and M represent combinations of the terrain model with the productivity parameter (NDVI) and its interaction with TmKnown. Individual variation in return patterns was substantial (see Figure 2A), and ElkID was included as a random effect in all candidate models to account for this variation; a compact sketch of this candidate set follows below.
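The formulas here are reconstructed loosely from the verbal descriptions above, so the exact terms should be treated as approximations rather than the authors' specifications; fitting uses glmmADMB, as stated in the text.

```r
library(glmmADMB)

# A subset of candidate models A-M, reconstructed from the text; every model
# carries the ElkID random intercept. Only a few are shown for brevity.
forms <- list(
  A = returns ~ NDVI + (1 | ElkID),
  B = returns ~ NDVI + Aspect + (1 | ElkID),
  C = returns ~ NDVI + Aspect + Canopy + (1 | ElkID),
  D = returns ~ NDVI + DistRd * Herd + Traffic * Herd + (1 | ElkID),
  E = returns ~ NDVI + TmKnown + (1 | ElkID),
  J = returns ~ Ruggedness + TmKnown + (1 | ElkID),
  M = returns ~ Ruggedness + DistRd * Herd + NDVI * TmKnown + (1 | ElkID)
)

fits <- lapply(forms, glmmadmb, data = clusters, family = "nbinom")
sort(sapply(fits, AIC))  # rank the candidates by AIC, as in Table 2
```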
Although differences in terrain ruggedness were visually identifiable across herds, an interaction between ruggedness and herd was not expected to influence return frequency; that is, the return behaviour of Livingstone animals was not expected to be influenced differently by ruggedness than that of Waterton animals, despite the greater overall ruggedness of Livingstone terrain. Remaining observed differences between herds were attributed to the difference in tourism levels between the areas and to the individual variation accounted for by the random effect. Finally, the influence of the TmKnown parameter on return likelihood was examined using a Kaplan-Meier survival curve built with the survival package in Program R [39].
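A minimal sketch of this survival analysis follows, under our reading that a patch's first return is the event and never-returned patches are censored at TmKnown; the column names are illustrative.

```r
library(survival)

# days_to_return: days from first visit to first return (NA if none);
# tm_known: days from first visit to the end of the sampling period.
event <- !is.na(clusters$days_to_return)
time  <- ifelse(event, clusters$days_to_return, clusters$tm_known)

km <- survfit(Surv(time, event) ~ 1)
plot(km, xlab = "Days patch known",
     ylab = "Proportion of patches not yet revisited")
```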
References

1. Powell RA. Animal home ranges and territories and home range estimators. In: Boitani L, Fuller TK, editors. Research techniques in animal ecology: controversies and consequences. New York: Columbia University Press; 2000. p. 65–110.
2. Börger L, Franconi N, Ferretti F, Meschi F, De Michele G, Gantz A, et al. An integrated approach to identify spatiotemporal and individual-level determinants of animal home range size. Am Nat. 2006;168:471–85.
3. Börger L, Dalziel BD, Fryxell JM. Are there general mechanisms of animal home range behaviour? A review and prospects for future research. Ecol Lett. 2008;11:637–50.
4. Gautestad AO. Memory matters: influence from a cognitive map on animal space use. J Theor Biol. 2011;287:26–36.
5. van Moorter B, Visscher D, Benhamou S, Börger L, Boyce MS, Gaillard J. Memory keeps you at home: a mechanistic model for home range emergence. Oikos. 2009;118:641–52.
6. Gautestad AO, Mysterud I. The home range fractal: from random walk to memory-dependent space use. Ecol Complex. 2010;7:458–70.
7. Mitchell MS, Powell RA. A mechanistic home range model for optimal use of spatially distributed resources. Ecol Model. 2004;177:209–32.
8. Mitchell MS, Powell RA. Optimal use of resources structures home ranges and spatial distribution of black bears. Anim Behav. 2007;74:219–30.
9. Moorcroft P, Lewis MA. Mechanistic home range analysis. Princeton: Princeton University Press; 2006.
10. Nabe-Nielsen J, Tougaard J, Teilmann J, Lucke K, Forchhammer MC. How a simple adaptive foraging strategy can lead to emergent home ranges and increased food intake. Oikos. 2013;122:1307–16.
11. Mitchell MS, Powell RA. Estimated home ranges can misrepresent habitat relationships on patchy landscapes. Ecol Model. 2008;216:409–14.
12. Vanderwel MC, Malcolm JR, Caspersen JP. Using a data-constrained model of home range establishment to predict abundance in spatially heterogeneous habitats. PLoS One. 2012;7:e40599.
13. Potts JR, Harris S, Giuggioli L. Territorial dynamics and stable home range formation for central place foragers. PLoS One. 2012;7:e34033.
14. Giuggioli L, Potts JR, Harris S. Animal interactions and the emergence of territoriality. PLoS Comput Biol. 2011;7:e1002008.
15. Charnov EL. Optimal foraging, the marginal value theorem. Theor Popul Biol. 1976;9:129–36.
16. Johnson CJ, Parker KL, Heard DC, Gillingham MP. Movement parameters of ungulates and scale-specific responses to the environment. J Anim Ecol. 2002;71:225–35.
17. Fryxell JM, Hazell M, Börger L, Dalziel BD, Haydon DT, Morales JM, et al. Multiple movement modes by large herbivores at multiple spatiotemporal scales. Proc Natl Acad Sci. 2008;105:19114–9.
18. Owen-Smith N, Fryxell JM, Merrill EH. Foraging theory upscaled: the behavioural ecology of herbivore movement. Phil Trans Biol Sci. 2010;365:2267–78.
19. Bar-David S, Bar-David I, Cross PC, Ryan SJ, Knechtel CU, Getz WM. Methods for assessing movement path recursion with application to African buffalo in South Africa. Ecology. 2009;90:2467–79.
20. Wolf M, Frair J, Merrill E, Turchin P. The attraction of the known: the importance of spatial familiarity in habitat selection in wapiti. Ecography. 2009;32:401–10.
21. Hilbe JM. Negative binomial regression. 2nd ed. Cambridge: Cambridge University Press; 2011.
22. Boyce MS. Migratory behavior and management of elk (Cervus elaphus). Appl Anim Behav Sci. 1991;29:239–50.
23. Frair JL, Merrill EH, Visscher DR, Fortin D, Beyer HL, Morales JM. Scales of movement by elk (Cervus elaphus) in response to heterogeneity in forage resources and predation risk. Landsc Ecol. 2005;20:273–87.
24. Wilmshurst JF, Fryxell JM, Hudson RJ. Forage quality and patch choice by wapiti (Cervus elaphus). Behav Ecol. 1995;6:209–17.
25. van Beest FM, Vander Wal E, Stronen AV, Brook RK. Factors driving variation in movement rate and seasonality of sympatric ungulates. J Mammal. 2013;94:691–701.
26. Rogala JK, Hebblewhite M, Whittington J, White CA, Coleshill J, Musiani M. Human activity differentially redistributes large mammals in the Canadian Rockies national parks. Ecol Soc. 2011;16(3):16.
27. Rowland MM, Wisdom MJ, Johnson BK, Kie JG. Elk distribution and modeling in relation to roads. J Wildl Manag. 2000;64:672–84.
28. Frair JL, Merrill EH, Beyer HL, Morales JM. Thresholds in landscape connectivity and mortality risks in response to growing road networks. J Appl Ecol. 2008;45:1504–13.
29. Ciuti S, Northrup JM, Muhly TB, Simi S, Musiani M, Pitt JA, et al. Effects of humans on behaviour of wildlife exceed those of natural predators in a landscape of fear. PLoS One. 2012;7:e50611.
30. Frid A, Dill LM. Human-caused disturbance stimuli as a form of predation risk. Conserv Ecol. 2002;6:11.
31. Fortin D, Boyce MS, Merrill EH, Fryxell JM. Foraging costs of vigilance in large mammalian herbivores. Oikos. 2004;107:172–80.
32. Smouse PE, Focardi S, Moorcroft PR, Kie JG, Forester JD, Morales JM. Stochastic modelling of animal movement. Phil Trans Biol Sci. 2010;365:2201–11.
33. Kulldorff M, Heffernan R, Hartman J, Assunção R, Mostashari F. A space–time permutation scan statistic for disease outbreak detection. PLoS Med. 2005;2:e59.
34. Webb NF, Hebblewhite M, Merrill EH. Statistical methods for identifying wolf kill sites using global positioning system locations. J Wildl Manag. 2008;72:798–807.
35. Green RA, Bear GD. Seasonal cycles and daily activity patterns of Rocky Mountain elk. J Wildl Manag. 1990;54:272–9.
36. Ager AA, Johnson BK, Kern JW, Kie JG. Daily and seasonal movements and habitat use by female Rocky Mountain elk and mule deer. J Mammal. 2003;84:1076–88.
37. Ensing EP, Ciuti S, de Wijs FALM, Lentferink DH, Hoedt A, Boyce MS, et al. GPS based daily activity patterns in European red deer and North American elk (Cervus elaphus): indication for a weak circadian clock in ungulates. PLoS One. 2014;9:e106997.
38. Beyer HL. Geospatial Modelling Environment. 2012. http://www.spatialecology.com/gme.
39. R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2014. http://www.R-project.org.
40. Burnham KP, Anderson DR, Huyvaert KP. AIC model selection and multimodel inference in behavioral ecology: some background, observations, and comparisons. Behav Ecol Sociobiol. 2011;65:23–35.
41. Hebblewhite M, Merrill E, McDermid G. A multi-scale test of the forage maturation hypothesis in a partially migratory ungulate population. Ecol Monogr. 2008;78:141–66.
42. Fortin D, Beyer HL, Boyce MS, Smith DW, Duchesne T, Mao JS. Wolves influence elk movements: behavior shapes a trophic cascade in Yellowstone National Park. Ecology. 2005;86:1320–30.
43. McDermid GJ, Hall RJ, Sanchez-Azofeifa GA, Franklin SE, Stenhouse GB, Kobliuk T, et al. Remote sensing and forest inventory for wildlife habitat assessment. For Ecol Manage. 2009;257:2262–9.
44. Northrup JM, Pitt J, Muhly TB, Stenhouse GB, Musiani M, Boyce MS. Vehicle traffic shapes grizzly bear behaviour on a multiple-use landscape. J Appl Ecol. 2012;49:1159–67.
45. Mao JS, Boyce MS, Smith DW, Singer FJ, Vales DJ, Vore JM, et al. Habitat selection by elk before and after wolf reintroduction in Yellowstone National Park. J Wildl Manag. 2005;69:1691–707.

Acknowledgements

The authors would like to thank all funders and partners of this project. This project was funded in part by the University of Alberta, Shell Canada Ltd., NSERC CRD funds, Mountain Equipment Co-op, TD Friends of the Environment Fund, and Safari Club International – Northern Alberta Chapter, in partnership with Nature Alberta.

Author information

Department of Biological Sciences, University of Alberta, Edmonton, Alberta, T6G 2E9, Canada: Dana P Seidel and Mark S Boyce. Correspondence to Dana P Seidel. DPS participated in the conception and design of the study, carried out the field sampling and data collection, performed the statistical analysis, and drafted the manuscript. MSB participated in the design, conception, and supervision of the project and helped to draft the manuscript. Both authors read and approved the final manuscript prior to submission.

Citation: Seidel DP, Boyce MS. Patch-use dynamics by a large herbivore. Mov Ecol. 2015;3:7. https://doi.org/10.1186/s40462-015-0035-8

Keywords: Home-range development; Site fidelity; Foraging selection
CommonCrawl
MRS Online Proceedings Library Archive (95) Journal of Mechanics (38) European Astronomical Society Publications Series (4) European Journal of Anaesthesiology (4) Diamond Light Source Proceedings (3) Journal of Applied Probability (3) Oryx (1) Fauna & Flora International - Oryx (1) Ryan Test (1) International Hydrology Series (1) Tracking the mental health of a nation: prevalence and correlates of mental disorders in the second Singapore mental health study M. Subramaniam, E. Abdin, J. A. Vaingankar, S. Shafie, B. Y. Chua, R. Sambasivam, Y. J. Zhang, S. Shahwan, S. Chang, H. C. Chua, S. Verma, L. James, K. W. Kwok, D. Heng, S. A. Chong Journal: Epidemiology and Psychiatric Sciences , First View Published online by Cambridge University Press: 05 April 2019, pp. 1-10 The second Singapore Mental Health Study (SMHS) – a nationwide, cross-sectional, epidemiological survey - was initiated in 2016 with the intent of tracking the state of mental health of the general population in Singapore. The study employed the same methodology as the first survey initiated in 2010. The SMHS 2016 aimed to (i) establish the 12-month and lifetime prevalence and correlates of major depressive disorder (MDD), dysthymia, bipolar disorder, generalised anxiety disorder (GAD), obsessive compulsive disorder (OCD) and alcohol use disorder (AUD) (which included alcohol abuse and dependence) and (ii) compare the prevalence of these disorders with reference to data from the SMHS 2010. Door-to-door household surveys were conducted with adult Singapore residents aged 18 years and above from 2016 to 2018 (n = 6126) which yielded a response rate of 69.0%. The subjects were randomly selected using a disproportionate stratified sampling method and assessed using World Health Organization Composite International Diagnostic Interview version 3.0 (WHO-CIDI 3.0). The diagnoses of lifetime and 12-month selected mental disorders including MDD, dysthymia, bipolar disorder, GAD, OCD, and AUD (alcohol abuse and alcohol dependence), were based on the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria. The lifetime prevalence of at least one mood, anxiety or alcohol use disorder was 13.9% in the adult population. MDD had the highest lifetime prevalence (6.3%) followed by alcohol abuse (4.1%). The 12-month prevalence of any DSM-IV mental disorders was 6.5%. OCD had the highest 12-month prevalence (2.9%) followed by MDD (2.3%). Lifetime and 12-month prevalence of mental disorders assessed in SMHS 2016 (13.8% and 6.4%) was significantly higher than that in SMHS 2010 (12.0% and 4.4%). A significant increase was observed in the prevalence of lifetime GAD (0.9% to 1.6%) and alcohol abuse (3.1% to 4.1%). The 12-month prevalence of GAD (0.8% vs. 0.4%) and OCD (2.9% vs. 1.1%) was significantly higher in SMHS 2016 as compared to SMHS 2010. The high prevalence of OCD and the increase across the two surveys needs to be tackled at a population level both in terms of creating awareness of the disorder and the need for early treatment. Youth emerge as a vulnerable group who are more likely to be associated with mental disorders and thus targeted interventions in this group with a focus on youth friendly and accessible care centres may lead to earlier detection and treatment of mental disorders. Refined stability thresholds for localized spot patterns for the Brusselator model in $\mathbb{R}^2$ Y. CHANG, J. C. TZOU, M. J. WARD, J. C. 
WEI Journal: European Journal of Applied Mathematics / Volume 30 / Issue 4 / August 2019 In the singular perturbation limit ε → 0, we analyse the linear stability of multi-spot patterns on a bounded 2-D domain, with Neumann boundary conditions, as well as periodic patterns of spots centred at the lattice points of a Bravais lattice in $\mathbb{R}^2$ , for the Brusselator reaction–diffusion model $$ \begin{equation*} v_t = \epsilon^2 \Delta v + \epsilon^2 - v + fu v^2 \,, \qquad \tau u_t = D \Delta u + \frac{1}{\epsilon^2}\left(v - u v^2\right) \,, \end{equation*} $$ where the parameters satisfy 0 < f < 1, τ > 0 and D > 0. A previous leading-order linear stability theory characterizing the onset of spot amplitude instabilities for the parameter regime D = ${\mathcal O}$ (ν−1), where ν = −1/log ϵ, based on a rigorous analysis of a non-local eigenvalue problem (NLEP), predicts that zero-eigenvalue crossings are degenerate. To unfold this degeneracy, the conventional leading-order-in-ν NLEP linear stability theory for spot amplitude instabilities is extended to one higher order in the logarithmic gauge ν. For a multi-spot pattern on a finite domain under a certain symmetry condition on the spot configuration, or for a periodic pattern of spots centred at the lattice points of a Bravais lattice in $\mathbb{R}^2$ , our extended NLEP theory provides explicit and improved analytical predictions for the critical value of the inhibitor diffusivity D at which a competition instability, due to a zero-eigenvalue crossing, will occur. Finally, when D is below the competition stability threshold, a different extension of conventional NLEP theory is used to determine an explicit scaling law, with anomalous dependence on ϵ, for the Hopf bifurcation threshold value of τ that characterizes temporal oscillations in the spot amplitudes. 2135 Impact of primary care physician gatekeeping on medication prescriptions for atrial fibrillation Andrew Y. Chang, Mariam Askari, Jun Fan, Paul A. Heidenreich, P. Michael Ho, Kenneth W. Mahaffey, Alexander C. Perino, Mintu P. Turakhia Journal: Journal of Clinical and Translational Science / Volume 2 / Issue S1 / June 2018 OBJECTIVES/SPECIFIC AIMS: Atrial fibrillation (AF) is the most commonly encountered arrhythmia in clinical practice, and has widely varying treatments for stroke prevention and rhythm management. Some of these therapies are increasingly being prescribed by primary care physicians (PCPs). We therefore sought to investigate if healthcare plans with PCP gatekeeping for access to specialists are associated with different pharmacologic treatment strategies for the disease. In particular, we focused on oral anticoagulants (OACs), non-vitamin K-dependent oral anticoagulants (NOACs), rate control, and rhythm control medications. METHODS/STUDY POPULATION: We examined a commercial pharmaceutical claims database (Truven Marketscan™) to compare the prescription frequency of OAC, rate control, and rhythm control medications used to treat AF between patients with PCP-gated health plans (where the PCP is the gatekeeper to specialist referral—i.e., HMO, EPO, POS) and patients with non-PCP-gatekeeper health plans (i.e., comprehensive, PPO, CHDP, HDHP). 
To control for potential confounders, we also used multivariable logistic regression models to calculate adjusted odds ratios which accounted for age, sex, region, Charlson comorbidity index, CHADS2Vasc score, hypertension, diabetes, stroke/transient ischemic attack, prior myocardial infarction, peripheral artery disease, and antiplatelet medication use. We also calculated median time to therapy to determine if there was a difference in time to new prescription of these medications. RESULTS/ANTICIPATED RESULTS: We found only small differences between patients in PCP-gated and non-PCP-gated plans regarding prescription proportion of anticoagulants at 90 days following new AF diagnosis (OAC 44.2% vs. 42%, p<0.01; warfarin 39.1% vs. 37.1%, p<0.01; NOAC 5.9% vs. 6.0%, p=0.64). We observed similar trends for rate control agents (76.4% vs. 73.4%, p<0.01) and rhythm control agents (24.4% vs. 24.6%, p=0.83). We found similar odds of OAC prescription at 90 days following new AF diagnosis between patients in PCP-gated and non-PCP-gated plans (adjusted OR for PCP-gated plans relative to non-gated plans: OAC 1.006, p=0.84; warfarin 1.054, p=0.08; NOAC 0.815, p=0.001; dabigatran 0.833, p=0.004; and rivaroxaban 0.181, p=0.02). We observed similar trends for rate control agents (1.166, p<0.0001) and rhythm control agents (0.927, p=0.03). Elapsed time until receipt of medication was similar between PCP-gated and non-gated groups [OAC 4±14 days (interquartile range) vs. 5±16 days, p<0.0001; warfarin 4±14 vs. 5±14, p<0.0001; NOAC 7±26 vs. 6±23, p=0.2937; rhythm control 13±35 vs. 13±34, p=0.8661; rate control 10±25 vs. 11±30, p<0.0001]. DISCUSSION/SIGNIFICANCE OF IMPACT: We found that plans with PCP gatekeeping to specialist referrals were not associated with clinically meaningful differences in prescription rates or delays in time to prescription of oral anticoagulation, rate control, and rhythm control drug therapy. In some cases, PCP gatekeeping plans had very small but statistically significant lower odds of being prescribed NOACs. These findings suggest that PCP gatekeeping does not appear to be a major structural barrier in receipt of medications for AF, although non-PCP-gated plans may vary slightly favor facilitating the prescription of NOACs. Our findings that overall OAC prescriptions did not differ by PCP gating status may suggest completion of the rapid dissemination and uptake phase for most AF treatments. The small but statistically significant odds ratios favoring the non-PCP-gated populations in NOACs further suggests that in this newer drug group, the process is ongoing, with more specialists representing early adopters. Interestingly, the low primary care odds ratio of rivaroxaban use, relative to dabigatran, may be indicative of a gradient of uptake of later-generation NOACs, although interpretability is limited by the small number of patients in the rivaroxaban group. Subjective wellbeing, suicide and socioeconomic factors: an ecological analysis in Hong Kong C.-Y. Hsu, S.-S. Chang, P. S. F. Yip Journal: Epidemiology and Psychiatric Sciences / Volume 28 / Issue 1 / February 2019 Aims. There has recently been an increased interest in mental health indicators for the monitoring of population wellbeing, which is among the targets of Sustainable Development Goals adopted by the United Nations. Levels of subjective wellbeing and suicide rates have been proposed as indicators of population mental health, but prior research is limited. 
Data on individual happiness and life satisfaction were sourced from a population-based survey in Hong Kong (2011). Suicide data were extracted from Coroner's Court files (2005–2013). Area characteristic variables included local poverty rate and four factors derived from a factor analysis of 21 variables extracted from the 2011 census. The associations between mean happiness and life satisfaction scores and suicide rates were assessed using Pearson correlation coefficient at two area levels: 18 districts and 30 quantiles of large street blocks (LSBs; n = 1620). LSB is a small area unit with a higher level of within-unit homogeneity compared with districts. Partial correlations were used to control for area characteristics. Happiness and life satisfaction demonstrated weak inverse associations with suicide rate at the district level (r = −0.32 and −0.36, respectively) but very strong associations at the LSB quantile level (r = −0.83 and −0.84, respectively). There were generally very weak or weak negative correlations across sex/age groups at the district level but generally moderate to strong correlations at the LSB quantile level. The associations were markedly attenuated or became null after controlling for area characteristics. Subjective wellbeing is strongly associated with suicide at a small area level; socioeconomic factors can largely explain this association. Socioeconomic factors could play an important role in determining the wellbeing of the population, and this could inform policies aimed at enhancing population wellbeing. Follow Up of GW170817 and Its Electromagnetic Counterpart by Australian-Led Observing Programmes Gravitational Wave Astronomy I. Andreoni, K. Ackley, J. Cooke, A. Acharyya, J. R. Allison, G. E. Anderson, M. C. B. Ashley, D. Baade, M. Bailes, K. Bannister, A. Beardsley, M. S. Bessell, F. Bian, P. A. Bland, M. Boer, T. Booler, A. Brandeker, I. S. Brown, D. A. H. Buckley, S.-W. Chang, D. M. Coward, S. Crawford, H. Crisp, B. Crosse, A. Cucchiara, M. Cupák, J. S. de Gois, A. Deller, H. A. R. Devillepoix, D. Dobie, E. Elmer, D. Emrich, W. Farah, T. J. Farrell, T. Franzen, B. M. Gaensler, D. K. Galloway, B. Gendre, T. Giblin, A. Goobar, J. Green, P. J. Hancock, B. A. D. Hartig, E. J. Howell, L. Horsley, A. Hotan, R. M. Howie, L. Hu, Y. Hu, C. W. James, S. Johnston, M. Johnston-Hollitt, D. L. Kaplan, M. Kasliwal, E. F. Keane, D. Kenney, A. Klotz, R. Lau, R. Laugier, E. Lenc, X. Li, E. Liang, C. Lidman, L. C. Luvaul, C. Lynch, B. Ma, D. Macpherson, J. Mao, D. E. McClelland, C. McCully, A. Möller, M. F. Morales, D. Morris, T. Murphy, K. Noysena, C. A. Onken, N. B. Orange, S. Osłowski, D. Pallot, J. Paxman, S. B. Potter, T. Pritchard, W. Raja, R. Ridden-Harper, E. Romero-Colmenero, E. M. Sadler, E. K. Sansom, R. A. Scalzo, B. P. Schmidt, S. M. Scott, N. Seghouani, Z. Shang, R. M. Shannon, L. Shao, M. M. Shara, R. Sharp, M. Sokolowski, J. Sollerman, J. Staff, K. Steele, T. Sun, N. B. Suntzeff, C. Tao, S. Tingay, M. C. Towner, P. Thierry, C. Trott, B. E. Tucker, P. Väisänen, V. Venkatraman Krishnan, M. Walker, L. Wang, X. Wang, R. Wayth, M. Whiting, A. Williams, T. Williams, C. Wolf, C. Wu, X. Wu, J. Yang, X. Yuan, H. Zhang, J. Zhou, H. 
Zovaro Journal: Publications of the Astronomical Society of Australia / Volume 34 / 2017 Published online by Cambridge University Press: 20 December 2017, e069 The discovery of the first electromagnetic counterpart to a gravitational wave signal has generated follow-up observations by over 50 facilities world-wide, ushering in the new era of multi-messenger astronomy. In this paper, we present follow-up observations of the gravitational wave event GW170817 and its electromagnetic counterpart SSS17a/DLT17ck (IAU label AT2017gfo) by 14 Australian telescopes and partner observatories as part of Australian-based and Australian-led research programs. We report early- to late-time multi-wavelength observations, including optical imaging and spectroscopy, mid-infrared imaging, radio imaging, and searches for fast radio bursts. Our optical spectra reveal that the transient source emission cooled from approximately 6 400 K to 2 100 K over a 7-d period and produced no significant optical emission lines. The spectral profiles, cooling rate, and photometric light curves are consistent with the expected outburst and subsequent processes of a binary neutron star merger. Star formation in the host galaxy probably ceased at least a Gyr ago, although there is evidence for a galaxy merger. Binary pulsars with short (100 Myr) decay times are therefore unlikely progenitors, but pulsars like PSR B1534+12 with its 2.7 Gyr coalescence time could produce such a merger. The displacement (~2.2 kpc) of the binary star system from the centre of the main galaxy is not unusual for stars in the host galaxy or stars originating in the merging galaxy, and therefore any constraints on the kick velocity imparted to the progenitor are poor. Impact of an intervention programme on knowledge, attitudes and practices of population regarding severe fever with thrombocytopenia syndrome in endemic areas of Lu'an, China Y. LYU, C.-Y. HU, L. SUN, W. QIN, P.-P. XU, J. SUN, J.-Y. HU, Y. YANG, F.-L. LI, H.-W. CHANG, X.-D. LI, S.-Y. XIE, K.-C. LI, X.-X. HUANG, F. DING, X.-J. ZHANG Journal: Epidemiology & Infection / Volume 146 / Issue 1 / January 2018 Knowledge, attitudes and practices (KAP) of the population regarding severe fever with thrombocytopenia syndrome (SFTS) in endemic areas of Lu'an in China were assessed before and after an intervention programme. The pre-intervention phase was conducted using a sample of 425 participants from the 12 selected villages with the highest rates of endemic SFTS infection. A predesigned interview questionnaire was used to assess KAP. Subsequently, an intervention programme was designed and applied in the selected villages. KAP was re-assessed for each population in the selected villages using the same interview questionnaire. Following 2 months of the programme, 339 participants had completed the re-assessed survey. The impact of the intervention programme was evaluated using suitable statistical methods. A significant increase in the KAP and total KAP scores was noted following the intervention programme, whereas the proportion of correct knowledge, the positive attitudes and the effective practices toward SFTS of respondents increased significantly. The intervention programme was effective in improving KAP level of SFTS in populations that were resident in endemic areas. Lattice Boltzmann Simulations of Cavity Flows on Graphic Processing Unit with Memory Management P. Y. Hong, L. M. Huang, C. Y. Chang, C. A. 
Lin Journal: Journal of Mechanics / Volume 33 / Issue 6 / December 2017 Lattice Boltzmann method (LBM) is adopted to compute two and three-dimensional lid driven cavity flows to examine the influence of memory management on the computational performance using Graphics Processing Unit (GPU). Both single and multi-relaxation time LBM are adopted. The computations are conducted on nVIDIA GeForce Titan, Tesla C2050 and GeForce GTX 560Ti. The performance using global memory deteriorates greatly when multi relaxation time (MRT) LBM is used, which is due to the scheme requesting more information from the global memory than its single relaxation time (SRT) LBM counterpart. On the other hand, adopting on chip memory the difference using MRT and SRT is not significant. Also, performance of LBM streaming procedure using offset reading surpasses offset writing ranging from 50% to 100% and this applies to both SRT and MRT LBM. Finally, comparisons using different GPU platforms indicate that Titan as expected outperforms other devices, and attains 227 and 193 speedup over its Intel Core i7-990 CPU counterpart and four times faster than GTX 560Ti and Tesla C2050 for three dimensional cavity flow simulations respectively with single and double precisions. Boundary Element Calculation of Near-Boundary Solutions in 3D Generally Anisotropic Solids by the Self-Regularization Scheme Y. C. Shiah, L. D. Chang This research targets investigation of the internal elastic field near the boundaries of 3D anisotropic bodies by the boundary element method (BEM). This analysis appears to be most important, especially for the interest in the internal solutions near corners or crack tips, where structural failure is most likely to occur. Although the BEM is very efficient for analyzing the problem, nearly singular integration will arise when the internal point of interest stays near the boundary. The present work is to show how the self-regularized boundary integral equation (BIE) can be applied to the interior analysis for 3D generally anisotropic bodies. So far, to the authors' best knowledge, no implementation work has been reported in the literature for successfully treating the near boundary solutions in 3D generally anisotropic solids. In the end, a few benchmark examples are presented to demonstrate the veracity of the present approach for the interior BEM analysis of 3D anisotropic elasticity. A New Micro-Hydrodynamic Herringbone Bearing Using Slant Groove Depth Arrangements for Performance Enhancement Y. T. Lee, A. S. Yang, Y. H. Juan, C. S. Liu, Y. H. Chang Journal: Journal of Mechanics / Volume 33 / Issue 5 / October 2017 This study presents a new groove profile using the slant groove depth arrangements to enhance the performance of micro-HGJBs. The computational analysis was based on the steady-state three-dimensional conservation equations of mass and momentum in conjunction with the cavitation model to examine the complex lubricated flow field. The simulated results of load capacity and circumferential pressure distribution of lubricant film are in good agreement with the measurement data and the predictions cited in the literature. Numerical experiments were extended to determine the pressure distribution, load capacity, radial stiffness and friction torque by varying the slant ratio of groove depth, eccentricity ratio, rotational speed and attitude angle. The cavitation extent of lubricant film was also studied for different slant groove patterns. 
Visualizing the Spatial Distribution of Ripples in Graphene with Low-Energy Electron Diffractive Imaging I.-S. Hwang, W.-H. Hsu, W.-T. Chang, C.-Y. Lin, T. Latychevskaia Journal: Microscopy and Microanalysis / Volume 23 / Issue S1 / July 2017 Transformation of excess mortality in people with schizophrenia and bipolar disorder in Taiwan Y.-J. Pan, L.-L. Yeh, H.-Y. Chan, C.-K. Chang Given the concerns regarding the adverse health outcomes associated with weight gain and metabolic syndrome in relation to use of second-generation antipsychotics (SGAs), we aimed in this study to explore whether the increase in the use of SGAs would have any impacts on the trend of excess mortality in people with schizophrenia and bipolar disorder (BPD). Two nationwide samples of individuals with schizophrenia and BPD were identified in Taiwan's National Health Insurance Research Database in 2003 and in 2008, respectively. Age- and gender-standardized mortality ratios (SMRs) were calculated for each of the 3-year observation periods. The SMRs were compared between the calendar year cohorts, by disease group, and by causes of death. The mortality gap for people with schizophrenia decreased slightly, revealing an SMR of 3.40 (95% CI 3.30–3.50) for the 2003 cohort and 3.14 (3.06–3.23) for the 2008 cohort. The mortality gap for BPD individuals remained relatively stable with only those aged 15–44 years having an SMR rising significantly from 7.04 (6.38–7.76) to 9.10 (8.44–9.79). Additionally, in this group of BPD patients aged 15–44 years, the natural-cause-SMR increased from 5.65 (4.93–6.44) to 7.16 (6.46–7.91). Compared with the general population, the gap in the excess mortality for people with schizophrenia reduced slightly. However, the over 200% difference between the cohorts in the excess mortality for BPD individuals aged 15–44 years could be a warning sign. Future research to further examine the related factors underlying those changes is warranted. Determination of snow-covered area in different land covers in central Alaska, U.S.A., from aircraft data — April 1995 Dorothy K. Hall, James L. Foster, Alfred T. C. Chang, Carl S. Benson, Janet Y. L. Chien During April 1995, a field and aircraft experiment was conducted in central Alaska in support of the Moderate Resolution Imaging Spectroradiometer (MODIS) snow-mapping project. The MODIS Airborne Simulator (MAS), a 50 channel spectroradiometer, was flown on board the NASA ER-2 aircraft. An objective of the mission was to determine the accuracy of mapping snow in different surface covers using an algorithm designed to map global snow cover after the launch of MODIS in 1998. The surface cover in this area of central Alaska is typically spruce, birch, aspen, mixed forest and muskeg. Integrated reflectance, R i was calculated from the visible/near-infrared channels of the MAS sensor. The R i was used to estimate different vegetation-cover densities because there is an inverse relationship between vegetation-cover density and albedo in snow-covered terrain. A vegetation-cover density map was constructed using MAS data acquired on 13 April 1995 over central Alaska. In the part of the scene that was mapped as having a vegetation-cover density of < 50%, the snow-mapping algorithm mapped 96.41% snow cover. These areas are generally composed of muskeg and mixed forests and include frozen lake. In the part of the scene that was estimated to have a vegetation-cover density of ≥50%, the snow-mapping algorithm mapped 71.23% snow cover. 
These areas are generally composed of dense coniferous or deciduous forests. Overall, the accuracy of the snow-mapping algorithm is > 87.41% for the 13 April MAS scene with a variety of surface covers (coniferous, deciduous and mixed forests, muskeg, tundra and frozen lakes). A Compact Low-Cost Camera Module with Modified Magnetic Restoring Force C. S. Liu, B. J. Tsai, Y. H. Chang Journal: Journal of Mechanics / Volume 33 / Issue 4 / August 2017 In recent years, compact camera modules (CCMs) have been widely used in consumer electrical and electronic products. Low-cost CCMs are especially necessary for portable devices. Therefore, the present group recently developed a low-cost, miniaturized, open-loop-controlled camera module for cellphone applications, in which the Lorentz force is balanced by a magnetic restoring force. To enhance the performance of the camera module, this article reports a pattern structure that modifies the linearity of the magnetic restoring force. We fabricated a CCM prototype with dimensions of 8.5 mm × 8.5 mm × 5 mm and demonstrated the usefulness of the pattern structure with this prototype. Its potential applications are foreseeable in portable devices such as cellphones, web cameras, personal digital assistants and other commercial electronics. Insight, self-stigma and psychosocial outcomes in schizophrenia: a structural equation modelling approach Y.-J. Lien, H.-A. Chang, Y.-C. Kao, N.-S. Tzeng, C.-W. Lu, C.-H. Loh Journal: Epidemiology and Psychiatric Sciences / Volume 27 / Issue 2 / April 2018 Poor insight is prevalent in patients with schizophrenia and has been associated with acute illness severity, medication non-adherence and poor treatment outcomes. Paradoxically, high insight has been associated with various undesirable outcomes, including low self-esteem, depression and low subjective quality of life (QoL) in patients with schizophrenia. Despite the growing body of studies conducted in Western countries supporting the pernicious effects of improved insight in psychosis, which depend on the level of self-stigma, the effects are unclear in non-Western societies. The current study examined the role of self-stigma in the relationship between insight and psychosocial outcomes in a Chinese population. A total of 170 outpatients with schizophrenia spectrum disorders were recruited from two general university hospitals. Sociodemographic data and clinical variables were recorded, and self-report scales were employed to measure self-stigma, depression, insight, self-esteem and subjective QoL. Structural equation modelling (SEM) was used to analyse the cross-sectional data. High levels of self-stigma were reported by 39% of the participants (n = 67). The influences of insight, self-stigma, self-esteem and depression on subjective QoL were confirmed by the SEM results. Our model with the closest fit to the data (χ² = 33.28; df = 20; p = 0.03; χ²/df = 1.66; CFI = 0.98; TLI = 0.97; RMSEA = 0.06) demonstrated that self-stigma might fully mediate the association of insight with low self-esteem, depression and poor subjective QoL. High insight into illness contributed to self-stigma, which caused low self-esteem and depression and, consequently, low QoL. Notably, insight did not directly affect self-esteem, depression or QoL. Furthermore, the association of insight with poor psychosocial outcomes was not moderated by self-stigma.
Our findings support the mediating model of insight relevant to the poor psychosocial outcomes of individuals diagnosed with schizophrenia in non-Western societies, in which self-stigma plays a pivotal role. These findings elucidate the direct and indirect effects of insight on psychosocial outcomes and imply that identifying and correcting self-stigma in people with schizophrenia could be beneficial. Additional studies are required to identify whether several other neurocognitive or psychosocial variables mediate or moderate the association of insight with self-esteem, depression and QoL in patients with schizophrenia. Studies with detailed longitudinal assessments are necessary to confirm our findings. Numerical Assessment of Rectangular Straight/Reverse Mufflers at High-Order-Modes Y. C. Chang, D. Y. Lin, H. C. Cheng, M. C. Chiu Journal: Journal of Mechanics / Volume 34 / Issue 3 / June 2018 There has been widespread use of plane wave theory in muffler design in industry. However, this has led to an underestimation of acoustical performance at higher frequencies. To overcome this drawback, the finite element and boundary element methods have been developed; nevertheless, the time consumed in calculating the noise level is unacceptable. Moreover, considering the acoustical effect and the space-constrained situations common in industry, a compact design of reverse mufflers that may improve acoustical efficiency is then proposed. In this paper, a numerical assessment of rectangular mufflers hybridized with straight/reverse chambers using an eigenfunction approach, a four-pole matrix, and a genetic algorithm under limited space is developed. Before the optimization is performed, an accuracy check of the mathematical models for the muffler is carried out. Results reveal that the noise reduction increases as the number of chambers increases. In addition, the acoustical performance of the mufflers is inversely proportional to the diameter of the inlet/outlet tubes. Also, the transmission loss (TL) of the mufflers improves when more target tones are used in the objective function. Consequently, a successful approach to searching for optimally shaped rectangular straight/reverse mufflers using an eigenfunction and a genetic algorithm within a constrained space has been demonstrated. Relationship of amotivation to neurocognition, self-efficacy and functioning in first-episode psychosis: a structural equation modeling approach W. C. Chang, V. W. Y. Kwong, C. L. M. Hui, S. K. W. Chan, E. H. M. Lee, E. Y. H. Chen Journal: Psychological Medicine / Volume 47 / Issue 4 / March 2017 Better understanding of the complex interplay among key determinants of functional outcome is crucial to promoting recovery in psychotic disorders. However, this is understudied in the early course of illness. We aimed to examine the relationships among negative symptoms, neurocognition, general self-efficacy and global functioning in first-episode psychosis (FEP) patients using structural equation modeling (SEM). Three hundred and twenty-one Chinese patients aged 26–55 years presenting with FEP to an early intervention program in Hong Kong were recruited. Assessments encompassing symptom profiles, functioning, perceived general self-efficacy and a battery of neurocognitive tests were conducted. Negative symptom measurement was subdivided into amotivation and diminished expression (DE) domain scores based on ratings in the Scale for the Assessment of Negative Symptoms.
An initial SEM model showed no significant association between functioning and DE, which was therefore removed from further analysis. A final trimmed model yielded very good model fit (χ² = 15.48, p = 0.63; comparative fit index = 1.00; root mean square error of approximation < 0.001) and demonstrated that amotivation, neurocognition and general self-efficacy had a direct effect on global functioning. Amotivation was also found to mediate a significant indirect effect of neurocognition and general self-efficacy on functioning. Neurocognition was not significantly related to general self-efficacy. Our results indicate a critical intermediary role of amotivation in linking neurocognitive impairment to functioning in FEP. General self-efficacy may represent a promising treatment target for the improvement of motivational deficits and functional outcome in the early illness stage. Shared atypical brain anatomy and intrinsic functional architecture in male youth with autism spectrum disorder and their unaffected brothers H.-Y. Lin, W.-Y. I. Tseng, M.-C. Lai, Y.-T. Chang, S. S.-F. Gau Autism spectrum disorder (ASD) is a highly heritable neurodevelopmental disorder, yet the search for definite genetic etiologies remains elusive. Delineating ASD endophenotypes can boost the statistical power to identify the genetic etiologies and pathophysiology of ASD. We aimed to test for endophenotypes of neuroanatomy and associated intrinsic functional connectivity (iFC) by contrasting male youth with ASD, their unaffected brothers and typically developing (TD) males. The 94 participants (aged 9–19 years) – 20 male youth with ASD, 20 unaffected brothers and 54 TD males – received clinical assessments and undertook structural and resting-state functional magnetic resonance imaging scans. Voxel-based morphometry was performed to obtain regional gray and white matter volumes. A seed-based approach, with seeds defined by the regions demonstrating atypical neuroanatomy shared by youth with ASD and unaffected brothers, was implemented to derive iFC. General linear models were used to compare brain structures and iFC among the three groups. Familiality was assessed by permutation tests for the variance of the within-family pair difference. We found that atypical gray matter volume in the mid-cingulate cortex was shared between male youth with ASD and their unaffected brothers as compared with TD males. Moreover, reduced iFC between the mid-cingulate cortex and the right inferior frontal gyrus, and increased iFC between the mid-cingulate cortex and the bilateral middle occipital gyrus, were shared features of male ASD youth and unaffected brothers. Atypical neuroanatomy and iFC surrounding the mid-cingulate cortex may be a potential endophenotypic marker for ASD in males. Comparative risk of self-harm hospitalization amongst depressive disorder patients using different antidepressants: a population-based cohort study in Taiwan C.-S. Wu, S.-C. Liao, Y.-T. Tsai, S.-S. Chang, H.-J. Tsai Journal: Psychological Medicine / Volume 47 / Issue 1 / January 2017 The aim of the study was to evaluate the comparative risk of self-harm associated with the use of different antidepressants. A cohort study was conducted using data from Taiwan's National Health Insurance Research Database from 2001 to 2012. A total of 751 606 new antidepressant users with depressive disorders were included.
The study outcome was hospitalization due to self-harm (International Classification of Diseases, Ninth Revision, Clinical Modification codes: E950–E958 and E980–E988). Cox proportional hazards models with stratification by propensity score deciles were used to estimate the hazard ratios of self-harm hospitalization during the first year following the initiation of antidepressant treatment. There were 1038 hospitalization episodes due to self-harm during the follow-up of 149 796 person-years, an overall incidence rate of 6.9 [95% confidence interval (CI) 6.5–7.4] per 1000. Compared with fluoxetine, the risk of self-harm hospitalization was higher for maprotiline [adjusted hazard ratio (aHR) = 3.00, 95% CI 1.40–6.45], milnacipran (aHR = 2.34, 95% CI 1.24–4.43) and mirtazapine (aHR = 1.40, 95% CI 1.06–1.86), and lower for bupropion (aHR = 0.51, 95% CI 0.30–0.86); a similar level of risk was found for the other selective serotonin reuptake inhibitors (citalopram, escitalopram, fluvoxamine, paroxetine and sertraline). The risk of self-harm may vary across different antidepressant drugs. It would be of importance to conduct further research to investigate the influence of antidepressant use on self-harm behaviors. AMS Dating on the Shell Bar Section from Qaidam Basin, NE Tibetan Plateau, China H C Zhang, H F Fan, F Q Chang, W X Zhang, G L Lei, M S Yang, Y B Lei, L Q Yang Radiocarbon dating by accelerator mass spectrometry (AMS) of the shell bar section of Qaidam Basin, NE Tibetan Plateau, shows that this section was formed between ~39.7 and ~17.5 14C kyr BP and represents the highest paleolake development period since the Late Pleistocene. It was difficult to obtain reliable dates due to the low organic carbon content, which was formed mainly by autochthonous algae and bacteria (Zhang et al. 2007a). In order to improve the dating, 14C ages of both the alkali-residual and acid-soluble components of the organic carbon were measured to check the consistency of the dating results. Total organic carbon (TOC) content and stable carbon isotopes (δ13C_org) might also be used as critical references for checking the reliability of dates. For example, in our study of the shell bar section from Qaidam Basin, we found that when the TOC content was higher than 0.15% and/or δ13C_org was lower than −23, the AMS dates were reliable. AMS dating of fossil shells demonstrated that they could provide valuable age information. The ages given by fossil shells are comparable to those of bulk carbonate from a similar sampling site, and are about 15–18 kyr older than the ages given by organic matter. Due to the U/Th dating requirements and the open nature of the system, we concluded that U/Th dating results are unreliable and that this technique is unsuitable for dating halite deposits from Qaidam Basin. Visual working memory deterioration preceding relapse in psychosis C. L. M. Hui, Y. K. Li, A. W. Y. Li, E. H. M. Lee, W. C. Chang, S. K. W. Chan, S. Y. Lam, A. E. Thornton, P. Sham, W. G. Honer, E. Y. H. Chen Journal: Psychological Medicine / Volume 46 / Issue 11 / August 2016 Relapse is distressingly common after the first episode of psychosis, yet it is poorly understood and difficult to predict. Investigating changes in cognitive function preceding relapse may provide new insights into the underlying mechanism of relapse in psychosis. We hypothesized that relapse in fully remitted first-episode psychosis patients was preceded by working memory deterioration.
Visual memory and verbal working memory were monitored prospectively in a 1-year randomized controlled trial of remitted first-episode psychosis patients assigned to medication continuation (quetiapine 400 mg/day) or discontinuation (placebo). Relapse (recurrence of positive symptoms of psychosis), visual (Visual Patterns Test) and verbal (Letter–Number span test) working memory, and stressful life events were assessed monthly. Remitted first-episode patients (n = 102) participated in the study. Relapsers (n = 53) and non-relapsers (n = 49) had similar baseline demographic and clinical profiles. Logistic regression analyses indicated that relapse was associated with visual working memory deterioration 2 months before relapse [odds ratio (OR) 3.07, 95% confidence interval (CI) 1.19–7.92, P = 0.02], more stressful life events 1 month before relapse (OR 2.11, 95% CI 1.20–3.72, P = 0.01) and medication discontinuation (OR 5.52, 95% CI 2.08–14.62, P = 0.001). Visual working memory deterioration beginning 2 months before relapse in remitted first-episode psychosis patients (not a baseline predictor) may reflect early brain dysfunction that heralds a psychotic relapse. The deterioration was found to be unrelated to a worsening of psychotic symptoms preceding relapse. Testable predictors offer insight into the brain processes underlying relapse in psychosis.
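The relapse study above reports its predictors as odds ratios with 95% confidence intervals from logistic regression. As a hedged illustration of that style of analysis, here is a minimal statsmodels sketch on simulated data; the variable names (wm_drop_2mo and so on) and coefficients are ours, not the study's.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 102  # same cohort size as the study, data entirely simulated
df = pd.DataFrame({
    "wm_drop_2mo": rng.integers(0, 2, n),    # visual WM deterioration 2 months prior
    "life_events_1mo": rng.poisson(1.0, n),  # stressful life events 1 month prior
    "placebo": rng.integers(0, 2, n),        # medication discontinuation arm
})
# Simulate an outcome with made-up effect sizes so the fit has something to find
logit = -1.0 + 1.1 * df["wm_drop_2mo"] + 0.7 * df["life_events_1mo"] + 1.7 * df["placebo"]
df["relapse"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["wm_drop_2mo", "life_events_1mo", "placebo"]])
fit = sm.Logit(df["relapse"], X).fit(disp=0)
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals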
Undernutrition and its associated factors among school adolescent girls in Abuna Gindeberet district, Central Ethiopia: a cross-sectional study Segni Mulugeta Tafasa1, Meseret Robi Tura2, Ermiyas Mulu1 & Zenebu Begna1 Adolescents are the population aged 10–19 years. They undergo rapid growth and development and are one of the nutritionally at-risk groups needing attention. Adolescent undernutrition is a worldwide problem. Even though this stage brings a second window of opportunity to break the intergenerational cycle of undernutrition, little is known about it specifically in the study area. This study was conducted to assess the prevalence of undernutrition and its associated factors among school adolescent girls in Abuna Gindeberet district, Central Ethiopia, 2021. An institution-based cross-sectional quantitative study was conducted in Abuna Gindeberet district among adolescent girls aged 10–19 years attending primary and secondary schools from January 1–30, 2021. A systematic random sampling technique was used to select 587 adolescent girls. Data were collected using an interviewer-administered structured questionnaire and anthropometric measurements. Data were coded, entered into Epi-info version 7.2.2.6 and exported to SPSS version 25 and WHO AnthroPlus for analysis. Logistic regression analysis was done to identify predictors of undernutrition. The level of statistical significance was declared at a p-value < 0.05. The overall magnitudes of stunting and thinness were 15.4% [95% CI (12–18)] and 14.2% [95% CI (11–17)] respectively. Number of meals per day [AOR = 3.62, 95% CI (2.16, 6.05)], being in the lower grades [AOR = 2.08, 95% CI (1.07, 4.04)] and not having begun menstruation [AOR = 1.71, 95% CI (1.06, 2.73)] were significantly associated with stunting. Engagement in vigorous-intensity activities [AOR = 2.51, 95% CI (1.14, 5.54)], poor dietary diversity score [AOR = 4.05, 95% CI (1.43, 11.46)] and adolescent age [AOR = 3.77, 95% CI (1.06, 13.37)] were significantly associated with thinness among adolescent girls. Adolescent girls' undernutrition is a public health problem in the study area. The number of meals per day, being in the lower grades and not having begun menstruation were significantly associated with stunting, while engagement in vigorous-intensity activities, poor dietary diversity score and adolescent age were significantly associated with thinness. Therefore, the government and other stakeholders should focus on these identified factors to improve the nutritional status of adolescent girls. Adolescence is the period of gradual transition from childhood to adulthood among those aged 10–19 years; it normally begins with the onset of signs of puberty, is characterized by important psychological and social changes, not only physiological ones, and is classified into early adolescence (10–14 years) and late adolescence (15–19 years) [1,2,3]. Globally, 1.8 billion adolescents account for 16% of the world's population, and nearly 90% of them live in low- and middle-income countries [4]. According to the 2016 Ethiopian Demographic and Health Survey (EDHS), adolescents comprise 24% of the country's population [5]. Nutrient requirements increase in adolescence to meet the demands of pubertal growth. It is the age at which growth and development are fastest, second only to infancy.
Adolescents gain up to 50% of their adult weight, more than 20% of their adult height, and 50% of their adult skeletal mass during these ages [6]. In addition, different factors affect the dietary habits and behaviors of adolescents, including brain development and understanding of matters that might affect health, as well as the broader familial, socio-cultural, and economic environment in which the adolescent lives, eats, studies, works and plays [7,8,9]. Malnutrition during adolescence manifests in three broad groups of conditions: undernutrition (wasting, stunting or chronic undernutrition, and thinness or underweight); micronutrient deficiency or excess (inadequate or excessive intake of vitamins or minerals); and overweight or obesity [10]. Undernutrition occurs when people do not eat or absorb enough nutrients to cover their needs for energy and growth, or to sustain a healthy immune system [11]. According to different reports, adolescent undernutrition is a problem in low- and middle-income countries (LMIC). A global school-based study conducted in fifty-seven LMIC among adolescents aged 12–15 years reported an overall prevalence of 10.2% stunting and 5.5% thinness among boys and girls; within this, 36.5% stunting and 25.1% thinness were reported among girls from Myanmar and Sri Lanka respectively [12]. Similarly, a study conducted in 40 LMIC among adolescent girls reported a prevalence of thinness of 7.63%, with the highest rate in Asia [13]. Some studies in LMIC estimate stunting in girls aged 15–19 years ranging from 52% in Guatemala and 44% in Bangladesh to 8% in Kenya and 6% in Brazil [14]. In Ethiopia, different studies reported high levels of stunting, ranging from 12.2% to 33.2%, and of thinness, ranging from 8.82% to 32.2%, among adolescent girls [15,16,17,18,19,20,21,22,23,24]. Poor nutritional status during any stage of adolescence can have consequences for cognitive development, resulting in decreased learning ability, poor concentration and impaired school performance, and can retard growth and sexual maturation; especially in females, it is associated with poor reproductive health outcomes [25, 26]. Undernutrition affects a country's economy directly, as a loss of productivity due to poor physical condition, and indirectly, because of poor cognitive function and learning abilities [27]. Worldwide, 16 million girls aged 15–19 years give birth every year [28]. The poor nutritional status of these adolescent girls has an effect that passes through generations: an undernourished adolescent girl enters pregnancy with poor nutrient stores and gives birth to a low-birth-weight, intrauterine-growth-restricted baby that is more vulnerable to metabolic disorders in adult life [25]. Chronic undernutrition among adolescents is commonly associated with poverty, poor maternal health and nutrition, frequent illness, or improper infant and young child feeding and care in early life [1, 29]. Studies done in different parts of the world identified sociodemographic, dietary practice, environmental, and puberty-related characteristics as major factors associated with adolescent undernutrition [15, 16, 20, 21, 30, 31]. Despite this, most nutrition programs in Ethiopia have focused on early childhood and maternal nutrition, neglecting adolescent girls.
Moreover, focusing on adolescents' nutrition will help to improve the nutritional status of adolescents themselves and of future generations, strengthen national economies, and break the intergenerational transmission of undernutrition [32]. Even though the nutritional status of adolescents is of paramount importance, little was known specifically in the study area. Therefore, this study aimed to provide information on undernutrition and its associated factors among school adolescent girls in Abuna Gindeberet district, Central Ethiopia. Study setting and period The study was conducted in the governmental schools of Abuna Gindeberet district, located in the West Shoa zone, Oromia regional state, 192 km west of Addis Ababa, the capital city of Ethiopia, and 134 km north of Ambo town, the zonal capital. There are eleven grade 1–4 schools, forty grade 1–8 schools, seven high schools (grades 9–12), and two private grade 1–4 schools. The total number of adolescent girls in Abuna Gindeberet district schools was 19,150. The study was conducted from January 1–30, 2021. Study design and population An institution-based cross-sectional quantitative study design was used. All adolescent girls (10–19 years) attending schools in Abuna Gindeberet district were the source population, and all randomly selected adolescent girls from randomly selected schools were the study population. All girls aged 10–19 years in the schools of Abuna Gindeberet district were included; self-reported pregnant and lactating girls were excluded. Sample size determination and sampling technique Sample size determination The sample size was calculated using single and double population proportion formulas for the first and second objectives respectively. For the first objective, the proportions of undernutrition among adolescent girls from a previous study in the Hawzen district schools (thinness = 32.2%, stunting = 33.2%) (42) were considered, with a 95% CI and a 4% margin of error. The prevalence of stunting (33.2%), which yields the larger of the two sample sizes, was used: $$n=\frac{(z_{\alpha/2})^{2}\,pq}{d^{2}}=\frac{(1.96)^{2}\times 0.332\times (1-0.332)}{(0.04)^{2}}=533$$ where $z_{\alpha/2}$ is the critical value of the normal distribution at the 95% confidence level, which equals 1.96; p is the prevalence of stunting from the study in Hawzen district schools; q = 1 − p; and d is the margin of error. Adding a 10% non-response rate, the sample size for the first objective became 587. For the second objective, assumptions of 80% power and a 95% CI were used, with Epi Info StatCalc, for the most significant variables from previous studies: 1. Grade level (42): prevalence of thinness among grades 4–8 = 16.7%, among grades 9–12 = 15.5%, AOR = 2.95; calculated sample size 440, or 484 with a 10% non-response rate. 2. DDS (23): prevalence of thinness among those with low DDS = 15.6%, among those with adequate DDS = 5.7%, AOR = 2.1; calculated sample size 342, or 377 with a 10% non-response rate. Of the calculated sample sizes, that for the first objective was the largest, 587. Sampling technique and sampling procedures First, schools were stratified into primary schools (grades 4–8) and secondary schools (grades 9–12). Twelve of the forty primary schools and three of the seven secondary schools were selected by lottery method.
The numbers of adolescent girls were obtained from the respective school directors' offices. From the registers, girls aged 10–19 years were screened before actual data collection. The sample size was allocated to each selected school by probability proportional to its total number of adolescent girls. Then the constant sampling interval k was determined by dividing the total study population (N) by the required sample size (n): k = N/n = 6101/587 ≈ 10. A random start from 1 to 10 was then selected by lottery method; the random start 4 was drawn, and every 10th adolescent girl was selected from the school registers until the required sample size was fulfilled (Fig. 2). Operational and term definitions Undernutrition is regarded as the presence of stunting and/or thinness. Thinness is defined as BAZ < −2 SD [33]. Stunting is defined as HAZ < −2 SD [33]. Dietary Diversity Score is the sum of food groups eaten by an adolescent girl over the last 24 h [34]. Meal pattern: the measure of whether they consumed their meals regularly or sometimes skipped them. Primary school: in this study, institutional schools that have students in grades 4–8. Secondary school: institutional schools that have students in grades 9–12. Data collection tool and procedure Data were collected by face-to-face interview using a pre-tested structured questionnaire adapted from previous studies and from guidelines prepared by the Food and Agriculture Organization (FAO), together with anthropometric measurements [21, 30, 34, 35]. Adolescent dietary diversity was measured by a qualitative recall of all foods consumed by each adolescent girl during the previous 24 h, using validated tools prepared by the FAO. It is a dichotomous indicator of whether or not ≥ 4 of 9 food groups were eaten in the last 24 h, categorized as a poor dietary diversity score (< 4 food groups) or a good dietary diversity score (≥ 4 food groups). The nine food groups considered were starchy staples; dark green leafy vegetables; vitamin A-rich fruits and vegetables; other fruits and vegetables; organ meat; flesh meat; eggs; legumes and nuts; and milk and milk products [34]. Physical activity levels of adolescent girls were assessed using the WHO STEPwise approach to non-communicable disease risk factor surveillance (STEPS), which contained four parts: work-related activities, travel to and from places, recreational activities, and sedentary behavior [36]. Anthropometric measurements After training, data collectors recorded height and weight using a portable non-stretchable plastic height-measuring board with a sliding head bar, following standard procedures, and portable digital scales respectively. For height measurement, subjects were asked to stand erect with their shoulders level, hands at their sides, and thighs and heels comfortably together; the buttocks, scapulae, heels, and head were positioned in contact with the vertical backboard with the sliding head bar. Height was then measured to the nearest 0.1 cm. For weight measurement, adolescent girls were asked to remove their shoes and wear light clothes (school uniforms); trained data collectors then weighed the subjects on a calibrated portable digital scale and recorded the value to the nearest 0.1 kg. Each measurement was standardized and the scale calibrated by careful handling, placing the weight scale on a flat surface, and confirming a zero reading before every measurement.
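Several of the quantitative steps described in the Methods above (the single-population-proportion sample size, the systematic sampling interval, the dietary diversity cutoff, and the z-score classification applied after the WHO AnthroPlus conversion) reduce to a few lines of code. The sketch below is illustrative only: the function names are ours, the numbers are those quoted in the text, and the −3 SD "severe" cutoff is the conventional WHO one rather than one stated by the authors.

import math

def single_proportion_n(p, d, z=1.96, nonresponse=0.10):
    """n = z^2 p (1 - p) / d^2, then inflate for expected non-response."""
    n = (z ** 2) * p * (1 - p) / (d ** 2)
    return math.ceil(n), math.ceil(n * (1 + nonresponse))

print(single_proportion_n(0.332, 0.04))    # (533, 587), matching the text

# Systematic sampling: interval k = N // n, lottery-drawn random start of 4
N, n_req, start = 6101, 587, 4
k = N // n_req                              # 10
selected_register_rows = range(start, N + 1, k)

# Individual dietary diversity: count of the nine FAO food groups eaten in 24 h
FOOD_GROUPS = ("starchy staples", "dark green leafy vegetables",
               "vitamin A-rich fruits and vegetables", "other fruits and vegetables",
               "organ meat", "flesh meat", "eggs", "legumes and nuts",
               "milk and milk products")

def dds(consumed):
    score = sum(g in consumed for g in FOOD_GROUPS)
    return score, ("good" if score >= 4 else "poor")

# Classification from the WHO 2007 z-scores (HAZ, BAZ) produced by AnthroPlus
def classify(haz, baz):
    return {"stunted": haz < -2, "severely_stunted": haz < -3,   # -3 SD assumed
            "thin": baz < -2, "severely_thin": baz < -3}

print(dds({"starchy staples", "eggs", "legumes and nuts"}))  # (3, 'poor')
print(classify(haz=-2.4, baz=-1.1))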
Data were collected using pretested, structured questionnaires. A one-day training was given to both data collectors and supervisors. Language experts translated the questionnaire into Afan Oromo and then back into English to check consistency. A pretest was performed before actual data collection on 5% of the sample size in the neighboring district of Gindeberet. Height and weight were taken twice to minimize the intra- and inter-observer variability of the data collectors, and the relative technical error of measurement (TEM) was calculated. The accepted relative technical errors of measurement for intra-observer and inter-observer variability were less than 1.5% and less than 2% respectively. The proper functioning of the digital weight scales was checked before every weight measurement, and the data collectors made sure the scale read exactly zero before taking weight. Every questionnaire was checked for completeness and consistency before data entry. Then, data were coded, entered into Epi-info version 7.2.2.6 and exported to SPSS version 25 for analysis. Anthropometric data and other essential variables were exported to the WHO AnthroPlus software, a computer program that converts anthropometric data into Z-scores of the indices BAZ and HAZ using the WHO 2007 population references. Descriptive statistics such as frequencies, proportions, means and standard deviations were used to describe characteristics of the study population. Normality of continuous variables was checked, and the data were normally distributed. The presence of multicollinearity between independent variables was checked using the variance inflation factor (VIF); no variable with a multicollinearity problem was identified. Bivariable and multivariable logistic regression analyses were carried out to identify predictors of undernutrition among adolescent girls. Model fitness was checked by the Hosmer–Lemeshow goodness-of-fit test. Odds ratios with 95% confidence intervals were used to assess the strength of association between each independent variable and the outcome variable. The level of statistical significance was declared at a p-value < 0.05. Ethical consideration After a briefing on the purposes of the study, the Research Ethical Committee (REC) of the Ambo University College of Medicine and Health Sciences ethically approved it. Upon approval, a letter of permission was obtained from the college. For participants less than 18 years, consent was obtained from their parents and assent from the students; for participants aged 18 years or older, informed consent was obtained from the students themselves. Confidentiality and privacy of the information were maintained. Socio-demographic characteristics of study participants Five hundred eighty-three adolescent girls were included in the study, a response rate of 99.3%. The respondents' ages ranged from 10 to 19 years, with a mean age of 14.62 (± 2.38 SD) years. Around 307 (52.7%) participants were in the age range of 10 to 14 years, and 389 (66.7%) and 194 (33.3%) of the adolescent girls attended primary and secondary education respectively (Table 1). Table 1 Socio-demographic characteristics of study participants in Abuna Gindeberet district, Central Ethiopia, 2021 (n = 583) Environmental related factors Regarding the source of drinking water, 269 (46.1%), 132 (22.6%), and 182 (31.3%) were using water from a pipe, a protected well and an unprotected well respectively.
Of the respondents, 428 (73.4%) had a functional toilet in their compound, and 222 (38.1%) of the study subjects did not wash their hands after visiting the toilet. 235 (40.3%) of the adolescent girls had information on adolescent nutrition; the sources of information were schools (125, 21.4%), health extension workers (HEW; 62, 10.7%), and mass media (48, 8.2%) (Fig. 1 and Table 2). Fig. 1 Source of drinking water among school adolescent girls in Abuna Gindeberet district, Central Ethiopia, 2021 (n = 583) Table 2 Environmental related characteristics of study participants in Abuna Gindeberet district, Central Ethiopia, 2021 (n = 583) Dietary practice related factors The mean dietary diversity score of respondents was 4.78 (± 1.22 SD). Among the participants, 503 (86.3%) and 80 (13.7%) had good and poor dietary diversity scores respectively. The most common staple foods among study subjects were teff (378, 64.8%), sorghum (93, 16%), maize (79, 13.5%), and wheat (33, 5.7%). Around 292 (50.1%) of the participants had eaten three or more meals per day (Tables 3 and 4). Table 3 Food groups eaten in the last 24 h among school adolescent girls in Abuna Gindeberet district, Central Ethiopia, 2021 (n = 583) Table 4 Food frequency and meal pattern of school adolescent girls in Abuna Gindeberet district, Central Ethiopia, 2021 (n = 583) Menstrual, illness history, and lifestyle related factors Out of 583 respondents, 400 (68.6%) had started menstruation. The mean age at menarche was 13.71 (± 0.978 SD) years. About 153 (26.2%) of the participants had a history of illness in the two weeks before data collection. Regarding the level of physical activity, 222 (38.1%) and 324 (55.6%) of the adolescent girls were engaged in vigorous- and moderate-intensity activities respectively (Table 5). Table 5 Menstrual, illness history and lifestyle related characteristics among school adolescent girls in Abuna Gindeberet district, Central Ethiopia, 2021 (n = 583) Magnitude of undernutrition The overall magnitudes of stunting and thinness were 15.4% [95% CI = (12–18)] and 14.2% [95% CI = (11–17)] among school adolescent girls in Abuna Gindeberet district. Seven (1.2%) of the adolescent girls had both stunting and thinness, and 22 (3.8%) and 15 (2.6%) had severe stunting and severe thinness respectively (Fig. 2). Fig. 2 Magnitude of undernutrition among school adolescent girls in Abuna Gindeberet district, Central Ethiopia, 2021 (n = 583) Factors associated with stunting among school adolescent girls Bivariable and multivariable logistic regression analyses were carried out to identify predictors of stunting. On multivariable analysis, the number of meals per day, the grade level attended and menstruation status were significantly associated with stunting. This study showed that adolescent girls who ate fewer than three meals per day were 3.62 times more likely to develop stunting than their counterparts [AOR = 3.62, 95% CI (2.16, 6.05)]. Adolescent girls in the lower grades (grades 4–8) were about 2 times more likely to develop stunting than adolescent girls in the higher grades (grades 9–12) [AOR = 2.08, 95% CI (1.07, 4.04)]. Adolescent girls who had not begun menstruation were 1.71 times more likely to develop stunting than those who had started menstruation [AOR = 1.71, 95% CI (1.06, 2.73)] (Table 6).
Table 6 Bivariable and multivariable results of factors associated with stunting among school adolescent girls in Abuna Gindeberet district, Central Ethiopia, 2021 Factors associated with thinness among school adolescent girls On multivariable analysis, level of activity, poor dietary diversity score and age were significantly associated with thinness among adolescent girls. This study showed that adolescent girls who engaged in vigorous-intensity activities were 2.51 times more likely to develop thinness than their counterparts [AOR = 2.51, 95% CI = (1.14, 5.54)]. Adolescent girls who had poor dietary diversity scores were 4 times more likely to develop thinness than adolescent girls with good dietary diversity scores [AOR = 4.05, 95% CI = (1.43, 11.46)]. Adolescent girls in the early stage (10–14 years) were 3.77 times more likely to develop thinness than adolescent girls in the late stage (15–19 years) [AOR = 3.77, 95% CI = (1.06, 13.37)] (Table 7). Table 7 Bivariable and multivariable results of factors associated with thinness among school adolescent girls in Abuna Gindeberet district, Central Ethiopia, 2021 In this study, we assessed the prevalence of undernutrition and its associated factors among school adolescent girls in Abuna Gindeberet district, Central Ethiopia. We found a stunting prevalence of 15.4% among adolescent girls. This is nearly similar to the findings from Adama town, Babile district, Lay Guyint district, and Adwa town, which were 15.6%, 15%, 16.3%, and 12.2% respectively [16, 17, 24, 30]. The current finding on stunting was lower than the results from West Bengal, Kavre district, and Bangladesh, which were 58.36%, 21.08%, and 46.6% respectively [29, 37, 38]. The reasons could be differences in study setting, socio-economic and cultural differences, and variation in study period: the studies conducted in the West Bengal and Kavre districts included adolescents from rural areas, and the study conducted in Bangladesh included both rural adolescent boys and girls, whereas the current study involved adolescent girls from both rural and urban areas. The current findings on stunting were also lower than the findings from Wukro, Tigray region, Ethiopia (21.2%), Hawzen, Tigray region, Ethiopia (33.2%), Awash town, Afar region, Ethiopia (22.9%), and the Amhara region, Ethiopia (31.5%) [15, 18, 19, 23]. The reason for the difference might be differences in socio-demographic characteristics, study setting, sample size, and cultural differences in adolescent care. Our finding was greater than that of a study conducted in Pakistan that reported 8% stunting among school students [39]. This could be due to variation in study period, socio-economic status, and cultural differences. The overall magnitude of thinness was 14.2% in the study area. This finding was in line with the findings from Kavre district (14.94%), the Amhara region (13.6%), Aksum town (12.6%), Asako district (14.8%), and Goba town (11.9%) [15, 20, 21, 38, 40]. Our finding was greater than the finding from Awash town (8.82%) [18]. The reason may be the urban–rural difference of the study subjects: the study conducted at Awash town included urban adolescent girls only, whereas in this study the majority of participants were from rural residences, and different studies have reported that adolescent girls from rural areas were more likely to develop thinness [41].
Another reason could be that the majority of study subjects in Awash town were adolescent girls in the late stage, who are less likely to develop thinness [15, 22, 40, 42]. The current finding on thinness was lower than the findings from West Bengal (50.89%) and Bangladesh (42.4%) [29, 37]; the reasons are differences in study period, socio-economic status and socio-culture. The findings from Adwa town (21.4%), Wukro district (21.6%), Hawzen district (32.2%), and Lay Guyint district (29%) were also greater than our finding [19, 23, 24, 30]; this difference is related to the high prevalence of early childhood undernutrition in the northern part of Ethiopia [43]. The finding of the study was also lower than that of the study conducted in the East Hararge Babile district (21.6%) [16]. This difference is related to the study setting: the study conducted in Babile district was community-based, and school adolescents have more awareness of nutrition information and healthy nutrition practices than those in the general population. The study showed that adolescent girls who ate fewer than three meals per day were 3.62 times more likely to develop stunting than their counterparts. Studies conducted in Asako district, Adwa town, and Dangila district supported this finding [21, 30, 31]. Nutrient needs increase in adolescence to meet the demands of pubertal growth [8]. When adolescent girls eat fewer than three meals per day, they are practicing an irregular meal pattern, that is, skipping regular meals. An irregular meal pattern adds to the burden on adolescent girls; together with the increased energy requirement of the fast growth spurt, it leads to an imbalance between energy demand and intake, which finally results in adolescent undernutrition. Adolescent girls in the lower grades (grades 4–8) were about 2 times more likely to develop stunting than adolescent girls in the higher grades (grades 9–12). This is in line with findings from the Hawzen district of the Tigray region and the Jimma zone of Southwest Ethiopia [23, 44]. This could be because adolescent girls who receive higher education are generally more aware than those with basic or no education of how to use the available resources to improve their own and their families' nutritional status. In addition, they can decide not to participate in activities that may put their health at risk. Another reason could be that adolescent girls in higher grades had nutrition information from their school subjects; different studies have reported that adolescents with nutrition and health information were less likely to develop stunting [15, 45]. Adolescent girls who had not begun menstruation were 1.71 times more likely to develop stunting than those who had started menstruation. This finding is in line with studies conducted in western Kenya and in Adwa town schools [30, 46]. This could be because delay in the menstruation status of adolescents is related to the deterioration of their nutritional status [47]. The study showed that adolescent girls who engaged in vigorous-intensity activities were 2.51 times more likely to develop thinness than those who did not. This is supported by a study conducted in Garhwal, India [48]. This is because adolescent girls who engage in vigorous-intensity activities have a high energy demand due to high metabolism.
The stage itself needs additional energy due to the fast growth spurt and development. If energy intake and the demand arising from their activity level and growth spurt become mismatched, their nutritional status deteriorates, which leads to adolescent thinness [49]. Adolescent girls who had poor dietary diversity scores were 4 times more likely to develop thinness than adolescent girls with good dietary diversity scores. This finding is supported by studies conducted in Adama town, Asako district, Goba town, the Amhara region, Finoteselam town, Aksum town, and Awash town [15, 17, 18, 20, 21, 40, 50]. This is related to the fact that adolescents with low dietary diversity scores will get inadequate energy and other important nutrients required for normal growth and development. Adolescent girls in the early stage (10–14 years) were 3.77 times more likely to develop thinness than adolescent girls in the late stage (15–19 years). This is in line with findings from Adwa town, Aksum, Eastern Tigray, and the Amhara region [15, 30, 40, 42]. This is because the early adolescent stage is characterized by a fast growth spurt that needs high energy; thus, if this requirement is not fulfilled, they will be prone to thinness. Adolescent girls' undernutrition is a public health problem in the study area. The number of meals per day, being in the lower grades and not having begun menstruation were significantly associated with stunting, while engagement in vigorous-intensity activities, poor dietary diversity score and adolescent age were significantly associated with thinness among adolescent girls. Therefore, government bodies, adolescent girls, families and non-governmental organizations should focus on the importance of dietary diversity, the frequency of meals per day and the increased nutritional requirements of adolescence to improve adolescent girls' nutritional status. The data and all supporting materials used in the preparation of this manuscript are freely available from the corresponding author on reasonable request. BAZ: Body Mass Index for Age Z-Score DDS: Dietary Diversity Score EDHS: Ethiopian Demographic and Health Survey FAO: Food and Agriculture Organization HAZ: Height for Age Z-Score IDDS: Individual Dietary Diversity Score IUGR: Intra Uterine Growth Restriction SPSS: Statistical Package for Social Sciences UNICEF: United Nations Children's Fund USAID: United States Agency for International Development WHO. Nutrition in adolescence: issues and challenges for the health sector: issues in adolescent health and development. Geneva: World Health Organization; 2005. WHO. The second decade: improving adolescent health and development. Geneva: World Health Organization; 2001. UNICEF. The State of the World's Children 2011. Adolescence: An Age of Opportunity. New York: United Nations Children's Fund, United Nations Plaza; 2011. Christian P, Smith ER. Adolescent undernutrition: global burden, physiology, and nutritional risks. Ann Nutr Metab. 2018;72(4):316–28. Central Statistical Agency (CSA) [Ethiopia] and ICF. Ethiopia Demographic and Health Survey. Addis Ababa and Rockville: CSA and ICF. Shahid A, Siddiqui FR, Bhatti MA, Ahmed M, Khan MW. Assessment of the nutritional status of adolescent college girls at Rawalpindi. Ann King Edw Med Univ. 2009;15(1):11. Arcan C, Neumark-Sztainer D, Hannan P, Van Den Berg P, Story M, Larson N.
Parental eating behaviors, home food environment and adolescent intakes of fruits, vegetables and dairy foods: longitudinal findings from Project EAT. Public Health Nutr. 2007;10(11):1257–65. WHO. Guideline: Implementing Effective Actions for Improving Adolescent Nutrition. Geneva: WHO; 2018. Story M, Neumark-Sztainer D, French S. Individual and environmental influences on adolescent eating behaviors. J Am Diet Assoc. 2002;102(3):S40–51. WHO. Malnutrition fact sheet. Media Centre [website]. Geneva: World Health Organization; 2017 (http://www.who.int/mediacentre/factsheets/malnutrition/en/. 2017. Accessed 30 Jan 2018). Burgess A. Undernutrition in Adults and Children: causes, consequences and what we can do. South Sudan Med J. 2008;1(2):18–22. Caleyachetty R, Thomas G, Kengne AP, Echouffo-Tcheugui JB, Schilsky S, Khodabocus J, et al. The double burden of malnutrition among adolescents: analysis of data from the Global School-Based Student Health and Health Behavior in School-Aged Children surveys in 57 low- and middle-income countries. Am J Clin Nutr. 2018;108(2):414–24. Candler T, Costa S, Heys M, Costello A, Viner RM. Prevalence of thinness in adolescent girls in low- and middle-income countries and associations with wealth, food security, and inequality. J Adolesc Health. 2017;60(4):447–54 e1. Black RE, Victora CG, Walker SP, Bhutta ZA, Christian P, De Onis M, et al. Maternal and child undernutrition and overweight in low-income and middle-income countries. Lancet. 2013;382(9890):427–51. Wassie MM, Gete AA, Yesuf ME, Alene GD, Belay A, Moges T. Predictors of nutritional status of Ethiopian adolescent girls: a community-based cross-sectional study. BMC Nutrition. 2015;1(1):20. Teji K, Dessie Y, Assebe T, Abdo M. Anaemia and nutritional status of adolescent girls in Babile District, Eastern Ethiopia. Pan Afr Med J. 2016;24(1):62. Roba K, Abdo M, Wakayo T. Nutritional status and its associated factors among school adolescent girls in Adama City, Central Ethiopia. J Nutr Food Sci. 2016;6(493):2. Kahssay M, Mohamed L, Gebre A. Nutritional Status of School Going Adolescent Girls in Awash Town, Afar Region, Ethiopia. Hindawi J Environ Public Health. 2020;2020:7367139, 9 pages. Melaku YA, Zello GA, Gill TK, Adams RJ, Shi Z. Prevalence and factors associated with stunting and thinness among adolescent students in Northern Ethiopia: a comparison to World Health Organization standards. Arch Public Health. 2015;73(1):44. Mokennen Tegegne SS, Assefa T, Kalu A. Nutritional Status and Associated Factors of Adolescent School Girls, Goba Town, Southeast Ethiopia. Global Journal Inc (USA). 2016. Girma A, Alemayehu T, Mesfin F. Undernutrition and Associated Factors among Adolescent Girls in Rural Community of Aseko District, Eastern Arsi Zone, Oromia. Ethiopia: Haramaya University; 2017. Gebregyorgis T, Tadesse T, Atenafu A. Prevalence of thinness and stunting and associated factors among adolescent school girls in Adwa Town, North Ethiopia. Int J Food Sci. 2016;2016:8323982. Berhe K, Gebremariam G. Magnitude and associated factors of undernutrition (underweight and stunting) among school adolescent girls in Hawzen Woreda (District), Tigray regional state, Northern Ethiopia: Cross-sectional study. BMC Res Notes. 2020;13(1):59. Arage G, Assefa M, Worku T. Socio-demographic and economic factors are associated with nutritional status of adolescent school girls in Lay Guyint Woreda, Northwest Ethiopia. SAGE Open Med. 2019;7:2050312119844679. Guilloteau P, Zabielski R, Hammon H, Metges C.
Adverse effects of nutritional programming during prenatal and early postnatal life, some aspects of regulation and potential prevention and treatments. J Physiol Pharmacol. 2009;60(Suppl 3):17–35. Dewey KG, Begum K. Long-term consequences of stunting in early life. Matern Child Nutr. 2011;7:5–18. World Bank. Repositioning Nutrition as Central to Development: A Strategy for Large-Scale Action. USA: SAS; 2006. World Health Organization. Adolescent pregnancy: adolescence is a time of opportunity during which a range of actions can be taken to set the stage for healthy adulthood: fact sheet. 2014. Pal A, Pari AK, Sinha A, Dhara PC. Prevalence of undernutrition and associated factors: A cross-sectional study among rural adolescents in West Bengal, India. Int J Pediatr Adolesc Med. 2017;4(1):9–18. Demilew YM, Emiru AA. Undernutrition and associated factors among school adolescents in Dangila Town, Northwest Ethiopia: a cross-sectional study. Afr Health Sci. 2018;18(3):756–66. Truebwasser U. Landscape Analysis of Adolescent Health and Nutrition in Ethiopia. 2017. De Onis M. World Health Organization Reference Curves. The ECOG's eBook on Child and Adolescent Obesity. 2015. p. 19. Kennedy G, Ballard T, Dop MC. Guidelines for measuring household and individual dietary diversity: Food and Agriculture Organization of the United Nations. 2011. Damie TD, Wondafrash M, Teklehaymanot A. Nutritional status and associated factors among school adolescents in Chiro Town, West Hararge, Ethiopia. Gaziantep Med J. 2015;21(1):32–42. WHO. Global physical activity questionnaire. Prevention of Noncommunicable Diseases Department. World Health Organization, Switzerland, https://www.who.int/chp/steps. Rahman MA, Karim R. Prevalence of stunting and thinness among adolescents in the rural area of Bangladesh. J Asian Sci Res. 2014;4(1):39. Haque MK, Sharma K, Mehta DK, Shakya R. Prevalence of underweight, stunting and thinness among adolescent girls in Kavre District. J Nepal Paediatr Soc. 2015;35(2):129–35. Mushtaq MU, Gull S, Khurshid U, Shahid U, Shad MA, Siddiqui AM. Prevalence and socio-demographic correlates of stunting and thinness among Pakistani primary school children. BMC Public Health. 2011;11(1):1–12. Amha A, Girum T. Prevalence and associated factors of thinness among adolescent girls attending governmental schools in Aksum town, northern Ethiopia. Med J Dr DY Patil Vidyapeeth. 2018;11(2):158. Zemene MA, Engidaw MT, Gebremariam AD, Asnakew DT, Tiruneh SA. Nutritional status and associated factors among high school adolescents in Debre Tabor Town, South Gondar Zone, Northcentral Ethiopia. BMC Nutrition. 2019;5(1):43. Weres Z, Yebyo H, Miruts K, Gesesew H, Woldehymanot T. Assessment of adolescents' undernutrition level among school students in eastern Tigray, Ethiopia: a cross-sectional study. J Nutr Food Sci. 2015;5(5):1. Ethiopian Public Health Institute (EPHI) [Ethiopia] and ICF. Ethiopia Mini Demographic and Health Survey 2019: key indicators. Rockville: EPHI and ICF; 2019. Assefa H, Belachew T, Negash L. Socioeconomic factors associated with underweight and stunting among adolescents of Jimma Zone, southwest Ethiopia: a cross-sectional study. Int Sch Res Notices. 2013:3–4. Hossain GMM. A Study on Nutritional Status of the Adolescent Girls at Khagrachhari District in Chittagong Hill Tracts, Bangladesh. Am J Life Sci. 2013;1(6):278. Leenstra T, Petersen L, Kariuki S, Oloo A, Kager P, Ter Kuile F.
Prevalence and severity of malnutrition and age at menarche; cross-sectional studies in adolescent schoolgirls in western Kenya. Eur J Clin Nutr. 2005;59(1):41. Dreizen S, Spirakis CN, Stone RE. A comparison of skeletal growth and maturation in undernourished and well-nourished girls before and after menarche. J Pediatr. 1967;70(2):256–63. Saxena Y, Saxena V. Nutritional status in rural adolescent girls residing at hills of Garhwal in India (2009). Internet J Med Update-EJOURNAL. 2011;6:4–5. Rogol AD, Clark PA, Roemmich JN. Growth and pubertal development in children and adolescents: effects of diet and physical activity. Am J Clin Nutr. 2000;72(2):521S–528S. Mengesha DK, Prasad RP, Asres DT. Prevalence and Associated Factors of Thinness among Adolescent Students in Finoteselam Town, Amhara Region, Ethiopia. 2020. Our appreciation goes to the Abuna Gindeberet education office, the Abuna Gindeberet health office, and the selected school administrators for facilitating the process of data collection. We are also thankful to the study participants, data collectors, and supervisors for their contribution to this study. No funding sources. Segni Mulugeta Tafasa, Ermiyas Mulu & Zenebu Begna: Department of Public Health, College of Medicine & Health Sciences, Ambo University, P. O. Box 19, Ambo, Ethiopia. Meseret Robi Tura: Department of Nursing, College of Medicine & Health Sciences, Ambo University, Ambo, Ethiopia. SMT conceived the research idea; SMT, MRT, EM, and ZB performed research design, data collection, data analysis, and report writing. SMT, MRT, and ZB wrote the original drafts of the manuscript. All authors critically reviewed and approved the final version of the manuscript. Correspondence to Segni Mulugeta Tafasa. All methods of this study were carried out under the Declaration of Helsinki's ethical principles for medical research involving human subjects. Ethical approval to conduct this study was obtained from the ethical review committee of Ambo University, College of Medicine and Health Sciences (Ref. No: PGC/01/2020). An official letter of permission was delivered to the Abuna Gindeberet District Education Office, which then sent a supporting letter to the respective schools. For participants less than 18 years, informed consent was obtained from their parents and assent from the students; for participants aged 18 years or older, informed consent was obtained from the students themselves. Confidentiality and privacy of the information were maintained. The participants were informed that participation was voluntary. The authors declare that they have no competing interests regarding the publication of this study. Tafasa, S.M., Tura, M.R., Mulu, E. et al. Undernutrition and its associated factors among school adolescent girls in Abuna Gindeberet district, Central Ethiopia: a cross-sectional study. BMC Nutr 8, 87 (2022). https://doi.org/10.1186/s40795-022-00587-8 Keywords: Associated factors; Undernutrition; Abuna Gindeberet
Contributor info: Dr Robert Konik Title: Dr Last name: Konik Personal web page: https://www.bnl.gov/staff/rmk Publications for which this Contributor is identified as an author: Duality and form factors in the thermally deformed two-dimensional tricritical Ising model Axel Cortés Cubero, Robert M. Konik, Máté Lencsés, Giuseppe Mussardo, Gabor Takács SciPost Phys. 12, 162 (2022) · published 16 May 2022 The thermal deformation of the critical point action of the 2D tricritical Ising model gives rise to an exact scattering theory with seven massive excitations based on the exceptional $E_7$ Lie algebra. The high and low temperature phases of this model are related by duality. This duality guarantees that the leading and sub-leading magnetisation operators, $\sigma(x)$ and $\sigma'(x)$, in either phase are accompanied by associated disorder operators, $\mu(x)$ and $\mu'(x)$. Working specifically in the high temperature phase, we write down the sets of bootstrap equations for these four operators. For $\sigma(x)$ and $\sigma'(x)$, the equations are identical in form and are parameterised by the values of the one-particle form factors of the two lightest $\mathbb{Z}_2$ odd particles. Similarly, the equations for $\mu(x)$ and $\mu'(x)$ have identical form and are parameterised by two elementary form factors. Using the clustering property, we show that these four sets of solutions are eventually not independent; instead, the parameters of the solutions for $\sigma(x)/\sigma'(x)$ are fixed in terms of those for $\mu(x)/\mu'(x)$. We use the truncated conformal space approach to confirm numerically the derived expressions of the matrix elements as well as the validity of the $\Delta$-sum rule as applied to the off-critical correlators. We employ the derived form factors of the order and disorder operators to compute the exact dynamical structure factors of the theory, a set of quantities with a rich spectroscopy which may be directly tested in future inelastic neutron or Raman scattering experiments. Hydrodynamics of the interacting Bose gas in the Quantum Newton Cradle setup Jean-Sébastien Caux, Benjamin Doyon, Jérôme Dubail, Robert Konik, Takato Yoshimura SciPost Phys. 6, 070 (2019) · published 20 June 2019 Describing and understanding the motion of quantum gases out of equilibrium is one of the most important modern challenges for theorists. In the groundbreaking Quantum Newton Cradle experiment [Kinoshita, Wenger and Weiss, Nature 440, 900, 2006], quasi-one-dimensional cold atom gases were observed with unprecedented accuracy, providing impetus for many developments on the effects of low dimensionality in out-of-equilibrium physics. But it is only recently that the theory of generalized hydrodynamics has provided the adequate tools for a numerically efficient description. Using it, we give a complete numerical study of the time evolution of an ultracold atomic gas in this setup, in an interacting parameter regime close to that of the original experiment. We evaluate the full evolving phase-space distribution of particles. We simulate oscillations due to the harmonic trap, the collision of clouds without thermalization, and observe a small elongation of the actual oscillation period and cloud deformations due to many-body dephasing. We also analyze the effects of weak anharmonicity. In the experiment, measurements are made after release from the one-dimensional trap.
We evaluate the gas density curves after such a release, characterizing the actual time necessary for reaching the asymptotic state where the integrable quasi-particle momentum distribution function emerges. Probing non-thermal density fluctuations in the one-dimensional Bose gas Jacopo De Nardis, Miłosz Panfil, Andrea Gambassi, Leticia F. Cugliandolo, Robert Konik, Laura Foini SciPost Phys. 3, 023 (2017) · published 27 September 2017 | Quantum integrable models display a rich variety of non-thermal excited states with unusual properties. The most common way to probe them is by performing a quantum quench, i.e., by letting a many-body initial state unitarily evolve with an integrable Hamiltonian. At late times, these systems are locally described by a generalized Gibbs ensemble with as many effective temperatures as their local conserved quantities. The experimental measurement of this macroscopic number of temperatures remains elusive. Here we show that they can be obtained by probing the dynamical structure factor of the system after the quench and by employing a generalized fluctuation-dissipation theorem that we provide. Our procedure allows us to completely reconstruct the stationary state of a quantum integrable system from state-of-the-art experimental observations.
How to estimate a star's heliopause?
Is it possible to calculate/estimate the size of a star's heliosphere? If so, how?
I am working on a semi-near-future sci-fi novel. As part of the technology base, humans are able to travel between stars near-instantly, but they must first get to a warp-gate of sorts outside the heliopause (the outer edge of the heliosphere) at sub-light speeds. This results in a few months of travel time from the planets to the edge of the solar system, then another few months of travel from the edge of the next system to its planets. My problem is that I don't know whether it is possible to calculate the size of a star's heliosphere based on known data. All of the stars I am using are going to be actual known stars. I'm hoping that using the known data of each star's size, type, and luminosity we can determine at least a reasonable estimate. Although the heliosphere should probably be called the helioegg instead, I really just need a number that won't be too outrageous. The main reason I need to calculate the distance is to then calculate the travel time from the edge of the solar system. The heliopause of Sol is at roughly 128 AU (give or take) from what I understand, but obviously that is not necessarily even close for every star. It would be interesting if, even though Proxima Centauri is the closest star to Sol, it had a heliopause distance of, say, 400 AU: it could end up having the longest travel time of all the local star systems.
space space-travel stars
TitaniumTurtle
Deriving the radius
The heliopause is a place of equilibrium, where the ram pressure from the solar wind is equal to the pressure of the interstellar medium (ISM). There are a number of sources of pressure in the ISM, but thermal pressure is the main one.1 The ram pressure from a wind with terminal velocity $v_{\infty}$ at a distance $r$ from the star is $$P_R=n_*(r)mv_{\infty}^2$$ where $n_*(r)$ is the number density of the wind, and $m$ is the mean mass of the particles at the heliopause. It turns out that, by mass conservation, $n_*(r)\propto \dot{M}v_{\infty}^{-1}r^{-2}$, where $\dot{M}$ is the mass-loss rate.2 The thermal pressure of the ISM is $$P_T=n_{I}k_BT$$ where $k_B$ is Boltzmann's constant and $T$ is the temperature. $n_{I}$ is the number density of the interstellar medium. For Sun-like stars, $v_{\infty}\sim400$ km/s is reasonable. $n_{\odot}$ and $n_I$ are about 5 particles per cubic centimeter and 0.01 particles per cubic centimeter, respectively, and $T\sim10^5$ K. Assuming the wind is largely hydrogen at this point, and writing $n_*(r)=n_{\text{crit}}\,(1\ \text{AU}/r)^2$, where $n_{\text{crit}}$ is the wind number density at 1 AU, setting the two pressures equal gives $$ \begin{align} r_{\text{crit}} & =\left(\frac{n_{\text{crit}}mv_{\infty}^2}{n_Ik_BT}\right)^{1/2}\text{ AU}\\ & \approx311\left(\frac{n_\text{crit}}{5\text{ cm}^{-3}}\right)^{1/2}\left(\frac{v_{\infty}}{400\text{ km/s}}\right)\text{ AU}\\ & =311\left(\frac{\dot{M}}{10^{-14}M_{\odot}\text{ yr}^{-1}}\right)^{1/2}\left(\frac{v_{\infty}}{400\text{ km/s}}\right)^{1/2}\text {AU} \end{align} $$ I've used scaling relations such that for the Sun, $r_{\text{crit}}$ is 311 AU.
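To make the scaling concrete, here is a short Python sketch (added for illustration; it is not part of the original answer) that evaluates the final form of the equation above, with the Sun as the normalization point:

```python
# Sketch: evaluate the scaling relation derived above,
#   r_crit = 311 AU * (Mdot / 1e-14 Msun/yr)^0.5 * (v_inf / 400 km/s)^0.5

def heliopause_radius_au(mdot_msun_per_yr: float, v_inf_km_s: float) -> float:
    """Rough heliopause radius in AU from the ram/thermal pressure balance,
    assuming solar-like ISM conditions (n_I ~ 0.01 cm^-3, T ~ 1e5 K)."""
    return 311.0 * (mdot_msun_per_yr / 1e-14) ** 0.5 * (v_inf_km_s / 400.0) ** 0.5

# Inputs taken from the table further down this answer:
for name, mdot, v in [("Sun", 1e-14, 400.0),
                      ("Proxima Centauri", 1e-13, 550.0),
                      ("tau Sco", 3.1e-8, 2400.0)]:
    print(f"{name:>17}: r_crit ~ {heliopause_radius_au(mdot, v):.3g} AU")
# -> Sun ~ 311 AU; tau Sco ~ 1.34e6 AU (close to the tabulated 1.32e6 AU);
#    Proxima ~ 1.15e3 AU (the table quotes 1332 AU, so the tabulated value
#    evidently folds in slightly different assumptions).
```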
Important factors
Some things to note:
Stellar winds don't all have the same composition, but hydrogen is, by and large, the major component, and the one important factor when it comes to calculating $m$.
The most important star-dependent variables are $\dot{M}$ and $v_{\infty}$. Note that $r_{\text{crit}}\propto \dot{M}^{1/2}v_{\infty}^{1/2}$.
For massive O- and B-type stars, winds can have speeds of $\sim2000$ km/s or more; I think some of the strongest are about $\sim3000$ km/s. This could mean heliopauses of thousands of AU. HD 93129Aa is a good example, with $v_{\infty}$ of about $\sim3000$ km/s.
I can try to pull some numbers for Proxima Centauri, but I'll point out that red dwarfs usually don't have strong stellar winds. The interest in the wind of Proxima Centauri is really because Proxima Centauri b orbits close enough to the star that stellar activity - especially flares - could cause severe problems for life.
Star types
For Sun-like stars, $\dot{M}\sim10^{-14}M_{\odot}$ per year is reasonable, and so you'd get heliopauses of a few hundred AU. For O- and B-type main sequence stars, I'd expect $\dot{M}\sim10^{-6}M_{\odot}$ per year, which is much higher. At the other end of the spectrum - those red dwarfs that are relatively quiet - we'd see maybe $\dot{M}\sim10^{-15}M_{\odot}$ per year. A- and F-type main sequence stars might have mass-loss rates a bit higher than the Sun's, and perhaps larger terminal velocities. Off the main sequence, things get more complicated. Red giants - especially asymptotic giant branch stars, near the end of their lives - have large mass-loss rates that arise via different mechanisms involving dust. These are cool but bright stars; consider $\dot{M}\sim10^{-8}M_{\odot}$ and reasonably fast winds, though not nearly as fast as those of the hot, massive O-type stars. Additionally, more exotic cases like Be stars that are undergoing mass loss may have larger mass-loss rates. Finally, young T Tauri stars have mass-loss rates similar to red giants - maybe an order of magnitude or so lower - but their winds are slower than the Sun's.
Specific cases
I looked around and found instances where $\dot{M}$ and $v_{\infty}$ have been observed, and made an estimate of $r_{\text{crit}}$: $$ \begin{array}{|c|c|c|c|c|c|}\hline \text{Star} & \text{Stellar type} & \text{Mass }(M_{\odot}) & \dot{M}(M_{\odot}\text{ yr}^{-1}) & v_{\infty}(\text{km/s}) & r_{\text{crit}}(\text{AU})\\\hline \text{HD 93129Aa}^1 & \text{O2} & 95 & 2\times10^{-5} & 3200 & 3.93\times10^{7}\\\hline \tau\text{ Sco}^2 & \text{B0} & 20 & 3.1\times10^{-8} & 2400 & 1.32\times10^{6}\\\hline \sigma\text{ Ori E}^3 & \text{B2} & 8.9 & 2.4\times10^{-9} & 1460 & 2.91\times10^{5}\\\hline \alpha\text{ Col}^2 & \text{B7} & 3.7 & 3\times10^{-12} & 1250 & 9.52\times10^{3}\\\hline \text{Deneb}^4 & \text{A2} & 20 & 10^{-6} & 225 & 2.3\times10^{6}\\\hline \text{Sun} & \text{G2} & 1 & 10^{-14} & 400 & 311\\\hline \text{Proxima Centauri}^5 & \text{M6} & 0.12 & 10^{-13} & 550 & 1332\\\hline \end{array} $$ 1Cohen et al. (2011) 2Cohen et al. (1997) 3Krtička et al. (2006) 4Aufdenberg et al. (2002) 5Wargelin & Drake (2002)
Now, HD 93129Aa and Deneb are supergiants, so they're off the main sequence, but their properties here shouldn't be too far off from main sequence stars of the same spectral type. Deneb's mass-loss rate is maybe a bit high in comparison to main sequence A stars. Also, I'm slightly skeptical of the value for HD 93129Aa's heliopause, so it's possible that other factors play a role - for instance, thermal pressure could indeed be important in its hot wind. Additionally, some M dwarfs have higher stellar winds and mass-loss rates because of flares and other activity.
1 We can disregard ram pressure, as the ISM is, by and large, slow-moving. Likewise, magnetic fields are typically not important. Similarly, we can neglect thermal pressure in the stellar wind; even though winds may have temperatures of several million K, ram pressure is more important.
2 Specifically, the density $\rho_{\odot}(r)$ (related to number density by $\rho_{\odot}(r)=mn_{\odot}(r)$) is given by $$\rho_{\odot}(r)=\frac{\dot{M}}{4\pi r^2v(r)}$$ where $\dot{M}$ is the mass loss rate and $$v(r)=v_{\infty}\left(1-\frac{R_*}{r}\right)^\beta$$ with $R_*$ being the radius of the star. We typically assume $\beta\approx1$, but at $r\gg R_*$, we can say that $v(r)\approx v_{\infty}$.
HDE 226868♦
Since we know that the heliosphere is more of an egg shape and the shortest distance to Sol's heliopause is around 128 AU instead of the 311 AU that you calculated, might it be safe to assume that the radius you are calculating is more accurately an average radius of the whole heliosphere, rather than the shortest distance to the closest side? I'm mainly wondering because the numbers don't quite match what else I have found, but it mostly seems to be an estimated range. Sol's seems to be between 100-450 AU, so both numbers fit just fine. – TitaniumTurtle Jun 28 '18 at 2:03
The heliosphere, in layman's terms, is defined by the equilibrium between the stellar wind and the galactic wind. Once the stellar wind is not strong enough to blow away the galactic wind, there we have the heliopause, bordering the heliosphere. Therefore, if you know: the stellar wind velocity; the stellar wind velocity reduction rate; the galactic wind velocity; and the relative motion of the star with respect to the surrounding stars, you should be able to estimate the extension of the heliosphere for the star in question.
L.Dutch lays out the numbers you need to consider. Having done a little reading on stellar winds, it would appear that most stars in the A-K section of the main sequence might be expected to have similar-sized heliopauses in a similar area of the galaxy. Cooler and giant stars have much smaller heliopauses, and hot white and blue stars much, much larger ones due to the extremely high escape velocities at their surfaces. Based on that, the hardest nearby star to ship to and from would actually be Sirius rather than any of the three nearby Centauri stars. Barnard's Star should be a very quick transit though.
Ash
Not that this goes into a ton of detail, but wouldn't the trinary nature of the Centauri stars give it a more pronounced heliosphere? – TitaniumTurtle Jun 24 '18 at 17:42
@TitaniumTurtle Maybe, and then again, because of how far apart they are - particularly Proxima, which is something like 0.15 light-years from the Alpha binary - maybe not. It would almost certainly mess with the shape of the heliopause, so possibly you'd have to drop below light-speed earlier to avoid shape fluctuations. – Ash Jun 24 '18 at 17:48
What is finite in a finite model
I am studying some theorems of model theory in an introductory text of mathematical logic. I know that a model is a way of associating the relation symbols of a signature $\Sigma$ with $k$-ary relations ($k\ge 0$), and the $k$-ary function symbols ($k>0$) and constants (which are 0-ary functions) with, respectively, $k$-ary functions ($k>0$) and elements of a domain $D$. I thought that a model is finite when the set of the $k$-ary relations and $k$-ary functions ($k\ge0$), including the elements of $D$, is finite, but I have just read a theorem which is shaking my convictions: the Löwenheim-Skolem-Tarski theorem says that if a theory has a model of a given infinite cardinality, then it has models of any greater cardinality. If a theory is of the form $\{P_1,P_2,P_3,\ldots\}$, where the $P_i,i\in\mathbb{N}$, are 0-ary relational symbols, I would say that it has a countable model where the $P_i$ are propositions, but I do not see how the model could be made uncountable. What am I misunderstanding? I thank you very much for any clarification...
logic model-theory
Self-teaching worker
The cardinality of the model always refers to the cardinality of its universe. Models, or to be precise structures, of a language can be seen as a pair $\langle M,I\rangle$, where $M$ is a non-empty set (whose cardinality is "the cardinality of the model", so a finite model means that $M$ is finite) and $I$ is an interpretation function, taking symbols from the language and returning their interpretations as elements (constants), functions, or $k$-ary relations on the set $M$ according to each symbol's designation. When we say model, we often have a specific list of sentences in the language, also called a theory sometimes, that is assumed to be true in the structure. So a finite model for a theory $T$ means that $M$ is a finite set, and that $T$ is a list of sentences which are true in the structure. Saying that the model is countably infinite means that $M$ is countably infinite, and so on.
Asaf Karagila♦
What I still don't grasp is how an uncountable model for $\{P_1,P_2,...\}$ could exist: can the image $I(\{P_1,P_2,...\})$ be uncountable? Can an arbitrarily great set of constants be added (but then, wouldn't that make the Löwenheim-Skolem-Tarski theorem trivial, while of course it isn't)? I heartily thank you again!! – Self-teaching worker Jan 31 '15 at 17:01
What are the $P_n$'s? Symbols in the language? Can you grasp how $\Bbb R$ is an uncountable model for the theory of ordered fields, with $0,1$ being constants, $+,\cdot$ being operations and $<$ being the order? No one said that all the elements of the universe of the structure are constants. – Asaf Karagila♦ Jan 31 '15 at 17:03
Is $f\colon\Bbb N\to\Bbb R$ defined by $f(n)=n\cdot\pi$ a function whose domain is countable? It is. Does it mean that $\Bbb R$ is countable as well? Again, nowhere do we require that $M$ is ONLY made of interpreted constants. It may include many other elements. One can ask, in a similar fashion to your suggestion, whether a language without constant symbols can even have an interpretation. Isn't the universe of the model empty because nothing is interpreted as an element there? – Asaf Karagila♦ Jan 31 '15 at 18:18
Now. Again, and really, for the last time, because I feel that I'm repeating myself over and over again here: the language contains some symbols, and they are interpreted as some elements, functions, or relations over $M$. Not ALL the elements of $M$, and not ALL its subsets, or functions defined on it, or relations over it; not all of these are interpretations of symbols from the language. But here is the difference between structure and model. Models need to satisfy the axioms of the theory we are talking about. [...] – Asaf Karagila♦ Jan 31 '15 at 19:06
Whereas structures do not. Yes, just add some arbitrary new elements to $M$, and extend whatever function symbols you have however you want. Yes, this will be an interpretation of the language. But it might not be a model of a theory. For example, if you have $\leq$ in the language and you add new elements, it might not be a partial order, or a linear order, unless you extend $I(\leq)$ to be an order of the larger universe. So you need to do more in order to ensure that the new structure also satisfies the axioms. How are you planning on doing that? – Asaf Karagila♦ Jan 31 '15 at 19:08
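To see concretely why the asker's theory has uncountable models, here is a worked example (added for illustration; it is not part of the original exchange). Take the theory $T=\{P_1,P_2,P_3,\ldots\}$, where each $P_i$ is a 0-ary relation symbol. A structure for this language is a pair $\langle M,I\rangle$, where $M$ is any non-empty set and $I$ assigns each $P_i$ a truth value. Choosing $M=\Bbb R$ and $I(P_i)=\text{true}$ for every $i$ gives a model of $T$ with an uncountable universe, because the truth of each $P_i$ places no constraint whatsoever on the size of $M$. The same construction works for $M$ of any infinite cardinality, which is exactly what the Löwenheim-Skolem-Tarski theorem predicts.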
Why does "Naive Set Theory" by Halmos allow the universal set despite admitting its non-existence? Halmos openly says that the universe doesn't exist: there is no universe But several pages later Halmos says: In order to record the basic facts about complementation as simply as possible, we assume nevertheless (in this section only) that all the sets to be mentioned are subsets of one and the same set E and that all complements (unless otherwise specified) are formed relative to that E. In such situations (and they are quite common) it is easier to remember the underlying set E than to keep writing it down, and this makes it possible to simplify the notation. An often used symbol for the temporarily absolute (as opposed to relative) complement of A is A′. The following rule from the book killed all my hopes that E isn't the universal set: This seems like blatant self-contradiction with previous statement that there is no universe. I hope to resolve it, but I have almost no ideas what it can mean. My only guess is that it's some kind of "lie to children", when relatively simple lie is told because the teacher believes that his/her student(s) can't yet comprehend the whole truth. P.S. In my other question I got comments that can probably shed some light on the problem. looking at the image you posted, I believe you are confusing two different scenarios. In many applications of set theory, there is a universal set at hand. For example, if we are studying natural numbers, the universal set for that purpose may be the set of all natural numbers. All of the rules for set complement assume that a temporary universal set has been chosen in that way. The other context is in studying formal set theory, such as ZFC. In this context, there is no single universal set that contains all sets.( Carl Mummert) If your book isn't specifically on mathematical set theory, then the complement is usually taken with respect to a not explicitly stated super set U. E.g. the complement of the set of even numbers is probably meant to be the set of odd numbers, as the super set might be the natural numbers. This should be clear from the context. (M.Winter) These comments give weak hope, but alas they don't seem to solve the conflict. If anything, they create new one: conclusion from them is in contradiction with statement that absolute coplement of E is the empty set. For example, let's assume that E is equal to set of natural numbers N. It contains only natural numbers as its elements, but not sets of natural numbers (like it doesn't contain set {1,2,3} as its element). Thus complement of E wouldn't be empty as Halmos assures, it would contain such sets like {1,2,3}, {1,7,65382, 3235464567765}, etc. as its elements. Even worse, I doubt that complement of set E under such circumstances would even be a set in the first place, because it would contain everything that doesn't belong to set E, including set E and complement of set E itself. P.P.S. As Eric Wofsey pointed out, I missed consequences of phrase " all the sets to be mentioned are subsets of E". Indeed, then our complement set can't contain elements that aren't elements of E. In other words, our complement set of E must be subset of E! But at the same time they must have no any common elements. Complement of set E can't contain element that isn't element of set E, thus the only alternative is not contain any element at all, i.e. be the empty set because intersection of any set with empty set is empty set. 
And of course, under such considerations E doesn't necessarily contain itself as a member.
@EricWofsey Thanks Eric! I think the question is solved. Halmos doesn't really allow the universe; I just drew wrong conclusions from the rule.
elementary-set-theory
I'd say that Halmos is explicitly not admitting the universal set. – Lord Shark the Unknown Sep 16 '18 at 16:04
@EricWofsey I fail to see any contradiction between the phrase and what I said. – user161005 Sep 16 '18 at 16:21
@EricWofsey Oops, I see now. A subset contains elements of its superset. So if only subsets of E are allowed, then we can't have a set like {{1,2}, {5,99}}. – user161005 Sep 16 '18 at 16:26
In formal set theory, there is not an "absolute complement" operation, only a relative complement. Anytime a complement symbol is used, it is intended to be a relative complement to some (perhaps unmentioned) particular set. This is because the absolute complement of a set is never a set, it is always a proper class, and so if we are interested in studying sets there is little reason to look at absolute complements. I see that you are interested in taking absolute complements, but unfortunately that is not the way that you should look at the complement symbol. – Carl Mummert Sep 16 '18 at 16:39
Halmos never claims that $E$ is the universe, or that all sets are subsets of $E$. Instead, he is just saying: Every assertion in this section has an additional unstated hypothesis that all of the sets that are mentioned are subsets of $E$. Moreover, in this section we use the notation $A'$ for the complement of $A$ relative to $E$, and refer to this as simply the "complement" of $A$. This in particular applies to the equation $$E'=\emptyset$$ which appears in that section, so in this equation $E'$ stands for the complement of $E$ relative to $E$.
Eric Wofsey
Halmos simply means: in a specific context, like e.g. real analysis, where we are working with the set $\mathbb R$ as "universe", we can safely use the notation $\{ 0, 1, 2, \pi \}^C$ meaning the complement (in $\mathbb R$) of the set $\{ 0, 1, 2, \pi \}$. In fact, from a more "general" set-theoretic point of view, we are speaking of the set $\mathbb R \setminus \{ 0, 1, 2, \pi \}$. This set exists in every theory proving (or assuming) the existence of $\mathbb R$. If so, we have that the complement of $\mathbb R$ "relative to" the universe $\mathbb R$ will be the empty set: $\mathbb R \setminus \mathbb R = \emptyset$.
The complement here is not absolute, but to be understood relative to the chosen set $E$, that is, $S' = E\setminus S$. In particular, $E' = E\setminus E = \emptyset$. For example, if $E=\mathbb N$, then you only look at sets containing nothing but natural numbers. Then the complement of such a set also contains nothing but natural numbers; in particular, not sets of natural numbers.
celtschk
It says "all the sets to be mentioned are subsets of one and the same set $E.$" It does not say "all sets are subsets of one and the same set $E.$" Complementation is relative. If $E= \{ 1,2,3,4,5 \}$ and $F= \{4,5,6,7,8\}$ then the complement of $F$ relative to $E$ is $E\smallsetminus F = \{1,2,3\}.$ And if $E=\varnothing$ and $F=\{1,2,3\}$ then the complement of $F$ relative to $E$ is $E\smallsetminus F = \varnothing = E.$
Michael Hardy
Bioresources and Bioprocessing
Green synthesis of enzyme/metal-organic framework composites with high stability in protein denaturing solvents
Xiaoling Wu1, Cheng Yang1 & Jun Ge1
Bioresources and Bioprocessing volume 4, Article number: 24 (2017)
Enzyme/metal-organic framework composites with high stability in protein denaturing solvents are reported in this study. Encapsulation of enzymes in metal-organic frameworks (MOFs) via a co-precipitation process was realized, and the generality of the synthesis was validated by using cytochrome c, horseradish peroxidase, and Candida antarctica lipase B as model enzymes. The stability of the encapsulated enzymes was greatly increased after immobilization on MOFs. Remarkably, when exposed to protein denaturing solvents including dimethyl sulfoxide, dimethyl formamide, methanol, and ethanol, the enzyme/MOF composites still preserved almost 100% of their activity. In contrast, free enzymes retained no more than 20% of their original activities under the same conditions. This study shows the extraordinary protecting effect of the MOF shell in increasing enzyme stability under extremely harsh conditions. The enzyme immobilized in MOF exhibited enhanced thermal stability and high tolerance towards protein denaturing organic solvents.
Enzymatic catalysis is one of the promising ways to achieve green industrial chemical processes. Immobilized enzyme is the most frequently used form of enzyme at industrial scale (Kirk et al. 2002; Bornscheuer et al. 2012). One of the important aims of enzyme immobilization is to achieve high enzyme stability under harsh conditions such as high temperature and organic solvents, which often exist in industrial biocatalytic processes. Although immobilization may compromise the activity of an enzyme, the greatly enhanced stability would enable the long-term and repeated use of the enzyme and therefore a significantly reduced enzyme cost (Liu et al. 2015; Liang et al. 2016a; Ji et al. 2016; Novak et al. 2015). Metal-organic frameworks (MOFs) have emerged as a new type of nanomaterial suitable for the immobilization of enzymes (Feng et al. 2015; Lykourinou et al. 2011; Chen et al. 2012; Wen et al. 2016) and other biomolecules (Zhang et al. 2016; Li et al. 2016a, b). One major advantage of MOFs is the high chemical and structural stability of the nanoporous frameworks, which can offer a protective effect for encapsulated enzymes against adverse conditions while at the same time allowing the transfer of small-molecule substrates/products. Previous attempts in the preparation of enzyme-MOF composites have largely focused on the strategy of adsorbing protein molecules into pre-synthesized MOFs which have pore sizes similar to the dimensions of enzyme molecules (Feng et al. 2015; Lykourinou et al. 2011; Li et al. 2016a, b). Covalently linking enzyme molecules on the surface of MOF particles (Shi et al. 2016; Cao et al. 2016) was reported as another effective way of enzyme immobilization on MOFs, where the enzyme molecules were conjugated to the carboxylate groups on the MOFs. Co-precipitation or biomimetic mineralization (Lyu et al. 2014; Li et al. 2016a, b; Liang et al. 2015a, b, 2016b, c; Wu et al. 2015a, b; He et al. 2016; Shieh et al. 2015), a self-assembly process to encapsulate bioactive molecules within protective exteriors, has been successfully introduced to the synthesis of enzyme-MOF composites.
Typically, in a biomimetic mineralization process, protein molecules in aqueous solution first bind with metal ions or organic ligands to form nanoscaled aggregates which induce the nucleation of MOF crystals. Then, the MOF crystals grow as the shell of the protein aggregates to encapsulate the enzyme inside. The enzyme-MOF composites have shown great potential for increasing enzyme stability at high temperature (Liang et al. 2015a). However, the stability of enzyme-MOF composites in organic solvents, especially in highly polar organic solvents such as dimethyl sulfoxide (DMSO), dimethyl formamide (DMF), methanol, and ethanol, has not been well investigated. Here, we report a biomimetic mineralization route to synthesize enzyme-MOF composites, using cytochrome c (Cyt c), horseradish peroxidase (HRP), and Candida antarctica lipase B (CALB) as model enzymes. After being encapsulated in MOFs, enzymes can retain almost all their activities even when incubated in protein denaturing solvents including DMSO, DMF, methanol, and ethanol. In contrast, free enzymes lost almost all their activities under the same conditions.
Experimental section
Horseradish peroxidase (HRP) (reagent grade), Candida antarctica lipase (CALB), 2-methylimidazole, phosphate buffer saline (1×), 1,2,3-trihydroxybenzene (THB, ACS reagent), and 4-nitrophenyl butyrate (p-NPB) were purchased from Sigma-Aldrich. Zinc nitrate hexahydrate (99.998%) and hydrogen peroxide (29–32 wt%) were purchased from Alfa Aesar. Cytochrome c from horse heart was purchased from Biodee Corporation. All the other reagents were of analytic grade.
Synthesis of the enzyme/ZIF-8 composite
HRP, CALB, or Cyt c water solution (5 mg/mL, 4 mL) and Zn(NO3)2 water solution (92.5 mg/mL, 4 mL) were added into 2-methylimidazole water solution (4.1 g dissolved in 40 mL water), followed by stirring for 30 min at room temperature. The mixture turned milky almost instantly after mixing. After aging at room temperature for 12 h, the product was collected by centrifugation at 6000 rpm for 10 min, washed with DI water three times, and used for further characterization.
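As a quick consistency check on the recipe (our own arithmetic, added here for illustration; the molecular weights assumed are 297.5 g/mol for Zn(NO3)2·6H2O and 82.1 g/mol for 2-methylimidazole), the molar proportions work out as

$$n_{\text{Zn}}=\frac{92.5\ \text{mg/mL}\times 4\ \text{mL}}{297.5\ \text{g/mol}}\approx 1.24\ \text{mmol}\ (\approx 310\ \text{mM}),\qquad n_{\text{HmIm}}=\frac{4.1\ \text{g}}{82.1\ \text{g/mol}}\approx 50\ \text{mmol}\ (\approx 1.25\ \text{M}),$$

i.e., roughly a 40-fold molar excess of ligand over zinc. These concentrations match the 310 mM and 1.25 M figures quoted in the Results section below, and such a high ligand-to-metal ratio is typical of aqueous, room-temperature ZIF-8 syntheses.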
For the enzymatic activity determination of Cyt c/ZIF-8 and free Cyt c, similar procedure was followed by shortening the reaction time to 3 min. For the activity assay of CALB/ZIF-8, p-NPB was first dissolved in acetone and then diluted with phosphate buffer (50 mM, pH 7.0) containing 1.25% (w/v) Triton X-100. The composite of CALB/ZIF-8 was added to the phosphate buffer containing 4-nitrophenyl butyrate (p-NPB) (0.5 mM) to initiate the reaction. After reaction for 3 min, the solution was centrifuged at 12,000 rpm for 2 min. The absorbance of the supernatant was determined at 400 nm by using a UV/Vis spectrophotometer. Enzyme stability in denaturing organic solvents Enzyme/ZIF-8 composites and corresponding free enzyme powders were incubated in water, dimethyl sulfoxide (DMSO), dimethyl formamide (DMF) at 80 °C and in boiling methanol and ethanol for 1 h. Tiny amount of the enzyme solution was taken out and diluted to appropriate concentration and subjected to the above enzymatic assays. The relative activity of enzyme/ZIF-8 was calculated as the ratio of the activity of treated enzyme/ZIF-8 exposing to high temperature and organic solvents after required immersion time and activity of the untreated enzyme/ZIF-8 (Eq. 1). The activity of the untreated enzyme/ZIF-8 was set as 100%. The relative activity of free enzyme was calculated in the same way. $${\text{Relative Activity}}= \frac{{{{{\text{Activity}}\;{\text{of}}\;{\text{Enzyme}}} /{{\text{ZIF-}}8\;{\text{treated}}}}}}{{{{{\text{Activity}}\;{\text{of}}\;{\text{Enzyme}}} / {{\text{ZIF-}}8\;{\text{untreated}}}}}} \times 100\%.$$ As shown in Fig. 1, in this study, using cytochrome c (Cyt c), horseradish peroxidase (HRP), and Candida antarctica lipase B (CALB) as model enzymes, the synthesis of enzyme-MOF composites was carried out by mixing 4 mL of enzyme solution (5 mg/mL), 4 mL of zinc nitrate water solution (310 mM), and 40 mL of 2-methylimidazole water solution (1.25 M). The synthesis was conducted in water solution at room temperature for 30 min. Protein induced the nucleation of zeolitic imidazolate frameworks-8 (ZIF-8), and ZIF-8 framework grew around enzyme molecule providing a protective shell. After aging at room temperature for 12 h, the product was collected by centrifugation at 6000 rpm for 10 min, followed by removing the adsorbed protein by three cycles of washing and centrifugation. Scheme of the green synthesis of enzyme-MOF composites exhibiting tolerance for denaturing solvents and heat As shown in Fig. 2 and Additional file 1: Figures S1, S2, the scanning electron microscope (SEM) images and transmission electron microscope (TEM) images of HRP, CALB, Cyt c/ZIF-8 nanocomposites showed similar morphologies to that of pure ZIF-8, with sizes ranging from ~100 to ~800 nm. It seemed that the size of the enzyme/MOF composites was widely distributed due to the rapid nucleation and diverse growth of the crystals. The X-ray diffraction (XRD) patterns of the enzyme/ZIF-8 composites agreed well with the patterns of simulated ZIF-8 and pure ZIF-8 (Fig. 3a), which verified that the incorporation of enzyme did not affect the crystallinity of ZIF-8. SEM images of a HRP/ZIF-8, b CALB/ZIF-8, c Cyt c/ZIF-8, and d ZIF-8. Scale bars 1 μm a XRD patterns of ZIF-8, Cyt c/ZIF-8, HRP/ZIF-8, CALB/ZIF-8, and simulated ZIF-8. b FT-IR spectrum of ZIF-8, Cyt c/ZIF-8, HRP/ZIF-8, CALB/ZIF-8, and HRP The Fourier transform infrared spectroscopy (FTIR) spectra (Fig. 
As shown in Fig. 1, in this study, using cytochrome c (Cyt c), horseradish peroxidase (HRP), and Candida antarctica lipase B (CALB) as model enzymes, the synthesis of enzyme-MOF composites was carried out by mixing 4 mL of enzyme solution (5 mg/mL), 4 mL of zinc nitrate water solution (310 mM), and 40 mL of 2-methylimidazole water solution (1.25 M). The synthesis was conducted in water solution at room temperature for 30 min. Protein induced the nucleation of zeolitic imidazolate framework-8 (ZIF-8), and the ZIF-8 framework grew around the enzyme molecules, providing a protective shell. After aging at room temperature for 12 h, the product was collected by centrifugation at 6000 rpm for 10 min, followed by removing the adsorbed protein by three cycles of washing and centrifugation.
Scheme of the green synthesis of enzyme-MOF composites exhibiting tolerance for denaturing solvents and heat
As shown in Fig. 2 and Additional file 1: Figures S1, S2, the scanning electron microscope (SEM) images and transmission electron microscope (TEM) images of the HRP, CALB, and Cyt c/ZIF-8 nanocomposites showed similar morphologies to that of pure ZIF-8, with sizes ranging from ~100 to ~800 nm. The size of the enzyme/MOF composites was widely distributed, likely due to the rapid nucleation and diverse growth of the crystals. The X-ray diffraction (XRD) patterns of the enzyme/ZIF-8 composites agreed well with the patterns of simulated ZIF-8 and pure ZIF-8 (Fig. 3a), which verified that the incorporation of enzyme did not affect the crystallinity of ZIF-8.
SEM images of a HRP/ZIF-8, b CALB/ZIF-8, c Cyt c/ZIF-8, and d ZIF-8. Scale bars 1 μm
a XRD patterns of ZIF-8, Cyt c/ZIF-8, HRP/ZIF-8, CALB/ZIF-8, and simulated ZIF-8. b FT-IR spectra of ZIF-8, Cyt c/ZIF-8, HRP/ZIF-8, CALB/ZIF-8, and HRP
The Fourier transform infrared spectroscopy (FTIR) spectra (Fig. 3b, 1640–1660 cm−1, corresponding to the amide I band mainly from the C=O stretching mode) proved the presence of protein in the composites. The thermal gravity analysis (TGA) in air also confirmed the presence of protein in the composites (Additional file 1: Figure S3). The loading percentages of protein in the composites interpreted from the TGA curves were 10, 5, and 10% for HRP, CALB, and Cyt c, respectively, which is consistent with the results of size exclusion chromatography. To further confirm that the enzymes were embedded in ZIF-8 instead of adsorbed on the external surface, control experiments were carried out by physically mixing the as-prepared pure ZIF-8 crystals with enzyme solutions to form enzyme-adsorbed ZIF-8 samples. After the same washing step, both the enzyme-embedded ZIF-8 (enzyme/ZIF-8) and enzyme-adsorbed ZIF-8 (enzyme@ZIF-8) samples were digested by acetic acid, followed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE, Additional file 1) performed on an analytic polyacrylamide (12%) gel. As shown in Additional file 1: Figure S4, in SDS-PAGE, protein bands corresponding to the respective molecular weights of the enzymes (12 kD for Cyt c, 38 kD for CALB, and 44 kD for HRP) appeared on the gel for both the free enzymes and the enzyme/ZIF-8 samples (lane 1 and lane 2, respectively). In contrast, no obvious bands were observed for any of the three types of enzyme@ZIF-8 samples (lane 3). The result demonstrated that the enzymes embedded in the ZIF-8 scaffolds cannot be removed by a simple washing step and can only be released from the ZIF-8 scaffolds once the ZIF-8 is digested by acetic acid. In contrast, the enzyme molecules adsorbed on the external surface of the ZIF-8 crystals were completely removed in the washing step. The specific activities of the synthesized enzyme/ZIF-8 composites were determined at the same protein concentration as the native enzymes (Additional file 1). HRP/ZIF-8 and CALB/ZIF-8 retained 12 and 5% of activity compared to the native enzymes. Please note that the small aperture size of ZIF-8 is 3.5 Å, indicating that the transport of substrates such as 4-nitrophenyl butyrate through the small apertures must be very difficult. However, previous studies showed that enzyme was embedded in ZIF-8 crystals in the form of protein aggregates with sizes around 20–30 nm (Lyu et al. 2014). The relatively large cavities localizing the protein aggregates were present both at the surface and in the interior of the enzyme-MOF composites, and these cavities were partially connected (Lyu et al. 2014). In addition, in biomimetic mineralization, structural defects of crystals usually form due to the presence of protein in the crystallization process. These cavities and structural defects in the composites could possibly serve as the major routes for transferring substrates. Moreover, studies have also demonstrated that the pores of MOFs have a "breathing" effect which could enlarge the aperture size to some extent to allow the transfer of molecules (Serre et al. 2002). All this evidence provides a possible mechanism for substrate transport inside the enzyme-MOF composites. The low activity was possibly due to activity loss during the immobilization process and/or the hindered mass transfer caused by the ZIF-8 framework, which is commonly observed in most cases of immobilized enzymes (Kim et al. 2008; Brady and Jordaan 2009; Hanefeld et al. 2009). However, Cyt c embedded in ZIF-8 showed a sixfold increase in activity compared to free Cyt c in solution.
The control experiment confirmed that pure ZIF-8 had no catalytic activity towards the substrate. The high activity of Cyt c in ZIF-8 was possibly due to the increased substrate affinity resulting from a conformational change of Cyt c to expose its heme group (Ono and Goto 2006) in the appropriate microenvironment created by ZIF-8. Moreover, the interaction between the embedded Cyt c and Zn2+ in the ZIF-8 crystals might also have contributed to the enhancement of the catalytic activity of Cyt c (Lyu et al. 2014; Ge et al. 2012). The mechanism of the increased activity of Cyt c in MOF has been investigated elsewhere (Lyu et al. 2014). The thermal stability of enzymes embedded in ZIF-8 was evaluated by comparing the residual activities of the free enzymes and the enzyme/ZIF-8 composites after incubation in phosphate buffer saline solution at high temperatures (70 °C for Cyt c, 50 °C for HRP, 40 °C for CALB). As shown in Fig. 4, after 6-h incubation all the enzyme/ZIF-8 composites maintained over 90% of their original activities, while the free enzymes lost 60 and 30% of their original activities for HRP and CALB, respectively. The greatly enhanced stability of the enzyme/ZIF-8 composites can be attributed to the rigid structure and the confinement effect of the MOF scaffolds, which repress protein conformational transitions or unfolding at high temperatures. As an exceptional case, free Cyt c itself had a good stability in water solution at 70 °C, with almost no loss of activity during the 6-h incubation (Fig. 4e). For Cyt c/ZIF-8, an increased activity of up to 450% was observed during the incubation at 70 °C, which might be caused by the increased accessibility of the heme group in Cyt c at high temperature, as discussed previously (Lee et al. 2005; Pace and Hermans 1975; Fujita et al. 2007). Long-term storage stability is important for immobilized enzymes. As shown in Additional file 1: Figure S5, the HRP/ZIF-8 retained half of its original activity even after a 4-day incubation at room temperature. We attribute the retention of activity to the protecting effect of the rigid ZIF-8 structure in maintaining the conformation of the encapsulated enzyme. We evaluated the recyclability of HRP/ZIF-8. As shown in Additional file 1: Figure S6, the activity of HRP/ZIF-8 had decreased by 50% by the third cycle of reuse. The loss of activity might be ascribed to the difficulty of recovering all the nano-sized composites during the centrifugation-based recovery.
a, c, e Stability of CALB/ZIF-8, HRP/ZIF-8, and Cyt c/ZIF-8 incubated in water for 6 h at 40, 50, and 70 °C, respectively. b, d, f Comparison of the stabilities of enzymes and the corresponding enzyme-MOF composites in water, DMF, and DMSO at 80 °C and in boiling methanol and boiling ethanol. Enzymes and the corresponding enzyme-MOF composites were incubated in the above solvents for 1 h and taken out for enzymatic assays
To examine the tolerance of the enzyme/ZIF-8 composites to polar and hydrophilic solvents at high temperature, the enzyme/ZIF-8 composites were incubated in water, DMF, and DMSO at 80 °C and in boiling methanol and boiling ethanol for 1 h. As shown in Fig. 4, using 4-nitrophenyl butyrate and 1,2,3-trihydroxybenzene as substrates, native CALB was almost fully deactivated under all the above conditions and native HRP retained no more than 20% of its original activity. In contrast, under the same conditions, the activity of CALB/ZIF-8, HRP/ZIF-8, and Cyt c/ZIF-8 remained almost unchanged, demonstrating the unprecedented protecting effect of the framework.
In the case of free Cyt c, an enhancement of 50–450% was observed after incubation in these solvents. The exceptional activity can again be ascribed to the exposure of the heme group in Cyt c under these conditions, where the active sites of Cyt c become more accessible to substrates, facilitating catalysis (Ono and Goto 2006). When Cyt c was encapsulated in ZIF-8, the protecting effect of the framework resisted the protein configuration change in denaturing organic solvents, which resulted in almost unchanged activity (Fig. 4f). These organic solvents, including DMF, DMSO, methanol, and ethanol, are recognized as very effective denaturing reagents for most proteins, because they can seriously destroy the structure of proteins due to their high polarity and solubilization capability. It has been shown that protein molecules in such denaturing solvents quickly lose their tertiary and secondary structures, presenting as unfolded or partially unfolded configurations (Desai and Klibanov 1995; Xu et al. 1997). The deactivation of enzymes induced by denaturing solvents is irreversible. Because of this serious denaturing effect, almost no free enzyme has been reported to retain reasonable activity after incubation in these pure organic solvents (DMF, DMSO, methanol, and ethanol). Protein engineering (including directed evolution and random or site-directed mutagenesis), which is a very effective tool for improving enzyme stability under various circumstances, also has difficulty solving this problem. Protein engineering has only been proven effective for increasing enzyme stability in aqueous-denaturing solvent mixtures. For example, random mutagenesis produced a more stable subtilisin, but only increased its stability in 60% DMF (Chen and Arnold 1991). Therefore, retaining enzyme activity in these anhydrous denaturing organic solvents remains a largely unmet need, even though solvents such as DMSO and DMF commonly serve as reaction media for the synthesis of polysaccharides (Ferreira et al. 2002), peptides (Nilsson and Mosbach 1984), etc. (Bordusa 2002). Recent studies on enzyme-MOF composites demonstrated the capability of improving enzyme catalytic performance in aqueous solutions (Feng et al. 2015; Lykourinou et al. 2011; Chen et al. 2012). In this study, we showed that the confinement of enzyme in the ZIF-8 framework provides an effective way to increase enzyme stability in denaturing organic solvents. The confinement of ZIF-8 can prevent the encapsulated protein molecules from undergoing conformational changes and therefore retains the protein structural integrity, while the free enzyme is directly exposed to the denaturing organic solvents, leading to serious changes in protein configuration and loss of activity.
Enzyme/ZIF-8 composites were prepared by the biomimetic mineralization process. The one-step synthesis was carried out in aqueous solution at room temperature within 30 min. The structural rigidity and confinement of the MOF scaffolds greatly enhanced the thermal stability of the embedded enzymes. In protein denaturing organic solvents including DMF, DMSO, and boiling methanol and ethanol, free enzymes were almost fully deactivated, while the enzyme/ZIF-8 composites retained all their original activity after incubation in these solvents for 1 h. This study demonstrated a green chemistry way of preparing immobilized enzymes based on MOFs to achieve high enzyme stability under harsh conditions.
MOF: metal-organic framework; Cyt c: cytochrome c; HRP: horseradish peroxidase; CALB: Candida antarctica lipase B; DMSO: dimethyl sulfoxide; DMF: dimethyl formamide; ZIF-8: zeolitic imidazolate framework-8; SEM: scanning electron microscopy; TEM: transmission electron microscopy; XRD: X-ray diffraction; FTIR: Fourier transform infrared spectroscopy; TGA: thermal gravity analysis; SDS-PAGE: sodium dodecyl sulfate-polyacrylamide gel electrophoresis
Bordusa F (2002) Proteases in organic synthesis. Chem Rev 102:4817–4868
Bornscheuer U, Huisman G, Kazlauskas R, Lutz S, Moore J, Robins K (2012) Engineering the third wave of biocatalysis. Nature 485:185–194
Brady D, Jordaan J (2009) Advances in enzyme immobilisation. Biotechnol Lett 31:1639–1650
Cao S-L, Yue D-M, Li X-H, Smith TJ, Li N, Zong M-H, Wu H, Ma Y-Z, Lou W-Y (2016) Novel nano-/micro-biocatalyst: soybean epoxide hydrolase immobilized on UiO-66-NH2 MOF for efficient biosynthesis of enantiopure (R)-1,2-octanediol in deep eutectic solvents. ACS Sustain Chem Eng 4:3586–3595
Chen K, Arnold FH (1991) Enzyme engineering for nonaqueous solvents: random mutagenesis to enhance activity of subtilisin E in polar organic media. Nat Biotechnol 9:1073–1077
Chen Y, Lykourinou V, Vetromile C, Hoang T, Ming L-J, Larsen RW, Ma S (2012) How can proteins enter the interior of a MOF? Investigation of cytochrome c translocation into a MOF consisting of mesoporous cages with microporous windows. J Am Chem Soc 134:13188–13191
Desai UR, Klibanov AM (1995) Assessing the structural integrity of a lyophilized protein in organic solvents. J Am Chem Soc 117:3940–3945
Feng D, Liu T-F, Su J, Bosch M, Wei Z, Wan W, Yuan D, Chen Y-P, Wang X, Wang K, Lian X, Gu Z-Y, Park J, Zou X, Zhou H-C (2015) Stable metal-organic frameworks containing single-molecule traps for enzyme encapsulation. Nat Commun 6:5979. doi:10.1038/ncomms6979
Ferreira L, Gil MH, Dordick JS (2002) Enzymatic synthesis of dextran-containing hydrogels. Biomaterials 23:3957–3967
Fujita K, MacFarlane DR, Forsyth M, Yoshizawa-Fujita M, Murata K, Nakamura N, Ohno H (2007) Solubility and stability of cytochrome c in hydrated ionic liquids: effect of oxo acid residues and kosmotropicity. Biomacromolecules 8:2080–2086
Ge J, Lei J, Zare RN (2012) Protein-inorganic hybrid nanoflowers. Nat Nanotechnol 7:428–432
Hanefeld U, Gardossi L, Magner E (2009) Understanding enzyme immobilisation. Chem Soc Rev 38:453–468
He H, Han H, Shi H, Tian Y, Sun F, Song Y, Li Q, Zhu G (2016) Construction of thermophilic lipase-embedded metal-organic frameworks via biomimetic mineralization: a biocatalyst for ester hydrolysis and kinetic resolution. ACS Appl Mater Interfaces 8:24517–24524
Ji X, Su Z, Wang P, Ma G, Zhang S (2016) Integration of artificial photosynthesis system for enhanced electronic energy-transfer efficacy: a case study for solar-energy driven bioconversion of carbon dioxide to methanol. Small 12:4753–4762
Kim J, Grate JW, Wang P (2008) Nanobiocatalysis and its potential applications. Trends Biotechnol 26:639–646
Kirk O, Borchert TV, Fuglsang CC (2002) Industrial enzyme applications. Curr Opin Biotechnol 13:345–351
Lee C-H, Lang J, Yen C-W, Shih P-C, Lin T-S, Mou C-Y (2005) Enhancing stability and oxidation activity of cytochrome c by immobilization in the nanochannels of mesoporous aluminosilicates. J Phys Chem B 109:12277–12286
Li P, Moon SY, Guelta MA et al (2016a) Encapsulation of a nerve agent detoxifying enzyme by a mesoporous zirconium metal-organic framework engenders thermal and long-term stability. J Am Chem Soc 138(26):8052–8055
Li S, Dharmarwardana M, Welch RP, Ren Y, Thompson CM, Smaldone RA, Gassensmith JJ (2016b) Template-directed synthesis of porous and protective core–shell bionanoparticles. Angew Chem 128:10849–10854
Liang K, Carbonell C, Styles MJ, Ricco R, Cui J, Richardson JJ, Maspoch D, Caruso F, Falcaro P (2015a) Biomimetic replication of microscopic metal-organic framework patterns using printed protein patterns. Adv Mater 27:7293–7298
Liang K, Ricco R, Doherty CM, Styles MJ, Bell S, Kirby N, Mudie S, Haylock D, Hill AJ, Doonan CJ (2015b) Biomimetic mineralization of metal-organic frameworks as protective coatings for biomacromolecules. Nat Commun 6:7240. doi:10.1038/ncomms8240
Liang H, Jiang S, Yuan Q, Li G, Wang F, Zhang Z, Liu J (2016a) Co-immobilization of multiple enzymes by metal coordinated nucleotide hydrogel nanofibers: improved stability and an enzyme cascade for glucose detection. Nanoscale 8:6071–6078
Liang K, Richardson JJ, Cui J, Caruso F, Doonan CJ, Falcaro P (2016b) Metal-organic framework coatings as cytoprotective exoskeletons for living cells. Adv Mater 28:7910–7914
Liang K, Coghlan CJ, Bell SG, Doonan C, Falcaro P (2016c) Enzyme encapsulation in zeolitic imidazolate frameworks: a comparison between controlled co-precipitation and biomimetic mineralisation. Chem Commun 52:473–476
Liu J, Yang Q, Li C (2015) Towards efficient chemical synthesis via engineering enzyme catalysis in biomimetic nanoreactors. Chem Commun 51:13731–13739
Lykourinou V, Chen Y, Wang X-S, Meng L, Hoang T, Ming L-J, Musselman RL, Ma S (2011) Immobilization of MP-11 into a mesoporous metal-organic framework, MP-11@mesoMOF: a new platform for enzymatic catalysis. J Am Chem Soc 133:10382–10385
Lyu F, Zhang Y, Zare RN, Ge J, Liu Z (2014) One-pot synthesis of protein-embedded metal-organic frameworks with enhanced biological activities. Nano Lett 14:5761–5765
Nilsson K, Mosbach K (1984) Peptide synthesis in aqueous–organic solvent mixtures with α-chymotrypsin immobilized to tresyl chloride-activated agarose. Biotechnol Bioeng 26:1146–1154
Novak MJ, Pattammattel A, Koshmerl B, Puglia M, Williams C, Kumar CV (2015) "Stable-on-the-table" enzymes: engineering the enzyme–graphene oxide interface for unprecedented kinetic stability of the biocatalyst. ACS Catal 6:339–347
Ono T, Goto M (2006) Peroxidative catalytic behavior of cytochrome c solubilized in reverse micelles. Biochem Eng J 28:156–160
Pace C, Hermans J (1975) The stability of globular protein. CRC Crit Rev Biochem 3:1–43
Serre C, Millange F, Thouvenot C, Nogues M, Marsolier G, Louer D, Ferey G (2002) Very large breathing effect in the first nanoporous chromium(III)-based solids: MIL-53 or CrIII(OH)·{O2C−C6H4−CO2}·{HO2C−C6H4−CO2H}x·H2Oy. J Am Chem Soc 124:13519–13526
Shi J, Wang X, Zhang S, Tang L, Jiang Z (2016) Enzyme-conjugated ZIF-8 particles as efficient and stable Pickering interfacial biocatalysts for biphasic biocatalysis. J Mater Chem B 4:2654–2661
Shieh F-K, Wang S-C, Yen C-I, Wu C-C, Dutta S, Chou L-Y, Morabito JV, Hu P, Hsu M-H, Wu KC-W (2015) Imparting functionality to biocatalysts via embedding enzymes into nanoporous materials by a de novo approach: size-selective sheltering of catalase in metal-organic framework microcrystals. J Am Chem Soc 137:4276–4279
Wen L, Gao A, Cao Y, Svec F, Tan T, Lv Y (2016) Layer-by-layer assembly of metal-organic frameworks in macroporous polymer monolith and their use for enzyme immobilization. Macromol Rapid Commun 37:551–557
Wu X, Yang C, Ge J, Liu Z (2015a) Polydopamine tethered enzyme/metal-organic framework composites with high stability and reusability. Nanoscale 7:18883–18886
Wu X, Ge J, Yang C, Hou M, Liu Z (2015b) Facile synthesis of multiple enzyme-containing metal-organic frameworks in a biomolecule-friendly environment. Chem Commun 51:13408–13411
Xu K, Griebenow K, Klibanov AM (1997) Correlation between catalytic activity and secondary structure of subtilisin dissolved in organic solvents. Biotechnol Bioeng 56:485–491
Zhang Y, Wang F, Ju E, Liu Z, Chen Z, Ren J, Qu X (2016) Metal-organic-framework-based vaccine platforms for enhanced systemic immune and memory response. Adv Funct Mater 26:6454–6461
XW, CY, and JG designed the experiments. XW and CY performed the experiments. XW and JG drafted the manuscript. XW and JG contributed to the discussion. All authors read and approved the final manuscript. All data generated or analyzed during this study are included in this article. This work was supported by the National Key Research and Development Plan of China (2016YFA0204300). Key Lab for Industrial Biocatalysis, Ministry of Education, Department of Chemical Engineering, Tsinghua University, Beijing, 100084, China: Xiaoling Wu, Cheng Yang & Jun Ge. Correspondence to Jun Ge. Wu, X., Yang, C. & Ge, J. Green synthesis of enzyme/metal-organic framework composites with high stability in protein denaturing solvents. Bioresour. Bioprocess. 4, 24 (2017). https://doi.org/10.1186/s40643-017-0154-8. Revised: 11 May 2017.
SemLinker: automating big data integration for casual users
Hassan Alrehamy1,2 & Coral Walker (ORCID: orcid.org/0000-0002-0258-9301)1
A data integration approach combines data from different sources and builds a unified view for the users. Big data integration is inherently a complex task, and the existing approaches are either potentially limited or invariably rely on manual inputs and interposition from experts or skilled users. SemLinker, an ontology-based data integration system, is part of a metadata management framework for personal data lake (PDL), a personal store-everything architecture. PDL is for casual and unskilled users; therefore SemLinker adopts an automated data integration workflow to minimize manual input requirements. To support the flat architecture of a lake, SemLinker builds and maintains a schema metadata level without involving any physical transformation of data during integration, preserving the data in their native formats while, at the same time, allowing them to be queried and analyzed. Scalability, heterogeneity, and schema evolution are big data integration challenges that are addressed by SemLinker. Large and real-world datasets of substantial heterogeneities are used in evaluating SemLinker. The results demonstrate and confirm the integration efficiency and robustness of SemLinker, especially regarding its capability in the automatic handling of data heterogeneities and schema evolutions.
Big data is growing rapidly from an increasing plurality of sources, ranging from machine-generated content such as purchase transactions and sensor streams, to human-generated content such as social media and product reviews. Although much of these data are accessible online, their integration is inherently a complex task, and, in most cases, is not performed fully automatically but through manual interactions [1, 2]. Typically, data must go through a process called ETL (Extract, Transform, Load) [3] where they are extracted from their sources, cleaned, transformed, and mapped to a common data model before they are loaded into a central repository, integrated with other data, and made available for analysis. Recently the concept of a data lake [4], a flat repository framework that holds a vast amount of raw data in their native formats including structured, semi-structured, and unstructured data, has emerged in the data management field. Compared with the monolithic view of a single data model emphasized by the ETL process, a data lake is a more dynamic environment that relaxes data capturing constraints and defers data modeling and integration requirements to a later stage in the data lifecycle, resulting in an almost unlimited potential for ingesting and storing various types of data regardless of their sources and frequently changing schemas, which are often not known in advance [5]. In one of our earlier papers [6], we propose personal data lake (PDL), an exemplar of this flexible and agile storage solution. PDL ingests raw personal data scattered across a multitude of remote data sources and stores them in a unified repository regardless of their formats and structures. Although a data lake like PDL, to some extent, contributes towards solving the big data variety challenge, data integration remains an open problem. PDL allows its users to ingest raw data instances directly from the data sources, but the data extraction and integration workflow, without predefined schemas or machine-readable semantics to describe the data, is not straightforward.
Often the user has to study the documentation of each data source to enable suitable integration [7]. An enterprise data lake system built with Hadoop [8] would rely on professionals and experts playing active roles in the data integration workflow. PDL, however, is designed for ordinary people, and has no highly trained and skilled IT personnel to physically manage its contents. To this end, equipping PDL with an efficient and easy-to-use data integration solution is essential for casual users: it allows them to process, query, and analyze their data, and to gain insights for supporting their decision-making [9]. To support PDL, the big data integration (BDI) system faces the following three challenges:

The scalability challenge arises from the vast number of data sources that may input to a data lake [10], and the continuous addition of new and varying data sources [2].

The heterogeneity challenge arises from dealing with various types of raw data collected from a large number of unrelated data sources [1]. The data sources of a lake, even in the same domain, can be very heterogeneous regarding how their data are structured, labeled, and described (e.g., naming conventions for JSON keys, XML tags, or CSV headers), exhibiting considerable variety even for data with substantially similar attributes [11]. The reconciliation of semantic and structural heterogeneities in raw data is a necessary preparatory step for storing and retrieving data quickly and cost-efficiently, and for aligning raw data from different sources so that all types of data relevant to a single analysis requirement can be combined and analyzed together. Manually handling heterogeneity reconciliations would pose a huge burden on PDL users. Despite efforts in the fields of semantic web and data integration for automating the reconciliation process [12,13,14], existing approaches, most of which require optimized parameter tuning and expertise-based configurations to cope with the heterogeneities of data [10], cannot be adopted in PDL.

The schema evolution challenge refers to the problem of handling unexpected changes in the schema and structure of the ingested data [15, 16]. Big data is often subject to frequent schema evolutions, which would cause query executions to crash if not dealt with [17]. Handling schema evolutions is a non-trivial task, and the common practice normally involves employing skilled manpower. Schema evolution has been a known problem in the database community for the last three decades [18] and has become frequent and extensive in the era of big data, yet it has not been addressed effectively [1, 7, 11].

In this paper, we introduce SemLinker, an ontology-based BDI system that addresses the BDI challenges discussed above. SemLinker, as a principal integrated component in the PDL architecture, adopts an automatic approach that operates only on the schema metadata level without involving physical transformation of data during integration. Thus, it preserves the data in their native formats and structures while, at the same time, allowing the data to be easily analyzed and queried by casual users. In addition, SemLinker also handles frequent schema evolutions automatically and shields analysis processes operating on the schema metadata level from crashing due to unexpected schema changes.
The remainder of this paper is organized as follows: a summary of related work is given in "Related work" section; an overview of the SemLinker architecture is given in "SemLinker architecture overview" section; the implementation of SemLinker is discussed in detail in "Global schema layer", "Local schemata layer", and "Query" sections; the integration of SemLinker into the PDL architecture is described in "Integration with PDL" section; an experimental evaluation is given, and its results are presented and analysed, in "Experimental evaluation" section; finally, our conclusions and future work directions are discussed in "Conclusion and future work" section.

Related work

Lenzerini introduces in [19] a theoretical framework for integrating a set of heterogeneous data sources based on their associated schema metadata, more formally, local schemas. The framework's integration workflow is to maintain a mediated schema (i.e., global ontology) and specify relationships, or mappings, between the mediated schema and the local schemas of the different data sources under integration. The concept of an ontology [20] is used as an efficient description tool for representing the mediated schema and for providing unified views over the data collected from the integrated sources. The user formulates queries by utilizing the terms (concepts) defined by the ontology, and the queries are executed according to mappings between the ontological terms and their corresponding representations in the local schemas. Many current state-of-the-art ontology-based data integration systems follow Lenzerini's framework [19] to integrate structured and/or semi-structured data collected from heterogeneous data sources, such as [7, 21,22,23]. Although these systems can deliver effective and efficient data integration performance in many use cases, they typically require continuous human intervention to supervise the process of discovering mappings between the global ontology and the local schemas [1, 14, 24], which is itself a laborious and time-consuming task that requires schema matching expertise. Furthermore, these systems favor static integration workflows, where any change in the global ontology or the local schema metadata implies a high degree of manual processing to (re)configure the mappings [7, 14]. With the increasing popularity of data lakes in the big data landscape, metadata is becoming of immense importance for BDI research [25], and metadata management is currently an active research topic. GEMMS [5] is a metadata management framework (MMF) that extracts and manages metadata about the data stored in the Constance data lake. GEMMS integrates the user's personal data in life science fields by modeling them with a common metadata model. Although GEMMS is theoretically capable of reconciling semantic heterogeneities between the raw data, and of tolerating big data volume and variety, its architecture suffers from multiple drawbacks: first, it reconciles structural heterogeneities through physical data transformations, which implies altering the native schemas and structures of the data and posing constraints on the ingestion process; secondly, it is very sensitive to emerging changes in the raw data schemas; thirdly, the GEMMS literature does not describe how the integrated data can be systematically accessed and queried. Kayak [26], a generic framework for managing data lake content through metadata-based data preparation and wrangling, is a case similar to GEMMS.
Although it promises integration and querying capabilities, its approach has not yet been revealed. Atlas [27] is an agile Apache enterprise framework for data governance and metadata management in Hadoop data lakes. After integration with Apache Avro [28], it became capable of handling schema evolutions in the datasets stored in a Hadoop data lake. In [7], Nadal et al. propose an integration-oriented ontology-based system for integrating heterogeneous JSON data in data lakes, and for governing their schema evolutions. The system has two ontology levels. The top level is modeled as an OWL ontology to offer unified views over local schemas, and the bottom level contains local schemas maintained as RDF graphs. A data steward is responsible for providing mappings between the two levels. If a particular local schema evolves, the data steward is notified, and a remapping then takes place. The shortcoming of current data lake BDI solutions is that they inherently exhibit the same drawbacks found in traditional data integration. For instance, raw data metamodeling remains an expensive task and requires expert user supervision [1, 29]. Furthermore, the schema evolution problem and its impact on data access, processing, integration, and analysis in a data lake is often overlooked, and its solution largely remains manual [7, 28, 30]. Rahm states in [1] that most big data integration proposals are limited to a few data sources, and that analytics over a large volume of heterogeneous data ingested from various data sources is only possible with the availability of a holistic data integration solution that: (i) is fully automatic or requires only minimal manual interaction, and (ii) makes it easy to add and use additional data sources and automatically deals with frequent changes in these sources. SemLinker, as a data integration system, shares many features and functionalities with other solutions. However, as a solution for PDL, whose users are typically casual and unskilled, SemLinker needs to be in agreement with Rahm's automation proposal and to isolate its users from the technical details imposed by the integration workflow. Our proposed automations in SemLinker are implemented in the following operations:

Metadata extraction and maintenance.

Schema evolution handling.

Discovery of mappings between the system's global ontology and the metadata denoting the local schema of a data source.

SemLinker also supports data analysis in PDL by allowing direct queries over its metadata repository. Thus, big data management tasks such as data summarization, analytics, and insight discovery can be readily performed.

SemLinker architecture overview

The SemLinker architecture consists of a global schema layer, a local schemata layer, and the relationships between these layers. The global schema layer consists of the global schema (\({\mathcal{G}}\)), and the query engine for formulating queries over \({\mathcal{G}}\). The local schemata layer consists of the schemas repository (S), and schema metadata extraction, mapping, and management components.
As an ontology-based integration system, SemLinker is conceptually based on the theoretical framework proposed by Lenzerini [19], and we formally define the system as follows: SemLinker is a triple \(\left\langle {{\mathcal{G}},S,{\mathcal{M}}} \right\rangle\), where \({\mathcal{G}}\) is the global schema, S is a set of local schemas corresponding to n data sources, \(S = \left\{ {S_{1} ,S_{2} , \ldots ,S_{n} } \right\}\), and \({\mathcal{M}}\) is a set of mapping assertions, such that, for each Si there is a set of mappings between g and Si, \(g \in {\mathcal{G}}\), \(1 \le i \le n\), in the form: p → a, where attribute \(a \in S_{i}\) and property \(p \in g\). The system's global schema \({\mathcal{G}}\) is modeled as a global ontology and is described using the Web Ontology Language (OWL) [31]. SemLinker extracts and maintains machine-readable metadata describing the physical schema details of each data source connected with the PDL, and specific semantics about its available data. We refer to such metadata as a local schema. A local schema is described using the Resource Description Framework (RDF) [32] and is stored in the schemas repository S. SemLinker is responsible for automatically mapping the local schema Si of the data source i to a semantically corresponding concept in the global ontology \({\mathcal{G}}\). Such mappings provide a metadata model that allows SemLinker to systematically annotate ingested data, and allows the user to pose queries over \({\mathcal{G}}\), which serves as an abstraction layer over S and its raw data. With the metamodeling in place, the raw data of PDL are integrated on the metadata level; no manual effort is required to reconcile the heterogeneities in the physical schemas and structures of the raw data. Figure 1 gives a high-level overview of the SemLinker architecture.

Fig. 1 Overview of SemLinker architecture

Here we introduce a personal data example comparable to a real-life scenario to give a realistic view of the challenges that a BDI system like SemLinker must meet. Figure 2 lists four personal data instances representing social media feeds posted by a PDL user on Facebook and Twitter and ingested by the PDL through the available API of each source (Facebook Graph API, and Twitter Streaming API) with evolved schemas. Although these instances exist in self-describing formats and contain abstract schema metadata implicitly encoded in JSON keys and XML tags, they suffer semantic and structural heterogeneities, even for the instances ingested from the same data source. For example, the JSON keys in Facebook data instances (Fig. 2a, b) are expressed with different strings and exist in different structures (see "location" and "geo" keys). Similarly, Twitter data (Fig. 2c, d) also exist in different data formats. The example serves as a reference point for later sections on SemLinker's implementation.

Fig. 2 Examples of personal data instances ingested from two data sources. a Facebook data in schema v.2.9. b Facebook data in schema v.3.0. c Twitter data in schema v.1.1. d Twitter data in schema v.1.2
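To make the running example concrete, the sketch below reconstructs the four instances of Fig. 2 in Python. Only the attribute names discussed in the text ("message", "location", "story", "geo", "text", "attachments") are grounded in the paper; all values and unlisted keys are hypothetical.

```python
# Hypothetical reconstructions of the Fig. 2 instances; attribute names follow
# the text, but all values and the unlisted keys are invented for illustration.
facebook_v29 = {                      # a: Facebook Graph API, schema v2.9 (JSON)
    "id": "101_1",
    "message": "Lunch by the river",
    "location": {"latitude": 51.48, "longitude": -3.18},  # embedded object
}
facebook_v30 = {                      # b: Facebook Graph API, schema v3.0 (JSON)
    "id": "101_2",
    "story": "Lunch by the river",    # "message" renamed to "story"
    "geo": "51.48,-3.18",             # "location" object cast to flat "geo"
    "attachments": [],
}
twitter_v11 = {"id": 7, "text": "Lunch by the river"}     # c: Twitter v1.1 (JSON)
twitter_v12 = ("<status><id>8</id>"                       # d: Twitter v1.2 (XML)
               "<message>Lunch by the river</message></status>")
```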
Global schema layer

The global ontology \({\mathcal{G}}\) serves two purposes: to tag data sources with type semantic information, and to form the basis of queryable, format-agnostic unified views that allow uniform queries to be executed over the raw data ingested from different data sources. An ideal global ontology is a comprehensive and standardized ontology that provides semantic coverage and interoperability across a vast range of domains [33]. For this reason, we initiate \({\mathcal{G}}\) as an OWL implementation of SCHEMA [15]. SCHEMA is a lightweight and well-curated vocabulary which consists of abstract concepts common across many domains and is used as a backbone schema for annotation in many large-scale knowledgebase projects, such as Wikipedia, DBpedia, and Google Knowledge Graph. Such an initiation is beneficial in supporting semantic interoperability between a multitude of data sources that possibly exist in different domains; the disadvantage, however, is that SCHEMA abstract concepts can be too generic and require more specificity to support concise metamodeling and integration [34]. To balance between conceptual abstraction and semantic specificity, we make \({\mathcal{G}}\) extensible. The elements of \({\mathcal{G}}\) may be extended by adding new properties to the current set of properties of a concept, \(g \in {\mathcal{G}}\), to increase its coverage over elements in local schemas at the local schemata layer. g may also be extended by adding a new subordinate concept to it. rdf:type and rdfs:subClassOf are used for importing new and more specific concepts. To comply with \({\mathcal{G}}'s\) structure, the newly added concept must be associated with a set of properties (declared using \({\mathcal{G}}\):hasProperty), and each property is of a certain primitive data type that is strictly reused from the XSD vocabulary [35] (declared using \({\mathcal{G}}\):hasDatatype). Figure 3 depicts an example of extending SocialMediaPosting, a concept in \({\mathcal{G}}\), with Feed, a more specific concept imported from the SIOC vocabulary [36]. This extension supports a unified view over data ingested from social media data sources. Feed is linked to SocialMediaPosting using rdfs:subClassOf, and is described by a set of properties imported from the DCMI [37] and WGS84 [38] vocabularies. The RDF data required to implement such an extension are automatically generated by SemLinker and added to the \({\mathcal{G}}\) ontology (a sketch of such triples is given below).

Fig. 3 Example of concept extension in \({\mathcal{G}}\)
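The following rdflib sketch shows what the automatically generated extension triples could look like. The \({\mathcal{G}}\) namespace URI and the exact property IRIs are assumptions; only the use of rdfs:subClassOf, \({\mathcal{G}}\):hasProperty, and \({\mathcal{G}}\):hasDatatype is taken from the text.

```python
# A minimal sketch (not the authors' code) of the Fig. 3 extension as RDF.
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS, XSD

G     = Namespace("http://semlinker.example/G#")             # assumed namespace
SIOC  = Namespace("http://rdfs.org/sioc/ns#")
DC    = Namespace("http://purl.org/dc/terms/")
WGS84 = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")

graph = Graph()
# Link the imported, more specific concept to its parent concept in G.
graph.add((SIOC.Feed, RDFS.subClassOf, G.SocialMediaPosting))
# Attach properties from DCMI and WGS84, each with a primitive XSD datatype.
graph.add((SIOC.Feed, G.hasProperty, DC.description))
graph.add((DC.description, G.hasDatatype, XSD.string))
graph.add((SIOC.Feed, G.hasProperty, WGS84.lat))
graph.add((WGS84.lat, G.hasDatatype, XSD.float))
print(graph.serialize(format="turtle"))
```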
Local schemata layer

Schemas repository

The schemas repository S is the principal component in the local schemata layer. It stores the set of local schemas corresponding to the different data sources that are added to SemLinker over time. Each local schema is stored in S as a data graph that contains machine-readable metadata in the form of RDF triples describing the physical schema details of the data ingested from the data source. For each new data source i, SemLinker initializes a new empty RDF graph representing i's local schema, denoted as Si. Subsequently, SemLinker requires Si to be tagged with a concept \(g \in {\mathcal{G}}\), so that g reflects the underlying type semantics of the data typically offered by i. Local schema tagging is normally modeled as an RDF triple, and follows the pattern:

$$\langle S_{i}\; M{:}\;isInstanceOf\; g \rangle$$

For example, to add the data source Facebook to SemLinker (Fig. 2), the global concept "Feed" is used to tag Facebook, i.e. Tag(Facebook) = Feed, and the RDF interpretation of such tagging is asserted as

$$\langle S_{Facebook}\; M{:}\;isInstanceOf\; {\mathcal{G}}{:}Feed\rangle$$

The physical schema of any data source is subject to changes and updates [15, 39]. In the example depicted in Fig. 2, schema evolution is observed at both the semantic level (data attribute renaming, e.g. "message" ↦ "story", and "text" ↦ "message") and the structural level (data format changes, e.g. JSON ↦ XML, and attribute changes, e.g. casting the JSON object "location" in Fig. 2a into the simple attribute "geo" in Fig. 2b). SemLinker takes a novel, automatic approach to handle the schema evolution problem. In this approach, the RDF representation of the local schema of a data source is regarded as dynamic. It contains a changeable set of subgraphs, each of which represents an evolving version of the schema and is called a source schema. A schema extraction algorithm is used to extract source schemas automatically, and a mapping computation algorithm is responsible for mapping them to the global ontology. A formal definition of the local schema as a dynamic set of source schemas is given below.

(Local schema) The local schema \(S_{i} \in S\) is a dynamic set of source schemas corresponding to m physical schema evolutions in the data ingested from the data source i, \(S_{i} = \left\{ {S_{i1} ,S_{i2} , \ldots ,S_{im} } \right\}\). For each \(S_{ij} \in S_{i}\), 1 ≤ j ≤ m, there is a set of mapping assertions \({\mathcal{M}}\) between \(S_{ij}\) and \(g \in {\mathcal{G}}\) of the form: p → a, where attribute \(a \in S_{ij}\) and property \(p \in g\).

The system ingests a data instance from its source's API, which is typically associated with a release version. Analysis of the instance's physical schema is needed to obtain its source schema Sij, where i is the data source's unique identifier (typically a URI), and j is i's API release version. SemLinker then (fully or partially) maps Sij to the tagging concept g in the global ontology, stores it in the underlying graph of Si, and uses it to integrate i's data with other data stored in the PDL. Furthermore, Sij is regarded as a benchmark and is used to run schema checks on any new data instances ingested from i. A schema check may fail, and when the number of failures reaches a predefined threshold, the system infers that data source i has released its API with a newer version. Consequently, a new evolution has occurred in the physical schema of i's data, and the system must augment the local schema Si by constructing a new source schema, say Sik, that is also mapped to g and added to Si, so that Sik is utilized to integrate any new data instances ingested from i with the release version k, meanwhile utilizing Sij to maintain backward integration support for the data instances that have already been ingested from i with the release version j. The procedure for augmenting the local schemas upon schema evolutions in the APIs of their data sources is automatically repeated to keep up-to-date metadata about the physical schema of the data instances ingested from each data source.
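A condensed sketch of this evolution-handling loop is given below. The failure counter, the threshold value, and all helper names are illustrative assumptions, not the authors' implementation; List 1 and List 2 refer to the algorithms described next.

```python
# A sketch, under assumed names, of the schema-check and evolution-detection
# loop described above.
FAILURE_THRESHOLD = 3                 # "predefined threshold" (value assumed)
failures = {}                         # source URI -> consecutive failed checks

def on_ingest(source, release, instance):
    benchmark = source.local_schema.get(release)          # current S_ij
    if benchmark is not None and benchmark.check(instance):
        failures[source.uri] = 0                          # schema unchanged
        return benchmark
    failures[source.uri] = failures.get(source.uri, 0) + 1
    if failures[source.uri] < FAILURE_THRESHOLD:
        return benchmark                                  # tolerate the failure
    # Threshold reached: infer a new API release k and augment S_i with a new
    # source schema S_ik; older versions stay for backward integration support.
    new_schema = extract_source_schema(instance)          # List 1 (sketched below)
    compute_mappings(new_schema, source.tagging_concept)  # List 2 (sketched below)
    source.local_schema.add(new_schema)
    failures[source.uri] = 0
    return new_schema
```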
Schema extraction algorithm

The schema extraction algorithm automatically extracts source schemas from data instances (see List 1). It takes as input a data instance F ingested from a data source i, with release version j, and a mime string specifying the format type of F. F is assumed to conform to a known format specification [40], and its structure consists of a mix of flat and complex attributes, each of which has a label and a value. The algorithm operates on the structure level of F and extracts its RDF representation Sij, which consists of nodes and relationships between them. Each node in Sij describes a specific element (attribute) in the physical schema of F and is associated with three constructs: Identifier, Semantic Type, and Relation. The algorithm assigns a value to each node and constructs Identifier using the URI of the data source and the release version j as base values. Semantic Type specifies the semantic class of the node, and its range is one of the classes \({\mathcal{S}}\):Attribute, \({\mathcal{S}}\):Object, or \({\mathcal{S}}\):Collection. Relation refers to a relation between a pair of nodes, and can be \({\mathcal{S}}\):hasAttribute, \({\mathcal{S}}\):hasObject, \({\mathcal{S}}\):hasCollection, \({\mathcal{S}}\):hasFormat, \({\mathcal{S}}\):isComposedBy, or \({\mathcal{S}}\):isDecomposedFrom. The algorithm has two main procedures: InitializeGraph() and GenerateGraph(). InitializeGraph() starts by specifying the given URI and the release version j as the root of Sij (line 2); the auxiliary function ToRDF() adds a format attribute (the input mime) to Sij as one of its nodes (lines 3 and 4); ToRDF() then specifies the relationship (\({\mathcal{S}}\):hasFormat) between the format node and its parent node (line 5). At this point, the source schema Sij is initiated. The procedure then invokes GenerateGraph() and passes F and the root of Sij to it. GenerateGraph() constructs Sij through a series of iterations and recursive calls over the physical schema of F. In each call, the procedure takes a label-value pair from F and parentId (the URI) as input, creates a node in Sij corresponding to the label, and links the node to its parent node (parentId). The initialization and linking of any node is modeled as the RDF triples (NodeId rdf:type \({\mathcal{S}}\):Type) and (ParentNodeId \({\mathcal{S}}\):relation NodeId), respectively (line 12). Next, ToRDF(), based on the type of the node, appends these triples to Sij (lines 6–15). Depending on the complexity of F's structure, a label-value pair may represent a flat attribute in F (e.g. the "id" key in Fig. 2a), in which case the node type obtained from the auxiliary function Type() is \({\mathcal{S}}\):Attribute, the corresponding node is linked to its parent node using the \({\mathcal{S}}\):hasAttribute relation, and the algorithm moves to the next label-value pair. Conversely, the current label-value pair may correspond to a complex attribute (e.g. the embedded object "location" in Fig. 2a). In this case, the type obtained from Type() is either \({\mathcal{S}}\):Collection or \({\mathcal{S}}\):Object, the node is linked to its parent node using one of the relations \({\mathcal{S}}\):hasCollection or \({\mathcal{S}}\):hasObject, and the node's identifier and value are subsequently passed to the recursive procedure GenerateGraph(). Figure 4, as an example, depicts the graphical representation of the source schema extracted from the data sample given in Fig. 2a. The first node in the graph is created as a leaf node because the first label-value pair (the JSON key "id") is a simple attribute in F. Its identifier is https://graph.facebook.com/me/feed/2.9/id. A node may be embedded in another node, as is the case with the node labeled "latitude", which serves as one of the flat attributes of the object "location".

Fig. 4 Example of source schema extracted using the schema extraction algorithm
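Since List 1 itself is not reproduced here, the sketch below condenses the two procedures into plain Python over JSON-like input. The triple emission is simplified to tuples, and node identifiers follow the URI-plus-label pattern of Fig. 4; everything else is an assumption of the sketch.

```python
# A condensed sketch of InitializeGraph()/GenerateGraph(); illustrative only.
def type_of(value):
    if isinstance(value, dict):
        return "S:Object"
    if isinstance(value, list):
        return "S:Collection"
    return "S:Attribute"

RELATION = {"S:Object": "S:hasObject",
            "S:Collection": "S:hasCollection",
            "S:Attribute": "S:hasAttribute"}

def initialize_graph(instance, source_uri, release, mime):
    root = f"{source_uri}/{release}"              # root of S_ij
    triples = [(root, "S:hasFormat", mime)]       # format node
    generate_graph(instance, root, triples)
    return triples

def generate_graph(attrs, parent_id, triples):
    for label, value in attrs.items():
        node_id = f"{parent_id}/{label}"          # Identifier construct
        node_type = type_of(value)                # Semantic Type construct
        triples.append((node_id, "rdf:type", node_type))
        triples.append((parent_id, RELATION[node_type], node_id))
        if node_type == "S:Object":
            generate_graph(value, node_id, triples)       # recursive call
        elif node_type == "S:Collection":
            for item in value:                    # descend into complex items
                if isinstance(item, dict):
                    generate_graph(item, node_id, triples)

# e.g. initialize_graph({"id": "101_1", "location": {"latitude": 51.48}},
#                       "https://graph.facebook.com/me/feed", "2.9",
#                       "application/json")
```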
Mapping computation and management

Once a source schema is constructed, it needs to be mapped to the global ontology. A mapping is a relationship specifying how an element structured under one schema (i.e., the source schema) corresponds to an equivalent element structured under the mediated schema (i.e., the global ontology \({\mathcal{G}}\)) [19]. Mappings may be discovered either implicitly or explicitly. In SemLinker, because the global concepts of \({\mathcal{G}}\) are predefined independently from the data sources, it is likely that a source schema is semantically incompatible with the concepts of \({\mathcal{G}}\), and therefore no implicit mappings can be directly discovered between a source schema Sij and a tagging concept g. Typically, computing mappings between a source schema and a tagging concept involves specifying the semantic types of the source schema elements, i.e., labeling each schema element with a semantically equivalent property in the tagging concept [13]. However, semantic labeling alone is not sufficient [41], and a precise mapping computation process requires an extra step that specifies how the elements of a source schema should be organized in accordance with the structure of its tagging concept so that the two constructs become semantically compatible and ready for mapping. This "extra step" is often missed in systems that automate the mappings [14, 24, 29, 42, 43] and is expected to be dealt with manually [41]. SemLinker uses a two-step mapping approach that not only establishes the explicit mappings, but also performs the "extra step" automatically. The two steps are schema matching (SM) and virtual transformation of source attribute (VTSA).

Mapping algorithm

The mapping algorithm (see List 2) takes as inputs a tagging concept \(g\), a source schema Sij, and a threshold t. It takes two steps, SM and VTSA, to compute mappings between properties and source attributes. Mappings are established as RDF triples, where each mapping triple has the pattern \(\left( {p \, M{:}mapsTo \, a} \right),\;p \in g,\;a \in S_{ij}\). Such modeling offers the flexibility of allowing multiple source attributes of multiple source schemas to be mapped to a single property. The source attributes mapped to the same property are considered semantically equivalent between themselves, so a unified view over them can be automatically represented by the property. Revisiting the example in Fig. 2, we see that the Twitter data source is tagged with the concept "Feed". With the mappings specified below, "text" in STwitter,v1.1 (Fig. 2c) is regarded as semantically equivalent to "message" in STwitter,v1.2 (Fig. 2d).

$$\begin{aligned} \left\langle {Feed{:}\;description\; M{:}\;mapsTo\; S_{Twitter,v1.1} {:}\;text} \right\rangle \hfill \\ \left\langle {Feed{:}\;description\; M{:}\;mapsTo\; S_{Twitter,v1.2} {:}\;message} \right\rangle \hfill \\ \end{aligned}$$

Such mappings allow SemLinker to automatically reconcile heterogeneous attributes from different source schemas of the same data source, and the reconciliation can be further obtained by a SPARQL query with the pattern \(\left\langle {Feed{:}\;description\; M{:}\;mapsTo\; ?a} \right\rangle\). In our running example, the result \(?a = \{ S_{Twitter,v1.1} {:}\;text,\,S_{Twitter, v1.2} {:}\;message\}\) allows an analysis process to access the values of both data attributes from both versions, STwitter,v1.1 and STwitter,v1.2. This can also be applied to unify semantically equivalent attributes in the source schemas of different data sources as long as they are tagged with the same concept. In our example, if we have both Facebook and Twitter data sources tagged with the same concept Feed, then "message" in SFacebook,v2.9, "story" in SFacebook,v3.0, "text" in STwitter,v1.1, and "message" in STwitter,v1.2 are all regarded as equivalent.
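The rdflib sketch below replays this reconciliation; the mapping triples are hand-built stand-ins for the schemas repository, and all namespace URIs are assumptions.

```python
# A minimal sketch of the <Feed:description M:mapsTo ?a> reconciliation query.
from rdflib import Graph, Namespace, URIRef

M    = Namespace("http://semlinker.example/M#")       # assumed namespace
FEED = Namespace("http://semlinker.example/Feed#")    # assumed property IRIs
TW   = "https://api.twitter.com/feed"                 # assumed schema base

g = Graph()
g.add((FEED.description, M.mapsTo, URIRef(f"{TW}/1.1/text")))
g.add((FEED.description, M.mapsTo, URIRef(f"{TW}/1.2/message")))

rows = g.query("SELECT ?a WHERE { feed:description m:mapsTo ?a }",
               initNs={"feed": FEED, "m": M})
print([str(r.a) for r in rows])   # both version-specific attributes are returned
```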
Schema matching

For each property p (line 2), the mapping algorithm invokes the function Matcher() to find the attribute in Sij that is semantically equivalent to p (line 3). Matcher() is an interface function that passes the matching task to an external schema matcher that has been plugged into the system. For each attribute a, it computes a score that quantifies the semantic correspondence between a and p. If the score is larger than the threshold t, a and p are regarded as semantically equivalent. When there is more than one property equivalent to the same source attribute, or more than one source attribute equivalent to the same property, the algorithm, before a mapping is established, adjusts the structure of Sij using VTSA. Matcher() returns a data structure containing two collection constructs, A and P; while A holds zero or more source attributes, P holds one or more properties. The algorithm decides its next step according to what is returned in the A and P constructs (a condensed sketch of this branching is given below). If A = ∅ (line 4), no match is found and the algorithm proceeds to the next p. If |A| = 1 and \(\left| P \right| = 1\) (line 5), one matching attribute a of the source schema is found, so the algorithm establishes a mapping between p and a (line 6). If |A| > 1 and \(\left| P \right| = 1\) (line 8), an operation called Composition is performed on the attributes of A before establishing mappings (lines 9–17). If |A| = 1 and \(\left| P \right| > 1\) (line 18), an operation called Decomposition is performed on the attribute a stored in A before establishing mappings (lines 19–28). After the operation, P is skipped from g using the auxiliary function Skip() for optimization purposes (line 29). While information regarding the concept g is typically abundant, information regarding a specific input Sij is often inadequate [44]. When a situation like this arises, SemLinker uses matchers from third parties to handle schema matching tasks. Matchers are classified into three groups: schema-level, instance-level, and hybrid matchers [45]. Schema-level matchers utilize the information available in input schemas to find matches between schema elements. Instance-level matchers use statistics, metadata, or trained classifiers to decide if the values of two schema elements match. Hybrid matchers combine both mechanisms to determine match candidates. Schema matching approaches are constantly evolving, and often they apply other techniques such as dictionaries, thesauri, and user-provided match or mismatch information [44]. After every property is examined, and mappings between g and Sij are established, the underlying RDF data of the newly constructed Sij are added into the local schema Si (line 33).
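The branch structure of lines 4–29 can be summarized as follows; establish_mapping, compose, decompose, and skip are placeholders for the RDF-emitting steps detailed in the next subsection, not the authors' code.

```python
# A compact sketch of the mapping algorithm's branching (List 2).
def map_schema(g_properties, matcher, threshold):
    for p in g_properties:
        A, P = matcher(p, threshold)              # candidate attributes/properties
        if len(A) == 0:                           # no match: try the next property
            continue
        if len(A) == 1 and len(P) == 1:           # one-to-one: direct mapping
            establish_mapping(p, A[0])
        elif len(A) > 1 and len(P) == 1:          # many attributes, one property
            a_mu = compose(A)                     # Composition (virtual attribute)
            establish_mapping(p, a_mu)
        elif len(A) == 1 and len(P) > 1:          # one attribute, many properties
            for p_x, a_gamma in zip(P, decompose(A[0], len(P))):
                establish_mapping(p_x, a_gamma)   # Decomposition (virtual attrs)
            skip(P)                               # P already handled: skip it in g
```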
Virtual transformation of source attribute

In Fig. 2, "latitude" and "longitude", two properties in the concept Feed, correspond directly to the flat attributes of the embedded object labeled "location" in Fig. 2a, but correspond only indirectly to the flat attribute labeled "geo" in Fig. 2b. The relationships between "latitude" and "longitude" and their indirect corresponding source attribute "geo", though apparent, can semantically hold only if "geo" is transformed into two new source attributes, i.e., "geo" → ⟨"latitude", "longitude"⟩, or vice versa. To preserve the structure of the raw data stored in the lake, we adopt two virtual transformation operations, Composition μ and Decomposition γ, which work on the schema of the raw data rather than on the data themselves. The virtual transformation operations are based on [46, 47], and they allow SemLinker to virtually map an attribute in a source schema to a property in the global ontology.

(Composition \(\mu\)) Given a set of source attributes A, \(A = \left( {a_{1} ,a_{2} , \ldots ,a_{k} } \right)\), \(A \subseteq S_{ij} ,S_{ij} \in S_{i}\), \(1 < k \le n,\;n = \left| {S_{ij} } \right|,\) the composition operator \(\mu_{{A,a_{\mu } }}\) composes A into a single virtual attribute \(a_{\mu }\).

The mapping algorithm uses the condition (|A| > 1 and \(\left| P \right| = 1\)) as the heuristic rule to compose a subset of source attributes \(A = \left( {a_{1} ,a_{2} , \ldots ,a_{k} } \right)\) into a single new attribute \(a_{\mu }\) and add it to Sij. Since \(a_{\mu }\) is a new source attribute, it must be initialized in the same manner as other source attributes. Two types of mappings are established to activate the composition transformation. Mapping \(p_{x} \to a_{\mu }\) is performed by adding the RDF triple \(\langle p_{x}\; M{:}mapsTo\; a_{\mu }\rangle\) to \(S_{ij}\) (line 13); mapping \(A \to a_{\mu }\) is performed by adding a set of RDF triples, each following the pattern \(\left\langle {a_{y}\; {\mathcal{S}}{:}isComposedBy\; a_{\mu } } \right\rangle\) (lines 14–17). Since \(a_{\mu }\) is a virtual attribute that has no physical implementation, its data values are dynamically constructed when queried.

(Decomposition \(\gamma\)) Given an attribute \(a_{y} \in S_{ij}\), \(S_{ij} \in S_{i}\), the decomposition operator \(\gamma_{{a_{y} ,A_{\gamma } }}\) decomposes the attribute \(a_{y}\) into a set of virtual attributes \(A_{\gamma }\), where \(A_{\gamma } = \left\{ {a_{\gamma 1}, a_{\gamma 2} , \ldots ,a_{\gamma k} } \right\},\; k > 1.\)

When |A| = 1 and \(\left| P \right| > 1\) (line 18), a decomposition operation takes place to decompose a source attribute \(a_{y}\) into a set of new virtual attributes \(A_{\gamma }\), and adds the set to Sij. In the operation, \(a_{y}\) is modeled as the parent node of the new virtual attributes (lines 23–25). Similar to composition, the algorithm establishes two types of mappings to activate the decomposition transformation. Mapping \(p_{x} \to a_{\gamma i}\) is materialized through the RDF triple \(\left\langle {p_{x}\; M{:}mapsTo\; a_{\gamma i} } \right\rangle\), and mapping \(a_{\gamma i} \to a_{y}\) is materialized through the RDF triple \(\left\langle {a_{\gamma i}\; {\mathcal{S}}{:}isDecomposedFrom\; a_{y} } \right\rangle\). Since \(A_{\gamma }\) is a set of virtual attributes that have no actual implementation, the value of each attribute \(a_{\gamma i}\) in \(A_{\gamma }\) must be dynamically constructed whenever needed.
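Because composed and decomposed attributes are virtual, their values must be assembled on the fly at query time. The sketch below shows one way this could work; the comma-separated encoding and the Attr structure are assumptions of the sketch, not the paper's design.

```python
# A sketch of dynamic value construction for virtual attributes; illustrative.
from dataclasses import dataclass, field

@dataclass
class Attr:
    label: str = ""
    kind: str = "physical"            # "physical" | "composed" | "decomposed"
    composed_from: list = field(default_factory=list)   # for Composition
    decomposed_from: "Attr" = None                      # for Decomposition
    position: int = 0                                   # index within the parent

def value_of(attr, instance):
    if attr.kind == "composed":       # a_mu: join the composed attributes' values
        return ",".join(str(value_of(a, instance)) for a in attr.composed_from)
    if attr.kind == "decomposed":     # a_gamma_i: split the parent's flat value
        return str(value_of(attr.decomposed_from, instance)).split(",")[attr.position]
    return instance[attr.label]       # physical attribute: read it directly

geo = Attr(label="geo")
latitude = Attr(kind="decomposed", decomposed_from=geo, position=0)
print(value_of(latitude, {"geo": "51.48,-3.18"}))   # -> 51.48
```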
Partial unified views

Instead of providing a strictly unified view that requires full mapping between the global schema and the local schemas, as is normally supported in rigorous data modeling, SemLinker allows a partial unified view and gives users control over the scope of the view. The scope of the partial unified view can be adjusted by adding or removing properties in the global ontology. Figure 5 depicts a sample of two source schemas and their mappings to properties of a tagging concept in the global ontology. The source schemas are extracted from the Facebook data samples given in Fig. 2a, b using the schema extraction algorithm, and the mappings are computed using the mapping algorithm. In the figure, red circles indicate normal source attributes mapped to their equivalent properties in straightforward schema matching operations. The source attribute "geo" in the source schema https://graph.facebook.com/me/feed/3.0 is marked by a white circle to indicate that decomposition has taken place, and "geo" is decomposed into virtual source attributes, namely "latitude" and "longitude". The virtual source attributes (yellow circles) are also mapped to their corresponding global properties "geo:latitude" and "geo:longitude". Nonetheless, not all source attributes are mapped to properties of the tagging concept. Some source attributes (gray circles) are inaccessible, such as "attachments" of the second local schema. An inaccessible attribute, without an equivalent property in the global ontology, cannot be accessed through queries.

Fig. 5 Mappings between the concept "sioc:Feed" and two source schemas

Query

The query engine of the global schema layer (see Fig. 1) provides querying services for SemLinker. It serves two purposes: (i) to provide an SQL abstraction for formulating SQL-like queries targeting the unified views over raw data, and (ii) to compile, translate, and execute SQL-like queries and return results to the users. The query engine takes a successfully compiled input query and converts it into a relevance query and an unfolding query, both of which are internal SPARQL queries. A relevance query is a SPARQL SELECT query derived from the input query based on the concepts embedded in its clause formulation, and its execution over \({\mathcal{G}}\) returns all conceptually relevant data sources. An unfolding query is the input query unfolded in a SPARQL formulation and is executed iteratively on the underlying RDF graphs of the relevant local schemas that have been found by the relevance query. The result of the iterative execution is a list of source attributes that correspond to properties of the concepts specified in the query. Once the source attributes have been identified, we have all the relevant metadata information regarding the query, and the last phase of query execution is to retrieve data values from the data instances stored in the raw data lake based on the returned metadata and assemble the results into a list before giving it to the user. Here is a simple query scenario. Suppose a user (the same user of the example in Fig. 2) is interested in retrieving all social feeds, and their geolocations, after a specified date (e.g., 1/10/2017), and so she formulates a query over the concept sioc:Feed (all three queries in this walkthrough are sketched after this paragraph). On receiving the query, the query engine compiles it and, based on the concept (sioc:Feed) used in its clause formulation, forms a relevance query. The relevance query is executed on the global ontology. A successful execution returns all local schemas that are tagged with the concept Feed. In our case (assuming Facebook and Twitter are the only local schemas tagged with the concept Feed), SFacebook and STwitter are returned. Next, the query engine unfolds the input query and generates an unfolding query, executing it iteratively on the RDF graphs of each local schema it has found, i.e., SFacebook and STwitter.
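Since the paper's query listings are not reproduced here, the sketches below give hedged reconstructions of the three queries in this walkthrough: the user's SQL-like input query, the relevance query, and the unfolding query. The SQL-like syntax, the namespace URIs, and the exact property IRIs are assumptions.

```python
# Hedged reconstructions of the three queries; syntax and IRIs are assumed.

# 1. The user's input query over the unified view of sioc:Feed.
input_query = """
SELECT sioc:Feed.description, geo:latitude, geo:longitude
FROM   sioc:Feed
WHERE  sioc:Feed.created > '2017-10-01'
"""

# 2. Relevance query: which local schemas are tagged with sioc:Feed?
relevance_query = """
PREFIX m:    <http://semlinker.example/M#>
PREFIX sioc: <http://rdfs.org/sioc/ns#>
SELECT ?s WHERE { ?s m:isInstanceOf sioc:Feed }
"""

# 3. Unfolding query: run against each relevant local schema (S_Facebook,
#    S_Twitter) to collect the source attributes mapped to the properties.
unfolding_query = """
PREFIX m:   <http://semlinker.example/M#>
PREFIX dc:  <http://purl.org/dc/terms/>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?p ?a
WHERE { ?p m:mapsTo ?a .
        VALUES ?p { dc:description geo:lat geo:long } }
"""
```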
Table 1 lists the metadata returned from the execution of the unfolding query. From the table, we see two matching local schemas, each with two source schemas, and their attributes corresponding to the properties indicated in the query. Two virtual source attributes from decomposition, "latitude" and "longitude", are among the source attributes returned.

Table 1 Schema metadata results from a query

Once the metadata information is obtained, the query engine retrieves and parses the corresponding data instances to retrieve the data values matching the specified source attributes and the filter condition. The nature of PDL, a marriage between data lake and data gravity pull, determines that data are stored in their native formats, and that third-party apps and tools can serve the lake as plugins (gravity pull) so that any special need for a specific data type can be dealt with professionally [6]. In our example, the source schemas of SFacebook and STwitter,v1.1 use JSON, whereas STwitter,v1.2 uses XML. A suitable plugin parser for each data instance is chosen in order to parse it.

Integration with PDL

SemLinker, specializing in structured and semi-structured raw data integration, together with SemCluster [48], which tags free-text documents with key phrases that are associated with ontology-based semantics, and SemMedia, which extracts and manages metadata for multimedia data, constitute the MMF for PDL. Figure 6 depicts the architecture of the PDL, in which SemLinker is directly connected to the ingestion and storage layers of the lake. The ingestion layer temporarily stores incoming data instances in a message queue before they are preprocessed and dispatched to the storage layer. Through a cross-layer data pipeline, SemLinker pulls data instances from the message queue, extracts their schema metadata, and dispatches the data instances and their associated metadata to the unified storage repository in the storage layer. The storage layer consists of a key-value tuple store and a simple hash table called the linkage table. The MMF associates each data instance and its metadata with a hash key representing the data instance's unique identifier and stores the hash key with the data instance itself as a key-value pair in the key-value store. Subsequently, the hash key and the data instance's metadata are stored as a key-value pair in the linkage table. Note that the metadata record associated with a data instance includes various information produced from different MMF components, such as lineage metadata (e.g., creation date, last access date) and security metadata (e.g., access control, encryption information). The RDF graph identifier (URI) of the local schema and the subgraph identifier of the source schema (i.e., i and j) are also stored in the record, whereas the actual schemas are stored in the schemas repository.

Fig. 6 The architecture of PDL integrated with SemLinker-based MMF
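A toy sketch of this bookkeeping is given below; plain dicts stand in for the key-value store and the linkage table, and the hash choice (SHA-1 over the raw bytes) is an assumption of the sketch.

```python
# A sketch of the storage-layer bookkeeping; illustrative, not PDL's code.
import hashlib

kv_store = {}         # hash key -> raw data instance, kept in its native format
linkage_table = {}    # hash key -> metadata record

def store_instance(raw_bytes, local_schema_uri, source_schema_version):
    key = hashlib.sha1(raw_bytes).hexdigest()   # unique instance identifier
    kv_store[key] = raw_bytes
    linkage_table[key] = {
        "local_schema": local_schema_uri,        # RDF graph identifier (i)
        "source_schema": source_schema_version,  # subgraph identifier (j)
        "lineage": {"created": "2017-10-02"},    # example lineage metadata
        "security": {},                          # access control, encryption, ...
    }
    return key

k = store_instance(b'{"text": "Lunch by the river"}',
                   "https://api.twitter.com/feed", "1.1")
```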
Experimental evaluation

The evaluation of SemLinker is carried out in two phases. The first phase examines the accuracy of SemLinker's mapping computations on data with substantial heterogeneities and frequent schema evolutions. The second phase investigates the integration effectiveness and the runtime functional complexity of SemLinker. Because of the difficulty in collecting personal data and privacy concerns, we do not use personal data in the evaluation, but instead use 11 real-world publicly available datasets (see Table 2) that exhibit a high degree of heterogeneity and have frequent schema evolutions. Each dataset consists of many data instances, and each data instance may be of a different release version. The collection includes 3 datasets that contain sensor (accelerometer and gyroscope) streams generated by smartphones and smartwatches carried by human subjects, collected from two human activity recognition experiments [49, 50]. Also included are a mix of social data, some of which concern user opinions, reviews, and ratings of popular places in London such as hotels, restaurants, and pubs, and two public datasets published by UK government agencies.

Table 2 Datasets used in the evaluation

To integrate the above datasets into the system, we first need to tag the data with the most relevant concept of the global ontology \({\mathcal{G}}\). For example, "sc:Review" is used to tag the TripAdvisor and Tourpedia datasets; "sc:LocalBusiness" to tag the EnglandPubs dataset; and "sc:PostalAddress" to tag the OpenPostCode dataset. \({\mathcal{G}}\) is also extended to include more specific concepts, such as "G:SensorReading" (an extension of "sc:Dataset") to tag the HAR-1, HAR-2, and HAR-3 datasets, and "sioc:Feed" (an extension of "sc:SocialMediaPosting") to tag the Facebook, Twitter, Flickr, and Foursquare datasets. Full information about the global ontology and its extension used in the evaluation is available at [51].

Automatic mapping management evaluation

Here we evaluate the accuracy of SemLinker's mapping computations between the schema of each dataset and its tagging concept. Three matcher plugins are used. The first two, SemanticTyper [13] and AgreementMaker [43], are both open source matching approaches [52, 53]. SemanticTyper is an instance-level schema matcher that collects statistical information about the data based on their type and decides if two schema elements match. AgreementMaker comprises multiple automatic matchers that are grouped into three layers. Each layer uses a different representation and similarity comparison measure, with the third layer being a combination of the other two. For AgreementMaker, because the data lake lacks structural information in schemas, we use only the first layer, which represents features of schema elements (labels, comments, instances, etc.) as TF-IDF vectors and compares their similarities using the cosine similarity metric or some string-based measures (such as edit distance and substrings). The third plugin, SemMatcher, is the system's default matcher. A detailed discussion of SemMatcher is beyond the scope of the paper; however, SemMatcher is built as a combination of AgreementMaker, the schema-level matcher, and SemanticTyper, the instance-based matcher. It is linguistics-based and measures the similarity between two schema elements based on their textual descriptions retrieved from an external schema dictionary [44]. We measure the accuracy of SemLinker mappings against gold standard mappings, i.e., the mappings manually obtained using the Karma tool [54], and use the following formula to compute the accuracy score for SemLinker's mapping computations:

$$Acc\left( {S_{ij} ,g} \right) = \frac{{M_{SemLinker} \left( {S_{ij} ,g} \right)}}{{M_{Gold} \left( {S_{ij} ,g} \right)}}$$

where \(M_{SemLinker} \left( {S_{ij} ,g} \right)\) is the number of correct mappings between \(S_{ij}\) and \(g\) that are automatically computed by SemLinker, and \(M_{Gold} \left( {S_{ij} ,g} \right)\) is the total number of mappings established using Karma. The following formula is used to obtain an overall accuracy score:

$$Ave\left( {S_{i} ,g} \right) = \frac{1}{m}\mathop \sum \limits_{j = 1}^{m} Acc\left( {S_{ij} ,g} \right)$$

where m is the total number of evolutions in the physical schema of i.
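Restated as code, with mappings represented as sets of (property, attribute) pairs (a representation assumed for this sketch):

```python
# The Acc and Ave scores restated as code; the set representation is assumed.
def acc(semlinker_mappings, gold_mappings):
    correct = len(semlinker_mappings & gold_mappings)     # correct mappings
    return correct / len(gold_mappings)

def ave(scores):
    return sum(scores) / len(scores)                      # mean over m versions

v1 = acc({("p1", "a1"), ("p2", "a2")}, {("p1", "a1"), ("p2", "a3")})  # 0.5
v2 = acc({("p1", "a1")}, {("p1", "a1")})                              # 1.0
print(ave([v1, v2]))                                                  # 0.75
```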
The evaluation was run three times, each time using a different schema matcher. Figure 7 displays the comparison results of the overall accuracy score when using different schema matchers. The results demonstrate clearly that the accuracy of a mapping calculation is very much determined by the schema matcher that is used, and that the system's own matcher, SemMatcher, outperforms the other two matchers on most of the datasets. SemanticTyper successfully captures correct matches wherever AgreementMaker fails, and this, to an extent, explains why SemMatcher, which combines the best features of both matchers, gets almost full scores on 6 of the datasets. The fact that SemMatcher is also linguistics-based suggests that, for social and public datasets, by providing proper schema-level linguistic information (e.g., meaningful attribute labels), schema matching may achieve a better precision.

Fig. 7 The overall precision score of mapping calculations using different matchers

Functional efficiency and query complexity evaluation

To evaluate the functional efficiency of SemLinker in integrating big data with frequently changing schemas, and the time complexity of executing queries, we compare SemLinker with a similar integration-oriented and ontology-based prototype system that is used in the SUPERSEDE project and is discussed in [7] (we refer to this as the BDI Ontology system). The BDI Ontology system prototype is implemented using a MongoDB [55] database backend to store JSON data, and SQL to store CSV and XML data. The downside of using the BDI Ontology system is immediately apparent, as substantial effort (including manual interactions) is required to maintain its global ontology and to manage the source attributes found in the data collected from data sources. Each schema evolution also requires manual (re)mappings. Two scenarios are used in the evaluation:

Scenario 1 (involving datasets HAR-1, HAR-2, and HAR-3): It is assumed we want to retrieve all readings ingested from gyroscope sensors to pass to a specialized HAR application for HAR analysis. For this purpose, a query is formed (a sketch is given at the end of this scenario). A gyroscope reading, such as [0.0041656494, − 0.0132751465, 0.006164551] (see the HAR-3 dataset), consists of values corresponding to the x, y and z axes. The global concept "SensorReading", which has one property, "G:reading", has been used to tag all three HAR datasets. This apparently simple gyroscope reading retrieval has some complexity: the reading in HAR-1&2 is described by three separate attributes, X, Y, and Z, whereas the reading in HAR-3 is described by only one attribute. However, this heterogeneity problem has already been solved before the query takes place, when the HAR-2 data are ingested and the schema mapping takes place. In the mapping, the separate attributes of the source schema SHAR-2,V1.0 are virtually composed into the virtual source attribute "reading". Consequently, the query alone is sufficient to retrieve the required data without any extra pre- or post-processing steps. However, for the BDI Ontology system, since there is no easy solution for structural heterogeneities in the source schemas, it is impossible to execute the query directly. Either we must transform the data so that HAR-1&2 and HAR-3 share the same structure, or we must tag them with different concepts and query them separately.
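A hedged sketch of the Scenario 1 query follows; the paper's concrete listing is not reproduced here, so the SQL-like syntax and the sensor filter are illustrative.

```python
# Illustrative reconstruction of the Scenario 1 query (syntax assumed).
scenario1_query = """
SELECT G:SensorReading.reading
FROM   G:SensorReading
WHERE  G:SensorReading.sensor = 'gyroscope'
"""
```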
Scenario 2 (involving the social and public datasets): It is assumed we are interested in local businesses in London such as hotels, restaurants, and pubs, and would like to know their full address (including postcode), and reviews and ratings about them. We may also apply sentiment analysis to gauge the polarity of the comments that are retrieved. Because the data exist in different formats and semantic contexts, a direct query may seem to be complicated. With SemLinker, however, provided the raw data are ingested properly into the system, and predefined inline functions (another kind of plugin of the lake) that fulfill certain relevant tasks are in place, a direct query will return the desired results. In the datasets, there are defects in the data, such as a postcode missing from a reviewed business, an inaccurate geolocation, or a business name with different spellings (using "&" for "and" or "65" for "sixty-five", and so on). If such problems have been anticipated, as in our case, customized inline functions may be designed and imported in advance to deal with these situations at query time. Here we use Sentiment(string) to produce a polarity representing the user opinion (i.e., positive, negative, or neutral), Normalize(string) to normalize the business names, Radius(float,integer) to generate x values around an input spatial coordinate, and WordNet(string,int) (a WordNet-based function) to retrieve all the possible synonyms and hyponyms of an input string. In addition, there is also the complication regarding schema evolution, which has already been dealt with by SemLinker. Once all elements are in place, the user can retrieve the desired information using a query like the one sketched below:
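This is a hedged sketch of the Scenario 2 query, combining the inline functions named above; the join conditions, property names, and function arguments are illustrative assumptions.

```python
# Illustrative reconstruction of the Scenario 2 query (syntax assumed).
scenario2_query = """
SELECT Normalize(sc:LocalBusiness.name), sc:PostalAddress.postcode,
       sc:Review.reviewBody, sc:Review.rating, Sentiment(sc:Review.reviewBody)
FROM   sc:LocalBusiness, sc:PostalAddress, sc:Review
WHERE  Normalize(sc:Review.itemName) = Normalize(sc:LocalBusiness.name)
  AND  sc:PostalAddress.geo IN Radius(sc:LocalBusiness.geo, 100)
  AND  sc:LocalBusiness.category IN WordNet('pub', 1)
"""
```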
In the query, the postcode associated with each review is obtained by chaining data instances across multiple datasets. For both scenarios, we ran 20 queries targeting raw data in the range of 0–800K data instances on both SemLinker and the BDI Ontology prototype system, and we measure and compare their query execution times. The recorded time of each query includes input query translation, query unfolding, and data retrieval from the backend. Figure 8 presents the runtime benchmark data recorded for the query executions of each system. We observe that when the number of retrieved data instances is small, the difference in the execution times of the two systems is insignificant, but when the retrieved data are moderately large, SemLinker significantly outperforms the BDI Ontology system. For example, SemLinker requires 8 s on average to retrieve and integrate 40K review results, whereas the BDI Ontology system requires 96 s on average to perform the same task. SemLinker's significant improvement is mainly due to the following reasons: (i) since SemLinker fully supports the storage, integration, and querying of raw data regardless of their formats and structures, any high-performance key-value store can be adopted as the central unified backend (e.g., Redis [56]). Hence, compared to the data access and query execution overheads of the BDI Ontology system (MongoDB/SQL), SemLinker's backend is conceptually a big hash table with data access complexity O(1); and (ii) in SemLinker the source schemas of each dataset are modeled as subgraphs grouped into one RDF graph (i.e., the local schema), whereas in the BDI Ontology system each source schema is treated as a separate RDF graph. As expected, SemLinker, which executes its internal SPARQL queries on a single graph for each dataset, is much faster than the BDI Ontology system, which executes its queries on several graphs for each dataset.

Fig. 8 Real-time query execution performance comparison

Conclusion and future work

We have presented SemLinker, an ontology-based data integration system for PDL and other similar data lake implementations. SemLinker allows casual users with limited technical background to integrate, process, and analyze heterogeneous raw data, with minimal effort, through a unified conceptual representation of the data schemas in terms of a widely used global ontology. To the best of our knowledge, SemLinker is the first domain-agnostic integration system that offers self-adapting capabilities to automatically integrate big data with frequently evolving schemas based on solid theoretical foundations. SemLinker has been evaluated on large datasets in multiple domains, and the results not only validate its integration effectiveness and functional efficiency, but also indicate that SemLinker's performance is robust and promising, albeit there is still room for improvement in multiple aspects of the system. Although SemLinker is a generic integration solution, it targets only structured and semi-structured data, and it is by no means a holistic integration solution when unstructured data such as free-text documents and multimedia files are also considered. For such data we have proposed, in an earlier paper [48], SemCluster, an automatic keyphrase extraction tool that specializes in extracting keyphrases from free-text documents and annotating each keyphrase with ontology-based metadata. One of our planned immediate undertakings is to combine SemLinker and SemCluster into a broader integration solution towards an effective and efficient metadata management framework for the personal data lake. In this paper, we have discussed the importance of automating the tasks of the integration process, thereby building an easy-to-use data lake for casual users. However, SemLinker, though in many aspects it can be regarded as an automatic system, still has two vital tasks that need to be dealt with manually by its users: data source tagging and selecting a schema matcher plugin. Using machine learning based approaches to label data sources with ontological concepts automatically, and thus relieving users of the burden of manual data source tagging, is one of our future research goals. As for schema matcher selection, though the performance of SemMatcher in the evaluation is promising, we intend to extend it by combining a number of other matching approaches, so that it provides a good matching solution for schemas of various characteristics, reducing the need for users to resort to other schema matchers.

Abbreviations

PDL: personal data lake
ETL: extract-transform-load process
BDI: big data integration
MMF: metadata management framework
SM: schema matching
VTSA: virtual transformation of source attribute
OWL: Web Ontology Language
RDF: Resource Description Framework
DCMI: Dublin Core Metadata Initiative vocabulary
XSD: XML Schema Definition vocabulary
RDFS: RDF Schema vocabulary
SIOC: Semantically-Interlinked Online Communities vocabulary
SPARQL: recursive acronym for SPARQL Protocol and RDF Query Language
WordNet: the lexical database WordNet
TF: term frequency
IDF: inverse document frequency

References

1. Rahm E. The case for holistic data integration. In: East European conference on advances in databases and information systems. Berlin: Springer; 2016.
2. Jagadish HV, Gehrke J, Labrinidis A, Papakonstantinou Y, Patel J, Ramakrishnan R, Shahabi C. Big data and its technical challenges. Commun ACM. 2014;57(14):86–94.
3. Ponniah P. Data extraction, transformation, and loading. New York: Wiley; 2001.
4. Dixon J. Pentaho, Hadoop, and data lakes. James Dixon Blog. http://www.pentaho.com/blog/2010/10/15/pentaho-hadoop-and-data-lakes. Accessed 25 Dec 2017.
5. Quix C, Hai R, Vatov I. Metadata extraction and management in data lakes with GEMMS. Complex Syst Inf Model Quart. 2016;9(16):67–83.
6. Walker C, Alrehamy H. Personal data lake with data gravity pull. In: 2015 IEEE fifth international conference on big data and cloud computing (BDCloud); 2015.
7. Nadal S, Romero O, Abelló A, Vassiliadis P, Vansummeren S. An integration-oriented ontology to govern evolution in big data ecosystems. In: EDBT/ICDT workshops; 2017.
8. Apache Hadoop. http://hadoop.apache.org/. Accessed 25 Dec 2017.
9. Jones W. A review of personal information management. IS-TR-2005-11-01. The information school technical repository. Washington: University of Washington; 2005.
10. Dong XL, Srivastava D. Big data integration. In: 2013 IEEE 29th international conference on data engineering (ICDE); 2013.
11. Abelló A. Big data design. In: Proceedings of the ACM eighteenth international workshop on data warehousing and OLAP. New York: ACM; 2015.
12. Shvaiko P, Euzenat J. Ontology matching: state of the art and future challenges. IEEE Trans Knowl Data Eng. 2013;25(1):158.
13. Ramnandan S, Mittal A, Knoblock C, Szekely P. Assigning semantic labels to data sources. In: European semantic web conference. Cham: Springer; 2015.
14. Peukert E, Eberius J, Rahm E. A self-configuring schema matching system. In: 2012 IEEE 28th international conference on data engineering (ICDE); 2012.
15. Guha RV, Brickley D, Macbeth S. Schema.org: evolution of structured data on the web. Commun ACM. 2016;59(16):44–51.
16. Manousis P, Vassiliadis P, Zarras A, Papastefanatos G. Schema evolution for databases and data warehouses. In: European business intelligence summer school. Berlin: Springer; 2015.
17. Curino C, Moon H, Deutsch A, Zaniolo C. Automating the database schema evolution process. VLDB J. 2013;22(13):73–98.
18. Andany J, Léonard M, Palisser C. Management of schema evolution in databases. In: VLDB; 1991. p. 161–70.
19. Lenzerini M. Data integration: a theoretical perspective. In: Proceedings of the twenty-first ACM SIGMOD-SIGACT-SIGART symposium on principles of database systems. New York: ACM; 2002. p. 233–46.
20. Gruber T. A translation approach to portable ontology specifications. Knowl Acquisit. 1993;5(93):199–220.
21. Giese M, Soylu A, Vega-Gorgojo G, Waaler A, Haase P, Jiménez-Ruiz E, Lanti D. Optique: zooming in on big data. Computer. 2015;48(15):60–7.
22. Calvanese D, Cogrel B, Komla-Ebri B, Kontchakov R, Lanti D, Rezk M, Rodriguez-Muro M, Xiao G. Ontop: answering SPARQL queries over relational databases. Semantic Web. 2017;8(17):471–87.
23. Marcos M, Maldonado J, Martínez-Salvador B, Boscá D, Robles M. Interoperability of clinical decision-support systems and electronic health records using archetypes: a case study in clinical trial eligibility. J Biomed Inform. 2013;46(4):676–89.
24. ten Cate B, Dalmau V, Kolaitis P. Learning schema mappings. ACM Trans Database Syst (TODS). 2013;38(13):28.
25. Varga J, Romero O, Pedersen T, Thomsen C. Towards next generation BI systems: the analytical metadata challenge. In: International conference on data warehousing and knowledge discovery, vol. 8646. Cham: Springer; 2014. p. 89–101.
26. Maccioni A, Torlone R. Crossing the finish line faster when paddling the data lake with Kayak. Proc VLDB Endowment. 2017;10(12):1853.
27. Apache Atlas. http://atlas.apache.org/. Accessed 25 Dec 2017.
28. Apache Avro. https://avro.apache.org/. Accessed 25 Dec 2017.
29. Dos Reis JC, Pruski C, Reynaud-Delaître C. State-of-the-art on mapping maintenance and challenges towards a fully automatic approach. Expert Syst Appl. 2015;42(15):1465–78.
30. Scherzinger S, Cerqueus T, Cunha de Almeida E. Controvol: a framework for controlled schema evolution in NoSQL application development. In: 2015 IEEE 31st international conference on data engineering (ICDE); 2015. p. 1464–7.
Controvol: a framework for controlled schema evolution in NoSQL application development. In: 2015 IEEE 31st international conference on data engineering (ICDE). 2015. p. 1464–7. McGuinness D, Van Harmelen F. OWL web ontology language overview. W3C Recommendation; 2004. Lassila O, Swick R. Resource description framework (RDF) model and syntax specification. W3C Technical Report. 1999. https://www.w3.org/TR/REC-rdf-syntax/ Mascardi V, Cordì V, Rosso P. A comparison of upper ontologies. In: WOA; 2007. Heath T, Bizer C. Linked data: evolving the web into a global data space. Synth Lect Semantic Web. 2011;1(11):1–136. XSD Vocabulary. https://www.w3.org/TR/xmlschema11-1/. Accessed 25 Dec 2017. SIOC Vocabulary. http://rdfs.org/sioc/spec/. Accessed 25 Dec 2017. DCMI Vocabulary. http://dublincore.org. Accessed 25 Dec 2017. WGS84 Vocabulary. https://www.w3.org/2003/01/geo/. Accessed 25 Dec 2017. Wang S, Keivanloo I, Zou Y. How do developers react to RESTful API evolution? In: International conference on service-oriented computing. Berlin: Springer; 2014. p. 245–59. Media types listing by the internet assigned numbers authority. https://www.iana.org/assignments/media-types/media-types.xhtml. Accessed 25 Dec 2017. Taheriyan M, Knoblock C, Szekely P, Ambite J. A scalable approach to learn semantic models of structured sources. In: 2014 IEEE international conference on semantic computing (ICSC); 2014. p. 183–90. Shen W, Wang J, Han J. Entity linking with a knowledge base: issues, techniques, and solutions. IEEE Trans Knowl Data Eng. 2015;27(15):443–60. Cruz I, Antonelli F, Stroe C. AgreementMaker: efficient matching for large real-world schemas and ontologies. Proc VLDB Endowment. 2009;2(9):1586–9. Madhavan J, Bernstein P, Doan A, Halevy A. Corpus-based schema matching. In: Proceedings of the 21st international conference on data engineering (ICDE 2005); 2005. p. 57–68. Bernstein P, Madhavan J, Rahm E. Generic schema matching, ten years later. Proc VLDB Endowment. 2011;4(11):695–701. Xu L, Embley D. Combining the best of global-as-view and local-as-view for data integration. ISTA. 2004;48:123–36. Fagin R, Kolaitis P, Popa L, Tan W. Schema mapping evolution through composition and inversion. In: Schema matching and mapping. Berlin: Springer; 2011. p. 191–222. Alrehamy H, Walker C. SemCluster: unsupervised automatic keyphrase extraction using affinity propagation. In: UK workshop on computational intelligence. Cham: Springer; 2017. p. 222–35. Stisen A, Blunck H, Bhattacharya S, Prentow T, Kjærgaard M, Dey A, Sonne T, Jensen M. Smart devices are different: assessing and mitigating mobile sensing heterogeneities for activity recognition. In: Proceedings of the 13th ACM conference on embedded networked sensor systems. New York: ACM; 2015. p. 127–40. Faye S, Louveton N, Jafarnejad S, Kryvchenko R, Engel T. An open dataset for human activity analysis using smart devices. 2017. hal-01586802, Version 1. https://hal.archives-ouvertes.fr/hal-01586802 SemLinker Experimental Evaluation Setup. https://github.com/alrehamy/SemLinker_Evaluation. Accessed 25 Dec 2017. AgreementMaker Source Code Repository. https://github.com/agreementmaker/agreementmaker. Accessed 25 Dec 2017. SemanticTyper Source Code Repository. https://github.com/tknandu/SemanticLabelingRepo. Accessed 25 Dec 2017. Karma Web-based Integration Tool Source Code Repository. https://github.com/usc-isi-i2/Web-Karma. Accessed 25 Dec 2017. MongoDB Database Homepage. https://www.mongodb.com. Accessed 25 Dec 2017. Carlson J. Redis in action.
New York: Manning Publications Co.; 2013. Human Activity Recognition Dataset (HAR 3). https://hal.archives-ouvertes.fr/hal-01586802. Accessed 25 Dec 2017. Facebook Open Graph API. https://graph.facebook.com. Accessed 25 Dec 2017. Twitter Data Streaming API. https://api.twitter.com. Accessed 25 Dec 2017. Foursquare API. https://api.foursquare.com/v2/. Accessed 25 Dec 2017. Flickr API. https://api.flickr.com/services/rest/. Accessed 25 Dec 2017. London Restaurants Reviews Dataset. https://www.kaggle.com/PromptCloudHQ/londonbased-restaurants-reviews-on-tripadvisor. Accessed 25 Dec 2017. Tourpedia API. http://tour-pedia.org/api/. Accessed 25 Dec 2017. United Kingdom Government open datasets, the Food Standards Agency, food safety and food hygiene ratings dataset. http://ratings.food.gov.uk/open-data/. Accessed 25 Dec 2017. United Kingdom Postal Codes Dataset. https://www.getthedata.com/open-postcode-geo. Accessed 25 Dec 2017. Human Activity Recognition Dataset (HAR 1,2). https://archive.ics.uci.edu/ml/datasets/Heterogeneity+Activity+Recognition. Accessed 25 Dec 2017. Both authors contributed to the investigation of the research hypothesis and the initiation of the research. Hassan Alrehamy was the main force in developing and evaluating the SemLinker system, while Coral Walker took on a supervisory role and oversaw the completion of the work. Both authors contributed significantly to producing the paper. Both authors read and approved the final manuscript. In a GitHub repository associated with our paper, we include all datasets used in our experiments. In addition, we provide the scripts to reproduce our analysis [51,52,53,54, 57,58,59,60,61,62,63,64,65]. The repository is publicly available on GitHub at [51]. School of Computer Science and Informatics, Cardiff University, Queens Building, Cardiff, UK: Hassan Alrehamy & Coral Walker. College of Information Technology, Babylon University, Babylon, Iraq. Correspondence to Coral Walker. Alrehamy, H., Walker, C. SemLinker: automating big data integration for casual users. J Big Data 5, 14 (2018). doi:10.1186/s40537-018-0123-x. Received: 29 January 2018. Keywords: Schema evolution; Schema mapping.
CommonCrawl
Domain of $f(x) = \dfrac{3x + |x|}{x}$ For the function $$f(x) = \dfrac{3x + |x|}{x},$$ how do you algebraically figure out the domain? I know it's continuous all the way, but I've tried to split the function into $3x+|x|$, which is continuous for all $x$, and $1/x$, which is not continuous at zero. How come this function's domain is still continuous even though $1/x$ cannot exist at zero? I feel like I'm missing something really obvious. algebra-precalculus functions Shuri

Does the function have a limit as $x\to 0$? How do you know the domain is continuous? – abiessu Jan 8 '14 at 17:12
What do you mean by "the domain is continuous"? How can a set be continuous or not? – user127.0.0.1 Jan 8 '14 at 17:14
When you see an absolute value, split it up: for positive values, and negative ones. It's quite helpful. – Gigili Jan 8 '14 at 17:14
@user127001 The concept isn't meaningless; even though there's no property called "continuous" for sets, I think "open", "connected", or "simply connected" may express what he means. – GPerez Jan 8 '14 at 17:18
@Shuri What properties do you know about operations on functions, and what happens to their domain? – GPerez Jan 8 '14 at 17:20

Before answering the question, I'd like to dispel what seems to be a misconception. This question about domain has nothing to do with continuity. You may be confused because in many practical cases, a function is continuous at all points in its domain. For example, this is the case for rational functions. But there are other cases where this is not true. A function $f$ is defined at a point $a$ when it makes sense to calculate $f(a)$. This is the same as saying that $a$ is in the domain of $f$. A function $f$ is continuous at a point $a$ if, not only is $f$ defined at $a$, but $\lim_{x \to a} f(x)$ exists and equals $f(a)$. To solve your problem, you need to answer the following question: for what values of $x$ does the expression $\frac{3x + |x|}{x}$ make sense? $3x$ always makes sense, and so does $|x|$. Also, you can always add two numbers, no matter what they are. So $3x + |x|$ always makes sense. The main point, however, is that you can't always divide a number $A$ by a number $B$. When does a fraction $A/B$ make sense?

Hint: Simplify the function for positive and negative values of $x$. Gigili

$$x>0\longrightarrow f(x)=\frac{3x+|x|}{x}=\frac{3x+\color{red}{x}}{x}=\frac{4x}{x}=4$$ and $$x<0\longrightarrow f(x)=\frac{3x+|x|}{x}=\frac{3x+(\color{blue}{-x})}{x}=\frac{2x}{x}=2$$ and if $x=0$ then ... mrs

@J.W.Perry: Oh yes. I did a bad simplification. Fixed and thanks. – mrs Jan 8 '14 at 17:32
This was a clear answer as well, thank you! – Shuri Jan 8 '14 at 17:34
No problemo there. Good concise answer. – J. W. Perry Jan 8 '14 at 17:36
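Putting the two answers together, the worked conclusion is:
$$f(x)=\begin{cases}4, & x>0,\\ 2, & x<0,\end{cases}$$
while at $x=0$ the expression becomes $\frac{3\cdot 0+|0|}{0}=\frac{0}{0}$, which is undefined. So the domain is $\mathbb{R}\setminus\{0\}$: the function is continuous at every point of its domain, but $0$ simply is not in the domain.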
CommonCrawl
Overlooked potential of positrons in cancer therapy Takanori Hioki1,3,4, Yaser H. Gholami1,3,4, Kelly J. McKelvey3,4, Alireza Aslani2,5, Harry Marquis1,4, Enid M. Eslick2, Kathy P. Willowson1,2, Viive M. Howell3,4,5 & Dale L. Bailey2,4,5 Scientific Reports volume 11, Article number: 2475 (2021) Positron (β+) emitting radionuclides have been used for positron emission tomography (PET) imaging in diagnostic medicine since its development in the 1950s. Development of a fluorinated glucose analog, fluorodeoxyglucose, labelled with the β+ emitter fluorine-18 (18F-FDG), made it possible to image cellular targets with high glycolytic metabolism. These targets include cancer cells, whose aerobic metabolism is increased due to the Warburg effect, and thus 18F-FDG is a staple in nuclear medicine clinics globally. However, because of this attention in the diagnostic setting, the therapeutic potential of β+ emitters has been overlooked in cancer medicine. Here we show the first in vitro evidence of β+ emitter cytotoxicity on the prostate cancer cell line LNCaP C4-2B when treated with 20 Gy of 18F. Monte Carlo simulation revealed that thermalized positrons (sub-keV) traversing DNA can be lethal due to highly localized energy deposition during the thermalization and annihilation processes. The computed single and double strand breakages were ~ 55% and ~ 117% higher, respectively, when compared to electrons at 400 eV. Our in vitro and in silico data imply an unexplored therapeutic potential for β+ emitters. These results may also have implications for emerging cancer theranostic strategies, where β+ emitting radionuclides could be utilized as a therapeutic as well as a diagnostic agent once the challenges in radiation safety and protection after patient administration of a radioactive compound are overcome. Approximately 90% of cancer-related deaths are from metastatic disease and thus the focus of cancer radiotherapy (RT) in both the scientific and clinical communities is shifting towards a more targeted, personalized approach1.
While the radiobiology of gamma rays, MV X-rays and particles (e.g., β−, α2+, Auger electrons) for therapy has been well investigated, there is a paucity of in vitro and in vivo studies on the biological impact of the positron (β+), which could provide a new modality for cancer RT2,3,4,5. The β+ is the anti-matter counterpart of the electron, and its emission from a radionuclide such as fluorine-18 (18F) results from the conversion of a proton within the atomic nucleus to a neutron, accompanied by the release of an electron neutrino. This decay process results in two 0.511 MeV photons due to the annihilation of the β+ when it comes to rest in matter and makes contact with an electron. β+ lose their kinetic energy in discrete quantities (≈ 100 eV), producing 'rabbles' of excited and ionized molecules6. They create excess-electron/positive-ion pairs inside a spherical nano-volume along the radiation track (called the β+ 'spur') and during the post-thermalization process (called the terminal β+ 'blob'), with further annihilation ionization when they come to rest after positronium (Ps) formation (Fig. 1)7,8,9. A β+ track deposits energy in water through the formation of "spurs" and the terminal "blob", before Ps formation and annihilation generate the two 0.511 MeV photons. Each spur and blob on the particle track is a temporal and spatial microcosm of highly reactive species capable of locally depositing energies up to 100 and 5000 eV, respectively9. Thus, a relatively large radiation dose (i.e., a large energy deposition in a small volume) can be delivered when a spur or blob interacts with biomolecules such as DNA9. Despite these unique radiation properties, the fundamental interactions of β+ particles before the annihilation process, and their potential therapeutic impact in vivo/in vitro, have not been considered, owing to the prominent role of the β+ in PET, where only the annihilation photons are imaged. The aim of our study was to derive the radiobiological parameters for 18F β+ emission and demonstrate proof-of-concept cytotoxicity data for β+ emission in radionuclide therapy (RNT). These results may have implications for novel RNT treatment planning strategies, where β+ emitting radionuclides are used as a theranostic. Using the clonogenic assays described in the methods, the cell survival fraction (SF) shown in Fig. 2 demonstrated > 90% cell kill when LNCaP C4-2B prostate cancer cells were treated with 20 Gy of 18F β+. Furthermore, Fig. 2 shows the relationship between the absorbed dose of 18F β+ emission and its impact on cell clonogenicity, compared to X-ray external beam radiotherapy (EBRT) using the small animal radiation research platform (SARRP—see supplementary data). Both data sets have a linear-quadratic (LQ) trend fitted for the purpose of visualization and to extrapolate the relative biological effectiveness (RBE). The SF data from 18F are relatively linear up to 20 Gy, but trend towards an exponential decay at higher doses. SARRP X-ray irradiation induced an expected LQ trend in cell kill, with a shoulder at ~ 2.5 Gy and a linear region at higher doses [α/β ratio = 3.66; Eq. (3)]. The mean RBE calculated from Fig. 2 was 0.42 for 18F, determined at the 0.5 SF level relative to the SARRP. LNCaP C4-2B Clonogenic Cell Survival: 18F β+ emission versus SARRP EBRT. Black dotted line depicts the mean absorbed dose required to reduce the cell survival fraction to 0.5.
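For readers who want to reproduce an LQ fit of this kind, the short sketch below fits Eq. (3) with scipy. The survival-fraction values in it are illustrative placeholders, not the measured data (which appear only graphically in Fig. 2):

import numpy as np
from scipy.optimize import curve_fit

# Linear-quadratic model, Eq. (3): SF = exp(-(alpha*D + beta*D^2))
def lq(D, alpha, beta):
    return np.exp(-(alpha * D + beta * D**2))

# SARRP doses used in the study (0-7.5 Gy in 2.5 Gy steps);
# the survival fractions below are hypothetical placeholders.
dose = np.array([0.0, 2.5, 5.0, 7.5])
sf = np.array([1.0, 0.62, 0.28, 0.10])

(alpha, beta), _ = curve_fit(lq, dose, sf, p0=(0.1, 0.03))
print(f"alpha = {alpha:.3f} /Gy, beta = {beta:.4f} /Gy^2, "
      f"alpha/beta = {alpha / beta:.2f} Gy")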
Using Monte Carlo (MC) simulation of a linear DNA model in TOPAS-nBio, we acquired the frequency of hits per one million primaries resulting in single strand breaks (SSBs) and double strand breaks (DSBs) from the emission of β+ and β−/electrons, presented as a function of the particle's kinetic energy in Fig. 3(i),(ii)10,11. Both β+ and β−/electrons exhibited a similar trend for SSBs, with an exponential decrease for particles with higher kinetic energies (Fig. 3(i)). The DSB frequency for both β+ and electrons likewise decreased exponentially as the energy increased (Fig. 3(ii)). The high fluctuations in the recorded DSBs for energies < 400 eV are primarily due to the limitations of the MC package PENELOPE, as particle transport is not recorded below 100 eV. The spatial dependency of a DSB also contributes to the fluctuations, as variations in SSBs for a particular energy are significantly lower than those of DSBs. From the trend of the linear energy transfer (LET) of the primary particles (Fig. 3(iii)), the maximum SSB and DSB frequencies for both β+ and electrons would be expected at 250 eV, the lowest of the simulated energies. Although the LET of both β+ and electrons significantly increases as they come to rest, β+ have a relatively higher LET (≈ 7% higher at E = 250 eV). (i) SSB frequency and (ii) DSB frequency for β+ and electrons from the TOPAS-nBio simulation. The highest difference in the frequency between β+ and electrons occurred at 400 eV. (iii) Plot of the primary β+ and electron LET (keV/μm) across the simulated energies. This is also consistent with the blob model, where the β+ creates excess-electron/positive-ion pairs towards the end of its track. In addition to β+ annihilation ionization with outermost valence electrons (which is the predominant process), a fraction of sub-eV β+ can tunnel through the repulsive nuclear potential and annihilate with core electrons (a phenomenon used in positron-annihilation-induced Auger electron spectroscopy)12. This creates a vacancy in the inner electronic shell, leading to the emission of high-LET Auger electrons; thus, a β+ traversing the DNA has a higher probability of causing lethal damage when compared to an electron. This is again consistent with the trend observed for both the simulated SSBs and DSBs induced by β+ tracks. The greatest difference in SSBs and DSBs between the emission profiles occurred at 400 eV, where β+ demonstrated a 55% integral increase in SSBs compared to electrons, and a 117% increase in DSBs. These results are consistent with our previous study demonstrating the use of 89Zr and 64Cu for dose enhancement in nuclear medicine imaging compared to β− therapeutic isotopes such as 67Cu, by virtue of their higher ionization properties13. Another in vivo study demonstrated that both 64Cu (with 38.5% β−) and 67Cu (with 100% β−) were effective for targeted β− RNT of subcutaneous human colon carcinoma carried in hamster thighs, when used to radiolabel a mouse anti-human colorectal cancer antibody14. Although in this study only the β− emission of 64Cu (with decay branching ratio: 38.5% β−, 43.9% electron capture and 17.6% β+ emission) was considered for therapy, our results suggest that there could be a noticeable contribution of β+ emission to the therapeutic effectiveness of 64Cu targeted RNT. Cell survival curves of X-ray versus β+ irradiation of LNCaP C4-2B cells revealed substantial cell kill (≈ 70% at 10 Gy) by 18F β+ emission.
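To make the strand-break tallies concrete, the toy scorer below applies the scoring rule given in the Methods (an energy deposit of at least 17.5 eV in a backbone volume counts as an SSB; two SSBs on opposite strands no more than 10 base pairs apart count as a DSB). The hit data are synthetic placeholders, not TOPAS-nBio output:

# Toy SSB/DSB scorer following the Methods' scoring rule.
E_THRESHOLD_EV = 17.5
MAX_BP_SEPARATION = 10

def score_breaks(hits):
    """hits: list of (strand, base_index, energy_eV) tuples for one track."""
    ssbs = [(s, b) for s, b, e in hits if e >= E_THRESHOLD_EV]
    n_dsb = 0
    used = set()
    for i, (s1, b1) in enumerate(ssbs):
        for j in range(i + 1, len(ssbs)):
            s2, b2 = ssbs[j]
            # opposite strands, within 10 bases, each SSB used at most once
            if (s1 != s2 and abs(b1 - b2) <= MAX_BP_SEPARATION
                    and i not in used and j not in used):
                n_dsb += 1
                used.update((i, j))
    return len(ssbs), n_dsb

# Example: three deposits from one simulated track (synthetic numbers).
track = [(0, 40, 22.0), (1, 44, 19.5), (0, 70, 12.0)]
print(score_breaks(track))  # -> (2, 1): two SSBs forming one DSB

In the actual simulations this bookkeeping is performed inside TOPAS-nBio itself; the sketch only illustrates the classification logic.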
The ≈ 70% cell kill at 10 Gy may be therapeutically significant, as previous in vitro studies have shown less than ~ 30% cell kill in a range of cancer cell lines for 10 Gy of the β− emitting radionuclides 90Y and 177Lu15,16. This suggests that the RBE of 18F β+ emission may be up to three times higher than that of β− emitters, depending on the β− and β+ emitters being compared. Although these results suggest that other β+ emitting isotopes such as 68Ga, 64Cu and 89Zr could potentially be considered as therapeutic isotopes, their differences in positron energy spectrum, branching ratio and half-life mean that further in vitro experiments are required to quantify their therapeutic impact. The MC simulation demonstrated that 1.5-fold and 2.2-fold more SSBs and DSBs, respectively, are induced by β+ tracks when compared to electron tracks. Thus, the direct interaction of a single β+ with DNA results in more lethal (i.e., DSB) damage than that of a single electron. Based on the spur model (Fig. 1), a β−/electron and a β+ have similar radiation tracks except at sub-keV energies7,8,9. For a sub-keV electron, the mean separation between two energy deposition events and the formation of spurs (≈ 40 nm) is 20 times larger than the diameter of the DNA helix (≈ 2 nm)7,16. However, for a sub-keV β+ track (i.e., a thermalized β+), the spurs merge into a continuous distribution of radiation damage with a higher density of electrons and ions. Thus, at sub-keV energies, β+ have a higher LET compared to β−/electrons (Fig. 3(iii))6,7. In addition, the total number of ionizations per β+ track is greater than per electron track, due to the additional ionization occurring at the terminal annihilation event. Moreover, in a β+ spur the incident β+, secondary electrons and ions are within each other's electrostatic/Coulomb fields, resulting in greater localization of energy deposition and a higher probability of inducing direct DNA DSBs. As per convention, the radiobiological parameters of 18F β+ emission in our in vitro study were benchmarked against X-ray therapy. The low mean β+ RBE of 0.42 relative to X-rays (SARRP), calculated from the cell survival curve, can be attributed to the > 200-fold difference in dose rate between the two modalities (≈ 7–21 mGy/min and 4.5 × 10³ mGy/min, respectively). For X-rays, the main mechanism of cell kill is indirect, primarily via the production of free radical species. X-ray EBRT cell survival curves often have a shoulder [the quadratic region of the LQ model; Eq. (3)] at ~ 2 Gy, arising from the competition between DNA damage and DNA repair1,2. In LNCaP C4-2B cells this was observed at 2.5 Gy (Fig. 2) using the SARRP1,2. While the lower initial slope observed for 18F β+ irradiation in this dose region may be due to the dose rate differences between the SARRP and 18F irradiation, at higher doses the clonogenic survival approaches a deterministic process in which the relative difference in the number of surviving clonogenic cells is proportional to the increase in dose [the linear region of the LQ model; Eq. (3)], and thus the corresponding survival curve becomes exponential17. Current administration of β+ emitting radionuclides in the clinic ensures that the doses absorbed by patients in diagnostic nuclear medicine are minimized; reported mean effective doses for 18F-FDG range from 3.4 to 13.4 mSv18. In RNT, therapeutic doses vary significantly due to differences in tumor type, burden and administration method.
For example, absorbed doses range from 1.2 to 540 Gy in thyroid cancer patients, whereas for prostate cancer reported doses range from 3.4 to 92.5 Gy19,20. The large difference between diagnostic and therapeutic doses in part explains the lack of investigation into β+ emitters as therapeutic agents. The abundance of currently available targeting ligands for imaging with β+ emitters makes a "theranostic" application with the same radionuclide feasible using this approach21. Our study presents the first in vitro evidence of a therapeutic effect of β+ emitters in cancer medicine, an attractive prospect for future studies. Clinically, the highly penetrating, weakly ionizing properties of the emitted 0.511 MeV photons mean that the safety and logistics behind the administration of therapeutic doses of β+ emitters must be reconsidered. Although a specialized form of automated/remote injection in a shielded room will be required, the relatively short half-life, cost, availability and reduced imaging time justify finding the means of overcoming the limitations presented. The human prostate cancer cell line LNCaP C4-2B (ATCC, Catalogue no. CRL-3315) was cultured in Roswell Park Memorial Institute (RPMI) medium supplemented with 10% v/v fetal bovine serum (FBS) and incubated in a humidified atmosphere containing 5% CO2 at 37 °C. SARRP X-ray irradiation LNCaP C4-2B cells were grown in T-25 flasks in 5 mL of medium for 3 days prior to the irradiation to ensure approximately 80% confluence of the cells in the flask. Using the SARRP (Xstrahl Inc, Atlanta, Georgia, USA), the EBRT open-field setup (20 × 20 cm²) was utilized. Flasks were placed at the isocenter of the beam on top of a 3.6 cm Perspex backscatter base at a source-to-axis distance (SAD) of 33.4 cm, giving a dose rate of 4.45 ± 0.06 Gy/min at the flask with the default X-ray beam energy spectrum of 225 kVp. Doses delivered with this setup were confirmed using EBT3 GAFchromic™ films. Doses ranging from 0 to 7.5 Gy in increments of 2.5 Gy were delivered by changing the exposure time according to the dose rate. The cells were then harvested for seeding post-irradiation. 18F irradiation 18F has a half-life of 109.77 min, with a branching ratio of 96.73% β+ emission and 3.27% electron capture22. The emitted β+ has a mean energy of 249.8 keV and a maximum energy of 633.5 keV, corresponding to mean ranges of approximately 0.6 and 2.4 mm in water, respectively22,23. The sodium fluoride (18F) solution was obtained from Cyclotek NSW (batch number: F181362001) with an activity concentration of 4.14 GBq/mL. LNCaP C4-2B cells were grown in an identical manner for the SARRP and 18F irradiations. Doses were calculated analytically using Eqs. (1) and (2) and were verified in silico using the sphere model (2 cm³ volume) in the OLINDA/EXM software Version 1.1 (Table 1)24. $$D = \frac{\tilde{A} \times \overline{E} \times \kappa}{m}$$ $$\tilde{A} = A_{0} \int_{0}^{T} e^{-\lambda t}\, \mathrm{d}t$$ where D (Gy) is the dose delivered, \(\tilde{A}\) (Bq s) is the cumulated activity, \(\overline{E}\) (J) is the average energy of the emitted particle, \(\kappa\) is the branching ratio (0.9673), m (kg) is the mass of the target volume (2 mL, taken as 2 g), \(A_{0}\) (Bq) is the initial activity, T (s) is the irradiation time and λ (s⁻¹) is the decay constant. Table 1 Dose calculations for the administered 18F activities as determined from experimental (analytical) and in silico data (OLINDA/EXM).
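As a sanity check on Eqs. (1)–(2), the minimal sketch below computes the initial activity concentration needed for a given dose, assuming the full emitted β+ energy is absorbed locally (absorbed fraction of 1). Under that assumption it yields ≈ 27 MBq/mL for 10 Gy, close to but slightly above the 26.2 MBq/mL obtained with OLINDA/EXM in Table 1:

import numpy as np

# 18F constants from the text; absorbed fraction assumed to be 1.
HALF_LIFE_S = 109.77 * 60               # 18F half-life in seconds
DECAY_CONST = np.log(2) / HALF_LIFE_S   # lambda (1/s)
E_MEAN_J = 249.8e3 * 1.602e-19          # mean beta+ energy (J)
KAPPA = 0.9673                          # beta+ branching ratio
MASS_KG = 2e-3                          # 2 mL of medium, taken as 2 g

def dose_gray(a0_bq, t_s):
    """Eqs. (1)-(2): cumulated activity x energy per decay / mass."""
    a_tilde = a0_bq * (1 - np.exp(-DECAY_CONST * t_s)) / DECAY_CONST
    return a_tilde * E_MEAN_J * KAPPA / MASS_KG

T_IRRAD = 10 * HALF_LIFE_S              # ~18 h, ten half-lives
for target_gy in (10, 20, 30):
    a0 = target_gy / dose_gray(1.0, T_IRRAD)   # dose is linear in A0
    print(f"{target_gy} Gy -> {a0 / 1e6 / 2:.1f} MBq/mL")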
The initial activity concentrations to achieve 10–30 Gy after 10 half-lives (≈ 18 h) ranged from 26.2 to 78.6 MBq/mL. Stock 18F activity was diluted in an appropriate volume of medium to achieve the required initial activity for each analytical dose, including a buffer time, confirmed using a dose calibrator with an identical setup. Dose calibration confirmed the specific initial activity concentration of the radionuclide solution (at pH ≈ 7), with approximately 1% of the initial activity remaining after 18 h. For irradiation, the 18F medium solution was passed through a 0.22 μm filter to remove biological contaminants and then administered to the T-25 flasks once the solutions reached the desired initial activity, including the buffer time. After 18 h, the radioactive medium was removed, and cells were washed with phosphate buffered saline (PBS) before harvesting for seeding post-irradiation. Clonogenic assay Cell survival was determined using the clonogenic assay with the post-irradiation delayed plating (DP) method for both the SARRP and 18F experiments25. The protocol used is further detailed in the "Supplementary Methods". Radiobiological parameters Using PRISM8 (GraphPad, v8.4.0, San Diego, California, USA), cell survival data from the SARRP irradiation were analyzed, and the LQ model [Eq. (3)] was fitted to calculate the radiobiological linear component α (Gy⁻¹) and quadratic component β (Gy⁻²) from the dose delivered D (Gy)17. $$SF = e^{-\left(\alpha D + \beta D^{2}\right)}$$ To calculate the RBE of 18F irradiation, the cell survival data from the radionuclide experiments were plotted against the EBRT SARRP data, and the RBE was determined at the point of 0.5 cell survival fraction. The protocol used is further detailed in the "Supplementary Methods". TOPAS-nBio simulation An MC simulation model was developed in the TOPAS-nBio toolkit to investigate the sub-cellular damage mechanism at the DNA level for β+ and electron irradiations. All simulations were performed using TOPAS-nBio (version 1.0), which uses GEANT4 (Version 10.03.3)10,11. A standard linear DNA model (a half-cylinder base and a quarter-cylinder backbone) was simulated on this platform (Fig. 4) and isotropically irradiated with a volume source of β+ and electrons using energies ranging from 250 to 1500 eV, to score the single and double strand breakages (SSBs and DSBs, respectively)11. Linear DNA model in TOPAS-nBio, consisting of the sugar phosphate backbone (blue and red), bases (white) and volumetric source (green). 100 bases are simulated, with a total DNA length of 34 nm, and outer and base diameters of 2.37 and 1 nm, respectively. Particles with energies above 1500 eV were not simulated, as the interaction cross-section at those energies is very small at the DNA scale. The number of primary incident particles (primaries) for the β+ and electrons was set to 10⁶, with 3 different seed numbers to gauge the spread in the acquired values. The results are reported as frequency per million primaries for each energy. The strand breaks were scored by assuming an energy deposition threshold for ionization of 17.5 eV, with a maximum of 10 bases between two SSBs to define a DSB11. The LET for the primary β+ and electrons was simulated (GEANT4–NIST Database) in water for a range of sub-keV energies (250–600 eV). Chaffer, C. L. & Weinberg, R. A. A perspective on cancer cell metastasis. Science 331, 1559–1564 (2011). Durante, M., Orecchia, R. & Loeffler, J. S. Charged-particle therapy in cancer: Clinical uses and future perspectives. Nat. Rev. Clin. Oncol.
14, 483–495 (2017). Nuhn, P. et al. Update on systemic prostate cancer therapies: Management of metastatic castration-resistant prostate cancer in the era of precision oncology. Eur. Urol. 75, 88–99 (2019). Frey, B. et al. Antitumor immune responses induced by ionizing irradiation and further immune stimulation. Cancer Immunol. Immunother. 63, 29–36 (2014). Pouget, J.-P. et al. Clinical radioimmunotherapy—The role of radiobiology. Nat. Rev. Clin. Oncol. 8, 720–734 (2011). Champion, C. & Loirec, C. L. Positron follow-up in liquid water: I. A new Monte Carlo track-structure code. Phys. Med. Biol. 51, 1707–1723 (2006). Mogensen, O. E. Positron Annihilation in Chemistry Vol. 58 (Springer, Berlin, 1995). Stepanov, S. V. et al. Positronium in a liquid phase: Formation, bubble state and chemical reactions. Adv. Phys. Chem. 2012, 1–17 (2012). Clinical Radiation Oncology (Elsevier, Amsterdam, 2016). Perl, J., Shin, J., Schümann, J., Faddegon, B. & Paganetti, H. TOPAS: An innovative proton Monte Carlo platform for research and clinical applications. Med. Phys. 39, 6818–6837 (2012). Schuemann, J. et al. TOPAS-nBio: An extension to the TOPAS simulation toolkit for cellular and sub-cellular radiobiology. Radiat. Res. 191, 125 (2018). Chirayath, V. A. et al. Auger electron emission initiated by the creation of valence-band holes in graphene by positron annihilation. Nat. Commun. 8, 16116 (2017). Gholami, Y. H., Maschmeyer, R. & Kuncic, Z. Radio-enhancement effects by radiolabeled nanoparticles. Sci. Rep. 9, 14346 (2019). Connett, J. M. et al. Radioimmunotherapy with a 64Cu-labeled monoclonal antibody: A comparison with 67Cu. Proc. Natl. Acad. Sci. 93, 6814–6818 (1996). Chan, H. S. et al. In vitro comparison of 213Bi- and 177Lu-radiation for peptide receptor radionuclide therapy. PLoS ONE 12, e0181473 (2017). Gholami, Y. H. et al. Comparison of radiobiological parameters for 90Y radionuclide therapy (RNT) and external beam radiotherapy (EBRT) in vitro. EJNMMI Phys. 5 (2018). Jones, L., Hoban, P. & Metcalfe, P. The use of the linear quadratic model in radiotherapy: A review. Australas. Phys. Eng. Sci. Med. 24, 132–146 (2001). Quinn, B., Dauer, Z., Pandit-Taskar, N., Schoder, H. & Dauer, L. T. Radiation dosimetry of 18F-FDG PET/CT: Incorporating exam-specific parameters in dose estimates. BMC Med. Imaging 16, 41 (2016). Sgouros, G. et al. Patient-specific dosimetry for 131I thyroid cancer therapy using 124I PET and 3-dimensional-internal dosimetry (3D-ID) software. J. Nucl. Med. 45, 1366–1372 (2004). Violet, J. et al. Dosimetry of 177Lu-PSMA-617 in metastatic castration-resistant prostate cancer: Correlations between pretherapeutic imaging and whole-body tumor dosimetry with treatment outcomes. J. Nucl. Med. 60, 517–523 (2019). Chen, K. & Chen, X. Positron emission tomography imaging of cancer biology: Current status and future prospects. Semin. Oncol. 38, 70–86 (2011). Tilley, D. R., Weller, H. R., Cheves, C. M. & Chasteler, R. M. Energy levels of light nuclei A = 18–19. Nucl. Phys. A 595, 1–170 (1995). Conti, M. & Eriksson, L. Physics of pure and non-pure positron emitters for PET: A review and a discussion. EJNMMI Phys. 3, 8 (2016). Stabin, M. G., Sparks, R. B. & Crowe, E. OLINDA/EXM: The second-generation personal computer software for internal dose assessment in nuclear medicine. J. Nucl. Med. 46, 1023–1027 (2005). Franken, N. A. P., Rodermond, H. M., Stap, J., Haveman, J. & van Bree, C. Clonogenic assay of cells in vitro. Nat. Protoc. 1, 2315–2319 (2006).
This work was partially supported by the Sydney Vital Translational Cancer Research Centre—Cancer Institute NSW, Sydney, Australia. School of Physics, Faculty of Science, The University of Sydney, Sydney, Australia: Takanori Hioki, Yaser H. Gholami, Harry Marquis & Kathy P. Willowson. Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia: Alireza Aslani, Enid M. Eslick, Kathy P. Willowson & Dale L. Bailey. Bill Walsh Translational Cancer Research Laboratory, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia: Takanori Hioki, Yaser H. Gholami, Kelly J. McKelvey & Viive M. Howell. Sydney Vital Translational Cancer Research Centre, Sydney, Australia: Takanori Hioki, Yaser H. Gholami, Kelly J. McKelvey, Harry Marquis, Viive M. Howell & Dale L. Bailey. Faculty of Medicine and Health, The University of Sydney, Sydney, Australia: Alireza Aslani, Viive M. Howell & Dale L. Bailey. T.H. designed and performed the experiment, developed the method and obtained the results. T.H. analyzed the data and wrote the manuscript. All the authors were participants in the discussion and interpretation of results, determination of the conclusions and revision of the manuscript. Correspondence to Takanori Hioki or Dale L. Bailey. Supplementary Information. Hioki, T., Gholami, Y.H., McKelvey, K.J. et al. Overlooked potential of positrons in cancer therapy. Sci Rep 11, 2475 (2021). https://doi.org/10.1038/s41598-021-81910-4
CommonCrawl
Field Renormalization vs. Interaction Picture Thread starter maline

maline: Summary:: Renormalized fields give the wrong values for the free Hamiltonian. When introducing renormalization of fields, we define the "free Lagrangian" to be the kinetic and mass terms, using the renormalized fields. The remaining kinetic term is treated as an "interaction" counterterm. If we write down the Hamiltonian, the split between "free" and "interaction" terms, used in defining the interaction picture, will presumably be the same. So apparently field renormalization involves multiplying the free Hamiltonian by a factor ##Z^{-1}##. How does this make sense? The free Hamiltonian has a well-defined scale - it is just the sum of the particle number operators for given momenta, with the appropriate energies (plus an infinite constant). And this only works with a particular normalization of the field operators: the derivation uses the canonical commutation/anticommutation relations, implying canonical normalization.

A. Neumaier: In the free case, Z=1.

maline: Of course. Does this address my question?

A. Neumaier: Yes, since 1 doesn't equal infinity. Thus the free theory needs no renormalization beyond normal ordering.

maline: I did not ask about the free theory. I asked about the "free Hamiltonian" ##H_0=H-V## used in defining the interaction picture of the theory, which is needed to derive the Feynman diagram formalism from canonical QFT.

A. Neumaier: But this is the Hamiltonian of the free theory; no renormalization there. The factors appear in the interaction, not in the free term. The point is that, given a Hamiltonian for a system, one is in principle free to do perturbation theory around any free Hamiltonian. In the renormalization context, one needs to take the physical one, whereas the parameters in the original Hamiltonian are bare parameters to be determined by the renormalization conditions.

vanhees71: To answer the original question: you start from fictitious entities, described by the free Hamiltonian, called "bare particles", e.g., in QED represented by a Dirac field describing electrons and positrons. Since bare particles don't interact, they don't carry an electromagnetic field. Now you add the interaction term with the electromagnetic field, and this implies that the electron interacts with the electromagnetic field and, as a charge carrier (automatically including a magnetic moment too), also has an electric field around it. This electric field carries energy, momentum, and angular momentum. Since the energy is equivalent to a mass in the famous sense of ##E_0=m c^2## (note that there are only invariant masses in modern theory), the corresponding mass of the electromagnetic field adds to the bare mass of the non-interacting electron. Unfortunately, with our physicists' sloppy math, we add an infinite mass in each order of the so-called self-energy diagram, but of course the bare mass is not observable, because electrons carry their electromagnetic field around.
Thus we can lump another infinity onto the bare mass so as to cancel the infinite mass contribution of the electromagnetic field, such that the observed mass of the electron is left at any order of perturbation theory. The same holds for the normalization of the electron-positron Dirac field and the electromagnetic field, i.e., you have to lump the infinities into the bare normalization constants, which don't occur in any observable quantities at the end. Last but not least, the same holds for the electric charge, which is renormalized by the same mathematical reshuffling of infinities. What's also renormalized is the magnetic moment of the electron, but fortunately that's a finite contribution of the perturbative series (in the radiative corrections to the proper three-point vertex function), and it is among the most precise predictions of any physical theory ever (in comparison with the equally amazing accuracy of the experimental determination of this quantity). QED (and the entire standard model) is a very special kind of quantum field theory: it's Dyson-renormalizable, i.e., there are only a finite number of constants (wave-function normalizations, masses, and coupling constants) needed to lump all the infinities from the sloppy treatment of the math into the corresponding unobservable "bare quantities". What also necessarily enters the game of this renormalization procedure is an energy-momentum scale, the so-called renormalization scale, at which you determine the said constants through measurement of observable quantities in appropriate types of scattering reactions (e.g., the electromagnetic interaction strength is taken at low energies, leading to a fine-structure constant of about 1/137 by looking, e.g., at the cross section for elastic Coulomb scattering; if you look at larger scattering energies (as at the ##Z##-boson-mass scale) you find a larger value of about 1/128). This hints at another important interpretation of the renormalization procedure, which so far looks a bit artificial in just eliminating mathematical blunder by subtracting infinities so as to get finite results for the observable quantities: that's the Wilsonian view on the renormalization procedure, which interprets QFTs like the standard model as effective theories, dependent on the resolution at which I look at the interacting particles in scattering experiments. The higher the scattering energy, the finer the spatial resolution I get in investigating the inner structures of the scattering particles. In this respect, the strongly interacting particles are the extreme case in the standard model. At the fundamental level, writing down the bare Lagrangian of quantum chromodynamics, you deal with the quarks and gluons of Quantum Chromodynamics (forgetting for a moment about the electromagnetic and weak interactions also described by the standard model). Then you switch on the interaction and find very fascinating properties: contrary to the similar-looking case of QED, the strong coupling constant decreases with larger renormalization scales. This is called asymptotic freedom, and it implies that we can't use perturbation theory for the strong interactions among quarks and gluons at low scattering energies, but it works well at large scattering energies. Indeed, at low energies we cannot find any quarks and gluons.
All we find looking at strongly interacting objects are bound states of quarks and gluons, the hadrons (among them the protons and neutrons, which are the building blocks of the atomic nuclei of all the matter around us), which carry no net color charge. Now you can use electrons to scatter on the hadrons and investigate their inner structure, and indeed it turns out that a proton consists of three so-called valence quarks and an entire cloud of quarks, antiquarks, and gluons, and the more energetic the interactions get, the more details are revealed (encoded in so-called parton-distribution functions or even generalized parton-distribution functions).

maline: But the factors do appear in the free term! When we write ##\partial^\mu\phi\partial_\mu\phi##, this ##\phi## is the renormalized field. It does not satisfy the canonical commutation relations. So this "free Hamiltonian" after renormalization has been changed by a factor ##Z^{-1}## compared to the Hamiltonian of the free theory.
Demystifier: A Hamiltonian multiplied by a constant is physically equivalent to the original Hamiltonian. If you consider the equations of motion, you can see that the two Hamiltonians describe the same physics expressed in different units of time.

maline: Multiplying the Hamiltonian by a constant is a physical change. It changes the actual rate at which the dynamics occur, not just the arbitrary units. For instance, multiplying the Hamiltonian by a large number will cause objects to move faster than ##c##. To see that this change is not a matter of units, note that the Schrödinger equation $$\partial_t |\psi\rangle=-i H |\psi\rangle$$ is dimensionally correct as it stands (using ##\hbar=1##). There is no room to insert an arbitrary dimensionful constant such as a unit of time. When we do renormalized perturbation theory we write the Hamiltonian ##H## first in terms of the (already scaled) renormalized field. Then we get the interaction ##V=H-H_0## by subtracting a quadratic expression ##H_0## in this renormalized field, defined by the physical mass. This ##H_0## is the Hamiltonian of a free theory, which we use to determine the time-dependent Hamiltonian of the interaction picture. The multipliers only appear in the interaction terms, not in ##H_0##. Once again: the fields in a free theory satisfy canonical commutation relations, while renormalized fields do not. Therefore it seems clear that a quadratic expression in renormalized fields, without further multipliers, cannot be the Hamiltonian of a free theory.

A. Neumaier: You confuse two different notions of canonical commutation relations. Both ##H_0## and ##H## are independent of time because of translation invariance, and their definition only involves the fields at a fixed time. The renormalized field operators are causal and hence satisfy canonical equal-time commutation relations. From the Schrödinger equation, the quadratic Hamiltonian ##H_0## produces a free renormalized field with 4-dimensional CCR, while the nonquadratic Hamiltonian ##H## produces an interacting renormalized field where the CCR is valid only for space-like arguments, consistent with causality.

maline: The fields at spacelike separated points do of course commute/anticommute. But the canonical relations also determine a scale: the RHS of the CCR is a simple delta function, without numerical factors. For renormalized fields this is not the case.

A. Neumaier: This actually proves that your assumption that the fields in the Hamiltonian represent the renormalized field is incorrect. (The fact that the renormalization factor is infinite means that the mathematical formulas at the singularity - the light cone, in particular at equal spacetime positions - and all arguments based on them become meaningless.) Indeed, the field operators figuring in the Hamiltonian act on a Fock space, but the renormalized field operators (obtained after taking the renormalization limit) don't. The whole Fock space structure (used to set up the renormalization schemes in the conventional way with infinite constants) disappears in the renormalization limit. What survives under renormalization are just the n-point correlation functions. Thus the correct picture is the following: one finitely rescales the field and the couplings to define a new Hamiltonian with counterterms, interprets the field operator in this formal expression in the canonical way, calculates the regularized n-point correlation functions perturbatively, sets the counterterms to meet the renormalization conditions, then removes the regularization by taking the appropriate limit (which makes the counterterms approach zero or infinity), and gets the (both retarded and time-ordered) renormalized n-point correlation functions. From these, assuming the series defining them at each order converge, the renormalized field operators and the Hilbert space they act on can be constructed using Wightman's reconstruction theorem. This is consistent with the description given in the textbooks.

maline: I don't understand. You are saying that as long as we treat ##Z## as finite, the renormalized fields do satisfy the CCR without numerical factors? How does this make sense? We "rescale" the field without changing the commutator?

A. Neumaier: No. There are no renormalized fields on the Fock space level, i.e., before the final limit in the n-point functions has been performed. One changes the action by inserting multiplicative factors wherever we can without spoiling symmetry and renormalizability (and informally interprets them as scalings). But the field is rescaled only in the Hamiltonian, not in the equal-time CCR. You can see this by looking at the actual computations made.

Demystifier: When you change the units of time, then ##\hbar## also changes its value. You have to account for that too.

vanhees71: The interacting fields indeed satisfy other commutation relations and have another normalization. If this were not so, the fields would not be interacting; i.e., for interacting fields ##Z \neq 1##.
For a pretty good and careful discussion, see the famous book by Bjorken and Drell (the 2nd volume, about "Quantum Field Theory"; stay away from the 1st volume, "Relativistic Quantum Mechanics", which is utterly confusing from a modern point of view, since Dirac's hole theory is just a very complicated version of the modern formulation of QED in terms of a relativistic QFT); i.e., the asymptotic free field obeys the weak (!) limit $$\lim_{x^0 \rightarrow \pm \infty} \hat{\phi}(x) = \sqrt{Z} \hat{\phi}_{\text{in}/\text{out}}(x).$$

maline: So let's just say that the standard convention ##\hbar=1## eliminates the freedom to change the unit of time without changing the unit of energy, and that if you change both, you don't get new factors in the Hamiltonian. At any rate, in renormalization theory we are certainly keeping ##\hbar## fixed, so the change of units you suggest does not solve the issue I raised. I'm sorry, I still don't understand. Would you mind spelling this out in detail? Yes, this agrees with my current understanding. But it brings us back to my original question: the free part of the Hamiltonian equals the sum of frequencies of particles if and only if it is written using fields that satisfy the CCR. So how can we do perturbation theory using fields with a different normalization? Also, are the two of you (vanhees71 and A. Neumaier) in agreement here?

Demystifier: OK, fair enough, now I see that I was wrong. The solution of your problem is, in fact, entirely different. The idea is the following. The physical quantities one is really interested in are matrix elements of the S-matrix. Via the LSZ formalism, they are computed from the ##n##-point functions. Hence the ##n##-point functions are the only abstract theoretical entities one is really interested in. All the other abstract theoretical entities (Hamiltonians, field operators, ...) are just auxiliary mathematical quantities that do not make sense on their own, except as an intermediate tool towards computing the ##n##-point functions. In other words, a strange new factor in the Hamiltonian does not matter, as long as it leads to the physically right behavior of the ##n##-point functions. That's because, in this formalism, one does not study physical effects by solving the Schrödinger equation with the modified Hamiltonian. Instead, one studies the physical effects by computing the S-matrix from the modified ##n##-point functions. The above is the "philosophy" lying behind the so-called Gell-Mann and Low approach to renormalization, common in particle physics. It is quite unintuitive from the point of view of the more traditional formulation of quantum theory based on the Schrödinger equation. Fortunately, there is an alternative Wilson approach to renormalization, which is much more intuitive. It is much more popular in statistical physics and condensed matter. It's used in particle physics too, but less often. In the Wilson approach the Hamiltonian plays a central role. The Hamiltonian gets an additional factor too, but the intuitive picture of that is entirely different. In the Wilson approach you replace a more fundamental Hamiltonian ##H## (with a larger number of degrees of freedom) by an effective Hamiltonian ##H'## (with a smaller number of degrees of freedom). In the so-called "renormalizable" theories, ##H'## takes a similar form to ##H##, but there is an extra factor that accounts for the effect of the fundamental degrees of freedom in ##H## that are ignored in ##H'##.
In this sense ##H## and ##H'## are different, but their large-distance effects are the same. ##H'## contains a smaller number of degrees of freedom, but this is compensated by an additional factor (usually larger than 1) in front of it. Here is a classical picture one may have in mind in the Wilson approach. Suppose that 10 men push a car. The Hamiltonian ##H## contains 10 degrees of freedom corresponding to the 10 men. But the effect of pushing can be described by ##H'## describing only 1 super-man, who is as strong as 10 real men. Hence ##H'## must contain the extra factor of ##10##.

vanhees71: Usually @A. Neumaier and I agree about standard notions of QFT, but that he should judge. The free part of the Hamiltonian does not equal the sum of frequencies of particles. Written in the interaction picture in terms of particle-number operators, it rather reads (in normal-ordered form) $$\hat{H}_0=\sum_{\sigma} \int_{\mathbb{R}^3} \mathrm{d}^3 \vec{p}\, E(\vec{p}) [\hat{N}(\vec{p},\sigma) + \hat{\bar{N}}(\vec{p},\sigma)],$$ where ##E(\vec{p})=\sqrt{\vec{p}^2+m^2}>0##, and ##\hat{N}## and ##\hat{\bar{N}}## are the occupation-number operators of field modes with definite momentum ##\vec{p}## and spin-magnetic quantum number ##\sigma \in \{-s,-s+1,\ldots,s \}##, where ##s \in \{0,1/2,1,\ldots \}## is the spin quantum number of the particular sort of particles under investigation. It's of course clear that when you take into account interactions, all physical quantities must be renormalized, and you have corresponding renormalization conditions ensuring that you get the correct normalization of energies, momenta and any other physical quantities, as well as of S-matrix elements. A change of the renormalization prescription must not change the outcome for physical quantities or the observable cross sections defined via the S-matrix elements. This idea leads to the renormalization-group equations, which describe how the renormalized quantities (wave-function normalizations, masses, couplings) change when changing the renormalization prescription. Depending on which renormalization scheme you use, these quantities need not necessarily be the observed quantities. E.g., a renormalized mass is only the physical mass of the particle if you can use a so-called on-shell renormalization scheme, which is usually not the case as soon as massless particles are involved among the fields. The observed mass is rather defined by the pole of the two-point (one-particle) Green's function as a function of energy, and this observable mass is independent of the renormalization scheme and is a physical quantity.

A. Neumaier: Yes, we agree. As I mentioned, one does not scale the CCR when preparing the perturbation scheme; hence scaling is, strictly speaking, a misnomer, part of the inaccurate laissez-faire manner in which renormalization is typically introduced. More rigorously, what is called field renormalization is just the introduction of a new constant into the Hamiltonian, needed to make the renormalization work. The effect is that the new Hamiltonian depends on enough constants to be fixed by renormalization conditions. The field in this parameterized Hamiltonian remains a free field (of asymptotic particles) with standard CCR, since the Fock machinery is defined only for such a field.
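For concreteness, the split just described - all rescaling factors collected as counterterms in the interaction, with the field in ##H_0## keeping its canonical normalization - takes the familiar textbook form for a scalar theory (the counterterm symbols ##\delta_Z,\delta_m,\delta_\lambda## are standard notation, not used verbatim in this thread):

$$\mathcal{L} = \tfrac{1}{2}(\partial\phi_r)^2 - \tfrac{1}{2}m^2\phi_r^2 - \tfrac{\lambda}{4!}\phi_r^4 + \tfrac{1}{2}\delta_Z(\partial\phi_r)^2 - \tfrac{1}{2}\delta_m\phi_r^2 - \tfrac{\delta_\lambda}{4!}\phi_r^4,$$

with ##\phi_0=\sqrt{Z}\,\phi_r##, ##\delta_Z=Z-1##, ##\delta_m=m_0^2 Z-m^2## and ##\delta_\lambda=\lambda_0 Z^2-\lambda##. The first two terms (physical mass ##m##, unit normalization) define ##H_0##; everything else, including every factor of ##Z##, sits in ##V## and is fixed order by order by the renormalization conditions.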
These are needed to define the prime observable quantities aimed at to be calculated in perturbative "vacuum QFT", namely cross sections with the corresponding transition-rate matrix elements (or S-matrix elements). All this can be found in the literature under "LSZ reduction formalism" or the like. As I said, I find the discussion in the old textbook by Bjorken and Drell pretty illuminating. Also the discussion in the book by Itzykson and Zuber is nice. An alternative elegant formulation is using the path-integral formalism defining various generating functionals. This can be found in the book by Bailin and Love. My own attempt can be found in https://itp.uni-frankfurt.de/~hees/publ/lect.pdf All this is of course far from being mathematically rigorous, and I'm not sure, whether there are mathematically fully rigorous formulations. I doubt it ;-). Related Threads for: Field Renormalization vs. Interaction Picture Proving Interaction picture field satisfies KG eqn Interaction picture I Interaction Picture Doubts Interaction picture - time evolution operator Interaction picture and s-matrix Interaction-picture and gell-mann low I Hamiltonian after transformation to interaction picture I Quantum physics vs Probability theory I The construction of particles in QFT A Is renormalization the ideal solution? I What is SR transformarion for bispinor? I Local Causality and Bell's Second Theorem
Author: R. H. Farrell — selected works

Brown, L. D. and Farrell, R. H. Complete Class Theorems for Estimation of Multivariate Poisson Means and Related Problems. The Annals of Statistics 13, no. 2 (June 1985), 706–726.
Brown, L. D. and Farrell, R. H. All Admissible Linear Estimators of a Multivariate Poisson Mean. The Annals of Statistics 13, no. 1 (March 1985), 282–294.
Farrell, R. H. On the Best Obtainable Asymptotic Rates of Convergence in Estimation of a Density Function at a Point. The Annals of Mathematical Statistics 43, no. 1 (February 1972), 170–180.
Farrell, R. H. On the Bayes Character of a Standard Model II Analysis of Variance Test. The Annals of Mathematical Statistics 40, no. 3 (June 1969), 1094–1097.
Farrell, R. H. On the Admissibility of a Randomized Symmetrical Design for the Problem of a One Way Classification. The Annals of Mathematical Statistics 40, no. 2 (April 1969), 356–365.
Farrell, R. H. On the Admissibility at $\infty$, Within the Class of Randomized Designs, of Balanced Designs. The Annals of Mathematical Statistics 39, no. 6 (December 1968), 1978–1994.
Farrell, R. H. Towards a Theory of Generalized Bayes Tests. The Annals of Mathematical Statistics 39, no. 1 (February 1968), 1–22.
Farrell, R. H. On a Necessary and Sufficient Condition for Admissibility of Estimators When Strictly Convex Loss is Used. The Annals of Mathematical Statistics 39, no. 1 (February 1968), 23–28.
Farrell, R. H. The existence of idempotents in certain operator algebras. Duke Mathematical Journal 34, no. 2 (June 1967), 233–237.
Farrell, R. H. On the Lack of a Uniformly Consistent Sequence of Estimators of a Density Function in Certain Cases
Clamor Vincit Omnia
A place to discuss the various ways of organising noises. Some mathematics is assumed.

The Unreasonable Ubiquity of Dodecatonicity

Symmetries

The symmetries of the dodecagon, the regular 12-sided polygon used in models of Pitch Class Set Theory, are well known. Viewed as a dihedral group (of order 24), it has a class index $$P_{D_{12}}(\textbf{x}) = \frac{1}{24} \left( x_1^{12} + 7 x_2^6 + 2 x_3^4 + 2 x_4^3 + 2 x_6^2 + 4 x_{12} + 6 x_1^2 x_2^5 \right)$$ which sheds some light upon the 2-, 3-, 4- and 6-fold symmetries of 12-sided figures. This can be used to enumerate all the essentially different (equivalent under rotations and reflections) shapes of all the (mostly) irregular polygons of all orders - from 0 to 12 - inscribable within it. One simply replaces every $x_i$ with $(1 + t^i)$ in the above (Pólya Enumeration) to recover the following degree 12 polynomial in t: $$E_{12}(t) = 1 + t + 6 t^2 + 12 t^3 + 29 t^4 + 38 t^5 + 50 t^6 + 38 t^7 + 29 t^8 + 12 t^9 + 6 t^{10} + t^{11} + t^{12}$$ which counts the number of essentially different k-gonal shapes - as the coefficient of $t^k$ - corresponding to all 224 Pitch Class Sets of different sizes available to Twelve Tone Music.

If you count non-symmetric shapes twice (i.e. as distinct from their reflections) rather than just once, the Cyclic Group polynomial enumerator - $1 + t + 6 t^2 + 19 t^3 + 43 t^4 + 66 t^5 + 80 t^6 + 66 t^7 + 43 t^8 + 19 t^9 + 6 t^{10} + t^{11} + t^{12}$ - is the appropriate one to use. This aggregates 352 distinct shapes (evaluate the polynomial at t = 1). Because it counts mirror-pairs (musically inverted PC sets) as distinct, this enumeration reveals that there are (352 - 224) = 128 pairs of non-symmetric shapes and 352 - 2 × 128 = 96 symmetric ones. So far, so standard (this blog also catalogues them all here).

Most of these $t^k$ coefficients are rather haphazard (like, say, 29 and 38) and have no particularly friendly relations with the polygon they inhabit - rather to be expected. However, we see that the number of 3-sided shapes is 12, a number very friendly indeed to the enclosing dodecagon. It means that there's a possibility that all the different 3-sided shapes - and here they all are - might be enticed to exactly fit, four at a time, into three separate dodecagons.

Symmetric 3-sets (equilateral and isosceles triangles)
Asymmetric 3-sets (scalene triangles)
The 12 Dihedrally Equivalent PC Sets

Set Coverings

Note that the coefficient 6 (of $t^2$) is also '12 friendly'. But tilt and move them as you will, it's impossible to fit the 6 different 'diangles' into one dodecagon so that each occupies a different dodecagonal vertex pair. That is, it's not possible to cover (or partition) the set {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12} with 6 non-intersecting 2-sets where each such 2-set realises a different one of the six possible differences (on the dodecagon, i.e. modulo 12 differences) between the 12-set's points.

12-Tone Interval Classes cannot cover the 12-Set
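The impossibility is small enough to check by machine. Here is a minimal Python sketch - the function and its structure are mine, not from any published package - that tries to partition {0, ..., n-1} into n/2 pairs realising all n/2 distinct interval classes; for n = 12 it exhausts every case and finds nothing.

def distinct_interval_cover(n):
    """Try to partition {0,...,n-1} into n/2 pairs whose modulo-n
    interval classes 1..n/2 are all distinct.  Returns a list of
    (a, b, interval) triples, or None if no such partition exists."""
    def search(remaining, unused):
        if not remaining:
            return []
        a = min(remaining)            # the smallest free point must pair with someone
        for d in sorted(unused):      # try each unused interval class
            for b in {(a + d) % n, (a - d) % n} - {a}:
                if b in remaining:
                    rest = search(remaining - {a, b}, unused - {d})
                    if rest is not None:
                        return [(a, b, d)] + rest
        return None
    return search(frozenset(range(n)), frozenset(range(1, n // 2 + 1)))

print(distinct_interval_cover(12))    # None: the six diangles cannot cover the dodecagon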
Octatonality

If we drop back to a tonality in which only 8 Pitch Classes are available, the Class Index of this system is $$P_{D_{8}}(\textbf{x}) = \frac{1}{16} \left( x_1^8 + 4 x_1^2 x_2^3 + 5 x_2^4 + 2 x_4^2 + 4 x_8 \right)$$ and we find its catalogue of (30) essentially different sub-polygons is enumerated by $$E_{8}(t) = 1 + t + 4 t^2 + 5 t^3 + 8 t^4 + 5 t^5 + 4 t^6 + t^7 + t^8$$ One notices that the $t^2$ term has coefficient 4, indicating that octatonicity's 4 interval classes 1, 2, 3 and 4 might be capable of covering the octagon in a single bound. And indeed they are. Interval class 1 can cover pitch classes 2, 3 and (switching to abbreviations) IC 2 can cover PCs 6, 8, IC 3 can cover PCs 4, 7 and finally IC 4 can cover PCs 1, 5. It's the only way of doing it with regard to equivalence of rotational and reflective symmetries.

Octatonicity's Interval Class Coverage

But as a musical transformer - say, as the PC permuter (1,5)(2,3)(4,7)(6,8) - it's not very interesting since all it can do is exchange pitch classes. As such a transformation is an interval-preserving operation, it's pretty much a do-nothing, musically speaking. This is why, in general, we don't regard any tonality covers with PC Sets containing fewer than 3 pitch classes as interesting.

But we're not quite done yet with the octagon, since there's another friendly coefficient in the $8t^4$ term, which indicates that we may have a quadruple covering of the octagon with its four pairs of tetragons (possibly more commonly referred to as quadrilaterals). However, such hopes are quickly dashed since one of the 8 quadrilaterals must be a square, and placing a square inside an octagon requires a second square to complete the cover, thus violating our rule of the game that each shape be used exactly once. We don't even need to bother with the other shapes as we're already dead in the water.

This is a general upper limit to the size of an 'inner polygon'. Once we reach the halfway point of an evenly sized polygon, a subset of exactly half the size which covers every other vertex can complete a cover only with a duplicate of itself to soak up the remaining 'every-other' vertices. Beyond the halfway point, of course, there's no possibility of coverage since too few vertices remain. Our mission, then, is to find the 'special' values of n and k (2 < k < n/2) where rotationally and reflectionally equivalent binary colourings of n-gons (creatures known as necklaces, or bracelets when reflections are counted) carry $C_{n,k}$ (generally irregular) k-sized sub-polygons. We require that k divides both n (permitting n-set coverings with p = n/k of its k-sized subsets) and $C_{n,k}$ itself (permitting use of all equivalent k-gons) and where p divides $C_{n,k}$ (permitting complete coverage of $C_{n,k}/p$ separate n-gons).

The Triumph of the Dodecagon

And so, by this definition, the first polygon where there's even a possibility that a complete set of dihedrally equivalent Pitch Set Classes will be capable of covering its own tonality is the dodecatonic. It's the first one large enough. It's reasonably obvious that tonalities with a prime number of pitch classes don't even get a look-in with regard to subset covering. So goodbye to 17TET, 19TET, 31TET etc. The 15TET, non-prime, system is quite popular but none of its subset shape collections occur in quantities appropriate to such coverages.
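Both the counts and the divisibility screening are easy to automate. The following Python sketch (the function name and screening loop are mine) computes $C_{n,k}$ by a Burnside count over the dihedral group, checks itself against the octatonic and dodecatonic coefficients above, and then screens (n, k) pairs against the stated conditions. As transcribed here the filter also admits a few qualifying pairs below 32 tones (the dodecatonic case among them), so treat it as a sketch of the procedure rather than a guaranteed reproduction of the table below.

from math import comb, gcd

def C(n, k):
    """k-subsets of an n-gon's vertices, up to rotation and reflection
    (Burnside's lemma over the dihedral group of order 2n)."""
    # Rotation j has gcd(n, j) cycles, each of length n/gcd(n, j).
    total = sum(comb(gcd(n, j), k * gcd(n, j) // n)
                for j in range(n) if (k * gcd(n, j)) % n == 0)
    if n % 2:    # n odd: n axes, each through one vertex (cycle type 1.2^((n-1)/2))
        total += n * sum(comb((n - 1) // 2, (k - j) // 2)
                         for j in (0, 1) if (k - j) % 2 == 0)
    else:        # n even: n/2 vertex axes (1^2.2^((n-2)/2)) and n/2 edge axes (2^(n/2))
        total += (n // 2) * sum(comb(2, j) * comb((n - 2) // 2, (k - j) // 2)
                                for j in (0, 1, 2) if (k - j) % 2 == 0)
        if k % 2 == 0:
            total += (n // 2) * comb(n // 2, k // 2)
    return total // (2 * n)

assert [C(8, k) for k in range(9)] == [1, 1, 4, 5, 8, 5, 4, 1, 1]
assert [C(12, k) for k in range(13)] == [1, 1, 6, 12, 29, 38, 50, 38, 29, 12, 6, 1, 1]

for n in range(6, 65):                  # screen candidate (n, k) coverings
    for k in range(3, (n + 1) // 2):    # 2 < k < n/2
        if n % k == 0 and C(n, k) % k == 0 and C(n, k) % (n // k) == 0:
            print(n, k, n // k, C(n, k), C(n, k) // (n // k))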
The only candidates for tonalities up to 64TET are presented here:

Tonicity (n)   Subset size (k)   p = n/k   C                  C/p
32             4                 8         624                78
32             8                 4         165288             41322
36             3                 12        108                9
36             6                 6         27474              4579
40             4                 10        1240               124
44             11                4         87161756           21790439
48             8                 6         3936144            656024
48             12                4         725782644          181445661
50             10                5         102749880          20549976
63             21                3         219201890450655    73067296816885
64             8                 8         34597680           4324710

Catalogue of possible n-gon coverings by p distinct k-sets

Coverings and Applications

The power of the enumerative polynomial lies in its proof of existence of such potential coverings. But it's non-constructive and cannot tell us how to build the things it counts. Still less does it tell us anything about their set-covering capabilities. So, given the unlikelihood of coverage indicated by the previously shown impossibility of 'dianglising a dodecagon' (in musical terms, 'interval-covering a chromatic scale'), and despite the very real lack of interest we have in covering polygons with anything smaller than triangles in any case, the big question is: can all 12 triangles (musical triads) - taken four at a time - fit into three dodecagons so that none of their vertices occupies the same dodecagonal vertex (musical pitch class) as another's?

Covering the first dodecagon is easy since there's so much choice. Even with only 8 triangles left, a second cover remains only mildly troublesome. But with only four triangles remaining, a final cover is hard to find, however you rotate and reflect them: if your earlier choices were unsuitable, the final cover will likely be impossible. But there are many solutions. Thousands, in fact. Here is one.

Tri-Dodeca-Tetra-Tri-Coverage

In fact this particular solution is not found by trial-and-error but is rather special even in terms of the game itself. We've used the computer algebra system, GAP, to find solutions. Of the thousands of solutions, this is one from a tiny subset of 20 of them. All of the others, except one, which is really special, are of the same class. The vast majority of triple-set coverings by triad quadruples are found as an example of the Alternating Group on 12 symbols (usually notated as A12). This is a simple group of order 12!/2 = 239500800. Despite the term 'simple' (a term of Group Theory meaning that the group has no proper normal subgroups, making it akin to a prime number in Number Theory) the way this group can fling things about is prodigious, as the number of its operations suggests. But the configuration above is an example of an arguably more interesting group discovered in the 19th century. It's one of the sporadic simple groups, specifically the Mathieu Group M12. But it is somewhat less flingy, as its order is a paltry 95040. And from an applicative point of view it's often nicer to have fewer choices (the apparently paradoxical idea behind 'constraint sets you free') since it's easier to actually make a beginning.

The way the group figures in these arrangements is by way of construction. If we take the first of the three coverings we can use it as a permutation - specifically (1,2,3)(4,5,7)(6,9,11)(8,10,12), the four clockwise tricycles in the covering - to generate a group. If we regard the numbers as pitch classes (it's usually 0 to 11 in musical PC Set theory but we can equivalence 12 and 0 with impunity) then we may operate upon any musical segment (the choice of what constitutes a segment is completely up to the applier) by permuting its pitch classes with that permutation.
Each pitch class moves (conventionally clockwise) around the triad it finds itself on. For example we might have a segment with pitch classes { 1, 4, 8, 11 } - which could represent a $C\sharp m7$ chord. The aforementioned permutation moves 1 to 2, 4 to 5, 8 to 10 and 11 to 6 (the wraparound) and thereby produces the new set { 2, 5, 10, 6 } - re-presented as { 2, 5, 6, 10 }. This set may - if interpreted as rooted on pitch class 6 (PC Set theory conventionally $F\sharp$) - be perceived as $F\sharp^{+} Maj 7$. Such a transformation may be applied to any musical segment, such as the one shown below.

A first segmental transformation

A second use of the same permutation would then transform the PC Set from { 2, 5, 6, 10 } to { 3, 7, 9, 12 } (perhaps $Am7\flat5$?). By the way, that's 3 of the seven sevenths, so this - admittedly tiny - orbit fixes some kind of seventhiness, if you will.

Permutation product as transformation composition

And a third application will of course return us to the initial set, since all cycles are of the same size, i.e. the order of the permutation is 3. This single permutation has a class index of $x_3^4$ (being a product of 4 3-cycles) and the group it generates is the tiny simple cyclic group of order 3, $C_3$.

The Group $M_{12}$ is generated when all three coverings are used to construct permutations. The other two are - respectively from the above figure - (1,4,7)(2,5,9)(3,8,10)(6,11,12) and (1,5,9)(2,3,6)(4,8,10)(7,11,12) and although each is a simple $x_3^4$ cycle, when allowed to operate together by composition the group of permutations has the class index \begin{split} P_{M_{12}}(\textbf{x}) = \frac{1}{95040} ( x_1^{12} + 495 x_1^4 x_2^4 &+ 2970 x_1^4 x_4^2 + 1760 x_1^3 x_3^3 + 396 x_2^6 + 11880 x_1^2 x_2 x_8\\ &+ 9504 x_1^2 x_5^2 + 15840 x_1 x_2 x_3 x_6 + 2970 x_2^2 x_4^2 + 2640 x_3^4 + 17280 x_1 x_{11}\\ &+ 9504 x_2 x_{10} + 11880 x_4 x_8 + 7920 x_6^2 ) \end{split}

If we label these three permutations with the letters 'a', 'd', and 'm' - for no particular reason - then the successive application of, say, m followed by a followed by d would be written as mad. It would be the permutation (2,4,3)(5,12,7,11,10,9)(6,8), which is one of the $15840 x_1 x_2 x_3 x_6$ permutations with that particular shape (the $x_1$ in this case representing the point 1, or pitch class $C\sharp$, which happens to be fixed by this particular permutation). If we apply this mad operation to a fairly well-known melodic segment, we obtain the following:

A transformed copyright problem

To recover the original, one would apply the inverse operation $(mad)^{-1} = d^{-1}a^{-1}m^{-1}$ - which is of course the 'undoing' permutation (2,3,4)(5,9,10,11,7,12)(6,8). Because such transformations operate only upon pitch classes, however, there can be no indication of which particular octave the original pitch class was in, and such information is lost.

There are also relationships between the group's generators. One such relation is found when we look at $dm^{-1}$ = (1,10,2)(3,4,12)(6,7,9), one of the $1760 x_1^3 x_3^3$ which - in this case - fixes the triad {5, 8, 11}. Since this is clearly an operation of order 3 (comprising exclusively 3-cycles), any set acted upon by it thrice will reappear. This means that $dm^{-1}dm^{-1}dm^{-1} = (dm^{-1})^3 = ()$, the group's identity operation. It's also $(dm^2)^3$ because the square of any of the generators is also the generator's inverse (two clockwise turns gets you to the same place as one anticlockwise turn).
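All of this is easy to mechanise. The sketch below - plain Python rather than GAP, with helper names of my own invention - builds the three covering permutations, replays the $C\sharp m7$ orbit, checks the mad and $(dm^{-1})^3$ claims, and counts the group the three generate by brute-force closure: 95040 elements, the order of $M_{12}$ quoted above.

def perm(cycles, n=12):
    """Turn a list of cycles into a 1-based mapping tuple (index 0 unused)."""
    p = list(range(n + 1))
    for cyc in cycles:
        for i, x in enumerate(cyc):
            p[x] = cyc[(i + 1) % len(cyc)]
    return tuple(p)

a = perm([(1, 2, 3), (4, 5, 7), (6, 9, 11), (8, 10, 12)])
d = perm([(1, 4, 7), (2, 5, 9), (3, 8, 10), (6, 11, 12)])
m = perm([(1, 5, 9), (2, 3, 6), (4, 8, 10), (7, 11, 12)])

def mul(p, q):
    """Left-to-right composition: apply p first, then q (as in the post)."""
    return tuple(q[x] for x in p)

def inv(p):
    q = [0] * len(p)
    for i, x in enumerate(p):
        q[x] = i
    return tuple(q)

def act(p, pcs):
    """Act on a set of pitch classes, written 1..12 as in the post."""
    return sorted(p[x] for x in pcs)

s = [1, 4, 8, 11]                        # C#m7
print(act(a, s))                         # [2, 5, 6, 10] : F#+Maj7
print(act(a, act(a, s)))                 # [3, 7, 9, 12] : Am7b5
print(act(a, act(a, act(a, s))) == s)    # True: the permutation has order 3

e = perm([])                             # the identity ()
assert mul(mul(m, a), d) == perm([(2, 4, 3), (5, 12, 7, 11, 10, 9), (6, 8)])  # mad
dm1 = mul(d, inv(m))                     # d followed by m^-1
assert mul(mul(dm1, dm1), dm1) == e      # (dm^-1)^3 = ()

seen, frontier = {e}, [e]                # brute-force closure under a, d, m
while frontier:
    new = []
    for g in frontier:
        for x in (a, d, m):
            h = mul(g, x)
            if h not in seen:
                seen.add(h)
                new.append(h)
    frontier = new
print(len(seen))                         # 95040, the order of M12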
One might wish to verify that the rather unpleasant looking $d^2amad^2m^2a^2md^2ma^2$ is also, in fact, a 'do nothing'. As we're on the subject of the Mathieu Group $M_{12}$, Dave Benson's book mentions that Messiaen's piano piece Ile de Feu 2 uses the 2 permutations (1,7,10,2,6,4,5,9,11,12)(3,8) and (1,6,9,2,7,3,5,4,8,10,11) to transform both tones and durations. The first permutation being one of the $9504 x_2 x_{10}$ and the second one of the $17280 x_1 x_{11}$, they themselves have nothing directly to do with dodecatonic coverages by triads. It's extremely unlikely that the particular triplet of generators we're using here (out of the 20 possible coverages we've found) has any correspondence, pitch-class-wise, with Messiaen's. But it is nonetheless possible to write them in terms of ours, the first as $d a^2 d^2 m a d^2 m^2 a d m^2 (a^2 d)^2 a m d m^2 a^2 d^2 a m^2$ and the second as the slightly simpler (!) $m^2 d^2 (m a^2 d)^2 a^2 d$. These are also calculated by GAP, but such permutation re-mappings are quite tough to work out and it's possible that there are shorter paths from our madness to Messiaen.

Posted by LemoUtan
stats (version 3.6.2)

Lognormal: The Log Normal Distribution

Description

Density, distribution function, quantile function and random generation for the log normal distribution whose logarithm has mean equal to meanlog and standard deviation equal to sdlog.

Usage

dlnorm(x, meanlog = 0, sdlog = 1, log = FALSE)
plnorm(q, meanlog = 0, sdlog = 1, lower.tail = TRUE, log.p = FALSE)
qlnorm(p, meanlog = 0, sdlog = 1, lower.tail = TRUE, log.p = FALSE)
rlnorm(n, meanlog = 0, sdlog = 1)

Arguments

x, q: vector of quantiles.
p: vector of probabilities.
n: number of observations. If length(n) > 1, the length is taken to be the number required.
meanlog, sdlog: mean and standard deviation of the distribution on the log scale with default values of 0 and 1 respectively.
log, log.p: logical; if TRUE, probabilities p are given as log(p).
lower.tail: logical; if TRUE (default), probabilities are \(P[X \le x]\), otherwise, \(P[X > x]\).

Value

dlnorm gives the density, plnorm gives the distribution function, qlnorm gives the quantile function, and rlnorm generates random deviates. The length of the result is determined by n for rlnorm, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than n are recycled to the length of the result. Only the first elements of the logical arguments are used.

Details

The log normal distribution has density $$ f(x) = \frac{1}{\sqrt{2\pi}\sigma x} e^{-(\log(x) - \mu)^2/(2 \sigma^2)} $$ where \(\mu\) and \(\sigma\) are the mean and standard deviation of the logarithm. The mean is \(E(X) = \exp(\mu + \sigma^2/2)\), the median is \(med(X) = \exp(\mu)\), and the variance is \(Var(X) = \exp(2\mu + \sigma^2)(\exp(\sigma^2) - 1)\); hence the coefficient of variation is \(\sqrt{\exp(\sigma^2) - 1}\), which is approximately \(\sigma\) when that is small (e.g., \(\sigma < 1/2\)).

References

Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) Continuous Univariate Distributions, volume 1, chapter 14. Wiley, New York.

See Also

Distributions for other standard distributions, including dnorm for the normal distribution.

Examples

dlnorm(1) == dnorm(0)
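As a small addendum to the example above, the moment formulas quoted in the Details can be checked by simulation; the parameter values here are arbitrary illustrative choices, not part of the original help page.

## Simulation check of E(X), med(X) and Var(X) for the log normal
set.seed(1)
mu <- 0.5; sigma <- 0.8
x <- rlnorm(1e6, meanlog = mu, sdlog = sigma)
c(mean(x),   exp(mu + sigma^2/2))                       # E(X)
c(median(x), exp(mu))                                   # med(X)
c(var(x),    exp(2*mu + sigma^2) * (exp(sigma^2) - 1))  # Var(X)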
Quantifying Popularity in Real-Time for High-Volume Websites, Part 1

June 4, 2015 11:02 pm by P.I.E. Staff

This is the first half of a two-part blog post series. Part one covers the theory; part two is an implementation.

Because there is no accepted industry standard algorithm for determining popularity, content publishers can afford to get creative in their assessments. Sometimes, however, these algorithms can be trivially exploitable by spammers to deliver low-quality content to high-traffic areas of a website (e.g. the front page of the website). What follows is Paragon Initiative Enterprises' user-driven popularity algorithm that is resilient against fraudulent voting. There is no patent for this algorithm; instead, we release it to the public domain. We hope that, after being refined and studied, it can be put to use for the public good.

What is Popularity Anyway?

Although your definition might differ, when we're discussing popularity we mean two things:

Popularity means it is well-liked. This means that on any given scale (0 to 5 stars, 0 to 10 out of 10, 0 to 100 points, etc.), popular items are ranked higher.
Popularity also means recency. Something that was well-liked 10 years ago might not be relevant compared to something scoring high this very minute.

In practice, this means any attempt to rank popularity must have two qualities:

The algorithm must, within reason, allow a community to judge what content is better than other content. From a security perspective, this means that an army of fake accounts should not be able to easily tilt the scale of apparent public opinion.
The algorithm must prioritize the new over the old.

Let's begin by designing an abuse-resistant algorithm for ranking content by a community's perception of its quality, then let's make one adjustment to fairly prioritize the latest and greatest over the tried and true.

To Judge Popularity, First Assess Quality

We'll start with an algorithm used by IMDB for their Top 250 movies, and it looks like this:

$$W = \dfrac{Rv + Cm}{v+m}$$

$R$ = the average score for a particular item
$v$ = the number of users who voted for a particular item
$C$ = the average of all the votes in the database
$m$ = the minimum number of votes to qualify
$W$ = the weighted rating (what we're solving for)

This has an interesting property: the weighted rating ($W$) for a particular item is dragged closer to mediocrity ($C$) until it achieves a lot of votes that push it further away from the average. In mathematical terms:

As $v$ approaches infinity, $W$ approaches $R$.
As $v$ approaches zero, $W$ approaches $C$.

The IMDB algorithm is quite powerful: in IMDB's case, it prevents a new movie with 30,000 votes, all of them 10 out of 10, from scoring higher than a movie with over 500,000 votes (most of which are 10/10). However, in the absence of outside mitigations or manual intervention, this algorithm invites the possibility of automated fraudulent voting. All votes are treated equally, whether or not they are legitimate (although in IMDB's case, they only count active users' votes). What if, instead, we allowed the community to self-select the users whose votes should count more?

The Karmatic Quality Formula

Popular news websites such as Hacker News and Reddit employ a karma system, which in simple terms tallies all of the upvotes and downvotes a user has received from their peers.
Since the karma for any given user is known, and the average karma for all active users is knowable, we can therefore weigh each user's votes by a simple ratio of the two:

$$ r_i = \dfrac{k_i}{\bar k} $$

Now that we have a weight for each user's votes, let's modify the IMDB Algorithm above to include karma ratios instead of a simple average.

$$ K = \frac{\sum_0^{v-1}{{s_i}{r_i}}}{\sum_0^{v-1}{r_i}} $$

$$ W = \dfrac{Kv + Cm}{v+m} $$

$s_i$ = a particular score from a particular vote
$r_i$ = the user's karma ratio
$K$ = weighted average

Quick Example

Let's say we have 10,000 active users with an average karma of 100. An attacker, who wants to push something terrible to the front page (e.g. to profit from ad impressions), controls 1,000 fake accounts (karma == 1) and has them all vote 10/10 on their spam submission. 50 legitimate average users (karma == 100) rate the spam submission 0/10.

What's the result of $K$? Each fake vote carries weight $r_i = 1/100$, so $K = (1000 \times 10 \times 0.01 + 50 \times 0 \times 1)/(1000 \times 0.01 + 50 \times 1) = 100/60 \approx 1.67$ out of 10.

What is the result of $W$ if $m = 100$ and $C = 6$? With $v = 1050$ votes in total, $W = (1.67 \times 1050 + 6 \times 100)/(1050 + 100) \approx 2.04$ out of 10.

Despite the attacker controlling roughly 10% of the voting population, a handful of high-karma users (as decided by their peers) can effectively demote spam submissions just by giving them a low score: the submission lands far below the database average of 6. Combined with CAPTCHAs and other spam-fighting solutions to make automated account registration more difficult, we can greatly reduce the potential impact of a successful spam campaign. But we still haven't quantified popularity.

An Algorithm for Popularity

The Karmatic Quality Formula can provide a reasonable estimate of the quality of some piece of content, but we're interested in what's good right this very second. Our solution is straightforward: $K_p$ is similar to our calculation for $K$ above, except we also multiply each $s_i r_i$ by one more term to add an exponential decay:

$$ K_p = \frac{\sum_0^{v-1}{{s_i}{r_i}{e^{D(t_i - t_{now})}}}}{\sum_0^{v-1}{r_i}} $$

$D$ = the magnitude of the exponential decay (a positive constant)
$t_{now}$ = the current moment in time (e.g. UNIX timestamp)
$t_i$ = the moment in time a particular vote was cast
$K_p$ = the karmatic rating with a decay, for the purpose of calculating popularity

Since $t_i$ will always be less than $t_{now}$, the result of $e^{D(t_i - t_{now})}$ will always be less than or equal to 1.0. Finally we can determine the popularity of a particular article, $P$:

$$ P = \dfrac{{K_p}{v} + Cm}{v+m} $$

$K_p$ = karma-weighted average, decayed for popularity
$P$ = the popularity score

Now we have a concise mathematical definition for popularity. In part two, we will implement this algorithm in PostgreSQL.
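Ahead of the PostgreSQL implementation promised for part two, here is a minimal Python sketch of the formulas above; the function and parameter names are ours for illustration only.

import math
import time

def popularity(votes, avg_karma, C, m, D, now=None):
    """P = (Kp*v + C*m) / (v + m), where Kp is the karma-weighted,
    exponentially decayed average score.
    votes: list of (score, karma, timestamp) tuples."""
    now = time.time() if now is None else now
    num = den = 0.0
    for score, karma, ts in votes:
        r = karma / avg_karma                    # karma ratio r_i
        num += score * r * math.exp(D * (ts - now))
        den += r
    kp = num / den if den else 0.0
    v = len(votes)
    return (kp * v + C * m) / (v + m)

# The spam example above: 1,000 low-karma 10/10 votes vs. 50 high-karma
# 0/10 votes; D = 0 disables the decay so that Kp reduces to K.
now = time.time()
votes = [(10, 1, now)] * 1000 + [(0, 100, now)] * 50
print(popularity(votes, avg_karma=100, C=6, m=100, D=0.0))   # ~2.04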
Spectrum sharing on interference channels with a cognitive relay

Qiang Li, Ashish Pandharipande, See Ho Ting & Xiaohu Ge

Abstract

In this paper, an interference channel with a cognitive relay (IFC-CR) is considered to achieve spectrum sharing between a licensed primary user and an unlicensed secondary user. The CR assists both users in relaying their messages to the respective receivers, under the constraint that the performance of the legacy primary user is not degraded. Without requiring any non-causal knowledge, the CR uses successive interference cancellation to first decode the primary and secondary messages after a transmission phase. A power allocation is then performed to forward a linear weighted combination of the processed signals in the relaying phase. Closed-form expressions of the end-to-end outage probability are derived for both primary and secondary users under the proposed approach. Furthermore, by exploiting the decoded primary and secondary messages in the first phase, we propose the use of dirty paper coding (DPC) at CR to pre-cancel the interference seen at the secondary (or primary) receiver in the second phase, which results in a performance upper bound for the secondary (or primary) user without affecting the other user. Simulation results demonstrate that with a joint consideration of the power control at the secondary transmitter and the power allocation at CR, performance gains can be achieved for both primary and secondary users.

Background and related work

Consider a spectrum sharing system, as shown in Fig. 1, where two interfering users co-exist with a cognitive relay in the same frequency band. For both users to operate properly, the cognitive relay [1, 2] assists in forwarding the messages from both transmitters to their respective receivers [3–6] and at the same time coordinates the mutual interference. This constitutes an interference channel with a cognitive relay (IFC-CR), which has been intensively studied from an information-theoretic perspective [7–15].

Fig. 1 An interference channel with a cognitive relay (IFC-CR) where the CR has perfect non-causal knowledge of the messages originated from both transmitters

A two-user symmetric Gaussian IFC-CR was first introduced in [7], where the CR was assumed to be full duplex and to adopt decode-and-forward (DF) processing. Through rate splitting [16] at both sources and joint decoding at each destination, an achievable rate region was obtained. Then, this achievable rate was improved in [8] by performing sophisticated coding strategies that require non-causal information of both transmitters at the CR prior to information transmission. By combining the Han-Kobayashi coding scheme [16] for interference channels with dirty paper coding (DPC) [17], a generalization of the achievable rate region obtained in [8] was derived in [9]. In [10], an outer bound for the capacity of a general IFC-CR was first derived. New inner and outer bounds for the capacity of IFC-CR were derived later in [11–15], under various conditions. A Gaussian interference channel with an out-of-band relay was investigated in [18, 19], where the relay was assumed to operate over bands orthogonal to the underlying interference channel. In [18], the entire system was characterized by two parallel channels, namely a Gaussian interference channel and a Gaussian relay channel. To characterize the capacity, relay operations were optimized with separable or nonseparable encoding between the interference channel and the out-of-band relay channel.
In [19], the impact of the out-of-band relay channel and the corresponding signal interactions on the capacity was investigated under general channel conditions.

In the above works, the two interfering users are assumed to be peer users in the same radio system. A question that arises is: what if the two interfering users belong to different radio systems with different priorities? In view of the mutual interference between the two users and the inherent cognition and cooperation abilities of the CR, it is natural to evaluate the IFC-CR under a cognitive spectrum sharing setup between, e.g., a licensed primary user and an unlicensed secondary user [20–23]. Under such circumstances, we assume that the CR belongs to the secondary system or to a third-party agent. Then, instead of characterizing the capacity or sum rate of the entire system as in [7–15], it is more pragmatic to enhance the performance of the secondary user under the constraint that no harm is caused to the legacy primary user [24–26].

A spectrum sharing protocol was proposed on the interference channel in [26]. With the assumption that the secondary transmitter has non-causal knowledge of the codewords originating at the primary transmitter, achievable rates of the secondary user were characterized under the constraint that no rate degradation was created for the primary system. As a variant of [26], a spectrum sharing protocol was proposed between a primary and a secondary user on an IFC-CR in [27]. With the assumption that non-causal knowledge of the primary codewords is available at both the secondary transmitter and the CR, an enhanced throughput was achieved for the secondary user without degrading the throughput of the primary user. In [28], a spectrum sharing protocol was proposed on an IFC-CR where the CR helps both the primary and secondary transmissions. A DF relay protocol was considered in which the primary and secondary messages are forwarded in the second phase, with a certain power allocation, only when both are successfully decoded at CR. Conditioned on the decoding results at CR, the received SNR at each receiver was analyzed, through which an upper bound of the outage probability was derived. In [29], an opportunistic adaptive relaying protocol is proposed on the IFC-CR, where CR is able to determine when to cooperate with the primary user, when to cooperate with the secondary user, and when to cooperate with both users simultaneously. An upper bound of the secondary outage probability was derived under a primary outage probability threshold. In [30], an amplify-and-forward (AF) relay protocol was performed at CR to help relay the signals of both primary and secondary users over independent Nakagami-m fading channels. Assuming there are no cross links between primary and secondary users, end-to-end outage probabilities of the primary and secondary users were obtained. Simulation results demonstrated a performance gain for both primary and secondary users.

Our contributions

Motivated by the above works, we propose a causal cognitive spectrum sharing protocol on a fading IFC-CR [31]. As depicted in Fig. 2, both the primary transmitter (PT) and the secondary transmitter (ST) transmit simultaneously in the first phase. Without requiring non-causal knowledge of the messages from PT or ST, CR attempts to decode both messages using successive interference cancellation (SIC) [32].
To compensate for the interference seen at the primary receiver (PR) that is caused by cross talk, we consider a power allocation scheme at CR. To be specific, a hybrid AF-DF relay protocol [33] is considered such that when at least one of the two messages is successfully decoded, a fraction α, 0<α<1, of the transmit power of CR is used to forward the primary signal, with the remaining power used to forward the secondary signal, in the second phase. If, however, neither of the messages is decoded, the CR simply stays silent and both PT and ST perform a retransmission simultaneously in the second phase.

Fig. 2 The proposed causal cognitive spectrum sharing protocol on an IFC-CR where the entire transmission process is divided into two sequential transmission phases

At the end of the second phase, by exploiting the received signals in the two phases, maximal-ratio combining (MRC) is employed at PR to decode the desired message. Without incurring additional overhead to the legacy primary system, we assume that PR is only aware of a relay terminal but oblivious of the secondary system [20–23]; thus, the component of the secondary signal is simply treated as noise at PR [24–26]. This provides a performance lower bound for the primary system compared to the cases with sophisticated decoding strategies. On the other hand, at the secondary receiver (SR), by exploiting the received signals in the two phases, SIC is employed to decode the desired message.

The contributions of this paper are summarized as follows.

In decoding the mixed signals using SIC, we define an event to describe whether a specific message can be successfully recovered. In order to illustrate the correlation between the successive events in decoding mixed signals, we introduce a graphical representation by which each event can be represented by the corresponding region in a 2-dimensional (or 3-dimensional) graph. On this basis, by integrating over the respective regions of events, accurate closed-form expressions of the end-to-end outage probability can be derived for both primary and secondary users under the proposed protocol.

Without requiring non-causal knowledge, CR attempts to decode both primary and secondary messages after the first transmission phase in the proposed protocol. For the case where both messages are successfully recovered at CR after the first transmission phase, in order to further mitigate the mutual interference, we propose using DPC at CR to pre-cancel the interference seen at PR or SR in the subsequent relaying phase. Numerical results demonstrate a performance upper bound for the primary (or secondary) user, without affecting the performance of the other user.

To guarantee that no harm is caused to the primary system, besides the power allocation performed at CR to forward the primary and secondary messages respectively, we find that power control at ST is also needed to facilitate the SIC decoding at CR as well as to limit the interference caused to PR. Numerical results demonstrate that with a proper design of the power allocation at CR and the transmit power at ST, the secondary user is allowed to access the licensed spectrum and at the same time performance gains can be achieved for both primary and secondary systems.

The rest of this paper is organized as follows. Section 2 describes the system model, where the two successive phases are discussed and the end-to-end outage probability of the IFC-CR is defined.
In Section 3, based on the possible decoding results at CR in the first transmission phase, the corresponding performance at PR and SR in the second phase is analyzed. By exploiting the decoded messages, in Section 4 we propose using DPC at CR, which provides a performance upper bound. Simulation results are presented in Section 5, where the effects of different parameters are evaluated. Finally, Section 6 concludes the paper.

2 System model and protocol description

As shown in Fig. 2, we consider an IFC-CR where an ST/SR pair co-exists with a PT/PR pair in the same frequency band with the assistance of a CR. It is assumed that all nodes operate in half-duplex mode and the channels experience independent block Rayleigh fading. For notational simplicity, we let \(h_{pp}\) and \(h_{ss}\) denote the coefficients of the direct channels from PT →PR and ST →SR, let \(h_{ps}\) and \(h_{sp}\) denote the coefficients of the cross interfering channels from PT →SR and ST →PR, and let \(h_{pr}\), \(h_{sr}\), \(h_{rp}\), and \(h_{rs}\) denote the coefficients of the relay channels from PT →CR, ST →CR, CR →PR, and CR →SR, respectively. Then, we have the channel coefficient \(h_{ij}\sim \mathcal {CN}\left (0,\delta _{ij}^{-1}\right)\), where \(ij\in \{pp,ss,ps,sp,pr,sr,rp,rs\}\) and \(\delta _{ij}^{-1}\) denotes the corresponding average channel power gain. By letting \(\gamma _{ij}=|h_{ij}|^{2}\), \(\gamma _{ij}\) follows an exponential distribution, \(\gamma _{ij}\sim \exp (\delta _{ij})\) [34]. For ease of exposition, we assume that the additive white Gaussian noise (AWGN) \(n_{0}\) has zero mean and unit variance at each receiver. Any channel can be reduced to this normalized form. The transmit powers at PT, ST, and CR are denoted as \(P_{P}\), \(P_{S}\), and \(P_{R}\), respectively. \(x_{p}\) and \(x_{s}\) denote the messages originated at PT and ST, with target rates \(R_{pt}\) and \(R_{st}\), respectively. For easy reference, we summarize the abbreviations, notations, and symbols that appear in this paper in Table 1.

Table 1 A summary of abbreviations, notations, and symbols

As shown in Fig. 2, in order to maintain the causality of the system, we divide the entire transmission process into two phases, as discussed in the following.

2.1 First transmission phase

In the first phase, both PT and ST transmit their respective messages simultaneously. Then, the corresponding received signal at PR, SR, and CR is given as $$ y_{j}(1)=h_{pj}\sqrt{P_{P}}x_{p}+h_{sj}\sqrt{P_{S}}x_{s}+n_{0},~~~j\in\{p,s,r\}. $$ In order to recover \(x_{p}\) and \(x_{s}\), which are mixed together, SIC is performed at CR. Then, for the decoding results of \(x_{p}\) and \(x_{s}\), we define the following possible events that are mutually exclusive: \(\mathcal {E}^{(1)}=\{\)Both \(x_{p}\) and \(x_{s}\) are successfully decoded at CR\(\}\); \(\mathcal {E}^{(2)}=\{\)Only \(x_{p}\) is successfully decoded at CR\(\}\); \(\mathcal {E}^{(3)}=\{\)Only \(x_{s}\) is successfully decoded at CR\(\}\); \(\mathcal {E}^{(4)}=\{\)Neither of \(x_{p}\) and \(x_{s}\) is successfully decoded at CR\(\}\). The corresponding probabilities are defined as \(\Pr \left \{\mathcal {E}^{(1)}\right \}\), \(\Pr \left \{\mathcal {E}^{(2)}\right \}\), \(\Pr \left \{\mathcal {E}^{(3)}\right \}\), and \(\Pr \left \{\mathcal {E}^{(4)}\right \}\), which will be derived in Section 3.1.

2.2 Second transmission phase

When at least one of \(x_{p}\) and \(x_{s}\) is successfully decoded by CR, a power allocation is performed to forward a linear weighted combination of \(x_{p}\) and \(x_{s}\) in the second phase. Otherwise, CR simply stays silent and both PT and ST perform retransmissions simultaneously.
2.2.1 Conditioned on event \(\mathcal {E}^{(1)}\)

In the second phase, CR broadcasts a composite message $$ x_{r}=\sqrt{\alpha P_{R}}x_{p}+\sqrt{(1-\alpha)P_{R}}x_{s}, $$ where α∈(0,1) is the power allocation factor for relaying the primary message \(x_{p}\). Then, the corresponding received signal at PR and SR is given as $$ y_{j}(2)=h_{rj}\sqrt{\alpha P_{R}}x_{p}+h_{rj}\sqrt{(1-\alpha)P_{R}}x_{s}+n_{0},~~~j\in\{p,s\}. $$

2.2.2 Conditioned on event \(\mathcal {E}^{(2)}\)

Since only \(x_{p}\) is successfully decoded, CR decodes-and-forwards \(x_{p}\) while amplifying-and-forwarding the residual secondary component of its received signal, i.e., $$ \begin{aligned} x_{r}&=\sqrt{\alpha P_{R}}x_{p}+\beta \left(h_{sr}\sqrt{P_{S}}x_{s}+n_{0}\right),~~~\textmd{where}\\ \beta&=\sqrt{\frac{(1-\alpha)P_{R}}{\gamma_{sr}P_{S}+1}}. \end{aligned} $$ Then, the corresponding received signal at PR and SR is given as $$ \begin{aligned} y_{j}(2)=&h_{rj}\sqrt{\alpha P_{R}}x_{p}\,+\,h_{rj}\beta \left(h_{sr}\sqrt{P_{S}}x_{s}+n_{0}\right)\\&+n_{0},~~~j\in\{p,s\}. \end{aligned} $$

2.2.3 Conditioned on event \(\mathcal {E}^{(3)}\)

Similarly, since only \(x_{s}\) is successfully decoded, CR amplifies-and-forwards the residual primary component while decoding-and-forwarding \(x_{s}\), i.e., $$ \begin{aligned} x_{r}&=\beta\left(h_{pr}\sqrt{P_{P}}x_{p}+n_{0}\right)+\sqrt{(1-\alpha)P_{R}}x_{s},~~~\textmd{where}\\ \beta&=\sqrt{\frac{\alpha P_{R}}{\gamma_{pr}P_{P}+1}}. \end{aligned} $$ Then, the corresponding received signal at PR and SR is given as $$ \begin{aligned} y_{j}(2)=&\,h_{rj}\beta\left(h_{pr}\sqrt{P_{P}}x_{p} \;+\; n_{0}\right)\\&+\,h_{rj}\sqrt{(1-\alpha)P_{R}}x_{s}+n_{0},~~~j\in\{p,s\}. \end{aligned} $$

2.2.4 Conditioned on event \(\mathcal {E}^{(4)}\)

CR simply stays silent, and PT and ST retransmit \(x_{p}\) and \(x_{s}\), respectively, in the second phase. Then, the corresponding received signal at PR and SR is given as $$ y_{j}(2)=h_{pj}\sqrt{P_{P}}x_{p}+h_{sj}\sqrt{P_{S}}x_{s}+n_{0},~~~j\in\{p,s\}. $$

2.3 End-to-end performance

For the decoding at PR at the end of the second phase, MRC is performed to decode \(x_{p}\) by utilizing the received signals in the two successive phases, i.e., \(y_{p}(1)\) and \(y_{p}(2)\), while treating the secondary component of \(x_{s}\) simply as noise. Then, depending on the decoding results \(\mathcal {E}^{(1)}\), \(\mathcal {E}^{(2)}\), \(\mathcal {E}^{(3)}\), and \(\mathcal {E}^{(4)}\) at CR at the end of the first phase, we define \(O_{P}^{(1)}\), \(O_{P}^{(2)}\), \(O_{P}^{(3)}\), and \(O_{P}^{(4)}\) as the corresponding outage probabilities at PR at the end of the second phase. On the other hand, SIC is performed at SR to decode the desired message \(x_{s}\) by utilizing both received signals \(y_{s}(1)\) and \(y_{s}(2)\). Similarly, we define \(O_{S}^{(1)}\), \(O_{S}^{(2)}\), \(O_{S}^{(3)}\), and \(O_{S}^{(4)}\) as the corresponding outage probabilities at SR at the end of the second phase.

Theorem 1. In the proposed spectrum sharing protocol on the IFC-CR, taking into account all possible decoding results at CR at the end of the first phase, the overall end-to-end outage probabilities of the primary and secondary systems can be respectively derived as $$\begin{array}{@{}rcl@{}} O_{P}&=& \Pr\left\{\mathcal{E}^{(1)}\right\}O_{P}^{(1)} + \Pr\left\{\mathcal{E}^{(2)}\right\}O_{P}^{(2)} + \Pr\left\{\mathcal{E}^{(3)}\right\}O_{P}^{(3)}\\ &&+ \Pr\left\{\mathcal{E}^{(4)}\right\}O_{P}^{(4)}, \end{array} $$ $$\begin{array}{@{}rcl@{}} O_{S}&=& \Pr\left\{\mathcal{E}^{(1)}\right\}O_{S}^{(1)} + \Pr\left\{\mathcal{E}^{(2)}\right\}O_{S}^{(2)} + \Pr\left\{\mathcal{E}^{(3)}\right\}O_{S}^{(3)} \\ &&+ \Pr\left\{\mathcal{E}^{(4)}\right\}O_{S}^{(4)}. \end{array} $$

Next, we proceed to analyze the decoding performance at CR, as well as at PR and SR, in the two successive phases.

3.1 Decoding performance at CR in the first phase

At the end of the first phase, CR attempts to decode both \(x_{p}\) and \(x_{s}\) from \(y_{r}(1)\) using SIC. From (1), if the power level of \(x_{p}\) is higher than that of \(x_{s}\), then CR attempts to decode \(x_{p}\) first by considering the component of \(x_{s}\) as noise.
Defining \(C(\Lambda)=\frac {1}{2}\log _{2}\left (1+\Lambda \right)\), the achievable rate of \(x_{p}\) is given as $$ R_{1,p}=C\left(\frac{P_{P}\gamma_{pr}}{P_{S}\gamma_{sr}+1}\right). $$ Thus, \(x_{p}\) can be decoded first if event $$ \mathcal{E}_{1,p}=\left\{R_{1,p}\geq R_{pt}\right\}=\left\{\gamma_{pr}\geq\frac{R_{pt}^{\prime}P_{S}}{P_{P}}\gamma_{sr}+\frac{R_{pt}^{\prime}}{P_{P}}\right\} $$ occurs, where \(R_{pt}^{\prime }=2^{2R_{pt}}-1\). By reconstructing and removing the component of \(x_{p}\) from \(y_{r}(1)\), the achievable rate of the remaining \(x_{s}\) is given as $$ R_{2,s}=C\left(P_{S}\gamma_{sr}\right), $$ which can be decoded successively if event $$ \mathcal{E}_{2,s}=\left\{R_{2,s}\geq R_{st}\right\}=\left\{\gamma_{sr}\geq\frac{R_{st}^{\prime}}{P_{S}}\right\} $$ occurs, where \(R_{st}^{\prime }=2^{2R_{st}}-1\).

Conversely, if the power level of \(x_{s}\) is higher than that of \(x_{p}\), then CR attempts to decode \(x_{s}\) first by considering the component of \(x_{p}\) as noise. Then, the achievable rate of \(x_{s}\), which is subject to the interference from \(x_{p}\), is given as $$ R_{1,s}=C\left(\frac{P_{S}\gamma_{sr}}{P_{P}\gamma_{pr}+1}\right), $$ which can be decoded first if event $$ \mathcal{E}_{1,s}=\left\{R_{1,s}\geq R_{st}\right\}=\left\{\gamma_{pr}\leq\frac{P_{S}}{R_{st}^{\prime}P_{P}}\gamma_{sr}-\frac{1}{P_{P}}\right\} $$ occurs. Similarly, by reconstructing and removing the component of \(x_{s}\) from \(y_{r}(1)\), the achievable rate of the remaining \(x_{p}\) is given as $$ R_{2,p}=C\left(P_{P}\gamma_{pr}\right), $$ which can be decoded successively if event $$ \mathcal{E}_{2,p}=\left\{R_{2,p}\geq R_{pt}\right\}=\left\{\gamma_{pr}\geq\frac{R_{pt}^{\prime}}{P_{P}}\right\} $$ occurs.

Thus, from (12), (14), (16), and (18), all possible decoding results at CR using SIC can be expressed as $$\begin{array}{@{}rcl@{}} \mathcal{E}^{(1)}&=&\left\{\left(\mathcal{E}_{1,p}\cap \mathcal{E}_{2,s}\right) \cup \left(\mathcal{E}_{1,s}\cap \mathcal{E}_{2,p}\right) \right\}, \end{array} $$ ((19a)) $$\begin{array}{@{}rcl@{}} \mathcal{E}^{(2)}&=&\left\{\mathcal{E}_{1,p}\cap\bar{\mathcal{E}}_{2,s}\right\}, \end{array} $$ ((19b)) $$\begin{array}{@{}rcl@{}} \mathcal{E}^{(3)}&=&\left\{\mathcal{E}_{1,s}\cap\bar{\mathcal{E}}_{2,p}\right\}, \end{array} $$ ((19c)) $$\begin{array}{@{}rcl@{}} \mathcal{E}^{(4)}&=&\left\{\bar{\mathcal{E}}_{1,p}\cap\bar{\mathcal{E}}_{1,s}\right\}. \end{array} $$ ((19d))

Taking a close look at (12), (14), (16), and (18), since the multiple-access channels \(h_{sr}\) and \(h_{pr}\) are independent of each other, we can draw a 2-dimensional graph of \(\gamma _{sr}\) and \(\gamma _{pr}\) where the events defined in (19a)–(19d) are represented by their respective regions, as shown in Fig. 3. Here, we assume that \(R_{pt},R_{st}\geq 1\) such that there is no intersection between the regions of events \(\mathcal {E}_{1,p}\) and \(\mathcal {E}_{1,s}\).
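As a concrete illustration of the decision logic in (12), (14), (16), and (18), the following Python sketch (the function and its structure are ours, not part of the paper) classifies a channel realization \((\gamma _{pr},\gamma _{sr})\) into one of the four events of (19a)–(19d):

def sic_event(g_pr, g_sr, P_P, P_S, Rp, Rs):
    """Return 1..4 for events E(1)..E(4) of (19a)-(19d).
    Rp and Rs are the thresholds R'_pt = 2^(2R_pt) - 1 and R'_st = 2^(2R_st) - 1."""
    e1p = g_pr >= Rp * (P_S * g_sr + 1) / P_P    # decode x_p first, per (12)
    e2s = g_sr >= Rs / P_S                       # then x_s, per (14)
    e1s = g_sr >= Rs * (P_P * g_pr + 1) / P_S    # decode x_s first, per (16)
    e2p = g_pr >= Rp / P_P                       # then x_p, per (18)
    if (e1p and e2s) or (e1s and e2p):
        return 1                                 # both messages decoded
    if e1p:
        return 2                                 # only x_p decoded
    if e1s:
        return 3                                 # only x_s decoded
    return 4                                     # neither decoded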
With the respective regions defined by \(\left \{\gamma _{sr},\gamma _{pr}\right \}\), the corresponding probability of each event can thus be obtained by $$ \begin{aligned} &\int\int_{\left\{\gamma_{sr},\gamma_{pr}\right\}}f(\gamma_{sr},\gamma_{pr})d\gamma_{pr}d\gamma_{sr}\\ &\quad=\int_{\left\{\gamma_{sr}\right\}}f(\gamma_{sr})\left[\int_{\left\{\gamma_{pr}\right\}}f(\gamma_{pr})d\gamma_{pr}\right]d\gamma_{sr}, \end{aligned} $$ where \(f(\gamma _{sr})=\delta _{sr}e^{-\delta _{sr}\gamma _{sr}}\) and \(f(\gamma _{pr})=\delta _{pr}e^{-\delta _{pr}\gamma _{pr}}\) denote the respective probability density functions (PDF) of \(\gamma _{sr}\) and \(\gamma _{pr}\), and \(f(\gamma _{sr},\gamma _{pr})=f(\gamma _{sr})f(\gamma _{pr})\) denotes the joint PDF [34].

Fig. 3 Graphical representations of events \(\mathcal {E}^{(1)}\), \(\mathcal {E}^{(2)}\), \(\mathcal {E}^{(3)}\), and \(\mathcal {E}^{(4)}\) at CR at the end of the first phase

Lemma 1. By employing SIC at CR to decode both \(x_{p}\) and \(x_{s}\), the respective probabilities of the events defined in (19a)–(19d) can be obtained as $$\begin{array}{@{}rcl@{}} \Pr\left\{\mathcal{E}^{(1)}\right\}&=&\frac{\delta_{sr}e^{-\left(\frac{\delta_{sr}R_{st}^{\prime}} {P_{S}}+\frac{\delta_{pr}R_{pt}^{\prime}\left(1+R_{st}^{\prime}\right)}{P_{P}}\right)}} {\delta_{sr}+\frac{\delta_{pr}R_{pt}^{\prime}P_{S}}{P_{P}}}\\ &+\frac{\delta_{pr}e^{-\left(\frac{\delta_{pr}R_{pt}^{\prime}} {P_{P}}+\frac{\delta_{sr}R_{st}^{\prime}(1+R_{pt}^{\prime})}{P_{S}}\right)}} {\delta_{pr}+\frac{\delta_{sr}R_{st}^{\prime}P_{P}}{P_{S}}}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \Pr\left\{\mathcal{E}^{(2)}\right\}&=&\frac{\delta_{sr}\!\left[\!e^{-\frac{\delta_{pr}R_{pt}^{\prime}} {P_{P}}}-e^{-\left(\frac{\delta_{sr}R_{st}^{\prime}}{P_{S}}+\frac{\delta_{pr}R_{pt}^{\prime} (1+R_{st}^{\prime})}{P_{P}}\right)}\!\right]}{\delta_{sr}+\frac{\delta_{pr}R_{pt}^{\prime}P_{S}} {P_{P}}}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \Pr\left\{\mathcal{E}^{(3)}\right\}&=&\frac{\delta_{pr}\!\left[\!e^{-\frac{\delta_{sr}R_{st}^{\prime}} {P_{S}}}-e^{-\left(\frac{\delta_{pr}R_{pt}^{\prime}}{P_{P}}+\frac{\delta_{sr}R_{st}^{\prime} (1+R_{pt}^{\prime})}{P_{S}}\right)}\!\right]}{\delta_{pr}+\frac{\delta_{sr}R_{st}^{\prime}P_{P}} {P_{S}}}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \Pr\left\{\mathcal{E}^{(4)}\right\}&=&1-\frac{\delta_{sr}e^{-\frac{\delta_{pr}R_{pt}^{\prime}} {P_{P}}}}{\delta_{sr}+\frac{\delta_{pr}R_{pt}^{\prime}P_{S}}{P_{P}}}- \frac{\delta_{pr}e^{-\frac{\delta_{sr}R_{st}^{\prime}}{P_{S}}}}{\delta_{pr}+\frac{\delta_{sr} R_{st}^{\prime}P_{P}}{P_{S}}}. \end{array} $$

See Appendix A for the detailed derivations.

3.2 Decoding performance at PR in the second phase

Together with (1) and (3), MRC is performed at PR to decode the desired message \(x_{p}\). Then, the corresponding achievable rate is given as $$\begin{array}{@{}rcl@{}} R_{p}^{(1)}&=&C\left(\frac{P_{P}\gamma_{pp}}{P_{S}\gamma_{sp}+1}+\frac{\alpha}{1-\alpha+\frac{1}{P_{R}\gamma_{rp}}}\right)\\ &\approx&C\left(\frac{P_{P}\gamma_{pp}}{P_{S}\gamma_{sp}+1}+\frac{\alpha}{1-\alpha}\right), \end{array} $$ where the approximation is obtained assuming \(P_{R}\gg 1\) [24, 25].
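Lemma 1 can be sanity-checked numerically. The sketch below (our code; the parameter values are arbitrary illustrative choices, not taken from the paper's simulation section) draws exponentially distributed channel gains, classifies each draw with the sic_event function above, and compares the empirical frequency of event \(\mathcal {E}^{(4)}\) against the closed form (21d):

import math
import random

P_P, P_S = 10.0, 5.0               # transmit powers (assumed values)
d_pr, d_sr = 1.0, 1.0              # delta_pr, delta_sr (assumed values)
Rp = 2 ** (2 * 1.0) - 1            # R'_pt for R_pt = 1 bit/s/Hz
Rs = 2 ** (2 * 0.5) - 1            # R'_st for R_st = 0.5 bit/s/Hz

N = 200_000
counts = [0, 0, 0, 0]
for _ in range(N):
    g_pr = random.expovariate(d_pr)
    g_sr = random.expovariate(d_sr)
    counts[sic_event(g_pr, g_sr, P_P, P_S, Rp, Rs) - 1] += 1

p4 = (1                            # closed form (21d)
      - d_sr * math.exp(-d_pr * Rp / P_P) / (d_sr + d_pr * Rp * P_S / P_P)
      - d_pr * math.exp(-d_sr * Rs / P_S) / (d_pr + d_sr * Rs * P_P / P_S))
print(counts[3] / N, p4)           # the two values should agree closely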
Conditioned on event \(\mathcal {E}^{(1)}\) that both \(x_{p}\) and \(x_{s}\) are successfully decoded at CR at the end of the first phase, the corresponding outage probability at PR at the end of the second phase can be derived as $$\begin{array}{@{}rcl@{}} O_{P}^{(1)}&=&\Pr\left\{R_{p}^{(1)}<R_{pt}\right\}\\ &\approx& \left\{\begin{array}{ll} 0,&\alpha\geq\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\\ 1-\frac{\delta_{sp}e^{-\frac{\delta_{pp}\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)}{P_{P}} }}{\delta_{sp}+\frac{\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}\delta_{pp}}{P_{P}} }, &\alpha<\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\\ \end{array}\right.. \end{array} $$ See Appendix B for the detailed derivations.

Together with (1) and (5), by employing MRC at PR, the achievable rate of \(x_{p}\) is given as $$\begin{array}{@{}rcl@{}} R_{p}^{(2)}&=&C\left(\frac{P_{P}\gamma_{pp}}{P_{S}\gamma_{sp}+1}+\frac{\alpha P_{R}\gamma_{rp}}{\gamma_{rp}\beta^{2}(P_{S}\gamma_{sr}+1)+1}\right)\\ &=&C\left(\frac{P_{P}\gamma_{pp}}{P_{S}\gamma_{sp}+1}+\frac{\alpha }{(1-\alpha)+\frac{1}{P_{R}\gamma_{rp}}}\right),~~~~ \end{array} $$ which is exactly the same as (22). This is because in both cases, \(x_{p}\) is successfully decoded and forwarded by CR with power \(\alpha P_{R}\). Thus, the corresponding outage probability at PR at the end of the second phase is $$ O_{P}^{(2)}=O_{P}^{(1)}. $$

Similarly, conditioned on event \(\mathcal {E}^{(3)}\), together with (1) and (7), the achievable rate of \(x_{p}\) at PR is given as $$\begin{array}{@{}rcl@{}} R_{p}^{(3)}&=&C\left(\frac{P_{P}\gamma_{pp}}{P_{S}\gamma_{sp}+1}+\frac{\gamma_{rp}\beta^{2}P_{P}\gamma_{pr}}{(1-\alpha)P_{R}\gamma_{rp}+\gamma_{rp}\beta^{2}+1}\right)\\ &=&C\left(\frac{P_{P}\gamma_{pp}}{P_{S}\gamma_{sp}+1}\right.\\ &&+\left.\frac{\alpha}{(1-\alpha)+\frac{1}{P_{P}\gamma_{pr}}+\frac{1}{P_{R}\gamma_{rp}}+\frac{1}{P_{P}\gamma_{pr}P_{R}\gamma_{rp}}}\right)\\ &\approx&C\left(\frac{P_{P}\gamma_{pp}}{P_{S}\gamma_{sp}+1}+\frac{\alpha}{1-\alpha}\right), \end{array} $$ where the approximation in (26) is obtained assuming \(P_{P},P_{R}\gg 1\) [24, 25]. Since (26) is of the same form as (22), the corresponding outage probability at PR at the end of the second phase is $$ O_{P}^{(3)}\approx O_{P}^{(1)}. $$

Finally, conditioned on event \(\mathcal {E}^{(4)}\), together with (1) and (8), MRC over the two identical transmissions yields $$ R_{p}^{(4)}=C\left(\frac{2P_{P}\gamma_{pp}}{P_{S}\gamma_{sp}+1}\right). $$ Conditioned on event \(\mathcal {E}^{(4)}\) that neither of \(x_{p}\) and \(x_{s}\) is successfully decoded at CR at the end of the first phase, the corresponding outage probability at PR at the end of the second phase can be derived as $$ O_{P}^{(4)}=\Pr\left\{R_{p}^{(4)}<R_{pt}\right\}=1-\frac{\delta_{sp}e^{-\frac{\delta_{pp}R_{pt}^{\prime}}{2P_{P}}}}{\delta_{sp}+\frac{\delta_{pp}R_{pt}^{\prime}P_{S}}{2P_{P}}}. $$ See Appendix C for the detailed derivations.

Substituting (23), (25), (27), and (29) into (9), we can thus obtain the end-to-end outage probability \(O_{P}\) of the primary system. Taking a close look at (23), (25), and (27), we see that \(O_{P}^{(1)}\), \(O_{P}^{(2)}\), and \(O_{P}^{(3)}\) all approach 0 when \(\alpha \geq \frac {R_{pt}^{\prime }}{R_{pt}^{\prime }+1}\). Then, from (9), the term \(\Pr \{\mathcal {E}^{(4)}\}O_{P}^{(4)}\) dominates the end-to-end outage probability \(O_{P}\). In other words, the term \(\Pr \{\mathcal {E}^{(4)}\}O_{P}^{(4)}\) constitutes a lower bound on \(O_{P}\).
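In the same spirit, (29) is straightforward to verify by simulation; the snippet below (ours, reusing the illustrative parameters above plus assumed values for the PR-side channel parameters) estimates the outage of \(R_{p}^{(4)}\) in (28):

d_pp, d_sp = 1.0, 2.0              # delta_pp, delta_sp (assumed values)
outages = 0
for _ in range(N):
    g_pp = random.expovariate(d_pp)
    g_sp = random.expovariate(d_sp)
    sinr = 2 * P_P * g_pp / (P_S * g_sp + 1)    # combined SINR in (28)
    outages += sinr < Rp                        # outage when R_p^(4) < R_pt
o4 = 1 - (d_sp * math.exp(-d_pp * Rp / (2 * P_P))
          / (d_sp + d_pp * Rp * P_S / (2 * P_P)))   # closed form (29)
print(outages / N, o4)             # the two values should agree closely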
Theorem 2. Defining \(\alpha ^{\ast }=\frac {R_{pt}^{\prime }}{R_{pt}^{\prime }+1}\), the end-to-end outage performance of the primary system is thus optimized when α≥α ∗, i.e., $$ \begin{aligned} &\arg\min_{\alpha}\left\{O_{P}\right\}=\left\{\alpha|\alpha\geq\alpha^{\ast}\right\},~\textmd{and}\\ &\min_{\alpha}\left\{O_{P}\right\}\geq\Pr\left\{\mathcal{E}^{(4)}\right\}O_{P}^{(4)}. \end{aligned} $$

Remark 1. From Theorem 2, CR may simply select a power allocation factor \(\alpha =\alpha ^{\ast }=\frac {R_{pt}^{\prime }}{R_{pt}^{\prime }+1}\) such that a reasonably good performance is achieved for the primary system, without requiring the CSI or other relevant information.

For comparison purposes, we consider a benchmark case for the primary system without spectrum sharing, where PT transmits to PR directly without a relay. Then, with target rate \(R_{pt}\), the corresponding outage probability is $$ O_{P}^{\prime}=1-e^{-\frac{\delta_{pp}\left(2^{R_{pt}}-1\right)}{P_{P}}}. $$ The details are omitted here for the sake of brevity. Together with (9) and (31), in order to provide the primary system an incentive to participate in the spectrum sharing, the following condition has to be satisfied: $$ O_{P}\leq O_{P}^{\prime}. $$ From (30) and (31), if \(\Pr \left \{\mathcal {E}^{(4)}\right \}O_{P}^{(4)}\leq O_{P}^{\prime }\), then it is possible to find a suitable power allocation factor α, e.g., α≥α ∗, such that the condition in (32) is satisfied. If, however, \(\Pr \left \{\mathcal {E}^{(4)}\right \}O_{P}^{(4)}>O_{P}^{\prime }\), then the primary system experiences a performance loss compared to the benchmark case even when α→1. Thus, in order to guarantee the performance of the legacy primary system, apart from selecting a proper power allocation factor α≥α ∗ at CR, the transmit power \(P_{S}\) also needs to be properly designed.

For the power control of \(P_{S}\), it is assumed that the statistical CSI of \(h_{sp}\) and \(h_{pp}\) is available at ST, which is a common assumption made in existing works [20–23]. Assuming that the channels are reciprocal, the CSI can be acquired at ST through a feedback channel from PR [35–37]. In addition, other relevant information is also required, i.e., \(P_{P}\) and \(R_{pt}\), which is usually inserted in the header of a packet that can be overheard by ST. With this information, both \(\Pr \left \{\mathcal {E}^{(4)}\right \}O_{P}^{(4)}\) and \(O_{P}^{\prime }\) can be estimated at ST. Furthermore, we assume that the probability \(\Pr \left \{\mathcal {E}^{(4)}\right \}\) is available at ST through a feedback channel from CR. Thus, although it is intractable to analytically derive \(P_{S}\) such that \(O_{P}\leq O_{P}^{\prime }\), as long as a suitable \(P_{S}\) is found to make sure that \(\Pr \left \{\mathcal {E}^{(4)}\right \}O_{P}^{(4)}\leq O_{P}^{\prime }\), it is possible to achieve cognitive spectrum sharing while providing a performance gain to the primary system.

3.3 Decoding performance at SR in the second phase

3.3.1 Conditioned on event \(\boldsymbol {\mathcal {E}^{(1)}}\)

Together with (1) and (3), SR attempts to decode the desired message \(x_{s}\) from the mixed signals of \(x_{s}\) and \(x_{p}\) using SIC.
Similar to (11), (13), and (15), we have the following achievable rates in decoding x p and x s successively $$\begin{array}{@{}rcl@{}} R_{1,p}^{(1)}&=&C\left(\frac{P_{P}\gamma_{ps}}{P_{S}\gamma_{ss}+1}+\frac{\alpha}{1-\alpha+\frac{1}{P_{R} \gamma_{rs}}}\right)\\ &\approx& C\left(\frac{P_{P}\gamma_{ps}}{P_{S}\gamma_{ss}+1}+\frac{\alpha} {1-\alpha}\right), \end{array} $$ $$\begin{array}{@{}rcl@{}} R^{(1)}_{2,s}&=&C\left(P_{S}\gamma_{ss}+\left(1-\alpha\right)P_{R}\gamma_{rs}\right), \end{array} $$ $$\begin{array}{@{}rcl@{}} R^{(1)}_{1,s}&=&C\left(\frac{P_{S}\gamma_{ss}}{P_{P}\gamma_{ps}+1}+\frac{1-\alpha}{\alpha+\frac{1}{P_{R} \gamma_{rs}}}\right)\\ &\approx& C\left(\frac{P_{S}\gamma_{ss}}{P_{P}\gamma_{ps}+1}+\frac{1-\alpha} {\alpha}\right), \end{array} $$ where the approximations in (33a) and (33c) are obtained assuming P R ≫1 [24, 25]. Here, \(R_{1,p}^{(1)}\) denotes the achievable rate of decoding x p directly by considering the component of x s simply as noise. Upon successfully decoding and removing x p , \(R_{2,s}^{(1)}\) denotes the achievable rate of decoding the remaining x s successively. Conversely, \(R^{(1)}_{1,s}\) denotes the achievable rate of decoding x s directly that is subject to the interference from x p . Then, similar to (12), (14), and (16), we have the corresponding events in decoding the mixed signals of x p and x s by using SIC $$\begin{array}{@{}rcl@{}} \mathcal{E}^{(1)}_{1,p}&=&\left\{R^{(1)}_{1,p}\geq R_{pt}\right\}\\ &\approx&\left\{\begin{array}{cc} \gamma_{ps}\geq \frac{\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}}\gamma_{ss}+\frac{R_{pt}^{\prime}- \frac{\alpha}{1-\alpha}}{P_{P}}, & \alpha<\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\\ \textmd{certain event},& \alpha\geq\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\end{array}\right.~~~~~~ \end{array} $$ $$\begin{array}{@{}rcl@{}} \mathcal{E}^{(1)}_{2,s}&=&\left\{R^{(1)}_{2,s}\geq R_{st}\right\}\\ &=&\left\{\gamma_{rs}\geq \frac{R_{st}^{\prime}}{(1-\alpha)P_{R}}-\frac{P_{S}}{(1-\alpha)P_{R}}\gamma_{ss}\right\}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \mathcal{E}^{(1)}_{1,s}&=&\left\{R^{(1)}_{1,s}\geq R_{st}\right\}\\ &\approx&\left\{\begin{array}{cc} \gamma_{ps}\leq\frac{P_{S}}{\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)P_{P}}\gamma_{ss}-\frac{1}{P_{P}}, & \alpha>\frac{1}{R_{st}^{\prime}+1}\\ \textmd{certain event}, & \alpha\leq\frac{1}{R_{st}^{\prime}+1}\end{array}\right.. \end{array} $$ From (34), the event of successfully decoding x s by using SIC can thus be expressed as $$ \mathcal{E}^{(1)}_{1,s}\cup\left(\bar{\mathcal{E}}^{(1)}_{1,s}\cap \mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{2,s}\right)=\mathcal{E}^{(1)}_{1,s}\cup\left(\mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{2,s}\right). 
$$ Conditioned on event \(\mathcal {E}^{(1)}\) that both x p and x s are successfully decoded at CR at the end of the first phase, the corresponding outage probability at SR at the end of the second phase can be derived as $$ \begin{aligned} O_{S}^{(1)}=&\,1-\Pr\left\{\mathcal{E}^{(1)}_{1,s}\right\}-\Pr\left\{\mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{2,s}\right\}\\ &+\Pr\left\{ \mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{1,s}\right\}, \end{aligned} $$ $$\begin{array}{@{}rcl@{}} \Pr\left\{\mathcal{E}^{(1)}_{1,s}\right\} &\approx&\begin{cases} \frac{\delta_{ps} e^{-\frac{\delta_{ss} \left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)}{P_{S}}}}{\delta_{ps}+\frac{\delta_{ss} \left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)P_{P}}{P_{S}}},&~\alpha>\frac{1}{R_{st}^{\prime}+1}\\ 1,&~\alpha\leq\frac{1}{R_{st}^{\prime}+1} \end{cases}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \Pr\left\{\mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{2,s}\right\} &\approx&\begin{cases} \frac{\delta_{ss}e^{-\frac{\delta_{rs} R_{st}^{\prime}}{(1-\alpha)P_{R}}}- \frac{\delta_{rs}P_{S}}{(1-\alpha)P_{R}} e^{-\frac{\delta_{ss}R_{st}^{\prime}}{P_{S}}}}{\delta_{ss}- \frac{\delta_{rs} P_{S}}{(1-\alpha)P_{R}} }, ~~~~~~~~~\alpha\geq\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\\ \frac{\delta_{ss}e^{-\left(\frac{\delta_{sr}R_{st}^{\prime}}{(1-\alpha)P_{R}}+\frac{\delta_{ps}\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)}{P_{P}}\right)} -\delta_{ss}e^{-\left(\frac{\delta_{ss}R_{st}^{\prime}}{P_{S}}+\frac{\delta_{ps}\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)\left(1+R_{st}^{\prime}\right)}{P_{P}}\right)}} {\delta_{ss}+\frac{\delta_{ps}\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}}-\frac{\delta_{rs}P_{S}}{(1-\alpha)P_{R}}}& \\ +\frac{\delta_{ss}e^{-\left(\frac{\delta_{ss}R_{st}^{\prime}}{P_{S}}+\frac{\delta_{ps}\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)\left(1+R_{st}^{\prime}\right)}{P_{P}}\right)}}{\delta_{ss}+\frac{\delta_{ps}\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}}}, ~~\alpha<\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\\ \end{cases},~~~~~~~~ \end{array} $$ $$\begin{array}{@{}rcl@{}} \Pr\left\{ \mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{1,s}\right\} &\approx&\left\{\begin{array}{ll} \left\{\begin{array}{ll} 0,\\ \text{when}~\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)\geq1;\\ \frac{\delta_{ss}e^{-\frac{\delta_{ss}\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}+1\right)\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right) }{P_{S}\left[1-\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)\right]} }}{\delta_{ss}-\frac{\delta_{ps}P_{S}\left[1-\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)\right]}{\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)P_{P}}},\\ \text{when}~\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)<1.\\ \end{array}\right. 
& \frac{1}{R_{st}^{\prime}+1}<\alpha<\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\\ 1, &\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\leq\alpha\leq\frac{1}{R_{st}^{\prime}+1}\\ \frac{\delta_{ss}e^{-\frac{\delta_{ps}\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)}{P_{P}}}}{\delta_{ss}+\frac{\delta_{ps} \left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}} },& \alpha<\min\left(\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1},\frac{1}{R_{st}^{\prime}+1}\right)\\ \frac{\delta_{ps} e^{-\frac{\delta_{ss} \left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)}{P_{S}}}}{\delta_{ps}+\frac{\delta_{ss} \left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)P_{P}}{P_{S}}},& \alpha>\max\left(\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1},\frac{1}{R_{st}^{\prime}+1}\right)\\ \end{array}\right..~~~~~~ \end{array} $$ The detailed derivations are given in Appendix D. 3.3.2 Conditioned on event \(\boldsymbol {\mathcal {E}^{(2)}}\) Together with (1) and (5), SR attempts to decode the desired message x s using SIC. Similarly, we have the following achievable rates in decoding x p and x s successively $$\begin{array}{@{}rcl@{}} R^{(2)}_{1,p}&=& C\left(\frac{P_{P}\gamma_{ps}}{P_{S}\gamma_{ss}+1}+\frac{\alpha}{1-\alpha+\frac{1}{P_{R}\gamma_{rs}}}\right)\\ &\approx&C\left(\frac{P_{P}\gamma_{ps}}{P_{S}\gamma_{ss}+1}+\frac{\alpha}{1-\alpha}\right), \end{array} $$ $$\begin{array}{@{}rcl@{}} R^{(2)}_{2,s}&=& C\left(P_{S}\gamma_{ss}+\frac{(1-\alpha)P_{R}\gamma_{rs}P_{S}\gamma_{sr}}{(1-\alpha)P_{R}\gamma_{rs}+P_{S}\gamma_{sr}+1}\right)\\ &\approx&C\left(P_{S}\gamma_{ss}+P_{S}\gamma_{sr}\right), \end{array} $$ $$\begin{array}{@{}rcl@{}} R^{(2)}_{1,s}&=& C\left(\frac{P_{S}\gamma_{ss}}{P_{P}\gamma_{ps}+1}+\frac{1-\alpha}{\alpha+\frac{1}{P_{S}\gamma_{sr}}+\frac{1}{P_{R}\gamma_{rs}}+\frac{1}{P_{S}\gamma_{sr}P_{R}\gamma_{rs}}}\right)\\ &\approx&C\left(\frac{P_{S}\gamma_{ss}}{P_{P}\gamma_{ps}+1}+\frac{1-\alpha}{\alpha}\right), \end{array} $$ where the approximation in (38a) is obtained assuming P R ≫1, the approximation in (38b) is obtained assuming P R γ rs ≫P S γ sr , and the approximation in (38c) is obtained assuming P S ,P R ≫1 [24, 25], respectively. Then, we have the corresponding events in decoding x p and x s successively $$\begin{array}{@{}rcl@{}} \mathcal{E}^{(2)}_{1,p}&=&\left\{R^{(2)}_{1,p}\geq R_{pt}\right\}\\&\approx&\left\{\begin{array}{cc} \gamma_{ps}\geq \frac{\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}}\gamma_{ss}+\frac{R_{pt}^{\prime}- \frac{\alpha}{1-\alpha}}{P_{P}}, & \alpha<\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\\ \textmd{certain event},& \alpha\geq\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\end{array},\right. \end{array} $$ $$\begin{array}{@{}rcl@{}} \mathcal{E}^{(2)}_{2,s}&=&\left\{R^{(2)}_{2,s}\geq R_{st}\right\}=\left\{\gamma_{sr}\geq\frac{R_{st}^{\prime}}{P_{S}}-\gamma_{ss}\right\}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \mathcal{E}^{(2)}_{1,s}&=&\left\{R^{(2)}_{1,s}\geq R_{st}\right\}\\&\approx&\left\{\begin{array}{cc} \gamma_{ps}\leq\frac{P_{S}}{\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)P_{P}}\gamma_{ss}-\frac{1}{P_{P}}, & \alpha>\frac{1}{R_{st}^{\prime}+1}\\ \textmd{certain event}, & \alpha\leq\frac{1}{R_{st}^{\prime}+1}\end{array}\right.. \end{array} $$ Similarly, the outage probability at SR at the end of the second phase can be derived as $$\begin{array}{*{20}l} O_{S}^{(2)}&=1-\Pr\left\{\mathcal{E}^{(2)}_{1,s}\right\}-\Pr\left\{\mathcal{E}^{(2)}_{1,p}\cap \mathcal{E}^{(2)}_{2,s}\right\}\\ &\quad+\Pr\left\{ \mathcal{E}^{(2)}_{1,p}\cap \mathcal{E}^{(2)}_{1,s}\right\}.
\end{array} $$ From (34) and (39), since \(\mathcal {E}^{(2)}_{1,p}\) and \(\mathcal {E}^{(2)}_{1,s}\) are of the same form as \(\mathcal {E}^{(1)}_{1,p}\) and \(\mathcal {E}^{(1)}_{1,s}\), respectively, we have $$\begin{array}{@{}rcl@{}} \Pr\left\{\mathcal{E}^{(2)}_{1,s}\right\}&=&\Pr\left\{\mathcal{E}^{(1)}_{1,s}\right\}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \Pr\left\{ \mathcal{E}^{(2)}_{1,p}\cap \mathcal{E}^{(2)}_{1,s}\right\}&=&\Pr\left\{ \mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{1,s}\right\}. \end{array} $$ Then, by integrating over the corresponding region of event \(\left \{\mathcal {E}^{(2)}_{1,p}\cap \mathcal {E}^{(2)}_{2,s}\right \}\), we can obtain $$\begin{array}{@{}rcl@{}} &&\Pr\left\{\mathcal{E}^{(2)}_{1,p}\cap \mathcal{E}^{(2)}_{2,s}\right\}\\ &\approx&\left\{\begin{array}{ll} \frac{\delta_{ss}e^{-\frac{\delta_{sr}R_{st}^{\prime}}{P_{S}}}-\delta_{sr}e^{-\frac{\delta_{ss}R_{st}^{\prime}}{P_{S}}}}{\delta_{ss}-\delta_{sr}},~~~~~~~~~~~~~~~~~~~\alpha\geq\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\\ \frac{\delta_{ss}e^{-\left(\frac{\delta_{sr}R_{st}^{\prime}}{P_{S}}+\frac{\delta_{ps}\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)}{P_{P}}\right)} -\delta_{ss}e^{-\left(\frac{\delta_{ss}R_{st}^{\prime}}{P_{S}}+\frac{\delta_{ps}\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)\left(1+R_{st}^{\prime}\right)}{P_{P}}\right)}} {\delta_{ss}+\frac{\delta_{ps}\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}}-\delta_{sr}}& \\ +\frac{\delta_{ss}e^{-\left(\frac{\delta_{ss}R_{st}^{\prime}}{P_{S}}+\frac{\delta_{ps}\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)\left(1+R_{st}^{\prime}\right)}{P_{P}}\right)}}{\delta_{ss}+\frac{\delta_{ps}\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}}}, ~~~\alpha<\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\\ \end{array}\right.. \end{array} $$ 3.3.3 Conditioned on event \(\boldsymbol {\mathcal {E}^{(3)}}\) Similarly, SR attempts to decode the desired message x s using SIC, and the corresponding achievable rates in decoding x p and x s are $$\begin{array}{@{}rcl@{}} R^{(3)}_{1,p}&=& C\left(\frac{P_{P}\gamma_{ps}}{P_{S}\gamma_{ss}+1}\right.\\&&+\left.\frac{\alpha}{1-\alpha+\frac{1}{P_{P}\gamma_{pr}}+\frac{1}{P_{R}\gamma_{rs}}+\frac{1}{P_{P}\gamma_{pr}P_{R}\gamma_{rs}}}\right)\\ &\approx&C\left(\frac{P_{P}\gamma_{ps}}{P_{S}\gamma_{ss}+1}+\frac{\alpha}{1-\alpha}\right), \end{array} $$ $$\begin{array}{@{}rcl@{}} R^{(3)}_{1,s}&=& C\left(\frac{P_{S}\gamma_{ss}}{P_{P}\gamma_{ps}+1}+\frac{1-\alpha}{\alpha+\frac{1}{P_{R}\gamma_{rs}}}\right)\\ &\approx&C\left(\frac{P_{S}\gamma_{ss}}{P_{P}\gamma_{ps}+1}+\frac{1-\alpha}{\alpha}\right), \end{array} $$ where the approximation in (44a) is obtained assuming P P ,P R ≫1 and the approximation in (44c) is obtained assuming P R ≫1 [24, 25], respectively. From (44) and (33), since in both cases x s is successfully decoded and forwarded by CR with power (1−α)P R , we have $$ O_{S}^{(3)}=O_{S}^{(1)}. $$ 3.3.4 Conditioned on event \(\boldsymbol {\mathcal {E}^{(4)}}\) Again, SIC is performed at SR to decode the desired message x s by exploiting the received signals in (1) and (8). Then, we have the following achievable rates in decoding x p and x s successively $$\begin{array}{@{}rcl@{}} R_{1,p}^{(4)}&=&C\left(\frac{2P_{P}\gamma_{ps}}{P_{S}\gamma_{ss}+1}\right), \end{array} $$ $$\begin{array}{@{}rcl@{}} R_{2,s}^{(4)}&=&C\left(2P_{S}\gamma_{ss}\right), \end{array} $$ $$\begin{array}{@{}rcl@{}} R_{1,s}^{(4)}&=&C\left(\frac{2P_{S}\gamma_{ss}}{P_{P}\gamma_{ps}+1}\right).
\end{array} $$ Correspondingly, we have the following events in decoding x p and x s successively $$\begin{array}{@{}rcl@{}} \mathcal{E}^{(4)}_{1,p}&=&\left\{R_{1,p}^{(4)}\geq R_{pt}\right\}=\left\{\gamma_{ps}\geq\frac{R_{pt}^{\prime}P_{S}}{2P_{P}}\gamma_{ss}+\frac{R_{pt}^{\prime}} {2P_{P}}\right\}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \mathcal{E}^{(4)}_{2,s}&=&\left\{R_{2,s}^{(4)}\geq R_{st}\right\}=\left\{\gamma_{ss}\geq\frac{R_{st}^{\prime}}{2P_{S}}\right\}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \mathcal{E}^{(4)}_{1,s}&=&\left\{R_{1,s}^{(4)}\geq R_{st}\right\}=\left\{\gamma_{ps}\leq\frac{2P_{S}}{R_{st}^{\prime}P_{P}}\gamma_{ss}-\frac{1}{P_{P}}\right\}. \end{array} $$ For the considered scenario where R pt ,R st ≥1, there is no intersection between events \(\mathcal {E}_{1,p}^{(4)}\) and \(\mathcal {E}_{1,s}^{(4)}\). Thus, from Lemma 4, the corresponding outage probability at SR can be derived as $$ {\fontsize{9.3}{6} \begin{aligned} O_{S}^{(4)} &=1-\Pr\left\{\mathcal{E}^{(4)}_{1,s}\right\}-\Pr\left\{\mathcal{E}^{(4)}_{1,p}\cap \mathcal{E}^{(4)}_{2,s}\right\}\\ &=1-\int_{\frac{R_{st}^{\prime}}{2P_{S}}}^{\infty} \delta_{ss}e^{-\delta_{ss}\gamma_{ss}}\left[\int_{\frac{R_{pt}^{\prime}P_{S}}{2P_{P}}\gamma_{ss}+\frac{R_{pt}^{\prime}}{2P_{P}}}^{\infty}\delta_{ps}e^{-\delta_{ps}\gamma_{ps}}d\gamma_{ps}\right]d\gamma_{ss}\\ &\quad-\int_{\frac{R_{st}^{\prime}}{2P_{S}}}^{\infty} \delta_{ss}e^{-\delta_{ss}\gamma_{ss}}\left[\int_{0}^{\frac{2P_{S}}{R_{st}^{\prime}P_{P}}\gamma_{ss}-\frac{1}{P_{P}}}\delta_{ps}e^{-\delta_{ps}\gamma_{ps}}d\gamma_{ps}\right]d\gamma_{ss}\\ &=1-\frac{\frac{2\delta_{ps}P_{S}}{R_{st}^{\prime}P_{P}}e^{-\frac{\delta_{ss}R_{st}^{\prime}}{2P_{S}}} }{\delta_{ss}+\frac{2\delta_{ps}P_{S}}{R_{st}^{\prime}P_{P}}} -\frac{\delta_{ss}e^{-\left(\frac{\delta_{ps}R_{pt}^{\prime}}{2P_{P}}+\frac{\delta_{ss}R_{st}^{\prime}}{2P_{S}}+\frac{\delta_{ps}R_{pt}^{\prime}R_{st}^{\prime}}{4P_{P}}\right)}}{\delta_{ss}+\frac{\delta_{ps}R_{pt}^{\prime}P_{s}}{2P_{P}}}, \end{aligned}} $$ where \(\delta _{\textit {ss}}e^{-\delta _{\textit {ss}}\gamma _{\textit {ss}}}\phantom {\dot {i}\!}\) and \(\delta _{\textit {ps}}e^{-\delta _{\textit {ps}}\gamma _{\textit {ps}}}\phantom {\dot {i}\!}\) denote the respective PDFs of γ ss and γ ps . Substituting \(O_{S}^{(1)}\), \(O_{S}^{(2)}\), \(O_{S}^{(3)}\), and \(O_{S}^{(4)}\) into (10), we can thus obtain the overall end-to-end outage probability of the secondary system. A performance upper bound Conditioned on event \(\mathcal {E}^{(1)}\) that both x p and x s are successfully decoded at CR at the end of the first phase, we propose using DPC at CR to pre-cancel the interference seen at SR (or PR) in the second phase. A performance upper bound for secondary user We propose using DPC [17] at CR that transmits a composite message $$ x_{r}^{\prime}=\sqrt{\alpha P_{R}}x_{p}+x_{s}^{\prime} $$ in the second phase. Employing a similar coding scheme as in [26, Eq. (15)], \(x_{s}^{\prime }\) is encoded using DPC by treating the component of \(\sqrt {\alpha P_{R}}x_{p}\) as the known interference that will corrupt the reception at SR in the second phase. Thus, after dirty paper decoding, the effectively received signal at SR in the second phase is given as $$ y_{s}^{\prime}(2)=h_{rs}\sqrt{(1-\alpha)P_{R}}x_{s}+n_{0}. $$ That is, SR sees no interference in the second phase. Then, together with (1) and (50), SR attempts to decode x s using SIC. 
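One simple way to see the effect of this interference pre-cancellation numerically is to consider the second-phase signal in isolation: without DPC, the component of x p with power α P R acts as interference at SR, whereas with DPC the effective signal in (50) is interference-free. The Python sketch below contrasts the two cases; it is a hedged illustration with hypothetical parameters and the assumed threshold \(R^{\prime }=2^{2R}-1\), and it deliberately ignores the phase-1 signal and combining.

import numpy as np

rng = np.random.default_rng(4)
P_R = 10 ** 4.0                 # 40 dB relay power, as in the simulation setup
alpha, R_st = 0.5, 1.0          # hypothetical power allocation and target rate
Rs = 2 ** (2 * R_st) - 1        # assumed two-phase SNR threshold
g_rs = rng.exponential(10.0, 1_000_000)   # relay-to-SR link, mean 10 dB

# Without DPC, the phase-2 component of x_p (power alpha*P_R) interferes
# at SR; the resulting SINR saturates near (1-alpha)/alpha, so phase-2-only
# decoding is interference-limited. With DPC (eq. (50)) it is noise-limited.
sinr_no_dpc = (1 - alpha) * P_R * g_rs / (alpha * P_R * g_rs + 1)
snr_dpc = (1 - alpha) * P_R * g_rs
print("phase-2-only outage without DPC:", np.mean(sinr_no_dpc < Rs))
print("phase-2-only outage with DPC:   ", np.mean(snr_dpc < Rs))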
Following the same steps as in Lemma 4, the corresponding outage probability at SR at the end of the second phase can be similarly derived. The details are omitted here for the sake of brevity. On the other hand, with the same power α P R allocated to forward x p at CR, the outage performance of the primary user is the same as that in (23). A performance upper bound for primary user The interference seen at PR can also be pre-cancelled by performing DPC at CR, where a composite message $$ x_{r}^{\prime}=x_{p}^{\prime}+\sqrt{(1-\alpha)P_{R}}x_{s} $$ is transmitted in the second phase. Similarly, \(x_{p}^{\prime }\) is encoded using DPC by treating the component of \(\sqrt {(1-\alpha)P_{R}}x_{s}\) as the known interference that will corrupt the reception at PR. Thus, after dirty paper decoding, the effectively received signal at PR in the second phase is given as $$ y_{p}^{\prime}(2)=h_{rp}\sqrt{\alpha P_{R}}x_{p}+n_{0}. $$ That is, PR sees no interference in the second phase. Again, together with (1) and (52), MRC is employed at PR to decode x p . The details are omitted here for the sake of brevity. On the other hand, with the same power (1−α)P R allocated to forward x s at CR, the outage performance of the secondary user is the same as that in (36). From the above analysis, with DPC performed at CR to exploit the successfully decoded messages received in the first phase, the interference seen at SR (or PR) in the second phase can be pre-cancelled, thus obtaining a performance upper bound for the secondary (or primary) user without affecting the performance of the other user. Numerical results In this section, we illustrate the outage performance of both primary and secondary users in the proposed spectrum sharing protocol. In order to limit the interference caused to the primary user, we consider a power control at ST where P S =θ P P . To evaluate the SIC decoding at CR at the end of the first phase, we let \(\delta _{\textit {sr}}^{-1}=\tau \delta _{\textit {pr}}^{-1}\) for the multiple-access channels h sr and h pr at CR. To evaluate the interference seen at PR due to cross talk, we let \(\delta _{\textit {sp}}^{-1}=\varphi \delta _{\textit {pp}}^{-1}\) for the multiple-access channels h sp and h pp at PR. Unless otherwise specified, we let P P =30 dB, P S =10 dB, P R =40 dB, and R st =R pt =1. In order to reflect the geometric structure of the considered network shown in Fig. 2, we let \(\delta _{\textit {pp}}^{-1}=\delta _{\textit {ss}}^{-1}=0\) dB for the direct links, \(\delta _{\textit {sp}}^{-1}=\delta _{\textit {ps}}^{-1}=-10\) dB for the cross links, and \(\delta _{\textit {pr}}^{-1}=\delta _{\textit {sr}}^{-1}=\delta _{\textit {rp}}^{-1}=\delta _{\textit {rs}}^{-1}=10\) dB for the relay links, respectively. Simulation results are presented in Figs. 4, 5, 6, 7, 8, 9, 10, and 11, where lines denote the analytical results obtained in this paper and markers denote the results of Monte Carlo simulations.
Fig. 4. The probability \(\Pr \{\mathcal {E}^{(4)}\}\) with respect to θ where P P =30 dB and P S =θ P P , when \(\delta _{\textit {pr}}^{-1}=10\) dB and \(\delta _{\textit {sr}}^{-1}=\tau \delta _{\textit {pr}}^{-1}\)
Fig. 5. The end-to-end outage probability O P with respect to φ where \(\delta _{\textit {pp}}^{-1}=0\) dB and \(\delta _{\textit {sp}}^{-1}=\varphi \delta _{\textit {pp}}^{-1}\), when P P =30 dB and P S =θ P P
Fig. 6. The end-to-end outage probability O P with respect to α and θ where P P =30 dB and P S =θ P P
Fig. 7. A cross-sectional view of Fig. 6 when θ=−10, −15, −20 dB, respectively
Fig. 8. The end-to-end outage probability O P with respect to R pt where P S =10 dB and R st =1
Fig. 9. The end-to-end outage probability O P with respect to P P where α=α ∗
Fig. 10. The end-to-end outage probability O S with respect to α where P P =30 dB and P S =θ P P
Fig. 11. The end-to-end outage probability O S with respect to R pt where α=α ∗ and P S =10 dB
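To illustrate how such Monte Carlo checks can be set up, the following Python sketch compares the closed-form \(O_{S}^{(4)}\) derived in Section 3.3.4 against a direct simulation of the corresponding decoding events. It is a hedged example: the channel parameters are hypothetical (δ ss and δ ps are the rate parameters of the exponentially distributed gains, with means matching the simulation setup), and the SNR thresholds again assume \(R^{\prime }=2^{2R}-1\).

import numpy as np

rng = np.random.default_rng(1)
P_P, P_S = 10 ** 3.0, 10 ** 1.0          # 30 dB and 10 dB, linear scale
d_ss, d_ps = 1.0, 10.0                    # rate parameters of gamma_ss, gamma_ps
R_pt = R_st = 1.0
Rp, Rs = 2 ** (2 * R_pt) - 1, 2 ** (2 * R_st) - 1

# Closed-form O_S^(4) as derived in Section 3.3.4
A = (2 * d_ps * P_S / (Rs * P_P)) * np.exp(-d_ss * Rs / (2 * P_S)) \
    / (d_ss + 2 * d_ps * P_S / (Rs * P_P))
B = d_ss * np.exp(-(d_ps * Rp / (2 * P_P) + d_ss * Rs / (2 * P_S)
                    + d_ps * Rp * Rs / (4 * P_P))) \
    / (d_ss + d_ps * Rp * P_S / (2 * P_P))
O_closed = 1 - A - B

# Monte Carlo over the E^(4) decoding events (decode x_s directly, or
# decode x_p first and then x_s successively)
n = 2_000_000
g_ss = rng.exponential(1 / d_ss, n)
g_ps = rng.exponential(1 / d_ps, n)
E1s = g_ps <= 2 * P_S / (Rs * P_P) * g_ss - 1 / P_P
E1p = g_ps >= Rp * P_S / (2 * P_P) * g_ss + Rp / (2 * P_P)
E2s = g_ss >= Rs / (2 * P_S)
O_mc = 1 - np.mean(E1s | (E1p & E2s))

print(f"closed form: {O_closed:.4f}, Monte Carlo: {O_mc:.4f}")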
SIC decoding at CR From Theorem 2, in order to fully exploit the relay transmissions, event \(\mathcal {E}^{(4)}\), in which neither x p nor x s is decoded at CR, should be avoided as much as possible. Figure 4 displays the probability \(\Pr \left \{\mathcal {E}^{(4)}\right \}\) with respect to θ, where P S =θ P P . Various channel conditions are considered, with \(\delta _{\textit {pr}}^{-1}=10\) dB and \(\delta _{\textit {sr}}^{-1}=\tau \delta _{\textit {pr}}^{-1}\). As can be seen from Fig. 4, with an increase in θ, \(\Pr \left \{\mathcal {E}^{(4)}\right \}\) first increases and then decreases. This is reasonable: when there is a significant difference between the power levels of x p and x s received at CR, e.g., τ θ≪1 or τ θ≫1, SIC is facilitated and it is easy to decode x p and x s successively. In contrast, if the components of x p and x s are of comparable power levels, e.g., τ θ≈1, then the SIC decoding is limited by the mutual interference and it is difficult to decode either of them. This can be observed in Fig. 4, where \(\Pr \left \{\mathcal {E}^{(4)}\right \}\) takes peak values at θ=−10,0,10 dB for τ=10,0,−10 dB, respectively.
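The peak behaviour of \(\Pr \{\mathcal {E}^{(4)}\}\) discussed above can be reproduced qualitatively with a short Monte Carlo sketch. At CR, x p (or x s ) is decodable first when \(P_{P}\gamma _{pr}/(P_{S}\gamma _{sr}+1)\geq R_{\textit {pt}}^{\prime }\) (or \(P_{S}\gamma _{sr}/(P_{P}\gamma _{pr}+1)\geq R_{\textit {st}}^{\prime }\)), matching the integration regions in Appendix A, and \(\mathcal {E}^{(4)}\) occurs when neither holds. The parameter values follow the simulation setup, but the Python code itself is only an illustrative sketch under the assumed thresholds \(R^{\prime }=2^{2R}-1\).

import numpy as np

rng = np.random.default_rng(2)
P_P = 10 ** 3.0                      # 30 dB
Rp = Rs = 2 ** 2 - 1                 # assumed thresholds for R_pt = R_st = 1
mean_pr = 10 ** 1.0                  # relay link, delta_pr^-1 = 10 dB
n = 500_000

for tau_dB in (-10.0, 0.0, 10.0):
    pr_e4 = []
    for theta_dB in np.arange(-30, 21, 5):
        P_S = 10 ** (theta_dB / 10) * P_P
        g_pr = rng.exponential(mean_pr, n)
        g_sr = rng.exponential(10 ** (tau_dB / 10) * mean_pr, n)
        dec_p_first = P_P * g_pr / (P_S * g_sr + 1) >= Rp   # x_p decodable first
        dec_s_first = P_S * g_sr / (P_P * g_pr + 1) >= Rs   # x_s decodable first
        pr_e4.append(np.mean(~dec_p_first & ~dec_s_first))  # neither decodable
    k = int(np.argmax(pr_e4))
    print(f"tau = {tau_dB:+.0f} dB: Pr{{E4}} peaks near theta = {-30 + 5 * k} dB")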
End-to-end outage performance of the primary user First, we evaluate how the interference due to cross talk affects the end-to-end performance of the primary system. Letting \(\delta _{\textit {sp}}^{-1}=\varphi \delta _{\textit {pp}}^{-1}\), O P is plotted with respect to φ in Fig. 5. The outage probability \(O_{P}^{\prime }\) of the benchmark case considered in Remark 1 is also shown. With an increase in φ, the interference link ST →PR becomes stronger, and the corresponding performance of the primary system is impaired. On the other hand, with an increase in θ, ST transmits at a higher power, which impedes the SIC decoding at CR and causes more interference to PR, thus similarly impairing the performance of the primary system. In addition, it is observed that a performance improvement is achieved for the primary system with a higher power allocation factor α. When α=0.4, even though ST transmits at a low power, e.g., θ=−20 dB, and the interference link ST →PR is very weak, e.g., φ=−20 dB, the primary user experiences a performance loss, i.e., \(O_{P}>O_{P}^{\prime }\). Whereas when α=0.9, with all other parameters being the same, a significant performance improvement is achieved for the primary system. This means that besides the power control at ST, the power allocation at CR also needs to be designed to compensate for the interference caused to the primary system by secondary transmissions. To further illustrate the effects of P S and the power allocation factor α, O P is plotted with respect to θ and α in Fig. 6. The outage probability \(O_{P}^{\prime }\) of the benchmark case (31) is also illustrated. It is observed that when θ is smaller than 0.05 and α is greater than 0.75, even subject to the interference from the secondary transmissions, the condition in (32) is always satisfied and a performance gain is achieved for the primary system. Otherwise, if θ>0.05, then we have \(O_{P}>O_{P}^{\prime }\) even when α→1. This can be better observed in Fig. 7, where a cross-sectional view of Fig. 6 is shown. With an increase in θ, since the SIC at CR is impeded while more interference is caused to PR, the corresponding outage performance of the primary user is degraded. When θ=−20,−15 dB, it is observed that we can always find a suitable power allocation factor \(\alpha \geq \alpha ^{\ast }=\frac {R_{\textit {pt}}^{\prime }}{R_{\textit {pt}}^{\prime }+1}=0.75\) such that a performance gain is achieved for the primary system. However, when θ is increased to −10 dB, it is observed that \(O_{P}>O_{P}^{\prime }\) even when α→1. Similar phenomena can be observed for the benchmark case considered in [28], where a linear weighted combination of primary and secondary messages is forwarded by CR only when both messages are successfully decoded; otherwise, CR simply stays silent and both PT and ST perform a retransmission in the second phase. It is observed that with the same system parameters, a better performance is achieved by the proposed approach compared to that in [28]. This is reasonable, as in the proposed approach CR is able to help forward the received messages more frequently, which is more beneficial than simultaneous retransmissions by PT and ST that cause severe interference to one another. Furthermore, from Figs. 6 and 7, it is observed that when α takes values greater than \(\alpha ^{\ast }=\frac {R_{\textit {pt}}^{\prime }}{R_{\textit {pt}}^{\prime }+1}=0.75\), O P experiences a floor. This validates Theorem 2: when α≥α ∗, \(O_{P}^{(1)}\), \(O_{P}^{(2)}\), and \(O_{P}^{(3)}\) all approach 0, and thus \(\Pr \left \{\mathcal {E}^{(4)}\right \}O_{P}^{(4)}\), which is independent of α, dominates the overall outage performance. For a better illustration, the outage performance using DPC at CR is also presented in Fig. 7. Conditioned on event \(\mathcal {E}^{(1)}\) that both x p and x s are successfully decoded at CR, by using DPC to pre-cancel the interference seen at PR, a performance upper bound is achieved for the primary system. For a better illustration of Theorem 2, O P is plotted with respect to R pt in Fig. 8. With an increase in R pt , the corresponding outage performance of the primary system is degraded, whereas with an increase in α, a better outage performance is achieved. When \(\alpha =\alpha ^{\ast }=\frac {R_{\textit {pt}}^{\prime }}{R_{\textit {pt}}^{\prime }+1}\), the corresponding performance outperforms that with a fixed power allocation factor, e.g., α=0.8,0.9, thus validating Theorem 2 that the end-to-end outage performance of the primary system is optimized with respect to α when α≥α ∗. Similar results can be observed for the benchmark case [28]. Again, a better performance is achieved by the proposed approach compared to that in [28]. Furthermore, in the moderate rate region where R pt <2.2, the condition in (32) is satisfied and a performance gain is achieved for the primary system compared to the benchmark case, whereas in the high rate region, since both the SIC decoding at CR and the decoding at PR become more difficult, the primary user experiences a performance loss. Figure 9 displays O P with respect to P P where a power allocation factor of α=α ∗ is adopted at CR. When P S =10 dB, it is observed that a diversity order of 2 is achieved for the primary system when P P →∞. This is reasonable, as in the proposed protocol two independent copies of x p are received at PR from PT →PR and CR →PR, respectively, in two successive phases.
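The diversity order of 2 can be illustrated with a stylized calculation that is not specific to the protocol: with one Rayleigh-faded copy of x p the outage probability decays as SNR−1, while maximal-ratio combining of two independent copies decays as SNR−2. The Python sketch below estimates the slopes from the standard single-branch and two-branch MRC closed forms; it is a generic illustration, not the paper's exact outage expression.

import numpy as np

R = 1.0
thr = 2 ** R - 1
snr = 10 ** (np.array([20.0, 30.0, 40.0, 50.0]) / 10)
x = thr / snr
p1 = 1 - np.exp(-x)               # single Rayleigh branch
p2 = 1 - np.exp(-x) * (1 + x)     # MRC of two i.i.d. Rayleigh branches

# Diversity order = negative slope of log-outage versus log-SNR
d1 = -np.diff(np.log10(p1)) / np.diff(np.log10(snr))
d2 = -np.diff(np.log10(p2)) / np.diff(np.log10(snr))
print("diversity estimates:", d1.round(2), d2.round(2))  # -> ~1 and ~2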
However, when there is a fixed power ratio between P P and P S , i.e., P S =θ P P , it is observed that the performance of the primary system is limited by the interference from secondary transmissions and no diversity gain is achieved. End-to-end outage performance of the secondary user In Fig. 10, the end-to-end outage probability of the secondary user O S is plotted with respect to α where P P =30 dB and P S =θ P P . It is observed that O S experiences a plateau when α takes values between 0.3 and 0.7, and when α is less than 0.25 or greater than 0.75, a reasonably good outage performance is achieved for the secondary user. This is because of the employment of SIC at SR. In the regions where α≤0.25 and α≥0.75, there is a significant difference between the power levels of x p and x s received at SR in the second phase; thus x s can either be decoded first or decoded successively using SIC with a high probability. Conversely, when α takes values between 0.3 and 0.7, SIC is limited by the comparable interference between x p and x s , which makes it difficult to decode either of them. For comparison purposes, the outage performance of the secondary user in the benchmark case [28] is also presented, where SR attempts to recover the desired signal x s by using MRC of the received signals in two phases and considers the primary component simply as noise. As illustrated in Fig. 10, with an increase in α, since more power is allocated to forward the primary signal while higher interference is seen at SR, the outage performance of the secondary user is severely degraded. Furthermore, it is observed from Fig. 10 that the performance of the secondary system is degraded with an increase in θ. This is because with a higher θ, e.g., when θ is increased from −20 to −15 dB, it becomes more difficult, from (34a), to first decode and remove x p and then decode x s successively using SIC. In other words, it is not always beneficial to adopt a high transmit power at ST, even if the condition in (32) is met. Again, the outage performance using DPC at CR is illustrated in Fig. 10. Conditioned on event \(\mathcal {E}^{(1)}\), by using DPC to pre-cancel the interference seen at SR, a performance upper bound is achieved for the secondary user. Together with Figs. 7 and 10, when α≥α ∗, it is possible to bring a performance gain to the primary system and at the same time achieve a reasonably good outage performance of around 10^{-2} for the secondary system by the proposed approach. Whereas for the benchmark case considered in [28], although it is also possible to provide a performance gain to the primary user when α≥α ∗, the secondary message x s can hardly be delivered from ST to SR. In Fig. 11, O S is plotted with respect to R pt when α=α ∗. Similarly, with an increase in R pt , from (34a), it becomes more difficult to decode x p first and then decode x s successively using SIC; thus the corresponding outage performance of the secondary system is degraded. On the other hand, with other parameters being the same, it is observed that a performance degradation is experienced by the secondary system with an increase in R st . From the above observations in Figs.
4, 5, 6, 7, 8, 9, 10, and 11, with properly designed parameters to facilitate the SIC decoding at CR as well as to limit the interference caused to PR due to the cross talk in the first phase, it is possible to find a suitable power allocation factor α≥α ∗ such that a performance gain is achieved for the primary user while the secondary user gains an opportunity to access the spectrum. Furthermore, with the same system parameters, performance gains are achieved for both primary and secondary systems by the proposed approach compared to the benchmark case in [28]. Conclusions In this paper, the interference channel with a cognitive relay is exploited to achieve spectrum sharing between a licensed primary user and an unlicensed secondary user. A causal cognitive two-phase spectrum sharing protocol is proposed and closed-form expressions of the end-to-end outage probability are derived. In view of the inherent interference-limited property of the system, to guarantee the performance of the primary system, we consider a power control at ST together with a power allocation at CR to forward the processed primary and secondary messages, respectively. Simulation results demonstrate that by designing both the power control at ST and the power allocation at CR, spectrum sharing is achieved between the primary and secondary systems and performance gains can be achieved for both parties. Appendix A: Derivations of \(\boldsymbol {\Pr \left \{\mathcal {E}^{(1)}\right \}}\), \(\boldsymbol {\Pr \left \{\mathcal {E}^{(2)}\right \}}\), \(\boldsymbol {\Pr \left \{\mathcal {E}^{(3)}\right \}}\), and \(\boldsymbol {\Pr \left \{\mathcal {E}^{(4)}\right \}}\) From Fig. 3, upon determining the region of event \(\mathcal {E}^{(1)}\) that is defined in (19a), the corresponding probability can be derived by $$ {\fontsize{8.8}{6}\begin{aligned} \Pr\{\mathcal{E}^{(1)}\}=&\Pr\left\{\left(\mathcal{E}_{1,p}\cap \mathcal{E}_{2,s}\right) \cup \left(\mathcal{E}_{1,s}\cap \mathcal{E}_{2,p}\right) \right\}\\ =&\int_{\frac{R_{st}^{\prime}}{P_{S}}}^{\infty}\delta_{sr}e^{-\delta_{sr}\gamma_{sr}}\left[\int_{\frac{R_{pt}^{\prime}P_{S}}{P_{P}}\gamma_{sr}+\frac{R_{pt}^{\prime}}{P_{P}}}^{\infty} \delta_{pr}e^{-\delta_{pr}\gamma_{pr}}d\gamma_{pr}\right]d\gamma_{sr}\\ &+\int_{\frac{R_{pt}^{\prime}}{P_{P}}}^{\infty}\delta_{pr}e^{-\delta_{pr}\gamma_{pr}}\left[\int_{\frac{R_{st}^{\prime}P_{P}}{P_{S}}\gamma_{pr}+\frac{R_{st}^{\prime}}{P_{S}}}^{\infty} \delta_{sr}e^{-\delta_{sr}\gamma_{sr}}d\gamma_{sr}\right]d\gamma_{pr}, \end{aligned}} $$ where \(\delta _{\textit {sr}}e^{-\delta _{\textit {sr}}\gamma _{\textit {sr}}}\) and \(\delta _{\textit {pr}}e^{-\delta _{\textit {pr}}\gamma _{\textit {pr}}}\) denote the respective PDFs of γ sr and γ pr . For event \(\mathcal {E}^{(2)}\) defined in (19b), the corresponding probability can be derived by $$ {\fontsize{9}{6}\begin{aligned} \Pr\{\mathcal{E}^{(2)}\}&=\Pr\left\{\mathcal{E}_{1,p}\cap\bar{\mathcal{E}}_{2,s}\right\}\\ &=\int_{0}^{\frac{R_{st}^{\prime}}{P_{S}}}\delta_{sr}e^{-\delta_{sr}\gamma_{sr}}\left[\int_{\frac{R_{pt}^{\prime}P_{S}}{P_{P}}\gamma_{sr}+\frac{R_{pt}^{\prime}}{P_{P}}}^{\infty} \delta_{pr}e^{-\delta_{pr}\gamma_{pr}}d\gamma_{pr}\right]d\gamma_{sr}.
\end{aligned}} $$ For event \(\mathcal {E}^{(3)}\) defined in (19c), the corresponding probability can be derived by $$ {\fontsize{9}{6}\begin{aligned} \Pr\{\mathcal{E}^{(3)}\}&=\Pr\left\{\mathcal{E}_{1,s}\cap\bar{\mathcal{E}}_{2,p}\right\}\\ &=\int_{0}^{\frac{R_{pt}^{\prime}}{P_{P}}}\delta_{pr}e^{-\delta_{pr}\gamma_{pr}}\left[\int_{\frac{R_{st}^{\prime}P_{P}}{P_{S}}\gamma_{pr}+\frac{R_{st}^{\prime}}{P_{S}}}^{\infty} \delta_{sr}e^{-\delta_{sr}\gamma_{sr}}d\gamma_{sr}\right]d\gamma_{pr}. \end{aligned}} $$ Then the integrations in (53)–(55) can be solved to obtain the results in (21a)–(21c). Since events \(\mathcal {E}^{(1)}\), \(\mathcal {E}^{(2)}\), \(\mathcal {E}^{(3)}\), and \(\mathcal {E}^{(4)}\) are mutually exclusive, the corresponding probability of event \(\mathcal {E}^{(4)}\) can be readily obtained by $$ \Pr\{\mathcal{E}^{(4)}\}=1-\Pr\left\{\mathcal{E}^{(1)}\right\}-\Pr\left\{\mathcal{E}^{(2)}\right\}-\Pr\left\{\mathcal{E}^{(3)}\right\}. $$ Appendix B: Derivations of \(\boldsymbol {O_{P}^{(1)}}\) From (22), with target rate R pt , we have the corresponding outage probability at PR $$ {\fontsize{9}{6}\begin{aligned} O_{P}^{(1)}&=\Pr\left\{R_{p}^{(1)}<R_{pt}\right\}\\ &\approx\Pr\left\{C\left(\frac{P_{P}\gamma_{pp}}{P_{S}\gamma_{sp}+1}+\frac{\alpha }{1-\alpha}\right)<R_{pt}\right\}\\ &=\left\{\begin{array}{ll} 0,& \alpha\geq\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\\ \Pr\left\{ \gamma_{pp}<\frac{\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}}\gamma_{sp}+\frac{R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}}{P_{P}}\right\}, &\alpha<\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\\ \end{array}\right.. \end{aligned}} $$ When \(\alpha <\frac {R_{\textit {pt}}^{\prime }}{R_{\textit {pt}}^{\prime }+1}\), similarly, we can derive \(O_{P}^{(1)}\) by integrating over the corresponding region $$ {\fontsize{8.3}{6}\begin{aligned} &\Pr\left\{\gamma_{pp}<\frac{\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}}\gamma_{sp}+\frac{R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}}{P_{P}}\right\}\\ &=\int_{0}^{\infty}\delta_{sp}e^{-\delta_{sp}\gamma_{sp}}\left[\int_{0}^{\frac{\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}}\gamma_{sp}+\frac{R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}}{P_{P}}}\delta_{pp} e^{-\delta_{pp}\gamma_{pp}}d\gamma_{pp}\right]d\gamma_{sp},~~~~~~~~ \end{aligned}} $$ where \(\phantom {\dot {i}\!}\delta _{\textit {sp}}e^{-\delta _{\textit {sp}}\gamma _{\textit {sp}}}\) and \(\phantom {\dot {i}\!}\delta _{\textit {pp}}e^{-\delta _{\textit {pp}}\gamma _{\textit {pp}}}\) denote the respective PDFs of γ sp and γ pp . Then, this integration can be solved to obtain the results in (23). Appendix C: Derivations of \(\boldsymbol {O_{P}^{(4)}}\) $$\begin{array}{@{}rcl@{}} O_{P}^{(4)}&=&\Pr\{R_{p}^{(4)}<R_{pt}\}\\ &=&\Pr\left\{\gamma_{pp}<\frac{R_{pt}^{\prime}P_{S}}{2P_{P}}\gamma_{sp}+\frac{R_{pt}^{\prime}}{2P_{P}}\right\}. \end{array} $$ By integrating over the corresponding region defined in (59), \(O_{P}^{(4)}\) can be derived by $$ { \fontsize{9}{6}\begin{aligned} O_{P}^{(4)}=\int_{0}^{\infty}\delta_{sp}e^{-\delta_{sp}\gamma_{sp}}\left[\int_{0}^{\frac{R_{pt}^{\prime}P_{S}}{2P_{P}}\gamma_{sp}+\frac{R_{pt}^{\prime}}{2P_{P}}}\delta_{pp} e^{-\delta_{pp}\gamma_{pp}}d\gamma_{pp}\right]d\gamma_{sp}. \end{aligned}} $$ Then, this integration can be solved to obtain the result in (29). 
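The result in (29) is not reproduced in this excerpt, but carrying out the double integration above gives \(O_{P}^{(4)}=1-\delta _{sp}e^{-\delta _{pp}R_{pt}^{\prime }/(2P_{P})}/\left (\delta _{sp}+\delta _{pp}R_{pt}^{\prime }P_{S}/(2P_{P})\right)\). The following Python sketch cross-checks this expression against a direct Monte Carlo evaluation of the region defined in (59); the parameter values are hypothetical, with the usual assumed threshold \(R^{\prime }=2^{2R}-1\).

import numpy as np

rng = np.random.default_rng(3)
P_P, P_S = 10 ** 3.0, 10 ** 1.0
d_pp, d_sp = 1.0, 10.0               # rate parameters of gamma_pp, gamma_sp
Rp = 2 ** 2 - 1                       # assumed threshold for R_pt = 1

a = Rp * P_S / (2 * P_P)              # slope of the outage region in (59)
b = Rp / (2 * P_P)                    # intercept of the outage region
O_closed = 1 - d_sp * np.exp(-d_pp * b) / (d_sp + d_pp * a)

# Monte Carlo over the region gamma_pp < a * gamma_sp + b
g_pp = rng.exponential(1 / d_pp, 2_000_000)
g_sp = rng.exponential(1 / d_sp, 2_000_000)
O_mc = np.mean(g_pp < a * g_sp + b)

print(f"closed form: {O_closed:.5f}, Monte Carlo: {O_mc:.5f}")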
Appendix D: Derivations of \(\boldsymbol {O_{S}^{(1)}}\) From (35), the outage probability at SR can be expressed as [34] $$\begin{array}{@{}rcl@{}} O_{S}^{(1)}&=&1-\Pr\left\{\mathcal{E}^{(1)}_{1,s} \cup \left(\mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{2,s}\right)\right\}\\ &=&1-\Pr\left\{\mathcal{E}^{(1)}_{1,s}\right\}-\Pr\left\{\mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{2,s}\right\}\\&&+\Pr\left\{ \mathcal{E}^{(1)}_{1,s}\cap \mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{2,s}\right\}. \end{array} $$ Since \(R_{1,s}^{(1)}\leq R_{2,s}^{(1)}\) always holds, we have \(\mathcal {E}^{(1)}_{1,s}\subseteq \mathcal {E}^{(1)}_{2,s}\), thus (61) can be rewritten as $$\begin{array}{@{}rcl@{}} O_{S}^{(1)}=1-\Pr\left\{\mathcal{E}^{(1)}_{1,s}\right\}-\Pr\left\{\mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{2,s}\right\}+\Pr\left\{ \mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{1,s}\right\}. \end{array} $$ From (34), by integrating over the corresponding regions of the events in (62), we have $$\begin{array}{@{}rcl@{}} \Pr\left\{\mathcal{E}^{(1)}_{1,s}\right\}\approx \int_{\frac{R_{st}^{\prime}-\frac{1-\alpha}{\alpha}}{P_{S}}}^{\infty}\delta_{ss}e^{-\delta_{ss}\gamma_{ss}}\left[ \int_{0}^{\frac{P_{S}}{\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)P_{P}}\gamma_{ss}-\frac{1}{P_{P}}} \delta_{ps}e^{-\delta_{ps}\gamma_{ps}}d\gamma_{ps}\right]d\gamma_{ss},~~~~~ \end{array} $$ $$\begin{array}{@{}rcl@{}} &&\Pr\left\{\mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{2,s}\right\}\approx\\ &&\left\{\begin{array}{ll} 1-\int_{0}^{\frac{R_{st}^{\prime}}{P_{S}}}\delta_{ss}e^{-\delta_{ss}\gamma_{ss}}\left[\int_{0}^{\frac{R_{st}^{\prime}}{(1-\alpha)P_{R}}-\frac{P_{S}}{(1-\alpha)P_{R}}\gamma_{ss}}\delta_{rs}e^{-\delta_{rs}\gamma_{rs}}d\gamma_{rs}\right]d\gamma_{ss}, & \alpha\geq\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\\ \int_{0}^{\frac{R_{st}^{\prime}}{P_{S}}}\delta_{ss}e^{-\delta_{ss}\gamma_{ss}}\left[\int_{\frac{R_{st}^{\prime}}{(1-\alpha)P_{R}}-\frac{P_{S}}{(1-\alpha)P_{R}}\gamma_{ss}}^{\infty}\delta_{rs}e^{-\delta_{rs}\gamma_{rs}}d\gamma_{rs}\right] & \\ \cdot\left[\int_{\frac{\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}}\gamma_{ss}+\frac{R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}}{P_{P}}}^{\infty}\delta_{ps}e^{-\delta_{ps}\gamma_{ps}}d\gamma_{ps}\right]d\gamma_{ss}+ &\\ \int_{\frac{R_{st}^{\prime}}{P_{S}}}^{\infty}\delta_{ss}e^{-\delta_{ss}\gamma_{ss}}\left[\int_{\frac{\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}}\gamma_{ss}+\frac{R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}}{P_{P}}}^{\infty}\delta_{ps}e^{-\delta_{ps}\gamma_{ps}}d\gamma_{ps}\right]d\gamma_{ss}, & \alpha<\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1} \end{array}\right. 
\end{array} $$ $$\begin{array}{@{}rcl@{}} &&\Pr\left\{ \mathcal{E}^{(1)}_{1,p}\cap \mathcal{E}^{(1)}_{1,s}\right\}\approx\\ &&\left\{\begin{array}{ll} \left\{\begin{array}{ll} 0,\\ \textmd{when}~\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)\geq1;\\ \int_{\frac{\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}+1\right)\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)}{P_{S}\left[1-\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)\right]}}^{\infty}\delta_{ss}e^{-\delta_{ss}\gamma_{ss}}\\ \cdot\left[\int_{\frac{\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}}\gamma_{ss}+\frac{R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}}{P_{P}}}^{\frac{P_{S}}{\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)P_{P}}\gamma_{ss}-\frac{1}{P_{P}}}\delta_{ps}e^{-\delta_{ps}\gamma_{ps}}d\gamma_{ps}\right]d\gamma_{ss},\\ \textmd{when}~\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)<1.\\ \end{array}\right. &\frac{1}{R_{st}^{\prime}+1}<\alpha<\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\\ 1, &\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1}\leq\alpha\leq\frac{1}{R_{st}^{\prime}+1}\\ \int_{0}^{\infty}\delta_{ss}e^{-\delta_{ss}\gamma_{ss}}\left[\int_{\frac{\left(R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}\right)P_{S}}{P_{P}}\gamma_{ss}+\frac{R_{pt}^{\prime}-\frac{\alpha}{1-\alpha}}{P_{P}}}^{\infty}\delta_{ps}e^{-\delta_{ps}\gamma_{ps}}d\gamma_{ps}\right]d\gamma_{ss}, &\alpha<\min\left(\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1},\frac{1}{R_{st}^{\prime}+1}\right)\\ \int_{\frac{R_{st}^{\prime}-\frac{1-\alpha}{\alpha}}{P_{S}}}^{\infty}\delta_{ss}e^{-\delta_{ss}\gamma_{ss}}\left[ \int_{0}^{\frac{P_{S}}{\left(R_{st}^{\prime}-\frac{1-\alpha}{\alpha}\right)P_{P}}\gamma_{ss}-\frac{1}{P_{P}}} \delta_{ps}e^{-\delta_{ps}\gamma_{ps}}d\gamma_{ps}\right]d\gamma_{ss}, &\alpha>\max\left(\frac{R_{pt}^{\prime}}{R_{pt}^{\prime}+1},\frac{1}{R_{st}^{\prime}+1}\right), \end{array}\right. \end{array} $$ respectively. Then, these integrations can be solved to obtain the results in (37a)–(37c). H Cheng, Y-D Yao, Cognitive-relay-based intercell interference cancellation in cellular systems. IEEE Trans. Veh. Tech. 59(4), 1901–1909 (2010). D Li, Cognitive relay networks: opportunistic or uncoded decode-and-forward relaying. IEEE Trans. Veh. Tech. 63(3), 1486–1491 (2014). JN Laneman, DNC Tse, GW Wornell, Cooperative diversity in wireless networks: efficient protocols and outage behavior. IEEE Trans. Inf. Theory. 50(12), 3062–3080 (2004). G Kramer, M Gastpar, P Gupta, Cooperative strategies and capacity theorems for relay networks. IEEE Trans. Inf. Theory. 51(9), 3037–3063 (2005). X Zhang, W Wang, X Ji, Multiuser diversity in multiuser two-hop cooperative relay wireless networks: system model and performance analysis. IEEE Trans. Veh. Tech. 58(2), 1031–1036 (2009). F Yang, M Huang, M Zhao, S Zhang, W Zhou, Cooperative strategies for wireless relay networks with cochannel interference over time-correlated fading channels. IEEE Trans. Veh. Tech. 62(7), 3392–3408 (2013). O Sahin, E Erkip, in Proceedings of IEEE Global Telecommunications Conference (GLOBECOM). Achievable rates for the Gaussian interference relay channel (IEEE, Washington, DC, 2007), pp. 1627–1631. O Sahin, E Erkip, in Proceedings of 41st Asilomar Conference on Signals, Systems and Computers (ACSSC).
On achievable rates for interference relay channel with interference cancelation (IEEE, Pacific Grove, CA, 2007), pp. 805–809. S Sridharan, S Vishwanath, S Jafar, S Shamai, in Proceedings of IEEE International Symposium on Information Theory (ISIT). On the capacity of cognitive relay assisted Gaussian interference channel (IEEE, Toronto, ON, 2008), pp. 549–553. S Rini, D Tuninetti, N Devroye, in Proceedings of IEEE Information Theory Workshop (ITW). Outer bounds for the interference channel with a cognitive relay (IEEE, Dublin, Ireland, 2010), pp. 1–5. Y Tian, A Yener, The Gaussian interference relay channel: improved achievable rates and sum rate upperbounds using a potent relay. IEEE Trans. Inf. Theory. 57(5), 2865–2879 (2011). S Rini, D Tuninetti, N Devroye, New inner and outer bounds for the memoryless cognitive interference channel and some new capacity results. IEEE Trans. Inf. Theory. 57(7), 4087–4109 (2011). S Rini, D Tuninetti, N Devroye, Inner and outer bounds for the Gaussian cognitive interference channel and new capacity results. IEEE Trans. Inf. Theory. 58(2), 820–848 (2012). H Charmchi, GA Hodtani, M Nasiri-Kenari, A new outer bound for a class of interference channels with a cognitive relay and a certain capacity result. IEEE Commun. Lett. 17(2), 241–244 (2013). S Rini, D Tuninetti, N Devroye, AJ Goldsmith, On the capacity of the interference channel with a cognitive relay. IEEE Trans. Inf. Theory. 60(4), 2148–2179 (2014). TS Han, K Kobayashi, A new achievable rate region for the interference channel. IEEE Trans. Inf. Theory. 27(1), 49–60 (1981). M Costa, Writing on dirty paper. IEEE Trans. Inf. Theory. 29(3), 439–441 (1983). O Sahin, O Simeone, E Erkip, Interference channel with an out-of-band relay. IEEE Trans. Inf. Theory. 57(5), 2746–2764 (2011). Y Tian, A Yener, Symmetric capacity of the Gaussian interference channel with an out-of-band relay to within 1.15 bits. IEEE Trans. Inf. Theory. 58(8), 5151–5171 (2012). S Haykin, Cognitive radio: brain-empowered wireless communications. IEEE J. Select Areas Commun. 23(2), 201–220 (2005). Y Xing, CN Mathur, MA Haleem, R Chandramouli, KP Subbalakshmi, Dynamic spectrum access with QoS and interference temperature constraints. IEEE Trans. Mobile Comput. 6(4), 423–433 (2007). A Goldsmith, SA Jafar, I Marić, S Srinivasa, Breaking spectrum gridlock with cognitive radios: an information theoretic perspective. Proc. IEEE. 97(5), 894–914 (2009). Y-C Liang, K-C Chen, GY Li, P Mahonen, Cognitive radio networking and communications: an overview. IEEE Trans. Veh. Tech. 60(7), 3386–3407 (2011). Y Han, A Pandharipande, TS Ting, Cooperative decode-and-forward relaying for secondary spectrum access. IEEE Trans. Wireless Commun. 8(10), 4945–4950 (2009). Q Li, SH Ting, A Pandharipande, Y Han, Cognitive spectrum sharing with two-way relaying systems. IEEE Trans. Veh. Tech. 60(3), 1233–1240 (2011). A Jovičić, P Viswanath, Cognitive radio: an information-theoretic perspective. IEEE Trans. Inf. Theory. 55(9), 3945–3958 (2009). M Cheng, J Mao, L Li, in Proceedings of 4th International Conference on Wireless Communications Networking and Mobile Computing (WiCOM). Investigation of mutual interference channel with relay in cognitive transmission (IEEE, Dalian, China, 2008), pp. 1–4. W Jaafar, W Ajib, D Haccoun, in Proceedings of IEEE Global Communications Conference (GLOBECOM). A novel relay-aided transmission scheme in cognitive radio networks (IEEE, Houston, TX, 2011), pp. 1–6.
W Jaafar, W Ajib, D Haccoun, in Proceedings of IEEE International Conference on Communications (ICC). Opportunistic adaptive relaying in cognitive radio networks (IEEE, Kyoto, Japan, 2012), pp. 1811–1815. TMC Chu, H Phan, H-J Zepernick, in Proceedings of 23rd IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). Amplify-and-forward relay assisting both primary and secondary transmissions in cognitive radio networks over Nakagami-m fading (IEEE, Sydney, Australia, 2012), pp. 932–937. Q Li, A Pandharipande, SH Ting, in Proceedings of IEEE Wireless Communications and Networking Conference (WCNC). Spectrum sharing on interference channels with a cognitive relay (IEEE, Shanghai, China, 2013), pp. 2982–2987. TM Cover, JA Thomas, Elements of Information Theory, 2nd edition (John Wiley & Sons, Inc., Hoboken, New Jersey, 2006). Q Li, SH Ting, A Pandharipande, Y Han, Adaptive two-way relaying and outage analysis. IEEE Trans. Wireless Commun. 8(6), 3288–3299 (2009). A Papoulis, SU Pillai, Probability, Random Variables and Stochastic Processes, 4th edition (McGraw Hill, New York, 2002). V Asghari, S Aissa, End-to-end performance of cooperative relaying in spectrum-sharing systems with quality of service requirements. IEEE Trans. Veh. Tech. 60(6), 2656–2668 (2011). K Tourki, KA Qaraqe, M-S Alouini, Outage analysis for underlay cognitive networks using incremental regenerative relaying. IEEE Trans. Veh. Tech. 60(2), 721–734 (2013). F Guimaraes, D Da Costa, T Tsiftsis, C Cavalcante, G Karagiannidis, Multiuser and multirelay cognitive radio networks under spectrum-sharing constraints. IEEE Trans. Veh. Tech. 63(1), 433–439 (2014). The authors would like to acknowledge the support from the Ministry of Science and Technology (MOST) of China under grants 2014DFA11640 and 2015DFG12580, the National Natural Science Foundation of China (NSFC) under grants 61301128, 61461136004, and 61271224, the NSFC Major International Joint Research Project under grant 61210002, the Hubei Provincial Science and Technology Department under grants 2011BFA004 and 2013BHE005, the Fundamental Research Funds for the Central Universities under grant 2015XJGH011, and the Special Research Fund for the Doctoral Program of Higher Education (SRFDP) under grant 20130142120044. This research is partially supported by EU FP7-PEOPLE-IRSES, project acronym S2EuNet (grant no. 247083), project acronym WiNDOW (grant no. 318992), and project acronym CROWN (grant no. 610524). Author affiliations: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China (Qiang Li & Xiaohu Ge); Philips Research, High Tech Campus, 5656 AE Eindhoven, Netherlands (Ashish Pandharipande); Ruckus Wireless, Singapore (See Ho Ting). Correspondence to Xiaohu Ge. Li, Q., Pandharipande, A., Ting, S.H. et al. Spectrum sharing on interference channels with a cognitive relay. J Wireless Com Network 2015, 185 (2015). doi:10.1186/s13638-015-0408-0 Keywords: Cognitive spectrum sharing; Interference channel with a cognitive relay; Successive interference cancellation; Dirty paper coding
Methane formation in tropical reservoirs predicted from sediment age and nitrogen Anastasija Isidorova (ORCID: orcid.org/0000-0002-4634-527X), Charlotte Grasset, Raquel Mendonça & Sebastian Sobek Freshwater reservoirs, in particular tropical ones, are an important source of methane (CH4) to the atmosphere, but current estimates are uncertain. The CH4 emitted from reservoirs is microbially produced in their sediments, but at present, the rate of CH4 formation in reservoir sediments cannot be predicted from sediment characteristics, limiting our understanding of reservoir CH4 emission. Here we show through a long-term incubation experiment that the CH4 formation rate in sediments of widely different tropical reservoirs can be predicted from sediment age and total nitrogen concentration. CH4 formation occurs predominantly in sediment layers younger than 6–12 years and beyond these layers sediment organic carbon may be considered effectively buried. Hence mitigating reservoir CH4 emission via improving nutrient management and thus reducing organic matter supply to sediments is within reach. Our model of sediment CH4 formation represents a first step towards constraining reservoir CH4 emission from sediment characteristics. Methane (CH4) is a potent greenhouse gas that contributes to climate change, with a global warming potential 34 times greater than that of carbon dioxide (CO2) on a 100-year time scale1. Methane fluxes from hydroelectric reservoirs, especially from tropical reservoirs, have been debated over the past years2,3,4. Even though hydropower is often thought of as a 'green' source of energy, studies suggest that reservoirs are not carbon neutral and in extreme cases might even have a higher carbon footprint than fossil-fuel energy, particularly those situated in the tropics5,6. Tropical reservoirs have been estimated to emit about 3.0 Tg CH4-C year−1 2 and to contribute 64% of the total reservoir CH4 emission3. Conversely, latitude was not a strong predictor of reservoir CH4 emission in a more recent global analysis4. Apparently, the CH4 emission from reservoirs is currently not well understood.
At an estimated global organic carbon (OC) burial rate of 0.06 Pg C year−1 in reservoir sediments7, there is apparently a large supply of organic substrate to the methanogenic microbes that live in anoxic sediments of reservoirs. Important factors that influence rates of CH4 formation (net production, i.e. production minus oxidation) in sediment are temperature as well as the organic matter (OM) characteristics (i.e. its reactivity or bioavailability) and the OM supply rate. Strong exponential relationships between temperature and CH4 formation rates were found in sediments of lakes8,9,10,11 and rivers12, and ecosystem-scale CH4 emission strongly depends on temperature13. In addition, methanogenesis relies on OM characteristics and supply rate. There is evidence that more CH4 is produced from autochthonous OM (derived from aquatic plants and phytoplankton) than from allochthonous OM (derived from land plants and soils)14. Also, a high OM supply rate can stimulate high sediment CH4 formation and emission, particularly if the OM is biologically reactive15. However, allochthonous OM might degrade more slowly but produce the same amount of CH4 over a longer period of time16, since allochthonous OM contains more support tissues that are processed more slowly16,17. When OM is decomposed under steady conditions, its CH4 formation decreases over time due to rapid initial decay of labile substances16. However, at timescales relevant in sediments (years), the effect of decreasing reactivity during OM decomposition on sediment CH4 formation is currently not clear. While it has been shown that CH4 formation decreases over time18, the ageing effect on sediment OM reactivity and CH4 formation was not quantified. Therefore, sediment CH4 formation can at present not be predicted at environmentally relevant timescales (years), severely limiting our understanding of the globally important function of reservoir sediments as both sinks of OC and sources of atmospheric CH4. Here, we investigate the impact of OM source, characteristics and age on CH4 formation rates in sediment of three widely different tropical reservoirs. We performed a long-term incubation experiment (ca 739 days) of sediment varying in age between 1 and 48 years, in order to understand CH4 formation in reservoir sediments over the typical lifetime of reservoirs. We selected three reservoirs of different productivity, located in different biomes, and therefore spanning a range of sediment OM characteristics. We expected a decrease of CH4 formation potential with increasing sediment age, and lower CH4 formation in the sediment of a reservoir with little autochthonous OM production than in a reservoir with high autochthonous OM production. Sediment samples were obtained from three Brazilian reservoirs of different trophic status: Chapéu D' Uvas (CDU), Curuá-Una (CUN) and Funil (FUN). CDU is an oligotrophic reservoir (mean total phosphorus (TP) concentration, 12 µg L−1) that was constructed in 1994 for water supply19. CUN is a mesotrophic hydroelectric reservoir (mean TP, 19 µg L−1) that was constructed in 197719. FUN is a eutrophic hydroelectric reservoir (mean TP, 34 µg L−1) that was built in 196920. CDU and FUN are located in the Atlantic Forest biome, and CUN is located in the Amazon. In each reservoir, coring sites (6 in CDU, 4 in FUN and 7 in CUN) were distributed across the entire reservoir (Supplementary Figs 1–3), from the river inflow areas to the dam, to cover gradients in sediment characteristics.
Sediment cores (one per site) were taken with a UWITEC gravity corer equipped with a hammer. Tubes of various lengths (0.6 to 3 m) were used depending on the expected sediment depth. In 14 of 17 sediment cores, we reached the pre-flooding soil. The depth of the interface between pre-flooding soil and reservoir sediment (sediment depth) was visually detected and recorded. If the pre-flooding soil was not reached, sediment depth was estimated from other cores sampled in the same area. Samples from CDU were taken on the 7th of March 2016, from FUN on the 14th of March 2016, and from CUN on the 20th–28th of February 2016. Sediment cores were stored in the dark at room temperature, i.e. similar to in-situ temperatures in these tropical systems, and sliced the day after sampling. Samples were sliced at 4 cm intervals in order to ensure a sufficient amount of material for the experiment. The sub-surface (2–6 cm) layer of all sediment cores was used for the experiment to represent rather fresh but already anoxic sediment. We considered 2 cm sediment depth to be anoxic because oxygen penetration depths of 0.3–1.2 cm were reported from a Brazilian reservoir21. If possible, the 4 cm layer of sediment just above the soil-sediment interface was used, and up to 3 additional samples, distributed over the sediment core depth down to the soil-sediment interface, were obtained; for example, the longest core was FUN_48 (204 cm), with the soil-sediment interface at 192 cm, and this core was sampled at 2–6 cm, 32–36 cm, 60–64 cm, 116–120 cm, and 172–176 cm. The total number of sediment samples per core varied between one and four, depending on the total length of each sediment core. In total, 17 samples were obtained from 2–6 cm deep sediment, and 25 samples were from deeper sediment layers. After slicing, the headspace in the sample containers (PP jars with LDPE snap lock) was filled with N2, and samples were placed in a N2-filled glove bag within 1–2 hours after slicing. The glove bag was filled with N2 and evacuated using a vacuum pump 2 times to minimize the oxygen exposure of the samples. Each sample was homogenized in the glove bag, and a sub-sample of 10 mL of homogenized sediment was transferred into pre-weighed 60 mL serum vials. 2.5 or 5 mL (depending on the sediment density) of sterile-filtered (0.2 µm GF filter), N2-bubbled water from the respective reservoir was added to the samples. 3 replicates of each sample were prepared. The samples were sealed and flushed with N2 after preparation. Vials were weighed before and after filling to determine the exact mass of sediment in each vial. As a control treatment, 3 replicates of 10 mL of sterile water (filtered with a 0.2 µm GF filter) from each reservoir were prepared in the same type of vials as the sediment samples and sampled in the same way. Dissolved oxygen concentration was checked in these controls at the beginning of the incubation with optical sensors (PreSens) and was below the detection level (0.1 mg/L). The rest of the sediment was dried in a custom-made oven at about 55 °C, and water content was gravimetrically determined. Samples were incubated anoxically at 25 °C in the dark throughout the whole experiment (739 days). The lag phase until the onset of growth of methanogens in similarly handled sediment samples is very short (a few days16) in comparison to the experiment duration, and thus considered negligible. The CH4 formation rate was measured in the samples at 7 sampling occasions.
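The conversion of these measurements into net CH4 formation rates, described in detail in the next paragraph, can be summarized in a short Python sketch. It is a simplified illustration: the vial geometry and mole fractions below are hypothetical, and a single approximate Henry constant for CH4 at 25 °C is used in place of the temperature-dependent solubility coefficients of Wiesenburg and Guinasso Jr.

R_GAS = 0.082057  # L atm mol^-1 K^-1
T = 298.15        # incubation temperature, 25 degrees C
K_H = 1.4e-3      # mol L^-1 atm^-1, approximate CH4 Henry constant at 25 C

def total_ch4_mol(x_ch4, p_atm, v_head_L, v_water_L):
    # Total CH4 (mol) in a vial from the headspace mole fraction measured
    # by GC after shaking: ideal gas law for the headspace, Henry's law
    # for the CH4 dissolved in the pore water.
    p_ch4 = x_ch4 * p_atm                      # CH4 partial pressure (atm)
    n_gas = p_ch4 * v_head_L / (R_GAS * T)     # headspace CH4
    n_aq = K_H * p_ch4 * v_water_L             # dissolved CH4
    return n_gas + n_aq

# Hypothetical start/end measurements of one ~2-week sampling occasion
n0 = total_ch4_mol(x_ch4=0.002, p_atm=1.1, v_head_L=0.050, v_water_L=0.010)
n1 = total_ch4_mol(x_ch4=0.015, p_atm=1.1, v_head_L=0.050, v_water_L=0.010)
days, dry_mass_g = 14.0, 2.5
rate = (n1 - n0) * 1e6 / days / dry_mass_g  # umol CH4 per g dry sediment per day
print(f"net CH4 formation: {rate:.3f} umol g^-1 d^-1")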
Each sampling occasion comprised two measurements, at the start and end of an incubation interval of about 2 weeks. Exact days of sampling occasions varied for each sample, and were on average day 1–9 for the 1st sampling, day 9–28 for the 2nd sampling, day 72–91 for the 3rd sampling, day 185–200 for the 4th sampling, day 275–290 for the 5th sampling, day 423–442 for the 6th sampling and day 719–736 for the 7th sampling (Supplementary Table 3). Before and after each sampling occasion, samples were flushed with N2 to prevent any possible inhibition of methanogenesis by CH4, CO2 or other volatile and potentially inhibitory substances22,23. The thermodynamics of metabolic reactions imply that the change in free energy depends on product concentration24, and the resulting negative effect on methanogenic metabolism has been observed in incubation and modelling studies23,25,26. To measure the initial gas concentration in the vials (at the start of sampling occasions), 8 mL of N2 was added to each sample vial, and the pressure in the vial was then measured with a needle barometer. Samples were vigorously shaken for 1 min to allow equilibration of CH4 between the headspace and the pore-water. Then, 7 mL of the headspace gas was extracted in a 10 mL syringe. At the end of each sampling occasion (after about 2 weeks) the procedure was repeated: 8 mL of N2 was added to the samples and the gas was extracted after shaking the vial for 1 min. Sample dilution after addition of N2 was calculated from the pressure difference measured before and after the N2 addition. Samples were injected into a GC equipped with a flame ionization detector (FID) (Agilent Technologies, 7890 A GC system) on the day they were obtained. Concentrations of CH4 and CO2 in the sediment pore water were calculated according to the ideal gas law and Henry's law, where coefficients for CH4 solubility in water at varying temperatures were obtained from Wiesenburg and Guinasso Jr27, and coefficients for CO2 were obtained from Weiss28. At the end of the experiment, after the last CH4 measurement was performed, the sediment was extracted from the serum vials. The sediment of replicates was pooled, mixed and dried in an oven at 60 °C; we decided to pool the replicates because no significant differences were detected in their CH4 formation. The total C (TC) and total N (TN) content in the sediment were measured in each sample at the beginning and end of the incubation with a Costech Elemental Analyzer. We found no significant contribution of inorganic C in our samples.

Data analysis and calculations

The rates of CH4 and CO2 formation were obtained as the difference in concentration between two measurements divided by the time interval. The obtained rates represent net CH4 formation rates, including both production and any potential consumption of CH4 (e.g. by anaerobic CH4 oxidation29,30) in the sediment. The age of individual sediment layers was estimated from the multi-year average sedimentation rate, which can be calculated precisely from the sediment thickness and the time elapsed since dam closure, according to the formula:

$$\text{sample age} = \frac{\text{sediment layer depth}}{\text{total sediment depth}} \cdot \text{reservoir age} + \text{incubation length}$$

where sediment layer depth is the average depth of a slice, and total sediment depth is the distance between the sediment surface and the interface between pre-flooding soil and reservoir sediment.
Reservoir age is the time since damming at the sampling day, and incubation length is the time between sediment sampling and the day when the CH4 formation rate was obtained. Sediment age thus represents a measure of time since sediment deposition (and not since OM fixation by photosynthesis or since sediment formation in the aquatic system), assuming a constant sedimentation rate. We acknowledge that there probably is year-to-year variability in the sedimentation rate, e.g. due to hydrological extremes, which adds uncertainty to the age estimate of individual sediment layers. However, since there is no reason to assume any systematic change in sedimentation rate with reservoir age, the uncertainty in the estimated age of individual sediment layers is unlikely to suffer from any systematic bias, and is therefore considered to add random error to the age estimates. Because TC and TN contents were only measured at the start and end of the experiment, we assumed a linear change in TC and TN through the experiment to estimate TC and TN at every CH4 measurement occasion. Although it has been shown that the decrease of sediment TC and TN is exponential over timescales of several years to decades31, a linear approximation of the TC and TN decrease over time is reasonable for relatively short timescales as used in this experiment. Accordingly, when we compared linear TN decrease with exponential TN decrease, modelled using the TN decay rate of 0.16 year−1 given by Gälman et al.31, the difference between TN estimates at any point in time was <0.002% TN, i.e. the potential error of assuming linearity is below analytical precision (data not shown). Since N is preferentially lost over C during sediment organic matter degradation31, the effect of the interpolation approach on estimated TC will be even smaller than for TN. In some samples of the deepest sediment layers that had the lowest CH4 formation rates, the first 1–2 measurements were excluded because of possible oxygen contamination during sample preparation that delayed the establishment of methanogenic conditions (low microbial activity in old sediment delays the consumption of trace oxygen). To estimate sediment CH4 formation rates over the reservoir lifetime, we fitted an exponential decay model with 3 OM pools of different reactivity, with core as a factor (gnls(CH4 ~ a * exp(−b * Age) + c))32,33. The significance of the model parameters was assessed with the ANOVA function, and the quality of the model was checked by visual examination of the residuals and of predicted against measured data. We also tested other non-linear models (an exponential decay model with 2 OM pools, and a reactivity continuum model), but they did not converge. From the exponential decay models we defined a "transition age" beyond which microbial OM degradation is no longer quantitatively significant for the standing stock of OC but proceeds at a low background level, i.e., the age beyond which sediment OM may be considered "buried" even though microbial activity does not cease entirely. In order to model sediment CH4 formation over the typical lifetime of reservoirs (100 years), and in order to estimate the "transition age" to low background CH4 formation, we used the exponential decay models described above. We define the transition age as the age at which the slope of the modelled curve corresponds to an angle of 179°, i.e., almost flat and not distinguishable from 180°, and thus indicative of a constantly low background rate of CH4 formation that no longer changes over time (Supplementary Fig. 4).
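As an illustration of these two calculations, the following is a minimal R sketch; the function and argument names are hypothetical, and the coefficients a and b are assumed to come from a fitted decay curve of the form CH4 = a·exp(−b·Age) + c. At the transition age, the slope of the modelled curve equals tan(179°) = −tan(1°), which can be solved for age analytically:

```r
# Estimated age (years) of a sediment layer, following the formula above;
# argument names are placeholders
sample_age <- function(layer_depth, total_depth, reservoir_age, incubation_length) {
  layer_depth / total_depth * reservoir_age + incubation_length
}

# Transition age: age t at which the derivative of a * exp(-b * t) + c
# equals tan(179 deg) = -tan(1 deg), i.e., -a * b * exp(-b * t) = -tan(pi / 180)
transition_age <- function(a, b) {
  log(a * b / tan(pi / 180)) / b
}

sample_age(layer_depth = 34, total_depth = 192, reservoir_age = 48, incubation_length = 1)
transition_age(a = 10, b = 0.3)  # illustrative coefficients only
```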
We then integrated CH4 formation over the assumed typical reservoir lifetime of 100 years, considering the yearly increase in sediment thickness, and calculated the total amount of CH4 that would be formed in each sediment core layer over 100 years. We calculated the average and standard deviation of CH4 formed for each sampled sediment core separately. The effect of sample age and TN on the CH4 formation rates was modelled with a linear model in R (lm) including the interaction between age and TN (lm(ln(CH4) ~ ln(Age) * TN)). In the model, the CH4 formation rate per sediment dry weight (µmol g(dw)−1 d−1) was used because TC and TN are dependent on each other. The CH4 formation rates and sample age were ln transformed. The model assumptions were checked through model residual plots. The correlation between transition age and C:N ratio was checked with a Pearson correlation. The difference between fresh and old sediment CH4 formation was checked with repeated-measures ANOVA. In both cases, model assumptions were checked through residual plots. We used R (version 3.4.3)34 for data analysis, modelling and plotting.

Sediment characteristics

The sediment samples used in our study differed in sediment characteristics (Table 1). The Amazonian CUN reservoir had the highest mean TC content (6.6%) as well as the highest mean C:N ratio (15.2), while the eutrophic FUN reservoir showed the lowest values (2.5% TC and C:N 10.6, Table 1). TN content was in general similar among the reservoirs, with slightly higher values in CUN (Table 1).

Table 1 Pre-incubation sediment water content, total carbon content, total nitrogen, C:N ratio and total number of sediment slices used for the experiment (n) in the 3 studied reservoirs (mean (min–max)).

CH4 formation rates measured during the incubation experiment

CH4 formation per sediment dry weight (µmol g(dw)−1 d−1) significantly correlated with the amount of TC in the sediment (R2 = 0.56, p < 2.2 × 10−16, data not shown); therefore, CH4 formation rates were normalized for TC (µmol gC−1 d−1). There was a significant difference between CH4 production in sub-surface (2–6 cm) and deeper sediment layers (p < 2 × 10−16, Fig. 1).

Fig. 1 CH4 formation rates over the time of the incubation experiment in the three reservoirs. Black lines are sub-surface sediment (2–6 cm depth) and blue lines are deeper sediment. Circles are averages of three replicates of CH4 formation rates, with a mean stdev of 8.6% of the mean.

CH4 formation rates decreased over the time of the 739 d incubation experiment in sub-surface sediment layers (2–6 cm), but remained rather stable in deeper sediment layers (Fig. 1). CO2 formation rates also decreased over time for sub-surface sediment layers and, in FUN, also for deeper sediment (Supplementary Fig. 5). The highest CH4 formation rate was measured in a FUN sub-surface sediment sample (core FUN_46 at the first measurement occasion at day 7: 30.4 ± 2.7 µmol gC−1 d−1 (mean ± sd)). At the last measurement at day 736, the same sample had a CH4 formation rate of 2.4 ± 0.1 µmol gC−1 d−1. On average, FUN had the highest CH4 formation rates in sub-surface sediment over the whole incubation period (8.8 ± 7.0 µmol gC−1 d−1 (mean ± sd)), while in CUN and CDU mean CH4 formation rates in sub-surface layers were lower over the whole incubation period (2.7 ± 2.3 µmol gC−1 d−1 and 3.9 ± 1.8 µmol gC−1 d−1, respectively). CH4 formation rates decreased most rapidly in FUN sub-surface sediment over the time of the incubation.
The final CH4 formation rates (day 739) in sub-surface sediment in FUN were 24 ± 12% of the initial rates, while in CUN and CDU the CH4 formation rates decreased less and were 50 ± 16% of the initial rates in CUN and 50 ± 15% in CDU. CH4 formation rates of the deepest sediment were significantly different between reservoirs (p < 0.001, Kruskal–Wallis test). CDU had higher mean CH4 formation rates over the whole incubation period in bottom layers (1.07 ± 0.49 µmol gC−1 d−1) than FUN (0.44 ± 0.14 µmol gC−1 d−1) and CUN (0.44 ± 0.16 µmol gC−1 d−1).

CH4 formation rates over sediment age

CH4 formation rates decreased exponentially with sediment age (Fig. 2). In none of the samples was the CH4 formation rate zero.

Fig. 2 CH4 formation rates in sediment over age in the 3 reservoirs. Lines and points of the same colour are samples taken from the same sediment core, and correspond to the colour of the core name in the legend. Lines are exponential decay models of the cores (model statistics are found in Supplementary Table 1). Note the differences in scales.

The decrease of the CH4 formation rate over time was most pronounced in FUN (Fig. 2). The highest CH4 formation in FUN was measured in 1-year-old sediment (30.4 ± 2.7 µmol gC−1 d−1) and the lowest in 42.9-year-old sediment (0.2 ± 0.005 µmol gC−1 d−1). We could also observe a clear exponential decrease in sediment CH4 formation in CUN, where the estimated age of the samples varied from 2 to 38 years. In CUN, the highest CH4 formation of 11.4 ± 0.6 µmol gC−1 d−1 was measured in 7-year-old sediment and the lowest CH4 formation rate was measured in 36-year-old sediment (0.2 ± 0.01 µmol gC−1 d−1). CDU is the youngest of the studied reservoirs. Estimated sediment age in CDU varied from 1 to 22 years. The highest CH4 formation in CDU was measured in 6-year-old sediment (8.0 ± 0.2 µmol gC−1 d−1) and the lowest was 0.1 ± 0.03 µmol gC−1 d−1 in 18-year-old sediment. Models for CH4 formation over time achieved the best fit on deeper sediment cores, where a wide age gradient was covered by many samples. In cases where the age gradient was very short, model fits were sometimes poor. Model statistics and coefficients can be found in Supplementary Table 1.

CH4 formation rates as a function of TN and sediment age

Merging all 764 measured CH4 formation rates in these three widely different reservoir sediments (Table 1) into one dataset, we found that CH4 formation rates could be predicted from the sediment TN (%) and the age of the sediment (R2 = 0.81, p < 2 × 10−16; Fig. 3; see Supplementary Table 2 for model statistics):

$$\ln(\mathrm{CH_4\,formation}) = -0.59 \cdot \ln(\mathrm{Age}) + 6.46 \cdot \mathrm{TN} - 0.99 \cdot \ln(\mathrm{Age}) \cdot \mathrm{TN} - 3.12$$

where CH4 formation is expressed in µmol g(dw)−1 d−1, TN is expressed in mass %, and age is expressed in years.

Fig. 3 CH4 formation (on ln scale) in all sediment samples as a function of TN and ln(Age). The line is the 1:1 line.

The back-transformation of ln-transformed predictions renders residuals that are heavily skewed, leading to underestimation of the predicted mean CH4 formation rate35.
Therefore, we corrected the predicted CH4 formation rate following36:

$$\mathrm{CH_4\,formation_{corr}} = \exp\left(\ln \mathrm{CH_4\,formation} + 0.5 \cdot s^2\right)$$

$$s^2_{\mathrm{corr}} = \left(\mathrm{CH_4\,formation_{corr}}\right)^2 \cdot \left(\exp(s^2) - 1\right)$$

where ln CH4 formation is the ln-transformed predicted CH4 formation rate and s2 = 0.28 is the residual variance of the model. \(\mathrm{CH_4\,formation_{corr}}\) is the corrected CH4 formation rate and \(s^2_{\mathrm{corr}}\) is the corrected variance. We report here the first model of CH4 formation in freshwater sediment that is applicable at relevant timescales (Fig. 3). Accordingly, CH4 formation rates spanning more than 2 orders of magnitude could be predicted for sediments that cover a wide gradient in sediment characteristics (Table 1) and span decades in age. Sediment age (i.e. time since sediment deposition) was an important predictor of CH4 formation (Supplementary Table 2), which can be seen in the exponential decrease of CH4 formation as the sediment got older, both in sub-surface sediment samples during our 739 days of experiment (Fig. 1), and when comparing sediment layers along a decadal age gradient (Fig. 2). We note that age was important in spite of the random noise inflicted on the age estimate of individual sediment layers by the unknown year-to-year variability in sediment accumulation. The sediment TN concentration was the other important predictor of the CH4 formation model, which explained 81% of the variability in CH4 formation rates (Fig. 3). Even though a significant relationship between TN and CH4 formation in freshwater sediment was previously reported18, no relation to sediment age has been made beyond the timescales of incubation experiments (e.g.18). In contrast, our analysis revealed that not only the TN concentration was important for CH4 formation, but also the change of TN over sediment age (i.e. the interaction term age * TN; Eq. 2). We suggest that the importance of TN may be related to the amount, source and diagenetic state of the sediment OM. First, we found that in our dataset, the sediment TN concentration was also positively related to the TC concentration (TN = 0.067 * TC + 0.077, p < 2.2 × 10−16, R2 = 0.90, not shown). Second, the TN concentration in the sediment is representative of the OM source37: autochthonous OM is comparatively rich in protein and poor in cellulose (i.e. high in TN), while allochthonous OM is protein-poor and rich in lignocellulose (i.e. low in TN). It was previously shown that N-rich autochthonous OM is more biodegradable38 and produces more CH4 than the degradation of allochthonous OM14 (even if very fresh allochthonous OM can also give rise to substantial CH4 production16), explaining the relationship between sediment TN and CH4 production (Eq. 2;18). Third, N is preferentially consumed during microbial degradation of sediment OM (e.g.31), meaning that over time, the sediment OM becomes poorer in TN and less reactive, in accordance with our finding that the interaction term (age * TN) was significant in the model. In fact, sediment age is itself a proxy of OM reactivity, given the steep exponential decrease of sediment OC decay with increasing sediment age39. The model presented here for the first time quantifies the influence of OM input (amount and source) and its diagenetic change, approximated by age, TN concentration and their interaction, on CH4 formation rates from sediment OM over decadal timescales.
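A small R sketch of how Eq. 2 and the back-transformation correction above could be applied in practice; the function name is hypothetical, the coefficients are those of Eq. 2, and s2 = 0.28 is the residual variance stated above:

```r
# Predicted CH4 formation rate (umol g(dw)-1 d-1) from sediment age (years)
# and TN (mass %), with log-normal back-transformation correction
predict_ch4 <- function(age, tn, s2 = 0.28) {
  ln_pred <- -0.59 * log(age) + 6.46 * tn - 0.99 * log(age) * tn - 3.12
  ch4_corr <- exp(ln_pred + 0.5 * s2)       # bias-corrected prediction
  var_corr <- ch4_corr^2 * (exp(s2) - 1)    # corrected variance
  data.frame(age = age, tn = tn, ch4 = ch4_corr, variance = var_corr)
}

predict_ch4(age = c(1, 10, 40), tn = 0.3)  # illustrative inputs
```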
In addition, the model is derived from tropical reservoirs of different trophic status and biomes, and in spite of the considerable variability of CH4 formation between reservoirs, the predicted CH4 formation rates for all cores from all reservoirs converge on one line (Fig. 3). For this reason, it seems likely that this relationship is valid in tropical reservoirs in general, and that CH4 formation can be predicted from TN and age, factors that can be relatively easily measured or approximated. Importantly, the rates presented here were derived at a standardized temperature of 25 °C. In order to derive in-situ CH4 rates, our model needs to be used in conjunction with the well-documented temperature dependence of CH4 production40. Also, before applying it to other systems, or beyond the data domain from which it was constructed, the model first needs to be verified. CH4 formation decreased with age in all sediment cores; however, CH4 formation never reached zero, not even at more than 40 years of age. Although CH4 formation slows down rapidly after sediment is deposited, CH4 continues to be produced in older sediment layers. For this reason, it is strictly speaking not warranted to speak of OC burial in reservoir sediment, simply because it is in many cases relatively young and thus still being degraded. However, we may determine the age at which CH4 formation asymptotically reaches a stable and low 'background' rate. Sediment younger than that may still be considered to degrade significantly, while sediment older than this threshold may be considered largely stabilized, i.e., characterized by low and stable CH4 formation (see Methods). This age may therefore serve as an operationally defined age beyond which sediment OC may be considered "buried", i.e. not degraded to an extent that is quantitatively significant for the standing stock of OC. The age of transition to low background CH4 formation calculated by this approach was about 11 years in CDU, 12 years in CUN and 6 years in FUN (Table 2), which corresponds to 13, 9 and 23 cm of sediment depth in CDU, CUN and FUN, respectively. Assuming a 100-year lifetime for these reservoirs, around 38% (Table 2) of the total time-integrated sediment CH4 formation would take place in sediment layers that are beyond the transition age threshold and that have reached a low and stable background CH4 formation. In other words, sediment layers that we often consider biologically inactive might significantly contribute to the overall sediment CH4 production.

Table 2 Age of transition to low background CH4 formation, defined as the age at which the slope of the exponential decay curve reaches 179° (see Methods), and the corresponding sediment depth.

Despite a significant contribution of older sediment layers to CH4 formation (average measured CH4 formation in sediment older than the transition age (179° criterion): 1.2 ± 0.6 µmol gC−1 d−1 in CDU, 0.5 ± 0.2 µmol gC−1 d−1 in CUN and 0.8 ± 0.1 µmol gC−1 d−1 in FUN), the amount of C that is degraded through CH4 formation is quantitatively negligible compared to typical carbon burial rates. According to our measured CH4 formation rates in sediment older than the transition age, on average 0.3 ± 0.2% of the C contained in a sediment layer is transformed into CH4 in a year. We used the C:N ratio of the surface sediment to evaluate whether the source of the OM deposited onto the sediment surface has an effect on the transition age (Fig. 4).
We found a significant increase in the transition age with increasing C:N ratio (p = 0.012, R2 = 0.34, n = 17). This indicates that autochthonous OM (low C:N) is relatively easily decomposed and readily transformed into CH416, but once the labile substances are decomposed, the degradation rate declines steeply (Fig. 2). Allochthonous OM (high C:N), on the other hand, breaks down more slowly than autochthonous OM, and the slow anaerobic decay of the refractory fraction of allochthonous OM can provide a continuous source of CH4 in reservoir sediment over the reservoir lifetime.

Fig. 4 The transition age in all 3 reservoirs vs the C:N ratio of the surface sediment of the same sediment core for which the transition age was determined (surface sediment C:N ratio used as a proxy for the reactivity of C deposited onto the sediment surface).

Our study shows that the CH4 formation rate of reservoir sediments is predictable, although the fluxes may further vary with sediment temperature and dissolved oxygen concentration40. As CH4 emission from reservoirs is very difficult to measure representatively, due to the strong spatial and temporal variability of CH4 emission via diffusion19 and particularly via ebullition41,42, the sediment CH4 formation model (Eq. 2, Fig. 3) represents a first step towards constraining reservoir CH4 emission from sediment characteristics. Further, our study indicates that the high CH4 emission from eutrophic reservoirs4 may stem primarily from CH4 formation in young sediment layers (e.g. FUN, <6 years old; Table 2). Hence, reservoir management strategies that decrease nutrient inputs to limit autochthonous primary production could, within less than a decade, lead to a reduction in reservoir CH4 emission to the atmosphere. Lastly, the current hydropower boom in tropical areas43 carries a risk of creating new reservoirs that are large CH4 emitters: the high potential for soil erosion in the tropics44,45 implies that new tropical reservoirs may have high sedimentation rates and thus a high supply of young and reactive OM that stimulates CH4 formation in the reservoir sediment (Fig. 2). This effect is aggravated if nutrient management strategies are lacking and excess nutrient loads stimulate productivity, since the transformation of autochthonous OC to CH4 in reservoir sediment results in an anthropogenic enhancement of atmospheric radiative forcing46.

The data are publicly available in the DiVA repository (http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-387547).

Pachauri, R. K. et al. Climate change: synthesis report. Contribution of Working Groups I, II and III to the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC, 2014).
Barros, N. et al. Carbon emission from hydroelectric reservoirs linked to reservoir age and latitude. Nature Geoscience 4, 593–596, https://doi.org/10.1038/ngeo1211 (2011).
Li, S. & Zhang, Q. Carbon emission from global hydroelectric reservoirs revisited. Environmental Science and Pollution Research 21, 13636–13641 (2014).
Deemer, B. R. et al. Greenhouse gas emissions from reservoir water surfaces: a new global synthesis. BioScience 66, 949–964 (2016).
Fearnside, P. M. Brazil's Samuel Dam: lessons for hydroelectric development policy and the environment in Amazonia. Environmental Management 35, 1–19, https://doi.org/10.1007/s00267-004-0100-3 (2005).
Fearnside, P. M. & Pueyo, S. Greenhouse-gas emissions from tropical dams. Nature Climate Change 2, 382–384 (2012).
Mendonça, R. et al.
Organic carbon burial in global lakes and reservoirs. Nature Communications 8, 1694 (2017).
Duc, N. T., Crill, P. & Bastviken, D. Implications of temperature and sediment characteristics on methane formation and oxidation in lake sediments. Biogeochemistry 100, 185–196 (2010).
Fuchs, A., Lyautey, E., Montuelle, B. & Casper, P. Effects of increasing temperatures on methane concentrations and methanogenesis during experimental incubation of sediments from oligotrophic and mesotrophic lakes. Journal of Geophysical Research: Biogeosciences 121, 1394–1406 (2016).
Beaulieu, J. J., DelSontro, T. & Downing, J. A. Eutrophication will increase methane emissions from lakes and impoundments during the 21st century. Nature Communications 10, 1375 (2019).
Sepulveda-Jauregui, A. et al. Eutrophication exacerbates the impact of climate warming on lake methane emission. Science of the Total Environment 636, 411–419 (2018).
Wilkinson, J., Maeck, A., Alshboul, Z. & Lorke, A. Continuous seasonal river ebullition measurements linked to sediment methane formation. Environmental Science & Technology 49, 13121–13129 (2015).
Yvon-Durocher, G. et al. Methane fluxes show consistent temperature dependence across microbial to ecosystem scales. Nature 507, 488–491, https://doi.org/10.1038/nature13164 (2014).
West, W. E., Coloso, J. J. & Jones, S. E. Effects of algal and terrestrial carbon on methane production rates and methanogen community structure in a temperate lake sediment. Freshwater Biology 57, 949–955 (2012).
Sobek, S., DelSontro, T., Wongfun, N. & Wehrli, B. Extreme organic carbon burial fuels intense methane bubbling in a temperate reservoir. Geophysical Research Letters 39, L01401, https://doi.org/10.1029/2011gl050144 (2012).
Grasset, C. et al. Large but variable methane production in anoxic freshwater sediment upon addition of allochthonous and autochthonous organic matter. Limnology and Oceanography (2018).
Webster, J. & Benfield, E. Vascular plant breakdown in freshwater ecosystems. Annual Review of Ecology and Systematics 17, 567–594 (1986).
Gebert, J., Kothe, H. & Grongroft, A. Prognosis of methane formation by river sediments. Journal of Soils and Sediments 6, 75–83 (2006).
Paranaiba, J. R. et al. Spatially resolved measurements of CO2 and CH4 concentration and gas exchange velocity highly influence carbon emission estimates of reservoirs. Environmental Science & Technology (2017).
Pacheco, F. et al. The effects of river inflow and retention time on the spatial heterogeneity of chlorophyll and water–air CO2 fluxes in a tropical hydropower reservoir. Biogeosciences 12, 147–162 (2015).
Mendonça, R. et al. Organic carbon burial efficiency in a subtropical hydroelectric reservoir (2016).
Guerin, F., Abril, G., de Junet, A. & Bonnet, M. P. Anaerobic decomposition of tropical soils and plant material: implication for the CO2 and CH4 budget of the Petit Saut Reservoir. Applied Geochemistry 23, 2272–2283 (2008).
Hansson, G. & Molin, N. End product inhibition in methane fermentations: effects of carbon dioxide and methane on methanogenic bacteria utilizing acetate. European Journal of Applied Microbiology and Biotechnology 13, 236–241 (1981).
Großkopf, T. & Soyer, O. S. Microbial diversity arising from thermodynamic constraints. The ISME Journal 10, 2725 (2016).
Williams, R. T. & Crawford, R. L. Methane production in Minnesota peatlands. Appl. Environ. Microbiol. 47, 1266–1271 (1984).
Wallmann, K.
et al. Kinetics of organic matter degradation, microbial methane generation, and gas hydrate formation in anoxic marine sediments. Geochimica et Cosmochimica Acta 70, 3905–3927 (2006).
Wiesenburg, D. A. & Guinasso, N. L. Jr. Equilibrium solubilities of methane, carbon monoxide, and hydrogen in water and sea water. Journal of Chemical and Engineering Data 24, 356–360 (1979).
Weiss, R. F. Carbon dioxide in water and seawater: the solubility of a non-ideal gas. Marine Chemistry 2, 203–215 (1974).
Sivan, O. et al. Geochemical evidence for iron-mediated anaerobic oxidation of methane. Limnology and Oceanography 56, 1536–1544, https://doi.org/10.4319/lo.2011.56.4.1536 (2011).
Bar-Or, I. et al. Iron-coupled anaerobic oxidation of methane performed by a mixed bacterial-archaeal community based on poorly reactive minerals. Environmental Science & Technology 51, 12293–12301 (2017).
Gälman, V., Rydberg, J., de-Luna, S. S., Bindler, R. & Renberg, I. Carbon and nitrogen loss rates during aging of lake sediment: changes over 27 years studied in varved lake sediment. Limnology and Oceanography 53, 1076–1082 (2008).
Pinheiro, J., Bates, D., DebRoy, S., Sarkar, D. & R Core Team. nlme: Linear and Nonlinear Mixed Effects Models. R package version 3.1-131, https://CRAN.R-project.org/package=nlme (2017).
Westrich, J. T. & Berner, R. A. The role of sedimentary organic matter in bacterial sulfate reduction: the G model tested. Limnology and Oceanography 29, 236–249 (1984).
R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, ISBN 3-900051-07-0, https://www.R-project.org (2017).
Finney, D. J. On the distribution of a variate whose logarithm is normally distributed. Supplement to the Journal of the Royal Statistical Society 7, 155–161 (1941).
Helsel, D. R. & Hirsch, R. M. Statistical Methods in Water Resources (US Geological Survey, 2002).
Meyers, P. A. & Ishiwatari, R. Lacustrine organic geochemistry - an overview of indicators of organic-matter sources and diagenesis in lake sediments. Organic Geochemistry 20, 867–900, https://doi.org/10.1016/0146-6380(93)90100-P (1993).
Meyers, P. A. Applications of organic geochemistry to paleolimnological reconstructions: a summary of examples from the Laurentian Great Lakes. Organic Geochemistry 34, 261–289 (2003).
Middelburg, J. J., Vlug, T. & Vandernat, F. Organic-matter mineralization in marine systems. Global and Planetary Change 8, 47–58, https://doi.org/10.1016/0921-8181(93)90062-S (1993).
Yvon-Durocher, G. et al. Methane fluxes show consistent temperature dependence across microbial to ecosystem scales. Nature 507, 488 (2014).
DelSontro, T. et al. Spatial heterogeneity of methane ebullition in a large tropical reservoir. Environmental Science & Technology 45, 9866–9873, https://doi.org/10.1021/es2005545 (2011).
Maeck, A. et al. Sediment trapping by dams creates methane emission hot spots. Environmental Science & Technology 47, 8130–8137, https://doi.org/10.1021/es4003907 (2013).
Zarfl, C., Lumsdon, A. E., Berlekamp, J., Tydecks, L. & Tockner, K. A global boom in hydropower dam construction. Aquatic Sciences 77, 161–170 (2015).
Yang, D., Kanae, S., Oki, T., Koike, T. & Musiake, K. Global potential soil erosion with reference to land use and climate changes. Hydrological Processes 17, 2913–2928 (2003).
Panagos, P. et al. Global rainfall erosivity assessment based on high-temporal resolution rainfall records. Scientific Reports 7, 4175 (2017).
Prairie, Y. T. et al. Greenhouse gas emissions from freshwater reservoirs: what does the atmosphere see? Ecosystems, 1–14 (2017).
We thank Emma Åkerman Fulford for help during the experiment. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007–2013)/ERC Grant Agreement 336642. Open access funding provided by Uppsala University.

Limnology, Department of Ecology and Genetics, Uppsala University, Uppsala, Sweden: Anastasija Isidorova, Charlotte Grasset, Raquel Mendonça & Sebastian Sobek
Laboratory of Aquatic Ecology, Department of Biology, Federal University of Juiz de Fora, Juiz de Fora, Brazil: Raquel Mendonça

A.I., R.M. and S.S. developed the experimental design. A.I. and C.G. performed the sampling and data analyses. A.I. performed the experimental work and wrote the main manuscript text. All authors interpreted the data, discussed the results and reviewed the manuscript. Correspondence to Anastasija Isidorova.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Isidorova, A., Grasset, C., Mendonça, R. et al. Methane formation in tropical reservoirs predicted from sediment age and nitrogen. Sci Rep 9, 11017 (2019). https://doi.org/10.1038/s41598-019-47346-7
Assessment of predictive performance in incomplete data by combining internal validation and multiple imputation

Simone Wahl1,2,3, Anne-Laure Boulesteix4, Astrid Zierer2, Barbara Thorand2,3 & Mark A. van de Wiel5,6

An Erratum to this article was published on 05 December 2016

Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately; MI-Val, MI on the full data set followed by internal validation; and MI(-y)-Val, MI on the full data set omitting the outcome followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a strategy for confidence interval construction to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for optimism. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias with increasing true effect size, number of covariates and decreasing sample size. In Val-MI, the accuracy of the estimate is more strongly improved by increasing the number of bootstrap draws than by increasing the number of imputations. With a simple integrated approach, valid confidence intervals for performance estimates can be obtained. When prognostic models are developed on incomplete data, Val-MI represents a valid strategy to obtain estimates of predictive performance measures.

The aim of a prognostic study is to develop a classification model from an available data set and to estimate the performance it would have in future independent data, i.e., its predictive performance. This cannot be achieved by fitting the model on the whole data set and evaluating performance in the same data set, since a model generally performs better for the data used to fit the model than for new data ("overfitting"), and performance would thus be overestimated. This can be observed already in low-dimensional situations and is especially pronounced in relatively small data sets [1, 2]. Instead, the available data have to be split in order to allow performance assessment in a part of the data that has not been involved in model fitting [3, 4]. For efficient sample usage, this is often achieved by internal validation strategies such as bootstrapping (BS), subsampling (SS) or cross-validation (CV).
The task of assessing predictive performance is made even more complicated when the data set is incomplete. Missing values occur frequently in epidemiological and clinical studies, for reasons such as incomplete questionnaire response, lack of biological samples, or resource-based selection of samples for expensive laboratory measurements. The majority of statistical methods, including logistic regression models, assume a complete data matrix, so that some action is required prior to or during data analysis to allow usage of incomplete data. Since ad hoc strategies such as complete-case analysis and single imputation often provide inefficient or invalid results, and model-based strategies often require sophisticated problem-specific implementation, multiple imputation (MI) is becoming increasingly popular among researchers of different fields [5, 6]. It is a flexible strategy that typically assumes missing at random (MAR) missingness, that is, missingness depending on observed but not unobserved data, which is often, at least approximately, given in practice [5]. MI involves three steps [7]: (i) missing values are imputed multiple (M) times, i.e., missing values are replaced by plausible values, for instance derived as predicted values from a sequence of regression models including other variables; (ii) statistical analysis is performed on each of the resulting completed data sets; and (iii) the M obtained parameter estimates and their variances are pooled, taking into account the uncertainty about the imputed values [8].
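As a hedged illustration of these three steps using the mice package in R (the data set and analysis model are placeholders, not those of this study; pooling follows Rubin's rules as implemented in mice):

```r
library(mice)

# (i) impute the incomplete data M = 5 times (placeholder data set)
imp <- mice(incomplete_data, m = 5, method = "pmm", seed = 1)

# (ii) fit the analysis model on each of the M completed data sets
fits <- with(imp, glm(y ~ x1 + x2, family = binomial))

# (iii) pool estimates and variances across imputations (Rubin's rules)
summary(pool(fits))
```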
When the estimate of interest is a measure of predictive performance of a classification model, or a measure of incremental predictive performance of an extended model as compared to a baseline model, the application of MI is not straightforward. Specifically, it is unclear how internal validation and MI should be combined in order to obtain unbiased estimates of predictive performance. Previous strategies combining internal validation with MI mostly focused on application, without the aim to compare the chosen strategy against others or to assess its validity [9–11]. Musoro et al. [12] studied the combination of BS and MI in the situation of a nearly continuous outcome using LASSO regression, essentially reporting that the strategy of conducting MI first followed by BS on the imputed data yielded overoptimistic mean squared errors, whereas conducting BS first on the incomplete data followed by MI yielded slightly pessimistic results in the studied settings. Wood et al. [13] presented a number of strategies for performance assessment in multiply imputed data, leaving, however, the necessity of validating the model in independent data to future studies. Hornung et al. [14] examined the consequence of conducting a single imputation on the whole data set, as compared to the training data set only, on the cross-validated performance of classification methods, observing a negligible influence. Their investigation was restricted to one type of imputation that did not include the outcome in the imputation process. In this paper, we present results of a comprehensive simulation study and of a real data-based simulation study comparing various strategies of combining internal validation with MI, with and without including the outcome in the imputation models.

Our study extends upon previous work with regard to several aspects: (1) we consider different internal validation strategies and different ways to correct for optimism; (2) we study measures of discrimination, calibration and overall performance as well as incremental performance of an extended model; and (3) we closely examine the sensitivity of the results towards characteristics of the data set, including sample size, number of covariates, true effect size, and degree and mechanism of missingness. Furthermore, we (4) elaborate on the number of imputations and resamples to be used and (5) provide an approach for the construction of confidence intervals for predictive performance estimates. Finally, we (6) translate our results into recommendations for practice, considering the applicability of the proposed methods for epidemiologists with limited analytical and computational resources.

Two simulation studies were conducted: in the first, incomplete data were generated de novo with different (known) effect sizes, facilitating the comparison of predictive performance estimates of different combined validation/imputation strategies against the respective true performance measure. The second simulation study was based on the complete observations of a real incomplete data set, in which we introduced missing values in a pattern mirroring that of the whole incomplete data set, aiming to compare strategies in a realistic data situation.

Simulation study 1: de novo simulation

Data were generated according to a variety of settings, covering a large spectrum of practically occurring data characteristics (Table 1). For each setting, 250 data sets were randomly generated. Two situations were investigated. In situation 1, only one set of covariates was considered (the number of which is denoted as p), with the aim being the estimation of predictive performance of a model comprising this set of covariates. In situation 2, two sets of covariates were considered (with p0 the number of baseline covariates and p1 the number of additional covariates), in order to study the estimation of added predictive performance of the model comprising both sets of covariates as compared to a model containing only the p0 baseline covariates.

Table 1 Simulation settings

For each simulated data set, a binary outcome vector y = (y1, y2, …, yn) was created with the pre-specified case probability frac. A covariate matrix X = (x1, x2, …, xn) was simulated by drawing n times from a p- or (p0+p1)-dimensional (in situations 1 and 2, respectively) multivariate normal distribution with mean vector 0 and variance-covariance matrix Σ with variances equal to 1 and covariances specified by the correlation among variables (ρ in situation 1; ρ0 and ρ1 for the baseline and additional covariates, respectively, in situation 2) as provided in Table 1. Then, effect sizes were introduced in such a way that each set of covariates achieved an (added) performance approximately in the magnitude of a pre-specified area under the receiver-operating characteristic (ROC) curve (AUC) value. As a reference, we used the theoretical relationship [15]:

$$ \text{AUC} = \Phi\left(\frac{1}{2} \sqrt{\boldsymbol{\Delta\mu}^{T} \boldsymbol{\Sigma}^{-1} \boldsymbol{\Delta\mu}} \right), $$

where Δμ denotes the vector of mean differences in covariate values to be introduced between both outcome classes, i.e., Δμ = E(xi | yi = 1) − E(xi | yi = 0), and Φ the standard normal cumulative distribution function.
We used a simplified scenario with a unique effect size chosen for all covariates within each set, i.e., Δμ = (Δμ, Δμ, …, Δμ) in situation 1, and Δμ = (Δμ0, Δμ0, …, Δμ0, …, Δμ1, Δμ1, …, Δμ1) in situation 2, and found Δμ by solving Eq. (1) numerically using the R function uniroot. Then, we added Δμ/2 to the cases' covariate values, and subtracted Δμ/2 from the controls' covariate values, in order to achieve an average difference of Δμ in covariate values between cases and controls. Using this procedure, we implicitly model the outcome yi as follows: P(yi = 1 | xi) = logistic(γ · xi), where xi denotes the vector of covariate values for observation i, i = 1, …, n, γ = Σ−1 · Δμ a p-dimensional (situation 1) or (p0+p1)-dimensional (situation 2) vector of coefficients, and \(\text{logistic}(x) = \frac{e^{x}}{1+e^{x}}\) the logistic function.
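The following sketch shows how the common mean difference Δμ could be obtained with uniroot for a given target AUC, unit variances and pairwise correlation ρ, as described above (the function name is a placeholder):

```r
# Solve Eq. (1) for a common mean difference dmu so that the theoretical
# AUC equals a target value; Sigma has unit variances and correlation rho
find_dmu <- function(target_auc, p, rho) {
  Sigma <- matrix(rho, p, p)
  diag(Sigma) <- 1
  f <- function(dmu) {
    delta <- rep(dmu, p)
    pnorm(0.5 * sqrt(drop(t(delta) %*% solve(Sigma) %*% delta))) - target_auc
  }
  uniroot(f, interval = c(0, 10))$root
}

dmu <- find_dmu(target_auc = 0.75, p = 10, rho = 0.3)
# then add dmu/2 to the cases' and subtract dmu/2 from the controls' covariates
```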
Imposing missingness

Different degrees of missingness (see Table 1) were introduced separately to the sets of covariates (one set in situation 1, with the proportion of missing values denoted as miss; two sets in situation 2, with the proportions of missing values in the baseline and additional covariates denoted as miss0 and miss1, respectively; to improve readability, we use the parameter notations of situation 1 below) according to three different mechanisms frequently occurring in practice: missing completely at random (MCAR), where missingness occurs independently of any observed or missing values; missing at random (MAR), where missingness of variables depends on observed values including outcome values but not on the unknown values of the missing data; and blockwise missing at random (MARblock), where blocks of variables share their missingness pattern. We did not consider missingness in the outcome. MCAR missingness was created by randomly introducing the pre-specified proportion miss of missing values into the covariates. To achieve MAR missingness, we used an approach similar to that applied by Marshall et al. [16]. Let Xij denote the jth covariate for observation i, with i = 1, …, n and j = 1, …, p, and Mij the indicator for its missingness. Then, the probability of missingness for each covariate value was modeled as a function of the value of one other covariate, of the missingness of another covariate, and of the outcome value:

$$P\left(X_{ij}~\text{missing}\right) = P\left(M_{ij} = 1\right) = \text{logistic}\left(\beta_{0j} + \beta_{1j} \cdot M_{i,j-1} + \beta_{2j} \cdot X_{ik_{j}} + 2 \cdot y_{i}\right)$$

where \(X_{ik_{j}}\) denotes the observation of a randomly chosen other covariate and yi the binary outcome value. Without loss of generality, missingness of the previous (j−1th) covariate was used, for the technical reason that its missingness status is already available. β1j was defined as

$$\beta_{1j} = \begin{cases} 0, & \text{if}~j=1\\ 1~\text{or}~-1~\text{with probability}~0.5~\text{each}, & \text{if}~j>1 \end{cases}$$

and β2j as

$$\beta_{2j} = \begin{cases} 0, & \text{if}~j=1\\ 2, & \text{if}~j>1 \end{cases}$$

The intercepts β0j were estimated by numerically solving the equation

$$\frac{1}{n}\sum_{i=1}^{n} P\left(M_{ij} = 1\right) = \text{miss}$$

for each j. To achieve the proportion of missing values miss exactly, values were set to missing by drawing n × miss times from a multinomial distribution with probability vector \((P(M_{ij} = 1))_{i=1,\ldots,n}\). Finally, we created a missingness structure similar to that observed in our application data, that is, a block structure of missingness (MARblock). In practice, such a structure can occur when groups of laboratory parameters are measured for certain groups of subjects defined by other variables (see below). Approximate blockwise missingness was simulated with the missingness of the variables assigned to each block depending on covariates outside the block. Variables were randomly assigned to three blocks, and the probability of missingness was modified as follows for the covariates j in each block b, b = 1, …, 3:

$$P\left(M_{ij} = 1\right) = \text{logistic}\left(\beta_{0j} + 10 \cdot X_{ik_{b}} + 2 \cdot y_{i}\right)$$

where for each covariate j within block b the same covariate \(X_{k_{b}}\) was chosen among covariates outside the block, leading to similarly high/low missingness probabilities for all covariates in the respective block. The exact proportion of missing values miss was again achieved by drawing from a multinomial distribution, as described for MAR above. Example R code for simulation study 1 is available in Additional file 2.

Simulation study 2: real data-based simulation

Data were obtained from the population-based research platform MONICA (MONItoring of trends and determinants in CArdiovascular disease)/KORA (Cooperative health research in the Region of Augsburg), surveys S1 (1984/85), S2 (1989/90) and S3 (1994/95), comprising individuals of German nationality aged 25 to 74 years. The study design and data collection have been described in detail elsewhere [17]. Written informed consent was obtained from all participants, and the studies were approved by the local ethics committee. In a random subcohort comprising 2225 participants aged 35 to 74 years, blood concentrations of 15 inflammatory markers were measured [18–20] as part of a case-cohort study assessing potential risk factors for cardiovascular diseases and type 2 diabetes. In the present analysis, all-cause mortality was used as the outcome. To achieve a largely healthy population at baseline, subjects with a history of stroke, myocardial infarction, cancer or diabetes at baseline were excluded. Among the remaining 2012 subjects, 294 (14.6 %) died during the 15-year follow-up period. Average survival time among the deceased participants was 9.0 years (range 0.2 to 15.0 years), and three participants were censored at 2.7, 6.9 and 7.9 years. See Additional file 1: Table S1 for a description of baseline phenotypes including the inflammatory markers. Whereas all other variables were almost completely observed (less than 0.4 % missing entries for each variable), missingness among the 15 inflammation-related markers was 7.2 % on average (range 0.2–26.4 %, see Additional file 1: Table S1); 37.2 % of observations had missing entries in inflammation-related markers, with missingness per observation ranging from 0 to 93.3 %. The missingness pattern showed a block structure (Fig. 1), owing to the fact that measurement of inflammatory markers was conducted in different laboratory runs, for which samples were selected based on sample availability at the time of measurement. Five blocks of covariates could be roughly distinguished: block 1, comprising CRP, without missing values; block 2, comprising ICAM, E-Selectin, IL-6, MCP-1, IL-18, IP-10 and IL-8; block 3, comprising RANTES and MIF; block 4, comprising leptin, MPO, TGF-β1 and Adiponectin; and block 5, comprising 25(OH)D.
Similarly, observations could be assigned to five patterns of missingness: pattern 1, comprising observations with missing entries only for block 2, 3, 4 and 5 variables; pattern 2, only for block 4 and 5 variables; pattern 3, only for block 4 variables; pattern 4, only for block 3 and 5 variables; and pattern 5, only for the block 5 variable 25(OH)D.

Fig. 1 Missingness pattern among inflammation-related markers in the application data set. Plot of missingness indicators (black = entry observed; red = entry missing) for the 2012 observations against the 15 inflammation-related markers, both sorted by missingness.

To use the MONICA/KORA subcohort as the basis for the real data-based simulation study, we first investigated determinants of missingness in inflammation-related markers in the full subcohort, followed by imposing missingness on the data set consisting of the complete observations only (n = 1258) in a way that yielded a missingness pattern closely resembling the block structure and the relations in the original data set. In detail, we used the five patterns of missingness described above as a basis and, for each pattern, identified other variables in the data set correlated (Kendall's τ) with the respective pattern indicator (1 for observations that are part of the respective pattern; 0 else). Consequently, we selected those variables showing an absolute correlation above 0.1: sex and survey 1 for pattern 1; survey 1 for pattern 2; sex, survey 1 and alcohol intake for pattern 3; and no covariates for patterns 4 and 5. In total, 250 simulations were conducted. In each simulation, a proportion of complete observations was assigned to each pattern identical to the proportion observed in the original data set. This was achieved by modeling the pattern indicators as a function of the respective correlated variable(s) in the full incomplete data set in a logistic regression model, and predicting the membership probability of the respective pattern for the observations in the complete-observation data set. To achieve the aspired proportion of observations newly assigned to each pattern exactly, we drew the required number of times from a multinomial distribution with the predicted probability vector. Finally, for observations assigned to pattern 1, all variables of blocks 2, 3, 4 and 5 were set to missing; for pattern 2, variables of blocks 4 and 5; and so on, according to the definitions above. The resulting data sets showed a missingness pattern closely resembling that of the original data set (shown for the first 12 simulation runs in Additional file 1: Figure S1).

We used the multiple imputation by chained equations (MICE) framework [7, 21]. It is based on the principle of a repeated chain of regression equations through the incomplete variables, where in each imputation model, the respective incomplete variable is modeled as a function of the remaining variables. Arbitrary regression models can be used. We applied predictive mean matching for all incomplete (continuous) variables. It is based on Bayesian linear regression, where after modeling, the posterior predictive distribution of the data is specified and used to draw predicted values [22]. Then, missing values are replaced by a random draw of observed values of that variable from other observations with the closest predicted values (default: the five closest values).
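A stripped-down sketch of the matching step of predictive mean matching (the Bayesian draw of regression coefficients that yields the predicted values is omitted here; the mice package implements the full procedure):

```r
# For one missing entry, draw an observed value from the donors with the
# closest predicted values (5 donors, matching the mice default)
pmm_draw <- function(pred_miss, pred_obs, x_obs, donors = 5) {
  idx <- order(abs(pred_obs - pred_miss))[seq_len(donors)]
  sample(x_obs[idx], 1)
}
```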
In each imputation model, all other variables (and, in the data-based simulation study, quadratic terms of continuous variables, themselves passively imputed) were included as covariates. Before imputation, to improve normality of the continuous incomplete variables, the distributions of raw, natural logarithm, cubic root and square root transformed variables were tested for normality using Shapiro-Wilk tests, and the transformation yielding the maximum test statistic was applied. Depending on the strategy used (see below), the outcome was included (strategy MI) or not included (strategy MI(-y)) in the imputation models. If MI was not combined with internal validation, a pooled performance estimate was obtained by averaging the performance estimates \(\hat\theta^{(m)}\), m = 1, …, M, from the M imputed data sets, according to Rubin [8]. Example R code for the conduction of MICE is available in Additional file 2.

Internal validation strategies

Three internal validation strategies were considered: bootstrapping (BS), subsampling (SS) and K-fold cross-validation (CV). The principles underlying the three strategies are visualized for complete data in Fig. 2a (BS) and in Additional file 1: Figure S2 (SS, CV).

Fig. 2 Combination of internal validation (Val), using the example of bootstrap (BS), and multiple imputation (MI). a Val: visualization of BS in complete data. \(\hat\theta^{Dat_1,Dat_2}\) denotes performance when the model was fitted on Dat1 and evaluated on Dat2, where Orig denotes the original data set, BS(b) the bth BS set, OOB(b) the bth out-of-bag (OOB) set, b = 1, …, B. Average performance values across the B sets are denoted by \(\hat\theta^{BS,BS}\), \(\hat\theta^{BS,OOB}\) and \(\hat\theta^{BS,Orig}\). \(\hat\theta^{noinfo}\) denotes the average performance in the absence of an effect (see text). Performance measures: \(\hat\theta^{opt.corr.}\), ordinary optimism-corrected BS estimate [3]; \(\hat\theta^{OOB}\), OOB performance estimate; \(\hat\theta^{0.632+}\), BS 0.632+ estimate [23]. In the specific case of w = 0.632, the BS 0.632 estimate (\(\hat\theta^{0.632}\)) is obtained. b Val-MI: combination of BS and MI by drawing BS samples followed by MI separately on the BS samples and on the OOB samples not contained in the respective BS draw. c MI-Val and MI(-y)-Val: combination of MI and BS by conducting MI followed by drawing BS samples from the imputed data sets. For b and c, performance measures are derived similarly as for complete data (a), this time averaging across the B·M sets, and deriving apparent performance \(\hat\theta^{Orig,Orig}\) as the average performance across the M imputed data sets.

Briefly, in BS, B bootstrap samples are drawn with replacement from the original sample, so that each BS sample will contain certain observations more than once, and others not at all. The average proportion of independent observations included in each BS sample is asymptotically 63.2 % [23]. The approximately 36.8 % remaining observations are frequently referred to as the out-of-bag (OOB) sample. To get an estimate of predictive performance from BS, several strategies have been proposed (Fig. 2).
First, the optimism of the apparent performance \(\hat\theta^{Orig,Orig}\) (i.e., the performance of the model in the original data after using the whole original data set for model fitting) can be estimated as the difference between the average apparent performance in the BS samples and the average performance of models fitted in each BS sample and evaluated in the original sample [3]: \(\widehat{\text{optimism}} = \hat\theta^{BS,BS} - \hat\theta^{BS,Orig}\). Accordingly, an "optimism-corrected" (opt.corr.) measure of predictive performance, sometimes referred to as the ordinary bootstrap estimate, can be obtained by subtracting the estimated optimism from the apparent performance in the original data: \(\hat\theta^{opt.corr.} = \hat\theta^{Orig,Orig} - \widehat{\text{optimism}}\). Second, the model can be fitted on the BS samples and evaluated on the OOB samples (\(\hat\theta^{OOB}\)). The resulting performance estimate tends to underestimate performance, since less information was used in the model fitting step than provided in the full data [24]. Thus, the BS 0.632+ estimate (\(\hat\theta^{0.632+}\)) has been proposed as a weighted average of apparent and OOB performance:

$$\hat\theta^{0.632+} = (1-w) \cdot \hat\theta^{Orig,Orig} + w \cdot \hat\theta^{OOB}$$

with weights \(w = \frac{0.632}{1-0.368 \cdot R}\) depending on the relative overfitting rate \(R = \frac{\hat\theta^{OOB}-\hat\theta^{Orig,Orig}}{\hat\theta^{noinfo}-\hat\theta^{Orig,Orig}}\) (Fig. 2, [23]). This requires that we know the performance of the model in the absence of an effect (\(\hat\theta^{noinfo}\)), which is either known (e.g., 0.5 in the case of the AUC, and 0 in the case of added predictive performance measures) or can be approximated as the average performance measure with randomly permuted outcome prediction. We used 1000 permutations to assess \(\hat\theta^{noinfo}\) for the Brier score. In addition, we considered the BS 0.632 estimate \(\hat\theta^{0.632} = 0.368 \cdot \hat\theta^{Orig,Orig} + 0.632 \cdot \hat\theta^{OOB}\) [25]. SS and CV involve drawing without replacement. For SS, we sampled a proportion of 63.2 % of samples for model fitting, leaving again 36.8 % for evaluation. The optimism correction methods described for BS can be directly translated to SS. For K-fold CV, the sample is split into K equally sized parts, and for each of the parts, the remaining K−1 parts are used for model fitting and the left-out part for evaluation of the model, followed by averaging the performance estimates obtained from the K runs. We used K=3 and K=10, with the former being comparable to BS in terms of the proportion of independent observations in the training sets, and the latter being a popular choice in the literature. Repeating K-fold CV B times and averaging the resulting performance estimates might improve the stability of performance evaluation [2]. Thus, both simple (CV3, CV10) and repeated (CV3rep, CV10rep) CV with K=3 and K=10, respectively, were included in the investigation.
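The quantities above translate directly into code; a minimal R sketch of the BS 0.632+ estimate, following the formulas above (inputs are assumed to be performance values already averaged over the B bootstrap draws; safeguards of the original proposal, such as truncating R to [0, 1], are omitted):

```r
# Bootstrap 0.632+ estimate from apparent, out-of-bag (OOB) and
# no-information performance
boot632plus <- function(theta_app, theta_oob, theta_noinfo) {
  R <- (theta_oob - theta_app) / (theta_noinfo - theta_app)  # relative overfitting rate
  w <- 0.632 / (1 - 0.368 * R)
  (1 - w) * theta_app + w * theta_oob
}

boot632plus(theta_app = 0.85, theta_oob = 0.75, theta_noinfo = 0.5)  # AUC example
```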
Here, Val represents the different validation strategies used, i.e., BS, SS, CVK and CVKrep. A visualization is provided for BS in Fig. 2. When performing MI, it is generally recommended to use the outcome data y in the imputation models for missing covariates (i.e., method MI-Val) [26]. However, in the present context, where we split the imputed data into a training and an evaluation set (Val), we may want to consider removing y from the imputation models (i.e., method MI(-y)-Val), because these models are fitted to the whole data set, including the data that will become part of the evaluation set (i.e., the OOB or testing set). Dropping y from the imputation models keeps the evaluation set blind to the outcome-covariate relationship in the training set. This is by default the case for Val-MI, where the training and testing parts of the data set are imputed separately, so we did not consider Val-MI(-y). For comparison, we also analyzed data using simple MI and MI(-y) without internal validation. In addition, strategies were compared to internal validation (Val) in complete data, where possible. Since we did not observe changes in variability across the simulations when values were increased beyond B=10 and M=5, B=10 validation samples and M=5 imputations were used for BS and SS for incomplete data, and B=50 for complete data in the simulation studies. For CV, no repetition (B=1) or B=5 repetitions and M=5 imputations were used for incomplete data, and B=1 or B=25 repetitions for complete data. Note that these do not represent choices for B and M in practice; rather, lower numbers can be used for simulation, where the variability across the 250 simulated data sets exceeds the resampling and imputation variability within each data set.

Modeling and performance measures

There is no unique definition of the performance of a prediction model. Three types of performance measures can be distinguished: measures of model discrimination, i.e., the ability of a model to separate outcome classes and thus to assign cases a higher risk than controls; measures of calibration, i.e., the unbiasedness of outcome predictions, such that among observations with a predicted outcome probability of pr, approximately a fraction pr are cases; and measures of overall performance, i.e., the distance between observed and predicted outcomes [3, 4]. We considered selected measures of each type for the binary (logistic) prediction model in the de novo simulation study. Of note, the focus was not on assessing the appropriateness of the different performance criteria in general, but rather on evaluating their estimation in the presence of missing values as compared to complete data. As a discrimination measure, we considered the area under the ROC curve (AUC), which quantifies the probability that the model assigns a randomly chosen case (or, in more general terms, an observation with outcome y=1) a higher predicted outcome probability than a randomly chosen control (an observation with outcome y=0), and which is equal to the concordance (c) statistic in the case of a binary outcome [4, 27]. As calibration measures, we used the intercept and slope of a logistic regression model of observed against predicted outcomes, with deviations from 0 and 1, respectively, indicating suboptimal calibration [11, 28]. Finally, as an overall performance measure we considered the Brier score, i.e., the average squared difference between observed and predicted outcomes, \(\text {Brier} = \frac {1}{n}\sum _{i=1}^{n}{\left (y_{i} - \hat y_{i}\right)^{2}}\) [4, 29].
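As a minimal sketch of how these three measures can be computed, assuming vectors y of observed 0/1 outcomes and p of predicted probabilities (both simulated placeholders here), one may use base R together with the pROC package employed in this study:

```r
library(pROC)

set.seed(1)
p <- runif(200)                               # placeholder predicted probabilities
y <- rbinom(200, 1, p)                        # placeholder observed 0/1 outcomes

auc_hat <- as.numeric(auc(roc(y, p)))         # discrimination: area under the ROC curve
brier   <- mean((y - p)^2)                    # overall performance: Brier score

# calibration: logistic regression of observed outcomes on the logit of the predictions
cal <- glm(y ~ qlogis(p), family = binomial)
cal_intercept <- coef(cal)[1]                 # deviation from 0 indicates miscalibration
cal_slope     <- coef(cal)[2]                 # deviation from 1 indicates miscalibration
```

This is a hedged sketch, not the authors' code; the actual implementation is in Additional file 2.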
To assess the added predictive performance of an extended as compared to a baseline model, we considered the change in discrimination (ΔAUC) and three measures based on risk categories. These included, first, the net reclassification improvement (NRI), i.e., the difference between the proportion of observations moving into a 'more correct' risk category (i.e., cases moving up, controls moving down) and the proportion of observations moving into a 'less correct' risk category with the extended as compared to the baseline model [30]. This requires the definition of risk categories, where a single cutoff below the disease risk in the study population renders the NRI predominantly a measure of improvement in the classification of controls, and a single cutoff above the disease risk makes it a measure of improvement in the classification of cases [31]. In order to capture both, we chose three categories, \([0, \frac {1}{2}\,\text {frac}]\), \([\frac {1}{2}\,\text {frac}, \frac {3}{2}\,\text {frac}]\), \([\frac {3}{2}\,\text {frac}, 1]\), where frac (≤0.5, without loss of generality, since the NRI is not sensitive to class label assignment) denotes the outcome case frequency in the data set (see Table 1 for the simulation study). Second, we used the continuous NRI, a category-free version of the NRI [32], and lastly, the integrated discrimination improvement (IDI), which equals the NRI integrated over all possible risk cutoffs [30]. In the data-driven simulation study, the ability of inflammation-related markers to predict all-cause mortality was assessed using a Cox proportional hazards model, with and without additional inclusion of covariates known to be relevant for mortality prediction (age, sex, survey, BMI, systolic blood pressure, total to high density lipoprotein (HDL) cholesterol ratio, smoking status, alcohol intake and physical activity). To account for potential non-linear effects, quadratic terms were additionally included for all continuous variables. We focused on one measure of discriminative model performance, namely the time-dependent AUC at 10 years of follow-up according to the Kaplan-Meier method by Heagerty et al. [33]. Accordingly, ΔAUC(10 years) was used as a measure of the added predictive performance of the inflammation-related markers beyond the known predictors.

Evaluation of competing strategies

In the de novo simulation study, the performance of the competing strategies of combining internal validation with imputation was assessed in terms of absolute bias, variance and mean squared error (MSE) of the estimated performance criteria as compared to the 'true' performance, defined as the average performance obtained when the model was fitted on the full (complete) data sets and evaluated on large (n=10,000) independent data sets with the same underlying simulated effect sizes. Note that we did not compare (Δ)AUC estimates against the theoretical (Δ)AUC from which effect sizes were derived for simulation (see above), since these are often not achieved with small samples. In the data-driven simulation study, true effects were unknown. There, the results of the competing strategies were compared against those from complete data.

Construction of confidence intervals for performance estimates

Jiang et al. [34] proposed a simple concept to estimate confidence intervals for prediction errors in complete data. It is based on the numerical finding that the cross-validated prediction error asymptotically has the same variability as the apparent error.
Thus, they suggest constructing confidence intervals for the prediction error by generating a percentile interval for the apparent error based on resampling and centering this interval at the prediction error. The underlying theory extends to other performance/precision measures [35]. Using the notation of the present manuscript, their proposed procedure follows these steps: (1) Estimate the prediction error (point estimate) based on cross-validation (i.e., \(\hat \theta ^{{Train,Test}}\)). (2) Conduct resampling (they suggest perturbation resampling, where random weights are assigned to the observations in each resampling step; for details we refer to their manuscript): for b=1,…,B, determine the resampling apparent error resulting from the resampled data (i.e., \(\hat \theta ^{{BS(b),BS(b)}}\)) and subtract the original apparent error from the resampled one: \(w_{b} = \hat \theta ^{{BS(b),BS(b)}} - \hat \theta ^{{Orig,Orig}}\). (3) Obtain the α/2 and 1−α/2 percentiles \(\hat \xi _{\alpha /2}\) and \(\hat \xi _{1-\alpha /2}\) from the resampling distribution of the \(w_{b}\), b=1,…,B. (4) Define the confidence interval for the prediction error as \(\left [\hat \theta ^{{Train,Test}} - \hat \xi _{1-\alpha /2}\text {, } \hat \theta ^{{Train,Test}} + \hat \xi _{\alpha /2}\right ]\). We modified the methodology with regard to several aspects. In step (2), we first used standard non-parametric bootstrapping as described above, and second, allowed for incomplete data by means of one of the combination strategies described above and in Fig. 2. That is, we obtained estimates \(\hat \theta ^{{BS(b,m),BS(b,m)}}, b=1,\ldots,B, m=1,\ldots,M\), by fitting and evaluating the model in each (imputed) BS sample (i.e., in each BS sample that was imputed when strategy Val-MI was applied, or in each BS sample drawn from imputed data when strategy MI-Val was applied). For each b and m, we defined \(w_{b,m} = \hat \theta ^{{BS(b,m),BS(b,m)}} - \hat \theta ^{{Orig,Orig}}\). In step (3), we obtained the α/2 and 1−α/2 percentiles from the empirical distribution of the \(w_{b,m}\), i.e., across all B×M estimates obtained. In step (4), we centered this interval at the BS 0.632+ estimate (\(\hat \theta ^{0.632+}\)) rather than the CV estimate \(\hat \theta ^{{Train,Test}}\): \(\left [\hat \theta ^{0.632+} - \xi _{1-\alpha /2}\text {,}\, \hat \theta ^{0.632+} - \xi _{\alpha /2} \right ]\), with α=0.05. The modified methodology can be integrated with performance estimation using the strategies described above within the same resampling (BS) scheme. For Val-MI, we performed B=100 bootstrap draws followed by M=1 imputation; for MI(-y)-Val, M=100 imputations were conducted followed by B=1 bootstrap draw. For complete data, B=100 was chosen. For comparison, we also constructed confidence intervals for apparent performance based on analytical test concepts, i.e., using DeLong's test for the AUC and ΔAUC. In the presence of missing values (strategies MI and MI(-y)), Rubin's rules were applied to the AUC estimates and variances obtained from DeLong's test [8]. All calculations were performed using R, version 3.0.1 [36]. Data generation involved use of the R package mvtnorm, version 0.9-9995 [37]. MICE was performed using the package mice, version 2.17 [6]. Internal validation was performed using custom code. For predictive performance measures, the R packages pROC, version 1.7.3 [38], PredictABEL, version 1.2-2 [39], and survivalROC, version 1.0.3 [40], were used. Example R code is available in Additional file 2.
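Pulling together the estimators and the modified interval construction described above, the computations reduce to simple arithmetic once the averaged performance values are available. The following is a hedged sketch with illustrative placeholder values (the actual code is in Additional file 2):

```r
# Averaged performance values from the B*M (imputed) BS samples (illustrative placeholders):
theta_orig    <- 0.80   # apparent performance on the original data
theta_bs_bs   <- 0.85   # average apparent performance in the BS samples
theta_bs_orig <- 0.78   # average performance of BS-fitted models evaluated on the original data
theta_oob     <- 0.74   # average OOB performance
theta_noinfo  <- 0.50   # no-information performance (0.5 for the AUC)

theta_optcorr <- theta_orig - (theta_bs_bs - theta_bs_orig)   # ordinary optimism correction
R <- (theta_oob - theta_orig) / (theta_noinfo - theta_orig)   # relative overfitting rate
w <- 0.632 / (1 - 0.368 * R)
theta_632plus <- (1 - w) * theta_orig + w * theta_oob         # BS 0.632+ estimate
theta_632     <- 0.368 * theta_orig + 0.632 * theta_oob       # BS 0.632 estimate

# Modified percentile interval centered at the 0.632+ estimate:
w_bm  <- rnorm(500, mean = 0.05, sd = 0.03)   # placeholder for theta^{BS(b,m),BS(b,m)} - theta^{Orig,Orig}
alpha <- 0.05
xi <- quantile(w_bm, probs = c(alpha / 2, 1 - alpha / 2), names = FALSE)
ci <- c(theta_632plus - xi[2], theta_632plus - xi[1])         # [lower, upper]
```

In practice, w_bm would be computed from the B·M resampled apparent performances rather than simulated as above.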
Importance of validation and comparative performance of validation strategies

In the de novo simulation experiment, complete and incomplete data were generated with varying data set characteristics, followed by applying the competing combined validation/imputation strategies. For comparison, we also assessed apparent performance, i.e., the performance in the original data in the case of complete data, and the performance estimates pooled using Rubin's rules from MI in the case of incomplete data. Results are shown in Fig. 3 (for the AUC, at n=200, p=10 covariates) and in Additional file 1: Figures S3 to S8 (for other performance measures and choices of parameters).

Fig. 3 Simulation distribution of AUC estimates obtained by different strategies. Boxplots showing the distribution of AUC estimates across the 250 simulated data sets in a setting with moderate sample size (n=200), p=10 covariates, moderate missing at random (MAR) missingness (miss=25 % of values missing), balanced outcome class distribution (frac=0.5) and uncorrelated covariates (ρ=0), in the absence (theoretical auc=0.5; a) and presence (theoretical auc=0.66, see text; b) of a moderate true effect of the covariates on the outcome. The horizontal line denotes the 'true' AUC related to a complete data set of size 200 (which is not necessarily equal to the theoretical auc; see text). BS, bootstrap; CVK, K-fold CV; CVKrep, repeated K-fold CV; MI, multiple imputation; MI(-y), multiple imputation without including the outcome; No val., no validation (i.e., apparent performance); OOB, out-of-bag estimate; opt.corr., ordinary optimism-corrected estimate; SS, subsampling; Val, validation

The apparent performance estimates were generally optimistic – even in the case of large sample size and small number of variables (Additional file 1: Figure S3; n=2000, p=1). Optimism was particularly pronounced for imputed data when the outcome had been included in the imputation models (strategy MI). Among the investigated ways to correct for optimism, the ordinary optimism correction and the 0.632 estimate tended to achieve less effective optimism control than the BS/SS 0.632+ estimate, the BS/SS OOB estimate and the CV estimates. This was observed most strongly in the absence of a true effect and with an increasing number of covariates (Fig. 3 and Additional file 1: Figures S4 to S8).

Comparison of strategies of combining internal validation and multiple imputation

The MI-Val strategy, i.e., conducting MI followed by internal validation (i.e., BS, SS, CVK or CVKrep) on the imputed data sets, generally yielded optimistically biased performance estimates and large mean squared errors in almost all settings, and more severely so with an increasing number of variables, decreasing sample size, increasing degree of missingness, and decreasing true effect (shown for the AUC in Fig. 4 and in Additional file 1: Figure S9).

Fig. 4 Bias of AUC estimates obtained by different strategies based on bootstrapping. Bias is shown for one varying data set characteristic in each panel (a, b number of covariates p; c, d sample size n; e, f degree of missingness miss; g true effect auc), while keeping all remaining characteristics constant: sample size (n=200), p=10 covariates, 25 % missing values, missing at random (MAR), balanced outcome class distribution (frac=0.5), uncorrelated covariates (ρ=0).
Results are shown for the absence (theoretical auc=0.5; a, c, e, g) and presence (theoretical auc=0.66; b, d, f, g) of a moderate true effect of the covariates on the outcome

MI(-y)-Val was largely unbiased in the absence of a true effect, but gave pessimistic results when the covariates truly affected the outcome (Fig. 4), largely independently of the number of covariates and the sample size. A likely explanation is that omitting the outcome from the imputation disrupts the correlation structure among covariates and outcome, leading to an underestimation of effect sizes. The pessimistic bias became more pronounced with an increasing degree of missingness and increasing effect size. Val-MI produced mostly unbiased AUC estimates; however, in the presence of a large number of missing values, a pessimistic bias was observed in the presence of a true underlying effect (Fig. 4). This trend was mostly weaker than for the MI(-y)-Val strategy and depended also on sample size, number of covariates and true effect size. Varying other data set characteristics, such as the missingness mechanism, outcome class frequencies, correlation among the variables, number of baseline covariates and degree of missingness among baseline covariates, did not greatly influence the results (Additional file 1: Figures S10 and S15).

Trends observed for different model performance measures

Although we focused on the AUC as a discrimination measure, the trends described above were largely similar across the model performance measures investigated (Additional file 1: Figures S11 to S21). Of note, biases that were already present in complete data were found to be mirrored, and sometimes augmented, in incomplete data. Examples include the negative bias of ΔAUC (Additional file 1: Figures S13 and S15) and the positive bias of the categorical NRI (Additional file 1: Figure S16) in the absence of a true effect, specifically with an increasing number of covariates and decreasing sample size. Another example is the pessimistic bias of the Brier score, which was observed most strongly for Val-MI with an increasing degree of missingness and number of covariates and a decreasing sample size. Importantly, both the Val-MI and MI(-y)-Val strategies generally did not produce (optimistic) bias that was not already observed, at least to a weaker extent, in the complete data results. In terms of calibration, models tended to be miscalibrated in test (OOB) data for most strategies in both complete and incomplete data (Additional file 1: Figures S22 to S27). This trend became worse with a decreasing number of covariates and was often such that calibration lines were too steep (i.e., intercept <0; slope >1), rendering recalibration of prediction models a desirable step. Although not influencing discriminative test performance, this might improve overall test performance (as measured, e.g., by the Brier score).

Extension to a real-data situation

In order to assess how the competing strategies of combining internal validation and MI performed in a realistic situation, we based another simulation experiment on a real data set. In the population-based MONICA/KORA subcohort, the aim was to assess the ability of blood concentrations of inflammatory markers to predict all-cause mortality over a follow-up time of 15 years in n=2012 healthy adults.
We used the 1258 complete observations as the basis for a data-driven simulation study, where we imposed missingness on these data in a way that reflected the missingness structure of the original incomplete data set (Additional file 1: Figure S1), followed by applying the competing combined validation/imputation strategies to obtain the time-dependent (change in) AUC. Results are shown in Fig. 5. Without validation, performance estimates were much higher than those obtained with validation, confirming the importance of validation for the assessment of predictive performance. With ordinary optimism correction, performance estimates were still higher than the other estimates, in line with the assumption that it may achieve insufficient correction for optimism. The lowest values were observed for the OOB, CV3 and CV3rep estimates, suggesting a pessimistic bias, which seemed to be improved by the 0.632+ estimates.

Fig. 5 Data-driven simulation. Boxplots showing the distribution of performance estimates across the 250 simulations of missing values into the complete-observations data set derived from the MONICA/KORA subcohort. a AUC (10 years), performance of a model comprising the 15 inflammation-related markers; b ΔAUC (10 years), added performance of the inflammation-related markers on top of the baseline covariates. Note that the variability across the 250 simulations reflects variability in imposing missing values as well as resampling variability, but not population variability, which is part of the variability in Fig. 3. BS, bootstrap; CVK, K-fold CV; CVKrep, repeated K-fold CV; MI, multiple imputation; MI(-y), multiple imputation without including the outcome; No val., no validation (i.e., apparent performance); OOB, out-of-bag estimate; opt.corr., ordinary optimism-corrected estimate; SS, subsampling; Val, validation

Differences between the strategies of combining validation and imputation were less pronounced, presumably due to the large sample size and the small proportion of missing values (7.2 % on average among the inflammation-related markers). Val-MI yielded lower ΔAUC estimates on average than Val on complete data. This was consistent with our observation of a slight pessimism of Val-MI in the de novo simulation study in the presence of a true effect, and was observed even more strongly for CV10 and CV10rep. Val-MI also appeared more variable than the other strategies. This is likely because, at the given low proportion of missing values, performing, e.g., B=10 BS draws first followed by M=5 imputations on each yields less distinct data sets than performing M=5 imputations first followed by B=10 random BS runs, or than performing B=50 BS runs on the complete data.

Choice of number of resamples and number of imputations in practice

We addressed the question of how large the number of resamples B and the number of imputations M should be chosen in practice for two of the best-performing strategies, Val-MI and MI(-y)-Val based on bootstrapping with the 0.632+ estimate. To this end, we repeated the de novo simulation study for selected parameter settings with varying B and M. For Val-MI, we observed a steep decline in the variability of performance estimates with increasing B, whereas the decline was weaker with increasing M (Fig. 6). This is expected, especially in the settings with a lower degree of missingness, where the imputed data sets are not expected to differ strongly from each other. At a constant total number B·M, the best option seems to be to choose the largest possible value of B (with M=1).
This is also not unexpected, given that imputation variability is added on top of resampling variability in each sample.

Fig. 6 Choice of number of resamples B and imputations M in practice. Standard deviation of the performance estimate (AUC for performance, ΔAUC for added performance) across 10 runs, averaged across 10 simulated data sets, for strategies Val-MI and MI(-y)-Val based on bootstrapping at varying B and M. Apart from the parameters provided in the legend, parameters were chosen as in Fig. 3. In the case of added performance, the number of variables in the baseline model was set to p0=1

In contrast, for MI(-y)-Val, M seemed to be the number that mostly determined variability, with variability decreasing with increasing M even at constant B·M (Fig. 6). Furthermore, the variability of performance estimates was generally larger for Val-MI than for MI(-y)-Val, even with the least variable combination of B and M at a constant total number B·M. Thus, it is advisable to choose B and M as large as possible when applying Val-MI and MI(-y)-Val, respectively. An analytic relationship can be used to assess the variability of performance estimates with increasing B and M, respectively: the standard deviation of a mean is generally equal to the population standard deviation divided by the square root of the sample size, provided that the values are independent. Since the B performance estimates obtained with, e.g., Val-MI are independent with regard to the BS, we can assume that the following relationship holds: $$ \text{SD}\left(\hat\theta_{B} \right) = \frac{1}{\sqrt{B}} \text{SD}\left(\hat\theta_{1}\right), $$ where SD denotes the standard deviation, and \(\hat \theta _{B}\) the performance estimate when B resamples were conducted (and M=1 imputations). Empirical evidence confirms this assumption for both Val-MI and MI(-y)-Val (Additional file 1: Figures S28 and S29). Thus, we provide standard deviation estimates at B=1 and M=1 for various parameter settings in Additional file 1: Tables S2 and S3. This may allow the reader to approximate the standard deviation for their situation at larger values of B or M using Eq. (2) and to choose B or M such that the required accuracy is obtained; for example, halving the Monte Carlo standard deviation requires a fourfold increase in B.

Incomplete future patient data

In the context of building prediction models in the presence of missing values, it has been noted earlier that future patients, to whom the prediction model will be applied, might not have complete data for all covariates in the model [13]. To still allow application of the model, the missing values might be imputed using a set of patient data in which, notably, the outcome variable is not available. Thus, a relevant question is whether and how predictive performance suffers from missingness in the evaluation data. Therefore, we evaluated models fitted to simulated complete data in large independent data sets with the same underlying simulated effect sizes and varying degrees of missingness, imputed using MI(-y). We observed a clear decrease in predictive performance as the proportion of missing values in the test data increased (Additional file 1: Figure S30). This was observed most severely (in absolute terms) for larger true performance.

An approach towards confidence intervals for performance estimates

As an outlook, we considered an approach for constructing resampling-based confidence intervals for performance estimates that is based on the work by Jiang et al. [34]. Figure 7 shows type 1 error and power for AUC and ΔAUC estimates for the competing strategies.
Here, type 1 error was defined as the proportion of simulations with true AUC=0.5 or ΔAUC=0 in which a test of the null hypothesis AUC=0.5 or ΔAUC=0 was rejected (i.e., confidence interval above 0.5 and 0, respectively). In the presence of a true effect (AUC>0.5 or ΔAUC>0), this proportion specified power. In a low-dimensional situation (p=1), the nominal type 1 error rate of 5 % was kept on average by all strategies (Fig. 7a, c, e, g). However, at p=10, severely inflated type 1 error rates were observed for the strategies without validation (i.e., based on DeLong's test) and for the MI-Val 0.632+ estimate, while the 0.632+ estimate kept the nominal type 1 error rate in complete data and for Val-MI and MI(-y)-Val (Fig. 7b, d, f, h). As expected, the presence of missing values diminished power, as observed for Val-MI as compared to Val on complete data, and to an even stronger extent for MI(-y)-Val. Together, the proposed approach appears to be a way of obtaining valid confidence intervals for both the Val-MI and MI(-y)-Val 0.632+ estimates without additional computational costs.

Fig. 7 Type 1 error and power of resampling-based confidence intervals for AUC and ΔAUC estimates. Percentage of rejected null hypotheses (i.e., confidence interval above 0.5 and 0 for AUC (a, b, c, d) and ΔAUC (e, f, g, h), respectively) among 250 simulations, plotted against the underlying true (theoretical) value. In the absence of a true effect (true auc=0.5; Δauc=0), the percentage of rejected null hypotheses equals the type 1 error, otherwise power. Parameters were chosen as denoted in the figure titles, n=200, p0=0, 1, and otherwise as in Fig. 3

Discussion

Using simulated and real data, we have compared strategies of combining internal validation with multiple imputation in order to obtain unbiased estimates of various (added) predictive performance measures. Our investigation covered a wide range of data set characteristics, validation strategies and performance measures, and also dealt with practical questions such as the numbers of imputations and bootstrap samples to be chosen in a given data set, the aspect of incomplete future patient data, and the construction of confidence intervals for performance estimates. Throughout the investigated simulation settings, we observed an optimistic bias for apparent performance estimates, which was insufficiently corrected by ordinary optimism correction and the BS (and SS) 0.632 estimate, whereas the OOB estimate tended to be pessimistic and the 0.632+ estimate tended to provide unbiased estimates. CV estimates were more variable than BS estimates (although this comparison might not be completely fair, since the total number of training/test set pairs was not always the same in BS/SS as in CV or CVrep). These trends were observed similarly for complete and incomplete data and are consistent with previous observations for complete data. For instance, Wehberg and Schumacher [41] reported the 0.632+ method to outperform ordinary optimism correction and 0.632, while the OOB estimate was pessimistic. Also, Smith et al. [1] and Braga-Neto et al. [42] observed insufficient optimism correction for the ordinary method and the 0.632 estimate, respectively, and both reported increased variability of CV estimates. Another publication focused on AUC estimation and found the BS 0.632+ estimate to be the least biased and least variable among the BS estimates [43].
When we investigated strategies of combining validation with imputation, we observed an optimistic bias for the strategy of imputing first and then resampling from the imputed data (MI-Val), whereas imputing training and test sets separately (Val-MI) provided largely unbiased and sometimes pessimistic results. The question of the order in which bootstrapping and imputation should be combined has been studied before from a theoretical [44] and an empirical [12] perspective. In MI-Val, all observations, which are later on repeatedly separated into training (BS) and test (OOB) sets, are imputed in one imputation process. Since values are imputed using predictions based on multivariate models including all observations, it is evident that future test observations do not remain completely blind to future training observations. Still, the severity of the expected optimism of the MI-Val approach under different data characteristics, validation strategies and performance estimates has not been studied intensively. In practice, both MI-Val and Val-MI have been applied before [9, 10, 45]. Val-MI tended to be pessimistically biased in the presence of a true underlying effect in our and others' [12] work. Specifically, when the sample size is low and the number of covariates large, the model overfits the training (BS) part of the data set, resulting in a worse fit to the test (OOB) data. In the presence of missing values, training and test data are imputed separately. It can be assumed that overfitting also occurs at the stage of imputation (where the imputation models might become overfitted to the observed data both in the training and in the test set). This may result in a more severe difference in the observed covariate-outcome relationships between training and test data, and consequently a worse fit of the model fitted to the training data when applied to the test data, yielding an underestimation of predictive performance that apparently cannot be fully corrected by the 0.632+ estimate. MI(-y)-Val produced mostly pessimistic results in the presence of an underlying true effect, largely independently of sample size and number of covariates. In the general MI literature, it is not recommended to omit the outcome from the imputation models [26, 46]. Omitting the outcome amounts to assuming that it is not related to the covariates, as stated by von Hippel [26]. This assumption is wrong in the case of a true underlying effect, resulting in misspecified imputation models and, in turn, in an underestimation of effect estimates [46]. Of note, the same study reported no difference between the MI and MI(-y) methods as far as inference is concerned. To our knowledge, the issue has not been investigated in the context of predictive performance estimation. In their study of 'incomplete' CV, Hornung et al. [14] investigated the effect of – amongst other preprocessing steps – imputing the whole data set prior to CV as compared to basing the imputation on the training data only. They used a single imputation method that omitted the outcome, and found only little impact on CV error estimation. For measures of added predictive performance, we observed that even in complete data, estimates were sometimes biased in the absence of a true effect. For instance, ΔAUC and the categorical NRI were pessimistically and optimistically biased, respectively. The optimistic bias of the NRI has led to critical discussion [47]. It is not unexpected that such bias is not eliminated when the respective validation method is combined with imputation.
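For concreteness, the categorical NRI discussed here can be computed in a few lines of base R. The following is a hedged sketch with simulated placeholders for the observed outcomes (y) and the predicted risks from the baseline (p_base) and extended (p_ext) models, using the cutoffs frac/2 and 3·frac/2 from the simulation study:

```r
set.seed(1)
y      <- rbinom(200, 1, 0.3)          # placeholder observed 0/1 outcomes
p_base <- plogis(rnorm(200, -1, 1.0))  # placeholder baseline-model risks
p_ext  <- plogis(rnorm(200, -1, 1.2))  # placeholder extended-model risks

frac <- mean(y)                        # outcome case frequency (assumed <= 0.5)
cat3 <- function(p) cut(p, breaks = c(0, frac / 2, 3 * frac / 2, 1),
                        include.lowest = TRUE, labels = FALSE)

up   <- cat3(p_ext) > cat3(p_base)     # moved to a higher risk category
down <- cat3(p_ext) < cat3(p_base)     # moved to a lower risk category

nri <- (mean(up[y == 1]) - mean(down[y == 1])) +   # net improvement among cases
       (mean(down[y == 0]) - mean(up[y == 0]))     # net improvement among controls
```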
Our study focused on treating missing values and deriving reasonable estimates of predictive performance measures in the presence of incomplete data in the research stage, i.e., the situation where data sets with complete outcome data are available from studies/cohorts and one aims to develop a prediction model for use in future patient data (as opposed to the application stage, where the model is applied to predict patients' outcomes). Thus, when we evaluated estimates, they were compared against the average performance in large complete data sets. An important question is how missing values in future patient data impair the performance of a developed prediction model, and whether such impairment should be considered already when developing the model. It has been suggested that data in the research stage should be imputed omitting the outcome from the imputation process, at least in the test sets, to get close to the situation in future real-world clinical data, where no outcome would be available for imputation either [13]. According to this suggestion, the strategy Val-MI should be avoided. However, how closely a predictive performance estimate obtained through any strategy on the research data approximates the actual performance in future clinical data depends strongly on the similarity of the proportion (and, putatively, the pattern) of missing values in the two situations. Our and others' [48] results suggest that – regardless of how missing values in future clinical data are treated – accuracy is lost with increasing missingness in future data at a given proportion of missingness in the research data. We expect the proportion of missing values in future patient data to be lower than that in study data in many cases. Specifically, epidemiological study data are subject to additional missingness attributable to design, sample availability and questionnaire response. Since the precise missingness patterns in both study data and future patient data in clinical practice may vary between studies and the outcome of interest, no general rule can be developed for estimating the predictive performance of a model when future patient data are expected to contain missing values. We propose a simple integrated approach for the construction of confidence intervals for performance estimates. The resulting intervals kept the nominal type 1 error rate for both Val-MI and MI(-y)-Val, although a severe loss in power as compared to complete data was observed. The chosen approach relies on the numerical finding that prediction error estimates have the same variability as apparent error estimates, and thus bootstrap intervals for the apparent error can be centered at the prediction error estimate [34]. The strategy has a major computational advantage over alternative strategies of constructing confidence intervals for estimates of prediction error/performance measures that use resampling in order to estimate the distribution of, e.g., CV errors [49]. The latter require nesting the whole validation (and imputation) procedure within an outer resampling loop. Other alternatives that do not require a double resampling loop might rely on tests applied to the test data.
An example is the median P rule suggested by van de Wiel et al. [50], where a nonparametric test is conducted on the test parts of a subsampling scheme, resulting in a collection of P values of which the median is a valid summary that controls the type 1 error under fairly general conditions. The methodology could be generalized to other (parametric or nonparametric) tests conducted on the test observations, such as DeLong's test for the (Δ)AUC, and extension to incomplete data is possible with the help of Rubin's combination rules. However, this strategy might lack power, because the tests are conducted on the small test sets. Together, our findings allow the careful formulation of recommendations for practice. First, if one aims to assess the predictive performance of a model, validation is of utmost importance to avoid overoptimism. As for complete data, the bootstrap with the 0.632+ estimate turned out to be a preferable validation strategy also in the case of incomplete data. When combining internal validation and MI, one should not impute the full data set with the outcome included in the imputation models and then resample (strategy MI-Val), due to its optimistic bias. Instead, we can recommend nesting the MI within the resampling (Val-MI) or performing MI first, but without including the outcome variable (MI(-y)-Val); a sketch of the former follows this paragraph. The number of resamples (B) and imputations (M) should be maximized in Val-MI and MI(-y)-Val, respectively. The choice of the exact number of resamples and imputations for a given data set can be guided by the variability data we provide. In many situations and for many performance criteria, Val-MI might be preferable, although this choice may also depend on computational capacity, which is lower for MI(-y)-Val, where the variability of the 0.632+ estimate is lower at the same number of resamples and only half the number of imputation runs is required. One should also be aware of (complete-data) biases of specific performance criteria, which may be augmented in the presence of missing values. Finally, one possible way of constructing valid confidence intervals for predictive performance estimates is to center the bootstrap interval of the apparent performance estimate at the predictive performance estimate. This strategy can easily be embedded in the Val-MI and MI(-y)-Val strategies. Strengths of this study include its comprehensiveness with regard to different data characteristics, validation strategies and performance measures, and the use of both simulated and real data. Our investigation may be extended in several respects. For instance, we did not vary effect strengths between the covariates. The relationship between effect strengths and missingness in covariates may influence the extent of potential bias in, e.g., Val-MI. Furthermore, it will be interesting to extend the study on confidence intervals by adopting alternative approaches to incomplete data, with a focus on searching for a strategy that improves power. In addition, one might explore the role of the obtained findings in a higher-dimensional situation, where variable selection and parameter tuning often require an inner validation loop. Of note, while in our study results were very similar for BS and SS, in an extended situation involving model selection, or hypothesis tests following [50], SS should be preferred due to known flaws of the BS methodology [51].
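As a synthesis of these recommendations, the following is a hedged sketch of the recommended Val-MI loop for a binary outcome, using the mice and pROC packages named above. It assumes a data frame dat with incomplete covariates and a complete 0/1 outcome column y (all names are placeholders; the authors' actual code is in Additional file 2):

```r
library(mice)
library(pROC)

val_mi <- function(dat, B = 10, M = 1) {
  n <- nrow(dat)
  th_bs <- th_oob <- numeric(B)
  for (b in seq_len(B)) {
    idx <- sample(n, n, replace = TRUE)          # bootstrap draw
    oob <- setdiff(seq_len(n), idx)              # out-of-bag observations
    # impute training and test parts separately (strategy Val-MI):
    tr <- complete(mice(dat[idx, ], m = M, printFlag = FALSE), 1)
    te <- complete(mice(dat[oob, ], m = M, printFlag = FALSE), 1)
    fit <- glm(y ~ ., data = tr, family = binomial)
    th_bs[b]  <- as.numeric(auc(roc(tr$y, predict(fit, tr, type = "response"))))
    th_oob[b] <- as.numeric(auc(roc(te$y, predict(fit, te, type = "response"))))
  }
  c(bs = mean(th_bs), oob = mean(th_oob))        # feed into the 0.632+ formula above
}
```

The returned averages can then be combined with the apparent performance via the 0.632+ formula; B should be chosen as large as computationally feasible, e.g., guided by the SD relationship in Eq. (2).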
Conclusions

In the presence of missing values, the strategy we most recommend for obtaining estimates of predictive performance measures is to perform the bootstrap for internal validation, with separate imputation of the training and test parts, and to determine the 0.632+ estimate. For this strategy, at a given computational capacity, the number of resamples should be maximized. The strategy allows for the integrated calculation of confidence intervals for the performance estimate.

Abbreviations

ΔAUC: Change in AUC; AUC: Area under the ROC curve; BS: Bootstrap; CV: Cross-validation; CVK: K-fold cross-validation; CVKrep: Repeated K-fold cross-validation; HDL: High density lipoprotein; IDI: Integrated discrimination improvement; KORA: Cooperative health research in the region of Augsburg; MAR: Missing at random; MARblock: Blockwise missing at random; MCAR: Missing completely at random; MI: Multiple imputation; MI-Val: Multiple imputation followed by internal validation; MI(-y): Multiple imputation without including the outcome in the imputation models; MI(-y)-Val: MI(-y) followed by internal validation; MICE: Multiple imputation by chained equations; MONICA: Monitoring trends and determinants in cardiovascular disease; NRI: Net reclassification improvement; OOB: Out-of-bag; ROC: Receiver-operating characteristic; SS: Subsampling; Val: Internal validation; Val-MI: Internal validation followed by multiple imputation

References

Smith GCS, Seaman SR, Wood AM, Royston P, White IR. Correcting for optimistic prediction in small data sets. Am J Epidemiol. 2014; 180(3):318–24. Steyerberg EW, Harrell FE Jr, Borsboom GJ, Eijkemans MJ, Vergouwe Y, Habbema JD. Internal validation of predictive models: efficiency of some procedures for logistic regression analysis. J Clin Epidemiol. 2001; 54(8):774–81. Harrell FE Jr, Lee KL, Mark DB. Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat Med. 1996; 15(4):361–87. Steyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, Pencina MJ, Kattan MW. Assessing the performance of prediction models: a framework for traditional and novel measures. Epidemiology. 2010; 21(1):128–38. Raessler S, Rubin DB, Zell ER. Incomplete data in epidemiology and medical statistics. Handb Stat. 2008; 27:569–601. van Buuren S, Groothuis-Oudshoorn K. mice: Multivariate imputation by chained equations in R. J Stat Softw. 2011; 45:1–67. van Buuren S, Boshuizen HC, Knook DL. Multiple imputation of missing blood pressure covariates in survival analysis. Stat Med. 1999; 18:681–94. Rubin DB. Multiple Imputation for Nonresponse in Surveys. New York: John Wiley & Sons; 1987. Heymans MW, van Buuren S, Knol DL, van Mechelen W, de Vet HCW. Variable selection under multiple imputation using the bootstrap in a prognostic study. BMC Med Res Methodol. 2007; 7:33. Vergouw D, Heymans MW, Peat GM, Kuijpers T, Croft PR, de Vet HCW, van der Horst HE, van der Windt DAWM. The search for stable prognostic models in multiple imputed data sets. BMC Med Res Methodol. 2010; 10:81. Vergouwe Y, Royston P, Moons KGM, Altman DG. Development and validation of a prediction model with missing predictor data: a practical approach. J Clin Epidemiol. 2010; 63(2):205–14. Musoro JZ, Zwinderman AH, Puhan MA, ter Riet G, Geskus RB. Validation of prediction models based on lasso regression with multiply imputed data. BMC Med Res Methodol. 2014; 14:116. Wood AM, Royston P, White IR. The estimation and use of predictions for the assessment of model performance using large samples with multiply imputed data. Biom J. 2015; 57(4):614–32.
Hornung R, Bernau C, Truntzer C, Wilson R, Stadler T, Boulesteix AL. A measure of the impact of CV incompleteness on prediction error estimation with application to PCA and normalization. BMC Med Res Methodol. 2015; 15:95. Su JQ, Liu JS. Linear combinations of multiple diagnostic markers. J Am Stat Assoc. 1993; 88(424):1350–5. Marshall A, Altman DG, Royston P, Holder RL. Comparison of techniques for handling missing covariate data within prognostic modelling studies: a simulation study. BMC Med Res Methodol. 2010; 10:7. Holle R, Happich M, Lowel H, Wichmann H. KORA – a research platform for population based health research. Gesundheitswesen. 2005; 67:19–25. Herder C, Baumert J, Zierer A, Roden M, Meisinger C, Karakas M, Chambless L, Rathmann W, Peters A, Koenig W, Thorand B. Immunological and cardiometabolic risk factors in the prediction of type 2 diabetes and coronary events: MONICA/KORA Augsburg case-cohort study. PLoS ONE. 2011; 6:19852. Thorand B, Zierer A, Huth C, Linseisen J, Meisinger C, Roden M, Peters A, Koenig W, Herder C. Effect of serum 25-hydroxyvitamin D on risk for type 2 diabetes may be partially mediated by subclinical inflammation: results from the MONICA/KORA Augsburg study. Diabetes Care. 2011; 34(10):2320–2. Karakas M, Koenig W, Zierer A, Herder C, Rottbauer W, Baumert J, Meisinger C, Thorand B. Myeloperoxidase is associated with incident coronary heart disease independently of traditional risk factors: results from the MONICA/KORA Augsburg study. J Intern Med. 2012; 271(1):43–50. Raghunathan TE, Lepkowski JM, Hoewyk JV, Solenberger P. A multivariate technique for multiply imputing missing values using a sequence of regression models. Surv Methodol. 2001; 27:85–95. Yuan Y. Multiple imputation using SAS software. J Stat Softw. 2011; 45:1–25. Efron B, Tibshirani R. Improvement on cross-validation: the 0.632+ bootstrap method. J Am Stat Assoc. 1997; 92:548–60. Gerds TA, Cai T, Schumacher M. The performance of risk prediction models. Biom J. 2008; 50(4):457–79. Efron B. Estimating the error rate of a prediction rule: improvement on cross-validation. J Am Stat Assoc. 1983; 78(382):316–31. von Hippel PT. Regression with missing Y's: an improved method for analyzing multiply-imputed data. Sociol Methodol. 2007; 37:83–117. Harrell FE Jr, Califf RM, Pryor DB, Lee KL, Rosati RA. Evaluating the yield of medical tests. JAMA. 1982; 247(18):2543–6. Miller ME, Hui SL, Tierney WM. Validation techniques for logistic regression models. Stat Med. 1991; 10(8):1213–26. Brier G. Verification of forecasts expressed in terms of probability. Mon Weather Rev. 1950; 78:1–3. Pencina MJ, D'Agostino RB Sr, D'Agostino RB Jr, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Stat Med. 2008; 27:157–72. Mihaescu R, van Zitteren M, van Hoek M, Sijbrands EJG, Uitterlinden AG, Witteman JCM, Hofman A, Hunink MGM, van Duijn CM, Janssens ACJW. Improvement of risk prediction by genomic profiling: reclassification measures versus the area under the receiver operating characteristic curve. Am J Epidemiol. 2010; 172(3):353–61. Pencina MJ, D'Agostino RB Sr, Steyerberg EW. Extensions of net reclassification improvement calculations to measure usefulness of new biomarkers. Stat Med. 2011; 30(1):11–21. Heagerty PJ, Lumley T, Pepe MS. Time-dependent ROC curves for censored survival data and a diagnostic marker. Biometrics. 2000; 56:337–44. Jiang B, Zhang X, Cai T.
Estimating the confidence interval for prediction errors of support vector machine classifiers. J Mach Learn Res. 2008; 9:521–40. Uno H, Cai T, Tian L, Wei L. Evaluating prediction rules for t-year survivors with censored regression models. J Am Stat Assoc. 2007; 102(478):527–37. R Core Team. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2014. http://www.R-project.org/. Genz A, Bretz F. Computation of Multivariate Normal and t Probabilities. Lecture Notes in Statistics, Vol. 195. Heidelberg: Springer-Verlag; 2009. ISBN:978-3-642-01688-2. http://CRAN.R-project.org/package=mvtnorm. Robin X, Turck N, Hainard A, Tiberti N, Lisacek F, Sanchez JC, Müller M. pROC: an open-source package for R and S+ to analyze and compare ROC curves. BMC Bioinforma. 2011; 12:77. Kundu S, Aulchenko YS, Janssens ACJW. PredictABEL: Assessment of Risk Prediction Models. Heagerty PJ, Saha-Chaudhuri P. survivalROC: time-dependent ROC curve estimation from censored survival data. 2013. R package version 1.0.3, http://CRAN.R-project.org/package=survivalROC. Wehberg S, Schumacher M. A comparison of nonparametric error rate estimation methods in classification problems. Biom J. 2004; 46(1):35–47. Braga-Neto UM, Dougherty ER. Is cross-validation valid for small-sample microarray classification? Bioinformatics. 2004; 20(3):374–80. Sahiner B, Chan HP, Hadjiiski L. Classifier performance prediction for computer-aided diagnosis using a limited dataset. Med Phys. 2008; 35(4):1559–70. Shao J, Sitter RR. Bootstrap for imputed survey data. J Am Stat Assoc. 1996; 91(435):1278–88. Siersma V, Johansen C. The use of the bootstrap in the analysis of case-control studies with missing data. Technical report; 2004. Moons KGM, Donders RART, Stijnen T, Harrell FE Jr. Using the outcome for imputation of missing predictor values was preferred. J Clin Epidemiol. 2006; 59(10):1092–101. Pepe MS, Fan J, Feng Z, Gerds T, Hilden J. The net reclassification index (NRI): a misleading measure of prediction improvement even with independent test data sets. Stat Biosci. 2015; 7(2):282–95. Zhang Q, Rahman A, D'Este C. Impute vs. ignore: missing values for prediction. In: Neural Networks (IJCNN), The 2013 International Joint Conference On. IEEE: 2013. p. 1–8. http://ieeexplore.ieee.org/document/6707014/. Jiang W, Varma S, Simon R. Calculating confidence intervals for prediction error in microarray classification using resampling. Stat Appl Genet Mol Biol. 2008; 7(1):8. van de Wiel MA, Berkhof J, van Wieringen WN. Testing the prediction error difference between 2 predictors. Biostatistics. 2009; 10(3):550–60. Janitza S, Binder H, Boulesteix AL. Pitfalls of hypothesis tests and model selection on bootstrap samples: causes and consequences in biometrical applications. Biom J. 2015; 58(3):447–73.

We thank all MONICA/KORA study participants and all members of the field staff in Augsburg who planned and conducted the study. We thank Annette Peters, head of the KORA platform, for providing the data, and Andrea Schneider for excellent technical support. We thank the involved cooperation partners Wolfgang Koenig (University of Ulm Medical Center, Ulm, Germany) and Christian Herder (German Diabetes Center, Düsseldorf, Germany) for permission to use the MONICA/KORA subcohort data for the present analyses.
The MONICA/KORA research platform and the KORA Augsburg studies are financed by the Helmholtz Zentrum München - German Research Center for Environmental Health, which is funded by the German Federal Ministry of Education and Research (BMBF) and by the State of Bavaria. Laboratory measurements in the MONICA/KORA case-cohort study were funded through research grants from the German Research Foundation (DFG) (TH-784/2-1 and TH-784/2-2) and additional funds provided by the University of Ulm, the German Diabetes Center, Düsseldorf, the Federal Ministry of Health, and the Ministry of Innovation, Science, Research and Technology of the state of North Rhine-Westphalia. Astrid Zierer was supported by the Else Kröner-Fresenius-Stiftung. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. MONICA/KORA subcohort data are available upon request through the application tool KORA.PASST (http://epi.helmholtz-muenchen.de). R code for reproduction of the simulation study is available in Additional file 2. SW, AB and MW devised the basic idea for the manuscript. SW performed the statistical analyses with contributions from AB, AZ, BT and MW. BT and AZ contributed the MONICA/KORA data. SW, AB and MW wrote the manuscript. AZ and BT revised the manuscript for important intellectual content. All authors read and approved the final manuscript. Written informed consent was obtained from all MONICA/KORA participants and the studies were reviewed and approved by the local ethics committee (Bayerische Landesaerztekammer).

Author affiliations: Research Unit of Molecular Epidemiology, Helmholtz Zentrum München - German Research Center for Environmental Health, Ingolstädter Landstrasse 1, 85764 Neuherberg, Germany (Simone Wahl); Institute of Epidemiology II, Helmholtz Zentrum München - German Research Center for Environmental Health, Ingolstädter Landstrasse 1, 85764 Neuherberg, Germany (Simone Wahl, Astrid Zierer, Barbara Thorand); German Center for Diabetes Research (DZD e.V.), Ingolstädter Landstrasse 1, 85764 Neuherberg, Germany (Simone Wahl, Barbara Thorand); Department of Medical Informatics, Biometry and Epidemiology, Ludwig-Maximilians-Universität München, Marchioninistrasse 15, 81377 Munich, Germany (Anne-Laure Boulesteix); Department of Epidemiology and Biostatistics, VU University Medical Center, PO Box 7057, 1007 MB Amsterdam, The Netherlands (Mark A. van de Wiel); Department of Mathematics, VU University, De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands (Mark A. van de Wiel).

Correspondence to Simone Wahl. The original version of this article was revised: the spelling of author Mark A. van de Wiel's name was corrected. An erratum to this article is available at http://dx.doi.org/10.1186/s12874-016-0271-7.

Additional file 1 Supplementary Figures and Tables:
Figure S1. Imposing missingness into complete observations from the application data set.
Figure S2. Visualization of internal validation strategies in complete data.
Figure S3. Simulation distribution of AUC estimates obtained by different strategies at large sample size (n=2000) and p=1 covariate.
Figure S4. Simulation distribution of AUC estimates obtained by different strategies at large sample size (n=2000) and p=1 covariate.
Figure S5. Simulation distribution of ΔAUC estimates obtained by different strategies.
Figure S6. Simulation distribution of categorical NRI estimates obtained by different strategies.
Figure S7. Simulation distribution of continuous NRI estimates obtained by different strategies.
Figure S8. Simulation distribution of IDI estimates obtained by different strategies.
Figure S9. Mean squared error of AUC estimates obtained by different strategies based on bootstrapping.
Figure S10. Bias of AUC estimates obtained by different strategies based on bootstrapping – influence of further data characteristics.
Figure S11. Bias of Brier score estimates obtained by different strategies based on bootstrapping.
Figure S12. Mean squared error of Brier score estimates obtained by different strategies based on bootstrapping.
Figure S13. Bias of ΔAUC estimates obtained by different strategies based on bootstrapping.
Figure S14. Mean squared error of ΔAUC estimates obtained by different strategies based on bootstrapping.
Figure S15. Bias of ΔAUC estimates obtained by different strategies based on bootstrapping – influence of further data characteristics.
Figure S16. Bias of categorical net reclassification improvement (NRI) estimates obtained by different strategies based on bootstrapping.
Figure S17. Mean squared error of categorical net reclassification improvement (NRI) estimates obtained by different strategies based on bootstrapping.
Figure S18. Bias of continuous net reclassification improvement (NRI) estimates obtained by different strategies based on bootstrapping.
Figure S19. Mean squared error of continuous net reclassification improvement (NRI) estimates obtained by different strategies based on bootstrapping.
Figure S20. Bias of integrated discrimination improvement (IDI) estimates obtained by different strategies based on bootstrapping.
Figure S21. Mean squared error of integrated discrimination improvement (IDI) estimates obtained by different strategies based on bootstrapping.
Figure S22. Simulation distribution of calibration intercept estimates obtained by different strategies.
Figure S23. Bias of calibration intercept estimates obtained by different strategies based on bootstrapping.
Figure S24. Mean squared error of calibration intercept estimates obtained by different strategies based on bootstrapping.
Figure S25. Simulation distribution of calibration slope estimates obtained by different strategies.
Figure S26. Bias of calibration slope estimates obtained by different strategies based on bootstrapping.
Figure S27. Mean squared error of calibration slope estimates obtained by different strategies based on bootstrapping.
Figure S28. Relationship of standard deviations of performance estimates with varying number of resamples and imputations.
Figure S29. Standard deviation of performance estimates at varying number of resamples and imputations.
Figure S30. Impairment of performance evaluation through missing values in the test data.
Table S1. Descriptive information on phenotypic and inflammation-related markers from the MONICA/KORA subcohort.
Table S2. Standard deviation of performance estimates obtained with Val-MI at one resample (B=1) and one imputation (M=1).
Table S3. Standard deviation of performance estimates obtained with MI(-y)-Val at one imputation (M=1) and one resample (B=1). (PDF 1823 kb)

Additional file 2 R code:
Example_R_Code.r: example R code showing how to apply the functions.
gen.data.r: R functions for generation of data and imposing missingness.
do.resampling.r: R functions for performing internal validation and multiple imputation.
get.performance.r: R functions to do the modeling and obtain performance estimates.
do.CIs.r: R functions to obtain point and confidence interval estimates. (ZIP 20 kb)

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Wahl, S., Boulesteix, AL., Zierer, A. et al. Assessment of predictive performance in incomplete data by combining internal validation and multiple imputation. BMC Med Res Methodol 16, 144 (2016). https://doi.org/10.1186/s12874-016-0239-7

Keywords: Missing values; Incomplete data; Prediction model; Predictive performance; Internal validation
Best theory diagrams for multilayered structures via shell finite elements

Marco Petrolo (ORCID: orcid.org/0000-0002-7843-105X) & Erasmo Carrera

Composite structures are convenient structural solutions for many engineering fields, but their design is challenging and may lead to oversizing due to the significant amount of uncertainty concerning current modeling capabilities. From a structural analysis standpoint, the finite element method is the most used approach, and shell elements are of primary importance in the case of thin structures. Current research efforts aim at improving the accuracy of such elements with limited computational overhead, to improve their predictive capabilities and widen their applicability to complex structures and nonlinear cases. The present paper presents shell elements with the minimum number of nodal degrees of freedom and maximum accuracy. Such elements compose the best theory diagram stemming from the combined use of the Carrera Unified Formulation and the Axiomatic/Asymptotic Method. Moreover, this paper provides guidelines on the choice of the proper higher-order terms via the introduction of relevance factor diagrams. The numerical cases consider various sets of design parameters, such as the thickness, curvature, stacking sequence, and boundary conditions. The results show that the most relevant sets of higher-order terms are third-order and that the thickness plays the primary role in their choice. Moreover, certain terms have a very high influence, and their neglect may affect the accuracy of the model significantly.

The finite element method (FEM) is one of the most common tools for the design of structures and makes use of three-dimensional (3D), 2D and 1D elements to solve a broad variety of linear and nonlinear structural problems. 2D and 1D elements, although less accurate than 3D ones, can lead to reduced computational costs. 2D models are referred to as shell and plate finite elements (FE) and can model metallic and composite thin-walled structures. The 2D models available in commercial codes rely on the classical theories of structures [1,2,3]. In a 2D model, the primary unknown variables depend on two coordinates, x and y. On the other hand, assumed fields define the unknown distributions along the thickness direction, z. A structural theory has a given expansion of the unknowns along z. Such expansions characterize the accuracy of a theory and its computational cost. For instance, in FEM, the expansion terms, referred to as generalized unknown variables, define the nodal degrees of freedom (DOF) of the model [4]. Richer expansions lead to higher accuracy and computational cost [5], but wider application scenarios. For a given accuracy level, the choice of a structural theory is problem dependent. In the case of composite structures, the following characteristics may require structural models with richer expansions than the classical ones [6]:

Moderately thick or thick structures, i.e., \(\frac{a}{h}\) < 50, where a is the characteristic length of the structure and h is the thickness.

Materials with high transverse deformability, e.g., common orthotropic materials, in which \(\frac{E_L}{E_T}\), \(\frac{E_L}{E_z} > 5\), and \(\frac{G}{E_L}<\frac{1}{10}\), where E and G are the Young and shear moduli, L denotes the fiber direction, and T, z are the directions perpendicular to L.

Transverse anisotropy due, for instance, to the presence of contiguous layers with different properties.
As is well known, such factors require the proper modeling of shear and normal transverse stresses, and of the variations of the displacement field at the interface between two layers with different mechanical properties, i.e., the Zig–Zag effect. The development of structural theories, i.e., the selection of the expansion terms, can follow two main approaches, namely, the axiomatic and the asymptotic one. The axiomatic method introduces expansions related to hypotheses on the mechanical behavior to reduce the mathematical complexity of the 3D differential equations of elasticity, as in the case of the classical theories [1,2,3]. The asymptotic method introduces a mathematically rigorous expansion having known accuracy compared to the 3D exact solution [7, 8]. Axiomatic models are easier to implement than asymptotic ones but may miss fundamental expansion terms. Asymptotic models are more rigorous, but the simultaneous consideration of multiple problem parameters, e.g., thickness and orthotropic ratio, may be cumbersome. Over the last decades, the research activity has focused on the development of shell and plate models incorporating the effects mentioned above [9, 10]. The most recent efforts describe well the open research topics and refinement techniques related to shells, such as improvements of classical models [11] and higher-order models [12,13,14]; asymptotic approaches [15]; improvement of FE performance regarding membrane and shear locking [16,17,18,19,20,21], mesh accuracy [22], and distortion [23]; improved modeling of the interlaminar shear stresses [24]; Layer-Wise (LW) models [25, 26]; Zig–Zag models [27, 28]; mixed formulations [29,30,31]; variable kinematics finite elements with multifield effects [32]; extensions to non-linear problems [33, 34] and peridynamics [35]; and innovative solution schemes such as the numerical manifold method [36]. Via the axiomatic/asymptotic method (AAM), this paper presents best theory diagrams (BTD) [37] providing the shell finite elements with the minimum computational cost and maximum accuracy for a given problem. In [37], the results stemmed from strong-form solutions, restricting the analysis with respect to boundary conditions and stacking sequences. This paper is the first contribution based on shell finite elements, allowing the generation of BTD for various boundary conditions and stacking sequences. Moreover, this paper presents a novel metric referred to as the Relevance Factor (RF) to evaluate the influence of the terms and to outline guidelines for the proper choice of the expansion terms. This paper is organized as follows: the governing equations and the methodology are in the "Finite element formulation" and "Best theory diagram" sections; then, the "Results" and "Conclusions" sections follow.
Finite element formulation
The Carrera Unified Formulation (CUF) defines the displacement field of a 2D model as $$\begin{aligned} \mathbf {u}(x, y, z)=F_{\tau }(z)\mathbf {u}_{\tau }(x, y) \qquad \tau =1, \dots , M \end{aligned}$$ where the Einstein notation acts on \(\tau \). \(\mathbf {u}\) is the displacement vector, (u\(_{x}\) u\(_{y}\) u\(_{z}\))\(^T\). F\(_{\tau }\) are the thickness expansion functions. \(\mathbf {u}_{\tau }\) is the vector of the generalized unknown displacements. M is the number of expansion terms.
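Anticipating the polynomial, Taylor-like choice of F\(_{\tau }\) made in the following (F\(_{\tau }\)(z) = z\(^{\tau -1}\)), the CUF expansion can be evaluated in a few lines. The sketch below is only illustrative; the array layout and the function name are assumptions, not part of the original formulation:

```python
import numpy as np

def cuf_displacement(u_tau, z):
    """Evaluate u(x, y, z) = F_tau(z) u_tau(x, y) for Taylor-like
    expansion functions F_tau(z) = z**(tau - 1), tau = 1, ..., M.
    u_tau: (M, 3) array of generalized unknowns at a point (x, y);
    returns the three displacement components at thickness coordinate z."""
    M = u_tau.shape[0]
    F = z ** np.arange(M)  # [1, z, z**2, ..., z**(M-1)]
    return F @ u_tau       # implicit sum over tau (Einstein notation)
```

For M = 4 this reproduces exactly the third-order field written out in the next paragraph, one row of u_tau per generalized variable.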
In the case of polynomial, Taylor-like expansions, a third-order model, hereinafter referred to as N = 3, has the following displacement field: $$\begin{aligned} \begin{array}{l} u_{x}=u_{x_{1}}+z\,u_{x_{2}}+z^{2}\,u_{x_{3}}+z^{3}\,u_{x_{4}}\\ u_{y}=u_{y_{1}}+z\,u_{y_{2}}+z^{2}\,u_{y_{3}}+z^{3}\,u_{y_{4}}\\ u_{z}=u_{z_{1}}+z\,u_{z_{2}}+z^{2}\,u_{z_{3}}+z^{3}\,u_{z_{4}}\\ \end{array} \end{aligned}$$ The third-order model has twelve nodal unknowns. The order and type of the expansion are free parameters. In other words, the theory of structures is an input of the analysis. This paper makes use of the Equivalent Single Layer (ESL) formulation and of N = 4 as the reference model to build the BTD. The metric coefficients H\(^k_\alpha \), H\(^k_\beta \) and H\(^k_z\) of the k-th layer of the multilayered shell are $$\begin{aligned} H^k_\alpha = A^k (1 + z_k/R^k_\alpha ), \quad H^k_\beta = B^k (1 + z_k/R^k_\beta ), \quad H^k_z = 1\;. \end{aligned}$$ R\(^k_\alpha \) and R\(^k_\beta \) are the principal radii of the middle surface of the k-th layer, A\(^k\) and B\(^k\) the coefficients of the first fundamental form of \(\Omega _k\), see Fig. 1. This paper focuses only on shells with constant radii of curvature, with A\(^k = \) B\(^k = 1\). The geometrical relations are $$\begin{aligned} \begin{aligned} \varvec{\epsilon }^k_p&= \begin{Bmatrix} \epsilon ^k_{\alpha \alpha }, \epsilon ^k_{\beta \beta }, \epsilon ^k_{\alpha \beta } \end{Bmatrix}^T = (\varvec{D}^k_p + \varvec{A}^k_p) \varvec{u}^k \\ \varvec{\epsilon }^k_n&= \begin{Bmatrix} \epsilon ^k_{\alpha z}, \epsilon ^k_{\beta z}, \epsilon ^k_{zz} \end{Bmatrix}^T = (\varvec{D}^k_{n\Omega } +\varvec{D}^k_{nz} - \varvec{A}^k_n) \varvec{u}^k \end{aligned} \end{aligned}$$ $$\begin{aligned} \varvec{D}^k_p= & {} \left[ \begin{array}{c@{\quad }c@{\quad }c} \frac{\partial _{\alpha }}{H^k_{\alpha }} &{} 0 &{} 0 \\ 0 &{} \frac{\partial _{\beta }}{H^k_{\beta }} &{} 0 \\ \frac{\partial _{\beta }}{H^k_{\beta }} &{} \frac{\partial _{\alpha }}{H^k_{\alpha }} &{} 0 \end{array} \right] ,\; \quad \varvec{D}^k_{n\Omega } = \left[ \begin{array}{c@{\quad }c@{\quad }c} 0 &{} 0 &{} \frac{\partial _{\alpha }}{H^k_{\alpha }} \\ 0 &{} 0 &{} \frac{\partial _{\beta }}{H^k_{\beta }} \\ 0 &{} 0 &{} 0 \end{array} \right] ,\; \quad \varvec{D}^k_{nz} = \left[ \begin{array}{c@{\quad }c@{\quad }c} \partial _z &{} 0 &{} 0 \\ 0 &{} \partial _z &{} 0 \\ 0 &{} 0 &{} \partial _z \end{array} \right] ,\; \end{aligned}$$ $$\begin{aligned} \varvec{A}^k_{p}= & {} \left[ \begin{array}{c@{\quad }c@{\quad }c} 0 &{} 0 &{} \frac{1}{H^k_{\alpha }R^k_{\alpha }} \\ 0 &{} 0 &{} \frac{1}{H^k_{\beta }R^k_{\beta }} \\ 0 &{} 0 &{} 0 \end{array} \right] , \quad \varvec{A}^k_{n} = \left[ \begin{array}{c@{\quad }c@{\quad }c} \frac{1}{H^k_{\alpha }R^k_{\alpha }} &{} 0 &{} 0 \\ 0 &{} \frac{1}{H^k_{\beta }R^k_{\beta }} &{} 0 \\ 0 &{} 0 &{} 0 \end{array} \right] . \end{aligned}$$ The stress–strain relations are $$\begin{aligned} \begin{aligned} \varvec{\sigma }_{p}^k&= \begin{Bmatrix} \sigma _{\alpha \alpha }^k, \sigma _{\beta \beta }^k, \sigma _{\alpha \beta }^k \end{Bmatrix}^T = \varvec{C}_{pp}^k \varvec{\epsilon }_{p}^k + \varvec{C}_{pn}^k \varvec{\epsilon }_{n}^k \\ \varvec{\sigma }_{n}^k&= \begin{Bmatrix} \sigma _{\alpha z}^k, \sigma _{\beta z}^k, \sigma _{z z}^k \end{Bmatrix}^T = \varvec{C}_{np}^k \varvec{\epsilon }_{p}^k + \varvec{C}_{nn}^k \varvec{\epsilon }_{n}^k \\ \end{aligned} \end{aligned}$$ $$\begin{aligned} \begin{aligned} {\varvec{C}}_{pp}^k=&\left[ \begin{array}{c@{\quad }c@{\quad }c} C_{11}^k &{} C_{12}^k &{} C_{16}^k \\ C_{12}^k &{} C_{22}^k &{} C_{26}^k \\ C_{16}^k &{} C_{26}^k &{} C_{66}^k \end{array} \right] \qquad {\varvec{C}}_{pn}^k=\left[ \begin{array}{c@{\quad }c@{\quad }c} 0 &{} 0 &{} C_{13}^k\\ 0 &{} 0 &{} C_{23}^k\\ 0 &{} 0 &{} C_{36}^k \end{array} \right] \\ {\varvec{C}}_{np}^k=&\left[ \begin{array}{c@{\quad }c@{\quad }c} 0 &{} 0 &{} 0 \\ 0 &{} 0&{} 0\\ C_{13}^k &{} C_{23}^k &{} C_{36}^k \end{array} \right] \qquad {\varvec{C}}_{nn}^k=\left[ \begin{array}{c@{\quad }c@{\quad }c} C_{55}^k &{} C_{45}^k &{} 0 \\ C_{45}^k &{} C_{44}^k &{} 0 \\ 0 &{} 0 &{} C_{33}^k \end{array} \right] \end{aligned} \end{aligned}$$ The governing equations make use of the Principle of Virtual Displacements (PVD), and the finite element formulation exploits the MITC technique via nine-node shell elements. The displacement vector and its virtual variation are $$\begin{aligned} \varvec{u} = N_i F_\tau \varvec{u}_{\tau _i}, \qquad \delta \varvec{u} = N_j F_s \delta \varvec{u}_{s_j} \qquad i,j = 1,\ldots ,9 \end{aligned}$$ \(\varvec{u}_{\tau _i}\) and \(\delta \varvec{u}_{s_j}\) are the nodal displacement vector and its virtual variation, respectively. Considering the constitutive and geometrical equations and the PVD, the following governing equation holds: $$\begin{aligned} \varvec{k}^{k}_{\tau s i j} \varvec{u}^{k}_{\tau i} = \varvec{p}^k_{s j} \end{aligned}$$ The \(3\times 3\) matrix \(\varvec{k}^{k}_{\tau s i j}\) is the fundamental mechanical nucleus, whose expression is independent of the order of the expansion. \(\varvec{p}^k_{s j}\) is the load vector. More details regarding the finite element formulation are in [4].
Fig. 1 Shell geometry
Best theory diagram
One of the CUF extensions is the AAM, a tool to analyze the influence of the expansion terms starting from a full axiomatic theory [38, 39]; in this paper, the \(\hbox {N} = 4\) model. Via the AAM, asymptotic-like results related to the relevance of each variable are obtainable by varying problem parameters, e.g., thickness, orthotropic ratio, stacking sequence, boundary conditions. The AAM can follow various approaches; the one used in this paper has the following steps:
Definition of parameters such as geometry, boundary conditions, materials, and layer layouts.
Axiomatic choice of a starting theory and definition of the starting nodal unknowns. Usually, the starting theory provides 3D-like solutions.
The CUF generates the governing equations for the theories considered. In particular, the CUF generates reduced models having combinations of the starting terms as generalized unknowns.
For each reduced model, the accuracy evaluation makes use of one or more control parameters, in this paper, the maximum transverse displacement.
The number of active terms and the error identify each theory as a point on a Cartesian plane in which the abscissa reports the error and the ordinate the number of active terms. The best theory diagram (BTD) is the curve composed of all those models providing the minimum error with the least number of variables, see Fig. 2. Given the accuracy, models with fewer variables than those on the BTD do not exist. Given the number of variables, models with better accuracy than those on the BTD do not exist. The graphic notation makes use of black and white triangles to indicate active and inactive terms, respectively. In this paper, the control parameter for the error evaluation is the maximum u\(_z\), that is, error \(= 100\times \frac{|u_z - u_z^{N = 4}|}{|u_z^{N = 4}|}\).
Fig. 2 BTD for a fourth-order model
Results
The numerical results focus on cases retrieved from [40]. The shell has a = b and \(\hbox {R}_{\alpha } = \hbox {R}_{\beta }\) = R. The load is bi-sinusoidal and applied on the top surface, \(\hbox {p}_z = \hat{p}_{z}\sin (\pi \alpha /\hbox {a})\sin (\pi \beta /\hbox {b})\). The material properties are \(\hbox {E}_1\)/\(\hbox {E}_2 = 25\), \(\hbox {G}_{12}\)/\(\hbox {E}_2 = \hbox {G}_{13}\)/\(\hbox {E}_2 = 0.5\), \(\hbox {G}_{23}\)/\(\hbox {E}_2 = 0.2\), \(\nu = 0.25\). The finite element model of a quarter of the shell has a \(4\times 4\) mesh, as this discretization provides sufficiently accurate results [40]. In all cases, the BTD vertical axis ranges from 5 to 15 since, more often than not, models with 4 or fewer DOF provide very high errors and are not of practical interest.
Simply-supported, 0/90/0
A simply-supported shell with symmetric lamination is the first numerical case. The analysis aims to study the influence of the thickness and curvature on the BTD. R/a and a/h vary to consider deep, shallow, thick and thin shells.
Table 1 0/90/0, \(\overline{u}_{z}\ (\hbox {z} = 0) = 100u_{z}\) \(\hbox {E}_{T}\) h\(^3\)/(\(\overline{p}_z\) a\(^4\))
Fig. 3 All combinations for 0/90/0, R/a = 5
Fig. 4 BTD for 0/90/0, R/a = 5
Table 2 BTD models for 0/90/0, \(\hbox {R/a} = 5\), \(\hbox {a/h} = 100\)
Table 3 BTD models for 0/90/0, \(\hbox {R/a} = 5\), \(\hbox {a/h} = 10\)
Table 4 BTD models for 0/90/0, \(\hbox {R/a} = 5\), \(\hbox {a/h} = 5\)
Table 1 presents the transverse displacement with comparisons against a 3D solution and an analytical model based on a fourth-order layer-wise theory. As is well known, the accuracy of the present \(\hbox {N} = 4\) model decreases for thicker shells. However, given that the present work aims to investigate the role of higher-order terms and build BTD, the accuracy of the present \(\hbox {N} = 4\) model is satisfactory. Figure 3 presents the accuracy of all models stemming from the 2\(^{15}\) combinations of the \(\hbox {N} = 4\) model. In other words, each dot provides the accuracy of a structural theory based on a subset of the fifteen DOF of the full fourth-order expansion. The BTD is the lower boundary curve composed of those theories with the minimum number of terms for a given error. Figures 4 and 5 present the BTD for \(\hbox {R/a} = 5\) and 2, respectively. For comparison purposes, each plot shows the FSDT, \(\hbox {N} = 2\) and \(\hbox {N} = 3\) results. In the case of \(\hbox {a/h} = 5\), BTD with and without \(\hbox {N} = 2\) and FSDT are available to improve the readability of the results. Tables 2, 3 and 4 present each BTD model. For the sake of brevity, \(\hbox {R/a} = 2\) is not reported since it does not present any significant changes compared to \(\hbox {R/a} = 5\).
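In practice, the BTD is a brute-force Pareto search over the 2^15 combinations of the fifteen generalized variables: for each number of active terms, only the combination with the smallest error is retained. A minimal sketch of that selection loop follows; the `solve` callable, standing in for the MITC9 solution of a reduced model, and the reference value `uz_ref` are placeholders introduced for illustration, not part of the paper:

```python
import itertools

N_TERMS = 15  # generalized displacement variables of the full N = 4 model

def build_btd(solve, uz_ref):
    """For each number of active terms, keep the combination giving the
    smallest error with respect to the full fourth-order reference.
    `solve(active)` is assumed to return the maximum transverse
    displacement u_z of the reduced model whose active term indices are
    listed in `active`; `uz_ref` is the N = 4 reference value."""
    best = {}  # number of active terms -> (error %, active term indices)
    for k in range(1, N_TERMS + 1):
        for active in itertools.combinations(range(N_TERMS), k):
            err = 100.0 * abs(solve(active) - uz_ref) / abs(uz_ref)
            if k not in best or err < best[k][0]:
                best[k] = (err, active)
    return best  # the (DOF, error) pairs composing the BTD curve
```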
Each row shows the model providing the minimum error for a given number of DOF. For instance, for \(\hbox {R/a} = 5\), \(\hbox {a/h} = 100\), the 7 DOF BTD has the following displacement model: $$\begin{aligned} \begin{array}{l} u_{x}=u_{x_{1}}+z\,u_{x_{2}}\\ u_{y}=u_{y_{1}}+z\,u_{y_{2}}\\ u_{z}=u_{z_{1}}+z\,u_{z_{2}}+z^{2}\,u_{z_{3}}\\ \end{array} \end{aligned}$$ The last row of each table shows the relevance factor (RF) of the terms of a given order in the BTD. The RF is the ratio between the number of active instances and the total number of cases. For instance, \(\hbox {RF}_0 = 1\) indicates that the zeroth-order terms are always present in the BTD; \(\hbox {RF}_4 = 0.27\) because the fourth-order terms are in the BTD nine times out of 33 cases. The RF provides a metric to measure the influence of a set of variables: the higher the RF, the higher the relevance. Table 5 reports the errors from all the 14 DOF models. Each row refers to a model indicated by the inactive term.
Table 5 Errors for all 14 DOF models, 0/90/0, \(\hbox {R/a} = 5, \hbox {a/h} = 5\)
Table 6 0/90/0/90, \(\overline{u}_{z}\)(z = 0) = 100\(\hbox {u}_{z}\) \(\hbox {E}_{T}\) h\(^3\)/(\(\overline{p}_z\,a^4)\)
Fig. 6 All combinations for 0/90/0/90, R/a = 5
Fig. 7 BTD for 0/90/0/90, R/a = 5
Fig. 9 BTD for 0/90/0/90, a/h = 10
The results suggest the following:
In all cases, no more than six DOF are necessary to provide errors lower than 1%.
The analysis of all combinations shows that for thin shells there is a significant gap between the models providing acceptable accuracies and those with errors larger than 70%. On the other hand, as the thickness increases, the distribution has fewer accuracy gaps. As shown in Table 5, the zeroth- and first-order terms affect the gap width to a great extent. In thin shells, their role is predominant, whereas, in thick shells, higher-order terms gain relevance. A more regular accuracy distribution is an indication of a higher relevance of the higher-order terms.
According to the distributions of accuracy from all combinations, the introduction of new terms in an expansion is ineffective if a very relevant term is not present. For instance, u\(_{x4}\) gains significance as the thickness increases.
For thin shells, the FSDT provides higher accuracy with fewer DOF than the BTD due to the correction of Poisson locking. For moderately thick shells, a/h = 10, the FSDT matches the BTD but with moderate accuracy; the use of 6 DOF improves the accuracy to a great extent. As a/h decreases further, the FSDT is no longer on the BTD.
The \(\hbox {N} = 3\) model is always on the BTD, whereas the \(\hbox {N} = 2\) model is on the BTD only for thin shells.
The thickness ratio influences the BTD more than the curvature.
The zeroth-order terms are active in each BTD independently of the thickness, i.e., \(\hbox {RF}_0 = 1\). The relevance of the first- and second-order terms decreases as the thickness increases. The influence of the third-order terms increases as the thickness increases. The fourth-order terms are the least influential, although, at a/h \(= 5\), their RF increases considerably to the level of the second-order terms.
Most of the zeroth-, first- and third-order terms present a regular pattern along a BTD table, i.e., as one of these terms becomes inactive, it does not appear in the BTD anymore. On the other hand, the second- and fourth-order terms have a more irregular pattern, indicating that their influence depends on the activation or deactivation of other terms.
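The relevance factor introduced above lends itself to a one-line computation once the BTD tables are available. A minimal sketch, assuming each BTD model is represented as a set of active term indices (this data layout is an assumption made for illustration):

```python
def relevance_factor(btd_models, order_terms):
    """RF of the terms of a given order: active instances divided by the
    total number of cases.  `btd_models` is a list of sets of active
    term indices (one set per BTD model); `order_terms` holds the
    indices of the three terms of the order of interest."""
    active = sum(1 for model in btd_models
                   for term in order_terms if term in model)
    return active / (len(btd_models) * len(order_terms))
```

With eleven BTD models (5 to 15 DOF) and three terms per order, this gives 33 cases in total, reproducing the RF_4 = 9/33 ≈ 0.27 example quoted above.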
Simply-supported, 0/90/0/90
The second numerical case deals with a different stacking sequence to investigate the effect of an asymmetric lamination on the BTD. All other parameters remain as those of the previous case. Moreover, this section considers two additional R/a values, 100 and 50, for a more comprehensive analysis of the effect of the curvature. Table 6 presents the transverse displacement values with comparisons with other models from the literature, when available. The accuracy plot for all combinations is in Fig. 6, whereas Figs. 7 and 8 present the BTD for given R/a values and varying a/h, and Fig. 9 shows the BTD for a given a/h and varying R/a. The BTD models are in Tables 7, 8, 9, 10, and 11. The results suggest the following:
The present case has more uniform accuracy distributions than the previous one, indicating a higher relevance of the higher-order terms. For the thick case, there are no relevant gaps up to 60%, and the proper choice of terms can provide any accuracy level. For a/h \(= 10\), there is an accuracy gap between 20 and 35%, meaning that no structural model can provide such a level of accuracy.
As in the previous case, the validity of the FSDT is confirmed for the thin case, whereas its accuracy is not sufficient from a/h \(= 10\) and below.
Unlike the previous case, from a/h = 10 and below, some ten DOF are necessary to have errors lower than 1%.
As the thickness increases, the RF distributions are similar to those of the previous case, with a slightly higher influence of the higher-order terms and a lower one of the zeroth- and first-order terms. For a/h \(= 5\), the influence of the higher-order terms is of particular relevance. For instance, the 5 DOF BTD differs significantly from the FSDT and requires third-order terms.
The variation of the curvature leads to less significant modifications of the BTD than the thickness.
Table 7 BTD models for 0/90/0/90, \(\hbox {R/a} = 5, \hbox {a/h} = 100\)
Table 8 BTD models for 0/90/0/90, \(\hbox {R/a} = 5, \hbox {a/h} = 10\)
Table 9 BTD models for 0/90/0/90, \(\hbox {R/a} = 5, \hbox {a/h} = 5\)
Table 10 BTD models for 0/90/0/90, \(\hbox {R/a} = 50, \hbox {a/h} = 10\)
Table 11 BTD models for 0/90/0/90, \(\hbox {R/a} = 100, \hbox {a/h} = 10\)
Table 12 0/90/0/90, \(\overline{u}_{z}\)(z = 0) = 100\(\hbox {u}_{z}\) \(\hbox {E}_{T}\) h\(^3\)/(\(\overline{p}_z\,a^4)\), clamped-free
Table 13 BTD models for 0/90/0/90, \(\hbox {R/a} = 5, \hbox {a/h} = 100\), clamped-free
Table 14 BTD models for 0/90/0/90, \(\hbox {R/a} = 5, \hbox {a/h} = 10\), clamped-free
Table 15 BTD models for 0/90/0/90, \(\hbox {R/a} = 5, \hbox {a/h} = 5\), clamped-free
Clamped-free, 0/90/0/90
The last numerical example proposes the 4-layer shell with the two edges parallel to \(\beta \) clamped and the other two free. The aim is to provide some insights into the effect of the geometrical boundary conditions on the BTD. All the other parameters are as in the previous case. Table 12 presents the transverse displacement from the \(\hbox {N} = 4\) model. The BTD for the present case are in Tables 13, 14, 15, and Fig. 10. The results show that the most relevant effect of the new set of boundary conditions is an increased relevance of the higher-order terms at a/h \(= 10\).
Fig. 10 BTD for 0/90/0/90, R/a = 5, clamped-free
Analysis of the relevance of generalized displacement variables
This section aims at investigating the role of each generalized unknown in the BTD and how its relevance changes with varying parameters. To this purpose, the RF is restricted to each variable as shown, for instance, in Fig. 11.
In this case, RF = 1 means that a given variable is present in each BTD of the shell configuration considered. For instance, for the 0/90/0 case with R/a = 5, u\(_{x1}\), u\(_{y1}\) and u\(_{z1}\) are in all BTD independently of the thickness ratio. Each set of figures presents the RF for the three terms of a given order, see Figs. 11, 12, 13, 14, and 15. The discussion for each order follows.
Zeroth-order terms: As expected, these terms have a very high influence and are almost always present in the BTD. Only u\(_{x1}\) presents an RF lower than unity, in three cases in which the 5 DOF BTD requires higher-order terms, as discussed in the previous sections.
First-order terms: The in-plane components have unitary RF in most cases. On the other hand, the out-of-plane component has a lower relevance, which is consistent with the appearance of the FSDT model as the 5 DOF BTD for thin and moderately thick shells.
Second-order terms: The influence of these terms varies considerably. u\(_{x3}\) has little relevance in the 0/90/0 case but a higher one in the asymmetric case; such relevance tends to increase for higher thicknesses, and the curvature does not influence it. The relevance of u\(_{y3}\) shows smaller variations due to the thickness change. The thickness strongly influences the out-of-plane component, whose influence decreases for thicker shells.
Third-order terms: The in-plane components have a significant influence, which increases for thicker shells. The out-of-plane influence is relevant in the symmetric case and increases for thicker shells.
Fourth-order terms: These terms are the least influential, except for u\(_{x5}\) in the clamped-free case. The relevance of these terms should increase as soon as the BTD considers stress distributions.
Fig. 11 RF for zeroth-order displacement variables
Fig. 12 RF for first-order displacement variables
Fig. 13 RF for second-order displacement variables
Fig. 14 RF for third-order displacement variables
Fig. 15 RF for fourth-order displacement variables
Conclusions
This paper presented results on the accuracy of higher-order generalized displacement variables for composite shells. The investigations used the CUF and the AAM. The former provided the finite element matrices for any-order models, and the latter led to the analysis of the relevance of each generalized variable. The combined use of these tools generated the BTD and the relevance factor diagrams. The BTD provides the minimum number of DOF for a given accuracy level. The RF diagrams measure the importance of a variable, or of a set of variables, as various parameters change, e.g., thickness, curvature, stacking sequence. All results considered the maximum transverse displacement as the control parameter. The analysis led to the following guidelines and recommendations:
For the cases considered in this paper, the thickness and stacking sequence are the most important factors for the choice of the primary variables. For thin shells, six DOF are sufficient to obtain errors lower than 1%. For thick shells, ten DOF are necessary.
In most cases, the accuracy level obtainable from combinations of a given set of variables is not continuous as the DOF decrease. In other words, there are no structural models that can satisfy certain accuracy levels. Accuracy gaps indicate the presence of very effective terms that must be present in the expansion to ensure satisfactory accuracies. For instance, for thin shells, these terms coincide with the FSDT expansions.
However, as the presence of non-classical effects due to asymmetries or high thickness increases, the relevance of higher-order terms increases, and the accuracy gaps tend to disappear.
The FSDT and the second-order model are on the BTD only for thin shells. The third-order model is close to the BTD in most cases. As the thickness increases, the relevance of the third-order variables increases significantly, and these terms can be the most relevant together with the zeroth-order ones.
The out-of-plane displacement variables tend to have less relevance than the in-plane ones. Such relevance should increase significantly as soon as the analysis considers stress distributions as control parameters.
The set of variables composing a BTD model depends on the boundary conditions; however, such a dependency is weaker than that on the thickness.
The most immediate future developments should deal with the inclusion of all displacement and stress components as control parameters and with the analysis of more complex configurations. In fact, for the boundary conditions adopted, the use of the transverse displacement is the minimum requirement for a BTD. The inclusion of other control parameters, e.g., transverse shear and axial stresses, may modify the BTD concerning the accuracy and the set of active terms, with a higher relevance of the higher-order terms.
Kirchhoff G. Über das Gleichgewicht und die Bewegung einer elastischen Scheibe. J Reine Angew Math. 1850;40:51–88.
Reissner E. The effect of transverse shear deformation on the bending of elastic plates. J Appl Mech. 1945;12:69–76.
Mindlin RD. Influence of rotatory inertia and shear in flexural motions of isotropic elastic plates. J Appl Mech. 1951;18:31–8.
Carrera E, Cinefra M, Petrolo M, Zappino E. Finite element analysis of structures through unified formulation. Chichester: Wiley; 2014.
Washizu K. Variational methods in elasticity and plasticity. Oxford: Pergamon; 1968.
Carrera E. Developments, ideas and evaluations based upon the Reissner's mixed variational theorem in the modeling of multilayered plates and shells. Appl Mech Rev. 2001;54:301–29.
Gol'denweizer AL. Theory of thin elastic shells. International series of monographs in aeronautics and astronautics. New York: Pergamon Press; 1961.
Cicala P. Systematic approximation approach to linear shell theory. Torino: Levrotto e Bella; 1965.
Kapania RK, Raciti S. Recent advances in analysis of laminated beams and plates, part I: shear effects and buckling. AIAA J. 1989;27(7):923–35.
Carrera E. Historical review of zig–zag theories for multilayered plates and shells. Appl Mech Rev. 2003;56:287–308.
Endo M. An alternative first-order shear deformation concept and its application to beam, plate and cylindrical shell models. Compos Struct. 2016;146:50–61.
Katili I, Maknun IJ, Batoz JL, Ibrahimbegovic A. Shear deformable shell element DKMQ24 for composite structures. Compos Struct. 2018;160:586–93.
Thakur SN, Ray C, Chakraborty S. Response sensitivity analysis of laminated composite shells based on higher-order shear deformation theory. Arch Appl Mech. 2018;88(8):1429–59.
Reinoso J, Paggi M, Areias P, Blázquez A. Surface-based and solid shell formulations of the 7-parameter shell model for layered CFRP and functionally graded power-based composite structures. Mech Adv Mater Struct. (in press).
Chung SW, Hong SG. Pseudo-membrane shell theory of hybrid anisotropic materials. Compos Struct. 2017;160:586–93.
Ko Y, Lee Y, Lee PS, Bathe KJ. Performance of the MITC3+ and MITC4+ shell elements in widely-used benchmark problems. Comput Struct. 2017;193:187–206.
Jun H, Yoon K, Lee PS, Bathe KJ. The MITC3+ shell element enriched in membrane displacements by interpolation covers. Comput Methods Appl Mech Eng. 2018;337:458–80.
Ko Y, Lee PS, Bathe KJ. The MITC4+ shell element and its performance. Comput Struct. 2016;169:57–68.
Ko Y, Lee PS, Bathe KJ. A new MITC4+ shell element. Comput Struct. 2017;182:404–18.
Ko Y, Lee PS, Bathe KJ. A new 4-node MITC element for analysis of two-dimensional solids and its formulation in a shell element. Comput Struct. 2017;192:34–49.
Rama G, Marinkovic D, Zehn M. High performance 3-node shell element for linear and geometrically nonlinear analysis of composite laminates. Compos B Eng. 2018;151:118–26.
Ho-Nguyen-Tan T, Kim HG. A new strategy for finite-element analysis of shell structures using trimmed quadrilateral shell meshes: a paving and cutting algorithm and a pentagonal shell element. Int J Numer Methods Eng. 2018;114(1):1–27.
Wisniewski K, Turska E. Improved nine-node shell element MITC9i with reduced distortion sensitivity. Comput Mech. 2018;62(3):499–523.
Gruttmann F, Knust G, Wagner W. Theory and numerics of layered shells with variationally embedded interlaminar stresses. Comput Methods Appl Mech Eng. 2017;326:713–38.
Naumenko K, Eremeyev VA. A layer-wise theory of shallow shells with thin soft core for laminated glass and photovoltaic applications. Compos Struct. 2017;178:434–46.
Li DH, Zhang F. Full extended layerwise method for the simulation of laminated composite plates and shells. Comput Struct. 2017;187:101–13.
Coda HB, Paccola RR, Carrazedo R. Zig–zag effect without degrees of freedom in linear and non linear analysis of laminated plates and shells. Compos Struct. 2017;161:32–50.
Ahmed A, Kapuria S. A four-node facet shell element for laminated shells based on the third order zigzag theory. Compos Struct. 2016;158(1):112–27.
Zucco G, Groh RMJ, Madeo A, Weaver PM. Mixed shell element for static and buckling analysis of variable angle tow composite plates. Compos Struct. 2016;152:324–38.
Cinefra M, Chinosi C, Della Croce L, Carrera E. Refined shell finite elements based on RMVT and MITC for the analysis of laminated structures. Compos Struct. 2014;113:492–7.
Chai Y, Li W, Liu G, Gong Z, Li T. A superconvergent alpha finite element method (SaFEM) for static and free vibration analysis of shell structures. Comput Struct. 2017;179:27–47.
Cinefra M, Petrolo M, Li G, Carrera E. Variable kinematic shell elements for composite laminates accounting for hygrothermal effects. J Therm Stress. 2017;40(12):1523–44.
Liang K. Koiter–Newton analysis of thick and thin laminated composite plates using a robust shell element. Compos Struct. 2017;161:530–9.
Punera D, Kant T. Elastostatics of laminated and functionally graded sandwich cylindrical shells with two refined higher order models. Compos Struct. 2017;182:505–23.
Chowdhury SR, Roy P, Roy D, Reddy JN. A peridynamic theory for linear elastic shells. Int J Solids Struct. 2016;84:110–32.
Guo H, Zheng H. The linear analysis of thin shell problems using the numerical manifold method. Thin-Walled Struct. 2018;124:366–83.
Carrera E, Cinefra M, Lamberti A, Petrolo M. Results on best theories for metallic and laminated shells including layer-wise models. Compos Struct. 2015;126:285–98.
Carrera E, Petrolo M. Guidelines and recommendations to construct theories for metallic and composite plates. AIAA J. 2010;48(12):2852–66.
Carrera E, Petrolo M. On the effectiveness of higher-order terms in refined beam theories. J Appl Mech. 2011;78:021013.
Cinefra M, Valvano S. A variable kinematic doubly-curved MITC9 shell element for the analysis of laminated composites. Mech Adv Mater Struct. 2016;23(11):1312–25.
Huang NN. Influence of shear correction factors in the higher-order shear deformation laminated shell theory. Int J Solids Struct. 1994;31:1263–77.
Authors' contributions: EC provided the CUF FEM framework. MP developed the AAM and obtained the BTD. Both authors read and approved the final manuscript.
Acknowledgements: Erasmo Carrera acknowledges the Russian Science Foundation under Grant No. 18-19-00092.
Availability of data: Data are available upon request.
Funding: This work was partially supported by the Russian Science Foundation under Grant No. 18-19-00092.
Author affiliations: MUL² Group, Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129, Turin, Italy (Marco Petrolo & Erasmo Carrera); Laboratory of Intelligent Materials and Structures, Tambov State Technical University, Sovetskaya, 106, 392000, Tambov, Russia (Erasmo Carrera).
Correspondence to Marco Petrolo.
Petrolo, M., Carrera, E. Best theory diagrams for multilayered structures via shell finite elements. Adv. Model. and Simul. in Eng. Sci. 6, 4 (2019). doi:10.1186/s40323-019-0129-8
Keywords: Finite element, Higher-order theories
Stability of pyramidal traveling fronts in the degenerate monostable and combustion equations Ⅰ
Zhen-Hui Bu and Zhi-Cheng Wang, School of Mathematics and Statistics, Lanzhou University, Lanzhou, Gansu 730000, China
* Corresponding author: Z.-C. Wang
Received: June 30, 2016
Abstract: This paper is concerned with traveling curved fronts in reaction-diffusion equations with degenerate monostable and combustion nonlinearities. For a given admissible pyramid in three-dimensional space, the existence of a pyramidal traveling front has been proved by Wang and Bu [30] recently. By constructing new supersolutions and developing the arguments of Taniguchi [25] for the Allen-Cahn equation, in this paper we first characterize the pyramidal traveling front as a combination of planar fronts on the lateral surfaces, and then establish the uniqueness and asymptotic stability of such three-dimensional pyramidal traveling fronts under the condition that the given perturbations decay at infinity.
Keywords: Pyramidal traveling front, reaction-diffusion equation, degenerate monostable nonlinearity, combustion nonlinearity, stability.
Mathematics Subject Classification: Primary: 35B35, 35K57; Secondary: 35K55.
[1] D. G. Aronson and H. F. Weinberger, Multidimensional nonlinear diffusions arising in population genetics, Adv. Math., 30 (1978), 33-76. doi: 10.1016/0001-8708(78)90130-5.
[2] A. Bonnet and F. Hamel, Existence of non-planar solutions of a simple model of premixed Bunsen flames, SIAM J. Math. Anal., 31 (1999), 80-118. doi: 10.1137/S0036141097316391.
[3] Z.-H. Bu and Z.-C. Wang, Curved fronts of monostable reaction-advection-diffusion equations in space-time periodic media, Commun. Pure Appl. Anal., 15 (2016), 139-160. doi: 10.3934/cpaa.2016.15.139.
[4] Z.-H. Bu and Z.-C. Wang, Global stability of V-shaped traveling fronts in combustion and degenerate monostable equations, submitted.
[5] Z.-H. Bu and Z.-C. Wang, Stability of pyramidal traveling fronts in degenerate monostable and combustion equations Ⅱ, preprint.
[6] M. El Smaily, F. Hamel and R. Huang, Two-dimensional curved fronts in a periodic shear flow, Nonlinear Anal., 74 (2011), 6469-6486. doi: 10.1016/j.na.2011.06.030.
[7] D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer, Berlin, 2001.
[8] F. Hamel, Bistable transition fronts in $\mathbb{R}^{N}$, Adv. Math., 289 (2016), 279-344. doi: 10.1016/j.aim.2015.11.033.
[9] F. Hamel and R. Monneau, Solutions of semilinear elliptic equations in $\mathbb{R}^{N}$ with conical-shaped level sets, Comm. Partial Differential Equations, 25 (2000), 769-819. doi: 10.1080/03605300008821532.
[10] F. Hamel and N. Nadirashvili, Travelling fronts and entire solutions of the Fisher-KPP equation in $\mathbb{R}^{N}$, Arch. Ration. Mech. Anal., 157 (2001), 91-163. doi: 10.1007/PL00004238.
[11] F. Hamel, R. Monneau and J.-M. Roquejoffre, Stability of conical fronts in a model for conical flames in two space dimensions, Ann. Sci. École Norm. Sup., 37 (2004), 469-506. doi: 10.1016/j.ansens.2004.03.001.
[12] F. Hamel, R. Monneau and J.-M. Roquejoffre, Existence and qualitative properties of multidimensional conical bistable fronts, Discrete Contin. Dyn. Syst., 13 (2005), 1069-1096. doi: 10.3934/dcds.2005.13.1069.
[13] F. Hamel, R. Monneau and J.-M. Roquejoffre, Asymptotic properties and classification of bistable fronts with Lipschitz level sets, Discrete Contin. Dyn. Syst., 14 (2006), 75-92. doi: 10.3934/dcds.2006.14.75.
[14] M. Haragus and A. Scheel, Corner defects in almost planar interface propagation, Ann. Inst. H. Poincaré Anal. Non Linéaire, 23 (2006), 283-329. doi: 10.1016/j.anihpc.2005.03.003.
[15] M. Haragus and A. Scheel, Almost planar waves in anisotropic media, Comm. Partial Differential Equations, 31 (2006), 791-815. doi: 10.1080/03605300500361420.
[16] R. Huang, Stability of travelling fronts of the Fisher-KPP equation in $\mathbb{R}^{N}$, Nonlinear Diff. Eq. Appl., 15 (2008), 599-622. doi: 10.1007/s00030-008-7041-0.
[17] Y. Kurokawa and M. Taniguchi, Multi-dimensional pyramidal traveling fronts in Allen-Cahn equations, Proc. Roy. Soc. Edinburgh Sect. A, 141 (2011), 1031-1054. doi: 10.1017/S0308210510001253.
[18] J. A. Leach, D. J. Needham and A. L. Kay, The evolution of reaction-diffusion waves in a class of scalar reaction-diffusion equations: Algebraic decay rates, Phys. D, 167 (2002), 153-182. doi: 10.1016/S0167-2789(02)00428-1.
[19] W.-M. Ni and M. Taniguchi, Traveling fronts of pyramidal shapes in competition-diffusion systems, Netw. Heterog. Media, 8 (2013), 379-395. doi: 10.3934/nhm.2013.8.379.
[20] H. Ninomiya and M. Taniguchi, Global stability of traveling curved fronts in the Allen-Cahn equations, Discrete Contin. Dyn. Syst., 15 (2006), 819-832.
[21] H. Ninomiya and M. Taniguchi, Existence and global stability of traveling curved fronts in the Allen-Cahn equations, J. Differential Equations, 213 (2005), 204-233. doi: 10.1016/j.jde.2004.06.011.
[22] D. H. Sattinger, Monotone methods in nonlinear elliptic and parabolic boundary value problems, Indiana Univ. Math. J., 21 (1972), 979-1000.
[23] W.-J. Sheng, W.-T. Li and Z.-C. Wang, Periodic pyramidal traveling fronts of bistable reaction-diffusion equations with time-periodic nonlinearity, J. Differential Equations, 252 (2012), 2388-2424. doi: 10.1016/j.jde.2011.09.016.
[24] M. Taniguchi, Traveling fronts of pyramidal shapes in the Allen-Cahn equations, SIAM J. Math. Anal., 39 (2007), 319-344. doi: 10.1137/060661788.
[25] M. Taniguchi, The uniqueness and asymptotic stability of pyramidal traveling fronts in the Allen-Cahn equations, J. Differential Equations, 246 (2009), 2103-2130. doi: 10.1016/j.jde.2008.06.037.
[26] M. Taniguchi, Multi-dimensional traveling fronts in bistable reaction-diffusion equations, Discrete Contin. Dyn. Syst., 32 (2012), 1011-1046. doi: 10.3934/dcds.2012.32.1011.
[27] M. Taniguchi, An $(N-1)$-dimensional convex compact set gives an $N$-dimensional traveling front in the Allen-Cahn equation, SIAM J. Math. Anal., 47 (2015), 455-476. doi: 10.1137/130945041.
[28] M. Taniguchi, Convex compact sets in $\mathbb{R}^{N-1}$ give traveling fronts of cooperation-diffusion systems in $\mathbb{R}^{N}$, J. Differential Equations, 260 (2016), 4301-4338. doi: 10.1016/j.jde.2015.11.010.
[29] A. I. Volpert, V. A. Volpert and V. A. Volpert, Traveling Wave Solutions of Parabolic Systems, Transl. Math. Monogr., vol. 140, Amer. Math. Soc., Providence, RI, 1994.
[30] Z.-C. Wang and Z.-H. Bu, Nonplanar traveling fronts in reaction-diffusion equations with combustion and degenerate Fisher-KPP nonlinearity, J. Differential Equations, 260 (2016), 6405-6450. doi: 10.1016/j.jde.2015.12.045.
[31] Z.-C. Wang, W.-T. Li and S. Ruan, Existence, uniqueness and stability of pyramidal traveling fronts in reaction-diffusion systems, Sci. China Math., 59 (2016), 1869-1908. doi: 10.1007/s11425-016-0015-x.
[32] Z.-C. Wang, W.-T. Li and S. Ruan, Existence and stability of traveling wave fronts in reaction-advection-diffusion equations with nonlocal delay, J. Differential Equations, 238 (2007), 153-200. doi: 10.1016/j.jde.2007.03.025.
[33] Z.-C. Wang, H.-L. Niu and S. Ruan, On the existence of axisymmetric traveling fronts in the Lotka-Volterra competition-diffusion system in $\mathbb{R}^{3}$, Discrete Contin. Dyn. Syst. Ser. B, 22 (2017), 1111-1144. doi: 10.3934/dcdsb.2017055.
[34] Z.-C. Wang and J. Wu, Periodic traveling curved fronts in reaction-diffusion equation with bistable time-periodic nonlinearity, J. Differential Equations, 250 (2011), 3196-3229. doi: 10.1016/j.jde.2011.01.017.
[35] Z.-C. Wang, Traveling curved fronts in monotone bistable systems, Discrete Contin. Dyn. Syst., 32 (2012), 2339-2374. doi: 10.3934/dcds.2012.32.2339.
[36] Z.-C. Wang, Cylindrically symmetric traveling fronts in reaction-diffusion equations with bistable nonlinearity, Proc. Roy. Soc. Edinburgh Sect. A, 145 (2015), 1053-1090. doi: 10.1017/S0308210515000268.
[37] Y.-P. Wu and X.-X. Xing, Stability of traveling waves with critical speeds for p-degree Fisher-type equations, Discrete Contin. Dyn. Syst., 20 (2008), 1123-1139. doi: 10.3934/dcds.2008.20.1123.
[38] Y.-P. Wu, X.-X. Xing and Q.-X. Ye, Stability of traveling waves with algebraic decay for n-degree Fisher-type equations, Discrete Contin. Dyn. Syst., 16 (2006), 47-66. doi: 10.3934/dcds.2006.16.47.
Chaotic behavior of the p-adic Potts-Bethe mapping
Farrukh Mukhamedov, Department of Mathematical Sciences, College of Science, The United Arab Emirates University, P.O. Box 15551, Al Ain, Abu Dhabi, UAE, and Otabek Khakimov, Institute of Mathematics of Academy of Science of Uzbekistan, 29, Do'rmon Yo'li str., 100125, Tashkent, Uzbekistan
* Corresponding author: [email protected]
Received December 2016; Revised July 2017; Published September 2017
Fund Project: The authors would like to thank an anonymous referee for his useful suggestions which allowed to improve the content of the paper.
Abstract: In the previous investigations of the authors, the renormalization group method for p-adic models on Cayley trees has been developed. This method is closely related to the investigation of p-adic dynamical systems associated with a given model. In this paper, we study the chaotic behavior of the Potts-Bethe mapping. We point out that a similar kind of result is not known in the case of real numbers (with rigorous proofs).
Keywords: p-adic numbers, Ising-Potts function, chaos, shift.
Mathematics Subject Classification: Primary: 37B05, 37B10; Secondary: 12J12, 39A70.
Citation: Farrukh Mukhamedov, Otabek Khakimov. Chaotic behavior of the p-adic Potts-Bethe mapping. Discrete & Continuous Dynamical Systems - A, 2018, 38 (1): 231-245. doi: 10.3934/dcds.2018011
Can the remainder of a Taylor expansion be estimated from the coefficients?
Given a formula for the coefficients $c_n\in\mathbb C$ of a not analytically known function $f:\mathbb C\to\mathbb C, z\mapsto f(z)$'s Taylor series, is there any way to estimate the remainder term of order $N$ $$r_N(z) := f(z) - \sum_{n=0}^N c_n z^n$$ within a given radius $|z|\le\rho$ (strictly smaller than the convergence radius) that only depends on a finite amount of the $c_n$ or some limit thereof? In other words, is there a way to obtain an $R_N$ such that $|r_N(z)|\le R_N(\rho)\ \forall |z|\le\rho$?
complex-analysis taylor-expansion estimation approximation-theory – Tobias Kienzler
$\begingroup$ No, it can't. Remember the (in)famous example $f(x)=\exp(-1/x^2)$ for $x\neq0$ and $f(0)=0$: all coefficients of the Taylor series around $x=0$ are zero, and the remainder is small for small $x$, but it can't be estimated from the coefficients. $\endgroup$ – Professor Vector Oct 6 '17 at 19:20
$\begingroup$ @ProfessorVector Silly me, always forgetting to mention what Physicists implicitly assume, e.g. actual convergence and absence of singularities... $\endgroup$ – Tobias Kienzler Oct 6 '17 at 19:37
$\begingroup$ Sorry, but my $f(x)$ is in no way singular as a real function, we know a formula for the coefficients ($c_n=0$), the series actually converges (to $0$), and since we assumed we don't know $f(x)$ analytically, we don't even know the series converges to another value. $\endgroup$ – Professor Vector Oct 6 '17 at 19:53
$\begingroup$ Well, in that case, you'd be well advised to include somewhere in your question that $z$ is a complex variable, and that the series has some (known?) positive radius of convergence. That radius also gives you the asymptotic behavior of the remainder, but hardly a rigorous estimate. $\endgroup$ – Professor Vector Oct 6 '17 at 20:14
$\begingroup$ I still have difficulty understanding this question. $\endgroup$ – Fimpellizieri Oct 9 '17 at 19:34
It seems to depend strongly upon what knowledge made you conclude that the radius of convergence is at least $\rho$. If, for example, you happen to know that the function is analytic on a disk $D(0,\rho)$ and is uniformly bounded on that disk, say $|f(z)|\leq M<+\infty$ for all $|z|<\rho$, then indeed you can estimate the remainder. From Cauchy, integrating over a circle of radius $\rho-\epsilon$ and for $|z|<\rho$: $$ f(z) = \oint_{\partial B} \frac{f(u)}{u-z}\frac{du}{2\pi i} = \oint_{\partial B} \frac{f(u)}{u} \left( 1+ \frac{z}{u} + \cdots \right) \frac{du}{2\pi i}=\sum_{n=0}^{N} c_nz^n + R_N(z) $$ where (after taking the limit $\epsilon\rightarrow 0$): $$ |R_N(z)| \leq \left|\oint_{\partial B} \frac{f(u)}{u} \frac{z^{N+1}}{u^{N+1}}\frac{1}{u-z} \frac{du}{2\pi i}\right| \leq M \left(\frac{|z|}{\rho}\right)^{N+1} \frac{1}{\rho-|z|}$$ Without any uniform bounds on $|f|$ (or the sequence $c_n$), I don't have any reasonable suggestion. After all, you may always add $10^{266} z^{N+1}$ without changing the radius of convergence. – H. H. Rugh
$\begingroup$ Thank you, that's an interesting approach. Of course by adding $10^{266}z^{N+1}$, you'd increase $M$ by up to $10^{266}\rho^{N+1}$ as well. Maybe a restriction to $c_n$ that are related via a finite analytical equation (e.g. a recurrence equation) would help here, since then such an irregular additional term would be excluded $\endgroup$ – Tobias Kienzler Dec 7 '17 at 11:41
$\begingroup$ A recurrence relation for $f$ indeed helps.
But often by reducing to the above estimate. For example, counting binary trees you wind up looking at a generating function $f$ for which $f(z)=1+zf(z)^2$, and you may show that for $|z|\leq \rho<1/4$ there is $M=M(\rho)<+\infty$ for which my suggested estimate holds, and it gives results pretty close to the real ones. When the recurrence is in the coefficients $c_n$ there are undoubtedly similar estimates, but I don't have any (natural) example. $\endgroup$ – H. H. Rugh Dec 7 '17 at 17:59
Let's give this a shot, also to clarify the question and hopefully attract better answers... The convergence radius $R$ is given by $$\frac 1R = \limsup_{n\to\infty} |c_n|^{\frac1n}.$$ This means $$\forall N\in\mathbb N\,\exists \epsilon_N>0\,\forall n>N:|c_n|^{\frac1n} < \frac{1+\epsilon_N}R \tag{*}\label{*}$$ (with $\epsilon_{n>N}\le\epsilon_N$ and $\lim\limits_{N\to\infty}\epsilon_N = 0$). Thus for $|z|<R/(1+\epsilon_N)$, $$\begin{align*} |r_{N-1}(z)| &= \Bigg|\sum_{n=N}^\infty c_n z^n\Bigg| \\ &\le \sum_{n=N}^\infty |c_n|\cdot |z|^n \\ &\stackrel{\eqref{*}}{<} \sum_{n=N}^\infty \underbrace{(1+\epsilon_N)^n \left|\frac zR\right|^n}_{=:(\zeta_N)^n} \tag{#}\label{#} \\ &= \frac{(\zeta_N)^N}{1-\zeta_N} < \frac{R}{R-(1+\epsilon_N)|z|} \end{align*}$$ The final inequality is probably too generous... Note that \eqref{#} converges due to $|z|<R/(1+\epsilon_N) \Leftrightarrow \zeta_N<1$. The same procedure can probably be applied to estimate the remainder of a Laurent series' principal part using $r = \limsup\limits_{n\to\infty}|c_{-n}|^{\frac1n}$. It's probably not a spectacular bound, so I hope someone else knows a better one...
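As a quick numerical sanity check of the Cauchy estimate from the answer above, here is a minimal sketch; the test function $f(z)=1/(1-z)$, for which $c_n=1$ and the convergence radius is $1$, is an assumption chosen purely for illustration:

```python
# Check |R_N(z)| <= M * (|z|/rho)**(N+1) / (rho - |z|) for f(z) = 1/(1-z),
# whose Taylor coefficients are c_n = 1 (radius of convergence 1).
rho, z, N = 0.9, 0.5 + 0.0j, 10
M = 1.0 / (1.0 - rho)                      # sup of |f| on the disk |u| < rho

partial = sum(z**n for n in range(N + 1))  # sum_{n=0}^{N} c_n z^n
remainder = abs(1.0 / (1.0 - z) - partial)
bound = M * (abs(z) / rho) ** (N + 1) / (rho - abs(z))

print(remainder, bound)  # ~9.8e-4 vs ~3.9e-2: the bound holds, with slack
```

The slack comes mostly from the crude factor $1/(\rho-|z|)$; shrinking $\rho$ toward $|z|$ makes $M$ smaller but the last factor larger, so there is an optimal integration radius in between.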
The Transactions of The Korean Institute of Electrical Engineers (전기학회논문지) The Korean Institute of Electrical Engineers The first issue of The Transactions of the Korean Institute of Electrical Engineers (KIEE) was published in October 1948. The Transactions of the KIEE is a monthly periodical published on the 1st day of each month. KIEE is an official publication of the Korean Institute of Electrical Engineers. The aim of the journal is to publish the highest quality manuscripts dedicated to the advancement of electrical engineering technologies. The journal contains papers related to electrical power engineering, electrical machinery and energy conversion systems, electro-physics and application, information and controls, as well as electrical facilities. Currently, KIEE Transactions is divided into: "A" Sector (Power Engineering); "B" Sector (Electrical Machinery and Energy Conversion System); "C" Sector (Electro-physics and Application); "D" Sector (Information and Control); and "E" Sector (Electrical Facilities). With an aim to collect and publish outstanding scientific papers related to electrical engineering, the topics of papers published in KIEE Transactions are further divided into: power systems, transmission and distribution systems, electric power system economics, micro-grid, power system protection and automation, pumped-storage hydro power generation and electric energy storage systems in the "A" Sector; electrical equipment and appliances, power electronics, electrical traffic systems, new and renewable energy systems, eco-friendly electric power equipment, super-conductive equipment and E-mobility in the "B" Sector; electrical materials and semiconductors, smart large power and high voltage technology, optoelectronics and electromagnetic waves, Micro Electro Mechanical Systems (MEMS), and power asset risk management in the "C" Sector; instrumentation and control (I&C), robotics and automation, computer intelligence and intelligence systems, signal processing and embedded systems, autonomous moving object information processing, and biomedical engineering in the "D" Sector; and engineering and supervision, construction technology, power distribution system diagnosis, safety technology, electric railroad, technical standards, LVDC facility, electric facility fusion, etc. in the "E" Sector. http://www.trans.kiee.or.kr/ KSCI KCI SCOPUS
Harmonics Assessment for an Electric Railroad Feeding System using Moments Matching Method Lee, Jun-Kyong;Lee, Seung-Hyuk;Kim, Jin-O 1 Generally, an electric railroad feeding system has many problems due to its characteristics, which differ from those of loads in a general three-phase AC electric power system. One of them is the harmonics problem caused by the switching devices in the feeding system; moreover, the time-varying dynamic loads of the railway are inherently another cause that aggravates this harmonics problem. In the Korean power system, the electric railroad feeding system is supplied directly from KEPCO substations. Therefore, if voltage fluctuations or unbalanced voltages are created by voltage and current distortion or voltage drops during operation, the supply source is directly affected. The train loads of an electric railway system have non-periodic but repetitive harmonic characteristics depending on operating conditions, because the electrical characteristics of the feeding system change with the physical conditions of each train load.
According to traditional studies, the estimation of harmonics has been performed in a deterministic way using steady-state data at a specific time. This method makes it easy to analyze harmonics, but it has limits in cases which need an assessment of dynamic loads and reliability. Therefore, this paper proposes a probabilistic estimation method, the moments matching method (MMM), in order to overcome the drawbacks of the deterministic method. In this paper, distributions for each harmonic are convolved to obtain the moments and cumulants of TDD (Total Demand Distortion), and this can be generalized for any number of trains. For the case study, the electric railway system of the IAT (Intra Airport Transit) in Incheon International Airport is modeled using the PSCAD/EMTDC dynamic simulator. The raw data of harmonics for the moments matching method is acquired from simulation of the IAT model.
Assessment on Power Quality of Grid-Connected PV System Based on Incremental Conductance MPPT Control Seol, Jae-Woong;Jang, Jae-Jung;Kim, Dong-Min;Lee, Seung-Hyuk;Kim, Jin-O 8 During the last years, there has been increased interest in new energy sources such as photovoltaic (PV) systems from the viewpoint of environmental pollution. In this regard, this paper estimates the power quality of a grid-connected PV system. As the maximum power operating point (MPOP) of photovoltaic (PV) power systems alters with changing atmospheric conditions, the efficiency of maximum power point tracking (MPPT) is important in PV power systems. Moreover, a grid-connected PV system causes some problems such as voltage inequality and harmonics. Therefore, this paper presents the results of modeling a grid-connected PV system that contains an incremental conductance MPPT controller in the PSCAD/EMTDC simulator and investigates the influence that can occur in the grid-connected PV system from the aspect of power quality, i.e. voltage drop, total harmonic distortion (THD) and total demand distortion (TDD). For the case study, the measured data of the PV system in Cheongwadae, Seoul, Korea is used.
Analysis of Reactive Power Capability for Salient Synchronous Generators, and its Application to Primary Restorative Systems Lee, H.J.;Park, S.M. 14 Power system restoration following a massive blackout starts with re-energizing primary restorative transmission systems first. The comparison of the TLCC (transmission line charging capacity) and the RPC (reactive power capability) of the related black-start generator should be considered in this stage because overvoltage can be caused by self-excitation at the generator when the RPC is smaller than the TLCC. The RPC can be decided by two criteria. One is stator end core heating, and the other is steady-state stability. The RPC in the steady-state stability area has been found based on the synchronous reactance Xd. This paper presents the RPC limit of a salient-pole machine, which differs from that of a non-salient-pole machine in the steady-state stability area, and shows its derivation process.
Physical and Operational Supply Margin Evaluation of KOREA Power System Kwon, Jung-Ji;Jeong, Sang-Heon;Shi, Bo;Tran, TrungTinh;Choi, Jae-Seok;Cha, Jun-Min;Yoon, Yong-Tae 18 Successful operation of a power system under regulated as well as deregulated electricity markets is very important. This paper presents a marginal power flow evaluation of the KEPCO system from the viewpoint of physical and operational modes by using Physical and Operational Margins (POM Ver.2.2), which was developed by V&R Energy System Research.
This paper introduces the features and operation modes of POM Ver.2.2 and then evaluates scenarios of six 765kV line contingencies in the KEPCO system at the summer peak load time in 2006. The case study for the actual 2006 KEPCO system shows that the POM program is sufficiently applicable to the KEPCO system. Furthermore, it demonstrates that the program is helpful for operators in operating the system successfully, by quickly evaluating physical and operational margins for various contingencies occurring in the KEPCO system. Eventually, it will assist operators to operate the KEPCO system more reliably in the future.
Development of an Expert Technique and Program to Predict the Pollution of Outdoor Insulators Kim, Jae-Hoon;Kim, Ju-Han;Han, Sang-Ok 28 Recently, with the rapid growth of industry, environmental conditions have become worse. In addition, outdoor insulators near the seashore are polluted by salty wind. This pollution causes flashover and failure of electric equipment. In particular, salt contaminant is one of the most representative pollutants, and is known as the main source of accidents caused by contamination. As is well known, the pollution has a close relation with meteorological factors such as wind velocity, wind direction, temperature, relative humidity, precipitation and so on. In this paper we have statistically analyzed the correlation between the pollution and the meteorological factors. Multiple regression analysis was used for the statistical analysis; the daily measured equivalent salt deposit density (dependent variable) and the weather condition data (independent variables) were used. We have also developed an expert program to predict the pollution deposit. A new prediction system using this program, called SPPP (salt pollution prediction program), has been used to accurately model the relationship between ESDD and the meteorological factors.
Neutral Current Calculation of the One Step Type Pole using KEPCO's Distribution System Seo, H.C.;Park, K.W.;Kim, C.H.;Jung, C.S.;Yoo, Y.P.;Lim, Y.H.;Seol, I.H. 35 One step type and two step type poles are used in distribution lines. If the three phases are not balanced, the communication line can be damaged by induced voltage. This paper calculates the neutral current using a KEPCO distribution system model which is composed only of one step type poles. The system model is built using ATPDraw and the neutral current is calculated by using EMTP/MODELS. Many cases for extracting the neutral current characteristics of KEPCO's distribution system are simulated and analyzed.
A study on the Assessment of Transmission Loss-Factor Applicable to Competitive Electricity Markets Kim, Kang-Won;Han, Seok-Man;Kim, Balho-H. 41 The Transmission Loss Factor (TLF) is one of the key factors affecting transmission pricing, which should capture the intrinsic characteristics of competitive electricity markets and be amenable to the agreement of the market participants. This paper proposes a practical methodology which enhances the utility and applicability of the TLF, which is vulnerable to the choice of slack bus, computation methodologies, and incremental generation (or incremental load). The proposed methodology is demonstrated with a case study.
Analysis on the Rotor Losses in High-Speed Permanent Magnet Synchronous Motor Considering the Operating Condition Jang, Seok-Myeong;Choi, Jang-Young;Cho, Han-Wook 48 In this paper, the rotor losses in a high-speed permanent magnet synchronous motor (PMSM) considering the operating condition are discussed.
In order to maintain the mechanical integrity of a permanent magnet machine rotor intended for high-speed operation, the rotor assembly is often retained within a stainless steel or carbon-fiber/epoxy sleeve. The sleeve is exposed to fields produced by the stator, from either the slotting or the mmf harmonics that are not synchronous with the rotor, which gives rise to rotor losses. On the basis of analytical field analysis, the rotor losses are analyzed. In particular, the no-load, rated air-cooled, and forced water-cooled conditions are considered. The results are validated extensively by comparison with the non-linear finite element method (FEM).
Reduction of Cogging Torque of BLDC Motor by Sinusoidal Air-Gap Flux Density Distribution Kim, Samuel;Jeong, Seung-Ho;Rhyu, Se-Hyun;Kwon, Byung-Il 57 Along with the development of power electronics and magnetic materials, permanent magnet (PM) brushless direct current (BLDC) motors are now widely used in many fields of modern industry. BLDC motors have many advantages such as high efficiency, large peak torque, easy control of speed, and reliable working characteristics. However, compared with other electric motors without a PM, BLDC motors with a PM have inherent cogging torque. It is often a principal source of vibration, noise and control difficulty in BLDC motors. Cogging torque, which is produced by the interaction of the rotor magnetic flux and the angular variation in the stator magnetic reluctance, can be reduced by a sinusoidal air-gap flux density waveform owing to the reduced variation of magnetic reluctance. Therefore, this paper presents a design method of a magnetizing system for reduction of cogging torque and low manufacturing cost of a BLDC motor with isotropic bonded neodymium-iron-boron (Nd-Fe-B) magnets in ring type, by means of a sinusoidal air-gap flux density distribution. The analytical technique of magnetization makes use of the two-dimensional finite element method (2-D FEM) and the Preisach model, which expresses the hysteresis phenomenon of magnetic materials, for accurate calculation. In addition, for the optimum design of the magnetizing fixture, factorial design, which is one of the design of experiments (DOE) methods, is used.
Hysteresis Characteristics of Flux-Lock Type Superconducting Fault Current Limiter Lim, Sung-Hun 66 For a design that prevents the saturation of the iron core and achieves effective fault current limitation, analysis of the operation of the flux-lock type superconducting fault current limiter (SFCL) with consideration for the hysteresis characteristics of the iron core is required. In this paper, the hysteresis characteristics of the flux-lock reactor, which is an essential component of the flux-lock type SFCL, were investigated. Under normal conditions, hysteresis loss in the iron core of the flux-lock type SFCL does not occur due to its winding structure. From the equivalent circuit for the flux-lock type SFCL and the fault current limiting experiments, the hysteresis curves could be drawn. From the analysis of both the hysteresis curves and the fault current limiting characteristics with respect to the number of turns of the 1st and 2nd windings, increasing the number of turns in the 2nd winding of the flux-lock type SFCL played a role in preventing the iron core from saturating.
A Sensorless Speed Control of Interior Permanent Magnet Synchronous Motor using an Adaptive Integral Binary Observer Kang, Hyoung-Seok;Kim, Young-Seok 71 A control approach for the sensorless speed control of an interior permanent magnet synchronous motor (IPMSM) based on an adaptive integral binary observer is proposed. With a main loop regulator and an auxiliary loop regulator, the binary observer has the property of chattering alleviation in the constant boundary layer. However, the width of the constant boundary limits the steady-state estimation accuracy and robustness. In order to improve the steady-state performance of the binary observer, the observer is formed by adding an extra integral-augmented switching hyperplane equation. By means of the integral characteristics, the rotor speed can be finely estimated and utilized for a sensorless speed controller for the IPMSM. The proposed adaptive integral binary observer applies an adaptive scheme, because the parameters of the dynamic equations such as the machine inertia or the viscous friction coefficient are not well known, and these values can easily change during normal operation. Therefore, the observer can overcome the problem caused by using the dynamic equations, and the rotor speed estimation is constructed by using a Lyapunov function. The experimental results of the proposed algorithm are presented to demonstrate the effectiveness of the approach.
DC-Link Capacitance Estimation using Support Vector Regression in AC/DC/AC PWM Converters Ahmed G. Abo-Khalil;Jang, Jeong-Ik;Lee, Dong-Choon 81 This paper proposes a new capacitance estimation scheme for the DC-link capacitor in a three-phase AC/DC/AC PWM converter. A controlled AC voltage with a lower frequency than the line frequency is injected into the DC-link voltage, which then causes AC power ripples on the DC side. By extracting the AC voltage and power components on the DC output side using digital filters, the capacitance can then be calculated using Support Vector Regression (SVR). By training the SVR, a function which relates a given input (the capacitor's power) and its corresponding output (the capacitance value) can be derived. This function is used to predict outputs for given inputs that are not included in the training set. The proposed method does not require DC-link current information and can be simply implemented with software only, with no additional hardware. Experimental results confirm that the estimation error is less than 0.16%.
Development of 60KV Pulsed Power Supply using IGBT Stacks Ryoo, Hong-Je;Kim, Jong-Soo;Rim, Geun-Hie;Goussev, G.I.;Sytykh, D. 88 In this paper, a novel pulsed power generator based on IGBT stacks is proposed for pulsed power applications. Because it can generate up to 60kV pulse output voltage without any step-up transformer or pulse forming network, it has the advantages of fast rise time, easy pulse width variation and rectangular pulse shape. The proposed scheme consists of 9 series-connected power stages to generate a maximum 60kV output pulse and one series resonant power inverter to charge the DC capacitor voltage. Each power stage is configured as 8 series-connected power cells, and each power cell generates up to an 850VDC pulse. Finally, the pulse output voltage is applied using a total of 72 series-connected IGBTs. To reduce the component count for the gate power supply, a simple and robust gate drive circuit is proposed.
For gating signal synchronization, a full-bridge inverter and a pulse transformer simultaneously generate the IGBT gating on-off signals together with the gate power, and the circuit has very good short-circuit protection characteristics.
Lightning Protection of Signalling Equipments at Subway Car Depot by Equi-potential Bonding Seo, Seog-Chul;Choi, Kyu-Hyoung 100 Signalling equipment at railroad sites is widely exposed to high-voltage lightning surges. This paper presents a lightning protection system for the signalling equipment at subway car depots. The main features of the system are as follows: (1) a common grounding system between the power system grounds and the signalling system grounds, (2) physical and chemical methods to reduce grounding resistivity, (3) rearrangement of lightning rods based on the rolling ball theory, (4) equi-potential bonding networks to minimize the potential differences between the equipment grounds. The system has been constructed at six subway car depots in the Seoul metropolitan area, and it is measured that the grounding resistance is reduced to 0.266 ohms and the potential differences between devices are reduced to a negligible quantity. After the construction of the systems, no breakdown of the signalling equipment caused by lightning surges has been reported.
FPGA Implementation of Fuzzy Logic Controller for Maximum Power Point Tracking in Solar Power System Lee, Woo-Hee;Kim, Hyung-Jin;Lee, Hoong-Joo 106 In this study, we designed a digital fuzzy logic controller based on an FPGA and a microprocessor for MPPT of a solar power generation system. A fuzzy algorithm to control the power tracking function of a boost converter has been built into the FPGA, and applied to a small-scale solar power generation system. The implemented controller showed a stable operating characteristic with small output voltage ripple under changes in solar radiation intensity. This result proves that implementing the power tracking controller using an FPGA is an effective way compared to the existing one using a microprocessor.
Hysteresis Phenomenon of Hydrogenated Amorphous Silicon Thin Film Transistors for an Active Matrix Organic Light Emitting Diode Choi, Sung-Hwan;Lee, Jae-Hoon;Shin, Kwang-Sub;Park, Joong-Hyun;Shin, Hee-Sun;Han, Min-Koo 112 We have investigated the hysteresis phenomenon of a hydrogenated amorphous silicon thin film transistor (a-Si:H TFT) and analyzed the effect of the hysteresis phenomenon when the a-Si:H TFT is a pixel element of an active matrix organic light emitting diode (AMOLED). When the a-Si:H TFT is addressed with different starting gate voltages, such as 10V and 5V, the measured transfer characteristics at 1uA with $V_{DS}$ = 10V show that a gate voltage shift of 0.15V occurs due to the different quantities of trapped charge. When the step gate-voltage in the transfer curve is decreased from 0.5V to 0.05V, the gate-voltage shift is decreased from 0.78V to 0.39V due to the change of the charge de-trapping rate. The measured OLED current in the widely used 2-TFT pixel shows that the gate-voltage of the TFT in the previous frame can influence the OLED current in the present frame by 35% due to the change of interface trap density induced by different starting gate voltages.
A Study on Displacement Current Characteristics of DLPC Monolayer (I) Song, Jin-Won;Lee, Kyung-Sup;Choi, Yong-Sung 117 The LB method is one of the most interesting techniques to arrange certain molecular groups at precise positions relative to others.
Also, the LB deposition technique can fabricate extremely thin organic films with a high degree of control over their thickness and molecular architecture. In this way, new thin film materials can be built up at the molecular level, and the relationship between these artificial structures and the properties of materials can be explored. In this paper, an evaluation of physical properties was made for dielectric relaxation phenomena by the detection of the surface pressures and displacement currents on monolayer films of the phospholipid monomolecular DLPC. Lipid thin films were manufactured by monitored deposition during accumulation, and the current was measured after an electric bias was applied across the manufactured MIM device. It is found that dielectric relaxation of the phospholipid monolayer takes a little time and depends on the molecular area. When an electric bias was applied across the MIM devices manufactured under the various deposition conditions of the phospholipid monolayer, no breakdown occurred even when a higher electric field was applied as the number of deposited layers increased.
The New Design of a Large Area Dye-sensitized Solar Cell with Ag Grid for Improving a Design Characteristics Choi, Jin-Young;Lee, Im-Geun;Hong, Ji-Tae;Kim, Mi-Jeong;Kim, Whi-Young;Kim, Hee-Je 123 Upsizing of the dye-sensitized solar cell (DSC) is an important technology to bring about commercialization of the DSC. Several studies to obtain a stable large area DSC have been carried out in overseas laboratories, but hardly any have been done in our country. In this study, upsizing technology of dye-sensitized solar cells (DSCs) was investigated. We investigated low dark current materials for the current collecting grid. From the result, a new DSC module with a metal grid was designed and fabricated. For a new interconnection, both working and counter electrodes are alternately coupled on a 10[cm]$\times$7[cm] substrate. We have achieved a fill factor of 68% and a photoelectric conversion efficiency of around 2.6% as the best results of the newly designed DSC structure.
Fabrication Technology of High Tc Superconducting Thick Films for Renewed Electric Power Energy Lee, Sang-Heon 128 YBaCuO superconducting ceramic thick films were fabricated by a chemical process. YBaCuO films have been successfully grown on $SrTiO_3$ substrates without a template layer. The films show poor or no superconductivity although they have excellent crystalline properties. Ion channeling measurements made it clear that the strain in the films, due to strong chemical bonding between the substrate and epilayer, remains, resulting in the poor superconductivity. The X-ray diffraction pattern of the YBaCuO thick films contained the 90K phase. The self-template method has resolved this problem. We obtained high-Jc as-grown YBaCuO on $SrTiO_3$ (100).
Structural and Field-emissive Properties of Carbon Nanotubes Produced by ICP-CVD: Effects of Substrate-Biasing Park, C.K.;Kim, J.P.;Yun, S.J.;Park, J.S. 132 Carbon nanotubes (CNTs) are grown on Ni catalysts employing an inductively-coupled plasma chemical vapor deposition (ICP-CVD) method. The structural and field-emissive properties of the grown CNTs are characterized in terms of the substrate bias applied.
Characterization using various techniques, such as field-emission scanning electron microscopy (FESEM), high-resolution transmission electron microscopy (HRTEM), Auger electron spectroscopy (AES), and Raman spectroscopy, shows that the structural properties of the CNTs, including their physical dimensions and crystal qualities, as well as the nature of vertical growth, are strongly dependent upon the application of substrate bias during CNT growth. It is observed for the first time that the prevailing growth mechanism of CNTs, which is either tip growth or catalyst-base growth, may be influenced by substrate biasing. It is also seen that negative substrate biasing promotes the vertical alignment of the grown CNTs, compared to positive substrate biasing. However, the CNTs grown under the positively-biased condition display a higher electron-emission capability than those grown under the negatively-biased condition or without any bias applied.
A Study of The Photosensitive Characteristic and Fabrication of Polyimide Thin Film by Dry Processing Lee, Boong-Joo 139 Thin films of polyimide (PI) were fabricated by a vapor deposition polymerization method (VDPM) and studied for their photosensitive characteristics. Polyamic acid (PAA) thin films fabricated by vapor deposition polymerization (VDP) from 6FDA and 4-4' DDE were converted to PI thin films by thermal curing. From AFM and ellipsometry experiments, the film thickness decreased and the reflectance increased as the curing temperature was increased. These results imply that the thin films are uniform. From UV-Vis spectra, the PI thin films showed high absorbance in the 225 $\sim$ 260 [nm] region.
The Effect of Working Gas Xex+Ne1-x on the Electro-optical Characteristics of AC PDP Park, Chung-Hoo;Yoo, Su-Bok;Lee, Don-Kyu;Lee, Hae-June;Lee, Ho-Jun;Kim, Jae-Sung 142 Nowadays, it is an inevitable trend to use high Xe gas content to increase luminous efficiency and luminance in plasma display panels. However, an increase of the Xe gas content raises the driving voltage, although the brightness increases. In this paper, we study the electro-optical characteristics according to Xe gas content and gas pressure. The electro-optical characteristics were investigated by discharge voltage, luminance and luminous efficacy measurements, respectively. With increasing Xe gas content and pressure, the electro-optical properties improved. However, the electro-optical characteristics begin to saturate when the Xe gas content and pressure are increased too far.
Properties of Carbon Films Formed for Renewed Electric Power Energy by Electro-deposition Electro-deposition of carbon films on silicon substrates in methanol solution was carried out with various current densities, solution temperatures and electrode spacings between anode and cathode. Carbon films with smooth surface morphology and high electrical resistance were formed when the distance between electrodes was relatively wide. The electrical resistance of the carbon films was independent of both current density and solution temperature.
A Compact Pulse Corona Plasma System with Photocatalyst for an Air Conditioner Shin, Soo-Youn;Moon, Jae-Duk 151 A compact discharge plasma system with a photocatalyst has been proposed and investigated experimentally for application to air conditioners. It was found that there was intense ultraviolet radiation, with a high energy of 3.2 eV, from the corona discharge due to the DC-biased pulse voltage applied on a wire.
An electrophotochemical reaction apparently took place on the surfaces of the $TiO_2$ photocatalyst irradiated by ultraviolet from the discharge plasma in the proposed plasma system. The proposed discharge plasma system with the $TiO_2$ photocatalyst showed very high removal efficiency of VOCs by the additional electrophotochemical reactions on the photocatalyst. The proposed discharge plasma system also showed very high removal efficiency of particles such as smoke, suspended bacteria, and pollen and mite allergens by the electrostatic precipitation part. This type of corona discharge plasma system with a photocatalyst can be used as an effective means of removing both indoor pollutant gases and particles including suspended allergens.
Optical Characteristics of Bimetallic Silver-Gold Film Structure in Surface Plasmon Resonance Sensor Applications Gwon, Hyuk-Rok;Lee, Seong-Hyuk 156 Surface plasmon resonance (SPR) has been widely studied for biological and chemical sensing applications. The present study conducts numerical simulations for single and bimetallic layer SPR configurations by using the multiple beam interference matrix (MBIM) method to investigate the influence of wave interference and the complex refractive indices of materials on optical characteristics such as reflectance and optical phase shift, which are used for sensing. First, calculated reflectances are compared with experimental data for validation. In addition, for the single film structures this study finds the appropriate film thicknesses with minimum reflectance for the cases of a gold film and a silver film. For a bimetallic silver-gold film structure, in particular, the bimetallic film thicknesses that give the minimum reflectance are found to be 36 nm for silver and 5 nm for gold. From the results, the use of phase shift would be more useful than reflectance in determining the SPR configuration because the phase shift is more sensitive than the reflectance.
The Design of a K-Band 4$\times$4 Microstrip Patch Array Antennas with High Directivity Lee, Ha-Young;Kim, Hyeong-Seok 161 In this paper, two 4$\times$4 rectangular patch array antennas operating at 20 GHz are implemented for satellite communication. The sixteen patch antennas and the microstrip feeding line are printed on a single-layered substrate. The design goal is to achieve high directivity and gain by optimizing design parameters through permutations in element spacing. The spacing between the array elements is chosen to be 0.736$\lambda$. Numerical simulation results indicate that the HPBW (Half-Power Beam Width) of the 4$\times$4 patch array antenna is 18.78 degrees in the E-plane and 18.48 degrees in the H-plane with a gain of 17.18 dBi. Numerical simulations of a 4$\times$4 recessed patch array antenna yield a HPBW of 18.71 degrees in the E-plane and 17.82 degrees in the H-plane with a gain of 19.43 dBi.
An Optimal Initial Configuration of a Humanoid Robot Sung, Young-Whee;Cho, Dong-Kwon 167 This paper describes a redundancy resolution based method for determining an optimal initial configuration of a humanoid robot for holding an object. There are three important aspects for a humanoid robot to be able to hold an object: the reachability that guarantees that the robot can reach the object, the stability that guarantees that the robot remains stable while moving or holding the object, and the manipulability that enables the robot to manipulate the object dexterously. In this paper, a humanoid robot with 20 degrees of freedom is considered.
The humanoid robot is kinematically redundant and has an infinite number of solutions for the initial configuration problem. The complex three-dimensional redundancy resolution problem is divided into two simple two-dimensional redundancy resolution problems by incorporating the symmetry of the problem, the robot's moving capability, and the geometrical characteristics of the given robot. An optimal solution with respect to the reachability, the stability, and the manipulability is obtained by solving these two redundancy resolution problems.
The influence of infection ratio on Gradual Reduction of Drug Dose for the treatment of AIDS patients Lee, Kang-Hyun;Jo, Nam-Hoon 174 In this paper, we study the influence of the infection ratio on the gradual reduction of drug dose for the five-state HIV infection model that explicitly includes the population of the virus. We first compute all equilibrium points of the model and investigate their stabilities. As a result, a bifurcation diagram is obtained which shows a change in the equilibrium points, or in their stability properties, as the drug effect $\eta$ is varied from 0 to 1 (alternatively, the drug dose is changed from 1 to 0). Based on the bifurcation diagram, we show that the gradual reduction of drug dose can be applied for the treatment of AIDS patients. Moreover, we analyze the influence of variation of the infection ratio on the gradual reduction treatment. Computer simulation results are also presented to validate the proposed results.
Speed Sensorless Control of an Induction Motor using Fuzzy Speed Estimator Choi, Sung-Dae;Kim, Lark-Kyo 183 This paper proposes a fuzzy speed estimator using a Fuzzy Logic Controller (FLC) as an adaptive law in a Model Reference Adaptive System (MRAS) in order to realize the speed-sensorless control of an induction motor. The fuzzy speed estimator estimates the speed of an induction motor using the rotor flux of the reference model and the adjustable model in the MRAS. The fuzzy logic controller reduces the error of the rotor flux between the reference model and the adjustable model, using the error and the change of error of the rotor flux as the inputs of the FLC. An experiment is executed to verify the propriety and the effectiveness of the proposed speed estimator.
An Offset Curve Generation Method for the Computer Pattern Sewing Machine Oh, Tae-Seok;Yun, Sung-Yong;Kim, Il-Hwan 188 In this paper we propose an efficient offset curve generation algorithm for open and closed 2D point sequence curves (PS curves) with line segments in the plane. One of the most difficult problems of offset generation is the loop intersection problem caused by the interference of offset curve segments. We propose an algorithm which removes global as well as local intersection loops, without making an intermediate offset curve, by forward tracing of a tangential circle. Experiments on a computer sewing machine show that the proposed method is very useful and simple.
A Novel Line Detection Method using Gradient Direction based Hough transform Kim, Jeong-Tae 197 We have proposed a novel line detection method based on the estimated probability density function of the gradient directions of edges. By estimating peaks of the density function, we determine groups of edges that have the same gradient direction. For edges in the same groups, we detect lines that correspond to peaks of the connectivity-weighted distribution of the distances from the origin.
In experiments using Data Matrix barcode images and LCD images, the proposed method showed better performance than conventional methods in terms of processing speed and accuracy.
A Study on the design of Video Watermarking System for TV Advertisement Monitoring Shin, Dong-Hwan;Kim, Sung-Hwan 206 In this paper, a monitoring system for TV advertisements is implemented using a video watermark. The functions of the advertisement monitoring system are monitoring the time, length, and index of the on-air advertisement, saving the log data, and reporting the monitoring result. The performance of the video watermark used in this paper is tested for TV advertisement monitoring. This test includes a LAB test and a field test. The LAB test is done in a laboratory environment and the field test in an actual broadcasting environment. The LAB test includes PSNR, a distortion measure in the image, and the watermark detection rate in various attack environments such as AD/DA (analog to digital and digital to analog) conversion, noise addition, and MPEG compression. The result of the LAB test is good for TV advertisement monitoring. KOBACO and SBS participated in the field test. The watermark detection rate is 100% in both real-time processing and saved-data processing. The average deviation of the watermark detection time is 0.2 second, which is good because the permissible average error is 0.5 second.
A Study on the Control System of Myoelectric Hand Prosthesis Choi, Gi-Won;Chu, Jun-Uk;Choe, Gyu-Ha 214 This paper presents a myoelectric hand prosthesis (MHP) with two degrees of freedom (2-DOF), which consists of a mechanical hand, a surface myoelectric sensor (SMES) for measuring myoelectric signals, a control system and a charging battery. The actuation of the 2-DOF hand functions, such as grasping and wrist rotation, was performed by two DC motors, and controlled by the myoelectric signal measured from the residual forearm muscle. The grip force of the MHP was automatically changed by a mechanical automatic speed reducer mounted on the hand. The skin interface of the SMES was composed of electrodes using SUS440 metal in order to endure wet conditions due to sweat. The sensor was embedded with an amplifier and a filter circuit for rejecting the offset voltage caused by power line noises. The control system was composed of the grip force sensor, the slip sensor, and the two controllers. The two controllers were built around a RISC-type microprocessor, and their software was executed on a real-time kernel. The control system used Force Sensing Resistors (FSR) as slip pick-ups at the fingertip of the thumb, and the grip force information was obtained from a strain gauge on the lever of the MHP. The experimental results showed that the proposed control system is feasible for the MHP.
Development of the Fluorescence Endoscope System with Dual Light Source Apparatus Bae, Soo-Jin;Kang, Uk 222 We propose a fluorescence endoscope system that has a light source apparatus providing selectable white or excitation light. The white light source generates normal color images and is easily switched over to excitation light with a wide spectrum range from 380 nm to 580 nm. 5-ALA is deposited selectively in abnormal tissue such as cancer and causes fluorescence in the red spectrum range when excited by the blue spectrum range. In addition, the rest of the excitation light forms a color background image by reflected light to allow accurate orientation and visualization of the abnormal tissue and its surroundings.
According to clinical studies, the fluorescence intensity contrast, defined as the fluorescence intensity of the lesion over the fluorescence intensity of the surrounding tissue, is greater than 2 in tumours. The proposed system is a useful and objective tool for early diagnosis. Furthermore, it can be used in biopsy for tumour classification at the point of highest fluorescence intensity.
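Stated as a formula (our own restatement of the criterion in the abstract above, with symbols introduced only for illustration), the diagnostic contrast is $$C = \frac{I_{\text{lesion}}}{I_{\text{surround}}} > 2,$$ where $I$ denotes the measured fluorescence intensity in each region.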
Original Paper Conservation Laws of Deformed N-Coupled Nonlinear Schrödinger Equations and Deformed N-Coupled Hirota Equations S. Suresh Kumar (ORCID: orcid.org/0000-0003-0631-0918) 1 & R. Sahadevan 2 International Journal of Applied and Computational Mathematics volume 6, Article number: 19 (2020)
In this paper, we consider two deformed equations, namely, the deformed N-coupled nonlinear Schrödinger (N-coupled NLS) equations and the deformed N-coupled Hirota equations, and show that both of them admit infinitely many conservation laws, which ensures their complete integrability. The conservation laws of the above equations have been constructed by using their Lax representations through Riccati equations.
The authors are thankful to the anonymous referees for their constructive suggestions. The first author (S.S) would like to thank the Management and Principal of C. Abdul Hakeem College (Autonomous), Melvisharam, for their support and encouragement. The second author (R.S) is supported by the Council of Scientific and Industrial Research (CSIR), New Delhi, under the Emeritus Scientist Scheme.
1 PG and Research Department of Mathematics, C. Abdul Hakeem College (Autonomous), Melvisharam, Vellore Dt, Tamilnadu, 632509, India (S. Suresh Kumar) 2 Ramanujan Institute for Advanced Study in Mathematics, University of Madras, Chepauk, Chennai, Tamilnadu, 600005, India (R. Sahadevan)
Correspondence to S. Suresh Kumar.
For the scalar deformed NLS equation (5), we have constructed another infinite sequence of conserved quantities $(\rho _j,F_j)$. For this purpose, we assume $\varPhi _1(x,t) = T(x,t)\, \varPhi _2(x,t)$ and carry out an analysis similar to that outlined in the "Conserved quantities of scalar deformed NLS equation" subsection; the scalar deformed NLS equation (5) then admits the following conserved quantities $(\rho _j,F_j)$. The conserved densities and associated fluxes are respectively given by $$\begin{aligned} \rho _1 &= u_1= \dfrac{-iqq^*}{2},\\ \rho _2 &= u_2=\dfrac{q^* q_x}{4},\\ \rho _3 &= u_3=\dfrac{iq^*(q^*q^{2}+q_{xx})}{8}, \\ \rho _4 &= u_4=-\dfrac{q q^*(q q^*_x+4q^*q_x)+q^* q_{xxx}}{16}, \\ &\;\;\vdots \\ \rho _{j} &= u_j,\quad j\ge 1, \end{aligned}$$ and $$\begin{aligned} F_1 &= -2\,u_2+\dfrac{iu_1q^*_x}{q^*}+\dfrac{ih}{2},\\ F_2 &= -2\,u_3+\dfrac{iu_2q^*_x}{q^*}+\dfrac{u_1g^*}{2 q^*},\\ F_3 &= -2\,u_4+\dfrac{iu_3q^*_x}{q^*}+\dfrac{u_2g^*}{2 q^*},\\ F_4 &= -2\,u_5+\dfrac{iu_4q^*_x}{q^*}+\dfrac{u_3g^*}{2 q^*},\\ &\;\;\vdots \\ F_j &= -2\,u_{j+1}+\dfrac{iu_jq^*_x}{q^*}+\dfrac{u_{j-1}g^*}{2 q^*},\quad j=2,3,4,\ldots, \end{aligned}$$ where $$\begin{aligned} u_1 &= \dfrac{-iqq^*}{2},\\ u_2 &= \dfrac{iu_{1x}}{2}-\dfrac{iu_1q^*_{x}}{2q^*},\\ u_3 &= \dfrac{iu_{2x}}{2}-\dfrac{iu_2q^*_{x}}{2q^*}-\dfrac{iu_1^2}{2}, \\ u_4 &= \dfrac{iu_{3x}}{2}-\dfrac{iu_3q^*_{x}}{2q^*}-iu_1u_2, \\ u_5 &= \dfrac{iu_{4x}}{2}-\dfrac{iu_4q^*_{x}}{2q^*}-\dfrac{iu_2^2}{2}-iu_1u_3, \\ u_6 &= \dfrac{iu_{5x}}{2}-\dfrac{iu_5q^*_{x}}{2q^*}-iu_1u_4-iu_2u_3, \\ &\;\;\vdots \\ u_j &= \dfrac{iu_{(j-1)x}}{2}-\dfrac{iu_{(j-1)}q^*_{x}}{2q^*}-\dfrac{i}{2}\sum \limits _{k=1}^{j-2}u_ku_{j-k-1},\quad j\ge 3. \end{aligned}$$
Suresh Kumar, S., Sahadevan, R.: Conservation Laws of Deformed N-Coupled Nonlinear Schrödinger Equations and Deformed N-Coupled Hirota Equations. Int. J. Appl. Comput. Math. 6, 19 (2020). doi:10.1007/s40819-019-0766-0 Keywords: Lax pair; Deformed N-coupled nonlinear Schrödinger equations; Deformed N-coupled Hirota equations; Integrability
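As a quick consistency check on the recursion listed in the appendix above (our own verification, not part of the published article): substituting $u_1=-\tfrac{i}{2}qq^*$ into the relation for $u_2$ reproduces the tabulated density, since $$\dfrac{iu_{1x}}{2}-\dfrac{iu_1q^*_{x}}{2q^*} = \dfrac{q_xq^*+qq^*_x}{4}-\dfrac{qq^*_x}{4} = \dfrac{q^* q_x}{4} = u_2.$$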
Resources tagged with: Factors and multiples Other tags that relate to More Mathematical Mysteries: Working systematically. Factors and multiples. Addition & subtraction. Properties of numbers. Generalising. Visualising. Pythagoras' theorem. Place value. Creating and manipulating expressions and formulae. Mathematical reasoning & proof. Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13. Even So Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why? I added together the first 'n' positive integers and found that my answer was a 3 digit number in which all the digits were the same... Three Times Seven A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why? Adding in Rows List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it? Neighbourly Addition I added together some of my neighbours' house numbers. Can you explain the patterns I noticed? Can You Find a Perfect Number? Can you find any perfect numbers? Read this article to find out more... Adding All Nine Make a set of numbers that use all the digits from 1 to 9, once and once only. Add them up. The result is divisible by 9. Add each of the digits in the new number. What is their sum? Now try some... Oh! Hidden Inside? Find the number which has 8 divisors, such that the product of the divisors is 331776. Counting Factors Is there an efficient way to work out how many factors a large number has? Have You Got It? Can you explain the strategy for winning this game with any target? When the number x 1 x x x is multiplied by 417 this gives the answer 9 x x x 0 5 7. Find the missing digits, each of which is represented by an "x". Divisively So How many numbers less than 1000 are NOT divisible by either: a) 2 or 5; or b) 2, 5 or 7? What is the remainder when 2^2002 is divided by 7? What happens with different powers of 2? Ben's Game Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters. One to Eight Complete the following expressions so that each one gives a four digit number as the product of two two digit numbers and uses the digits 1 to 8 once and only once. Factor Track Factor track is not a race but a game of skill. The idea is to go round the track in as few moves as possible, keeping to the rules. Thirty Six Exactly The number 12 = 2^2 × 3 has 6 factors. What is the smallest natural number with exactly 36 factors? I'm thinking of a number. My number is both a multiple of 5 and a multiple of 6. What could my number be? Special Sums and Products Find some examples of pairs of numbers such that their sum is a factor of their product. e.g. 4 + 12 = 16 and 4 × 12 = 48 and 16 is a factor of 48. Gaxinta A number N is divisible by 10, 90, 98 and 882 but it is NOT divisible by 50 or 270 or 686 or 1764. It is also known that N is a factor of 9261000. What is N? Eminit The number 8888...88M9999...99 is divisible by 7 and it starts with the digit 8 repeated 50 times and ends with the digit 9 repeated 50 times. What is the value of the digit M?
Helen's Conjecture Helen made the conjecture that "every multiple of six has more factors than the two numbers either side of it". Is this conjecture true? Diggits Can you find what the last two digits of the number $4^{1999}$ are? What Numbers Can We Make Now? Imagine we have four bags containing numbers from a sequence. What numbers can we make now? Got it for Two Got It game for an adult and child. How can you play so that you know you will always win? AB Search The five digit number A679B, in base ten, is divisible by 72. What are the values of A and B? Diagonal Product Sudoku Given the products of diagonally opposite cells - can you complete this Sudoku? A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target. Factoring Factorials Find the highest power of 11 that will divide into 1000! exactly. A First Product Sudoku Given the products of adjacent cells, can you complete this Sudoku? How many zeros are there at the end of the number which is the product of the first hundred positive integers? Star Product Sudoku The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid. LCM Sudoku II You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku. Satisfying Statements Can you find any two-digit numbers that satisfy all of these statements? Digat What is the value of the digit A in the sum below: [3(230 + A)]^2 = 49280A Ewa's Eggs I put eggs into a basket in groups of 7 and noticed that I could easily have divided them into piles of 2, 3, 4, 5 or 6 and always have one left over. How many eggs were in the basket? Two Much Explain why the arithmetic sequence 1, 14, 27, 40, ... contains many terms of the form 222...2 where only the digit 2 appears. How Old Are the Children? A student in a maths class was trying to get some information from her teacher. She was given some clues and then the teacher ended by saying, "Well, how old are they?" What Numbers Can We Make? Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make? Counting Cogs Which pairs of cogs let the coloured tooth touch every tooth on the other cog? Which pairs do not let this happen? Why? Mathematical Swimmer Twice a week I go swimming and swim the same number of lengths of the pool each time. As I swim, I count the lengths I've done so far, and make it into a fraction of the whole number of lengths I... Powerful Factorial 6! = 6 x 5 x 4 x 3 x 2 x 1. The highest power of 2 that divides exactly into 6! is 4 since (6!) / (2^4) = 45. What is the highest power of two that divides exactly into 100!? What a Joke Each letter represents a different positive digit: AHHAAH / JOKE = HA. What are the values of each of the letters? Inclusion Exclusion How many integers between 1 and 1200 are NOT multiples of any of the numbers 2, 3 or 5? Sieve of Eratosthenes Follow this recipe for sieving numbers and see what interesting patterns emerge. American Billions Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3... Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are hit?
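The "Sieve of Eratosthenes" entry above describes a recipe for sieving numbers; as a starting point for experimentation, here is a minimal R sketch of that recipe (our own illustration, not part of the NRICH listing; the function name primes_up_to is our choice):

primes_up_to <- function(n) {
  is_prime <- rep(TRUE, n)   # candidate flags for 1..n
  is_prime[1] <- FALSE       # 1 is not prime
  for (p in 2:floor(sqrt(n))) {
    if (is_prime[p]) {
      is_prime[seq(p * p, n, by = p)] <- FALSE  # strike out multiples of p
    }
  }
  which(is_prime)
}
primes_up_to(30)  # 2 3 5 7 11 13 17 19 23 29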
Aside from Durbin-Watson, what hypothesis tests can produce inconclusive results? The Durbin-Watson test statistic can lie in an inconclusive region, where it is not possible either to reject or fail to reject the null hypothesis (in this case, of zero autocorrelation). What other statistical tests can produce "inconclusive" results? Is there a general explanation (hand-waving is fine) for why this set of tests are unable to make a binary "reject"/"fail to reject" decision? It would be a bonus if someone could mention the decision-theoretic implications as part of their answer to the latter query — does the presence of an additional category of (in)conclusion mean that we need to consider the costs of Type I and Type II errors in a more sophisticated way? hypothesis-testing statistical-significance decision-theory Silverfish
A bit off-topic, but randomized tests have such a flavor. For some values of the data, you need to randomize over accepting and rejecting. – Christoph Hanck May 21 '15 at 11:42
@ChristophHanck thanks, that was an interesting connection that I wouldn't have noticed. Not what I was intending, but I was keeping the question purposefully vague in the hope of it being a catch-all - depending on the answer(s) I may tighten its focus later. – Silverfish May 21 '15 at 12:35
The Wikipedia article explains that the distribution of the test statistic under the null hypothesis depends on the design matrix—the particular configuration of predictor values used in the regression. Durbin & Watson calculated lower bounds for the test statistic under which the test for positive autocorrelation must reject, at given significance levels, for any design matrix, & upper bounds over which the test must fail to reject for any design matrix. The "inconclusive region" is merely the region where you'd have to calculate exact critical values, taking your design matrix into account, to get a definite answer. An analogous situation would be having to perform a one-sample one-tailed t-test when you know just the t-statistic, & not the sample size†: 1.645 & 6.31 (corresponding to infinite degrees of freedom & only one) would be the bounds for a test of size 0.05. As far as decision theory goes—you've a new source of uncertainty to take into account besides sampling variation, but I don't see why it shouldn't be applied in the same fashion as with composite null hypotheses. You're in the same situation as someone with an unknown nuisance parameter, regardless of how you got there; so if you need to make a reject/retain decision while controlling Type I error over all possibilities, reject conservatively (i.e. when the Durbin–Watson statistic's under the lower bound, or the t-statistic's over 6.31). † Or perhaps you've lost your tables; but can remember some critical values for a standard Gaussian, & the formula for the Cauchy quantile function. Scortchi - Reinstate Monica♦
(+1) Thanks. I knew this was the case for the Durbin-Watson test (should have mentioned that in my question really) but wondered if this was an example of a more general phenomenon, and if so, whether they all work essentially the same way. My guess was that it can happen, for example, when performing certain tests while one only has access to summary data (not necessarily in a regression), but DW is the only case I can recall seeing the upper and lower critical values compiled and tabulated.
If you have any thoughts on how I can make the question better targeted that would be very welcome. – Silverfish May 21 '15 at 12:29

The first question's a bit vague ("What other statistical tests [...]?"), but I don't think you could clarify it without answering the second ("Is there a general explanation [...]?") yourself - overall I think it's all right as it stands. – Scortchi - Reinstate Monica♦ May 21 '15 at 18:58

Another example of a test with possibly inconclusive results is a binomial test for a proportion when only the proportion, not the sample size, is available. This is not completely unrealistic — we often see or hear poorly reported claims of the form "73% of people agree that ..." and so on, where the denominator is not available.

Suppose for example we only know the sample proportion rounded correct to the nearest whole percent, and we wish to test $H_0: \pi = 0.5$ against $H_1: \pi \neq 0.5$ at the $\alpha = 0.05$ level.

If our observed proportion was $p=5\%$ then the sample size for the observed proportion must have been at least 19, since $\frac{1}{19}$ is the fraction with the lowest denominator which would round to $5\%$. We do not know whether the observed number of successes was actually 1 out of 19, 1 out of 20, 1 out of 21, 1 out of 22, 2 out of 37, 2 out of 38, 3 out of 55, 5 out of 100 or 50 out of 1000... but whichever of these it is, the result would be significant at the $\alpha = 0.05$ level. On the other hand, if we know the sample proportion was $p = 49\%$ then we do not know whether the observed number of successes was 49 out of 100 (which would not be significant at this level) or 4900 out of 10,000 (which just attains significance). So in this case the results are inconclusive.

Note that with rounded percentages, there is no "fail to reject" region: even $p=50\%$ is consistent with samples like 49,500 successes out of 100,000, which would result in rejection, as well as samples like 1 success out of 2 trials, which would result in failure to reject $H_0$.

Unlike the Durbin-Watson test I've never seen tabulated results for which percentages are significant; this situation is more subtle as there are not upper and lower bounds for the critical value. A result of $p=0\%$ would clearly be inconclusive, since zero successes in one trial would be insignificant yet no successes in a million trials would be highly significant. We have already seen that $p=50\%$ is inconclusive but that there are significant results e.g. $p=5\%$ in between. Moreover, the lack of a cut-off is not just because of the anomalous cases of $p=0\%$ and $p=100\%$.

Playing around a little, the least significant sample corresponding to $p=16\%$ is 3 successes in a sample of 19, in which case $\Pr(X \leq 3) \approx 0.00221 < 0.025$ so would be significant; for $p=17\%$ we might have 1 success in 6 trials which is insignificant, $\Pr(X \leq 1) \approx 0.109 > 0.025$ so this case is inconclusive (since there are clearly other samples with $p=17\%$ which would be significant); for $p=18\%$ there may be 2 successes in 11 trials (insignificant, $\Pr(X \leq 2) \approx 0.0327 > 0.025$) so this case is also inconclusive; but for $p=19\%$ the least significant possible sample is 3 successes in 16 trials with $\Pr(X \leq 3) \approx 0.0106 < 0.025$ so this is significant again.
In fact $p=24\%$ is the highest rounded percentage below 50% to be unambiguously significant at the 5% level (its highest p-value would be for 4 successes in 17 trials and is just significant), while $p=13\%$ is the lowest non-zero result which is inconclusive (because it could correspond to 1 success in 8 trials). As can be seen from the examples above, what happens in between is more complicated! The graph below has a red line at $\alpha=0.05$: points below the line are unambiguously significant but those above it are inconclusive. The pattern of the p-values is such that there are not going to be single lower and upper limits on the observed percentage for the results to be unambiguously significant.

R code

# need rounding function that rounds 5 up
round2 = function(x, n) {
  posneg = sign(x)
  z = abs(x) * 10^n
  z = z + 0.5
  z = trunc(z)
  z = z / 10^n
  z * posneg
}

# make a results data frame for various trials and successes
results <- data.frame(successes = rep(0:100, 100), trials = rep(1:100, each = 101))
results <- subset(results, successes <= trials)
results$percentage <- round2(100 * results$successes / results$trials, 0)
results$pvalue <- mapply(function(x, y) {
  binom.test(x, y, p = 0.5, alternative = "two.sided")$p.value
}, results$successes, results$trials)

# make a data frame for rounded percentages and identify which are unambiguously sig at alpha=0.05
leastsig <- sapply(0:100, function(n) {
  max(subset(results, percentage == n, select = pvalue))
})
percentages <- data.frame(percentage = 0:100, leastsig)
percentages$significant <- percentages$leastsig < 0.05
subset(percentages, significant == TRUE)

# some interesting cases
subset(results, percentage == 13)  # inconclusive at alpha=0.05
subset(results, percentage == 24)  # unambiguously sig at alpha=0.05

# plot graph of greatest p-values; results below red line are unambiguously significant at alpha=0.05
plot(percentages$percentage, percentages$leastsig,
     panel.first = abline(v = seq(0, 100, by = 5), col = 'grey'),
     pch = 19, col = "blue", xlab = "Rounded percentage",
     ylab = "Least significant two-sided p-value", xaxt = "n")
axis(1, at = seq(0, 100, by = 10))
abline(h = 0.05, col = "red")

(The rounding code is snipped from this StackOverflow question.)
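Both answers above also lend themselves to quick spot checks. The following standalone R lines (separate from the listing above) verify the quoted critical values from the first answer and a few of the binomial claims from the second:

# First answer: one-tailed critical values bracketing any t test of size 0.05
qt(0.95, df = 1)   # 6.313752: reject above this, whatever the sample size
qnorm(0.95)        # 1.644854: fail to reject below this, whatever the sample size

# Second answer: every sample rounding to 5% rejects H0: pi = 0.5 ...
binom.test(1, 19, p = 0.5)$p.value    # ~7.6e-05, significant at the 0.05 level
binom.test(5, 100, p = 0.5)$p.value   # far smaller still

# ... while 13% is inconclusive, since 1 success in 8 trials rounds to 13%
binom.test(1, 8, p = 0.5)$p.value     # ~0.070, not significant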
Quantitative Oppenheim conjecture for $ S $-arithmetic quadratic forms of rank $ 3 $ and $ 4 $

Jiyoung Han
Research Institute of Mathematics, Seoul National University, GwanAkRo 1, Gwanak-Gu, Seoul, 08826, South Korea

Received January 2020; Revised August 2020; Published October 2020

Fund Project: This paper is supported by the Samsung Science and Technology Foundation under project No. SSTF-BA1601-03 and the National Research Foundation of Korea (NRF) grant funded by the Korea government under project No. 0409-20200150

The celebrated result of Eskin, Margulis and Mozes [8] and Dani and Margulis [7] on the quantitative Oppenheim conjecture says that for irrational quadratic forms $ q $ of rank at least 5, the number of integral vectors $ \mathbf v $ such that $ q( \mathbf v) $ is in a given bounded interval is asymptotically equal to the volume of the set of real vectors $ \mathbf v $ such that $ q( \mathbf v) $ is in the same interval. In rank $ 3 $ or $ 4 $, there are exceptional quadratic forms which fail to satisfy the quantitative Oppenheim conjecture. Even in those cases, one can say that the two asymptotic limits coincide for almost all quadratic forms ([8, Theorem 2.4]). In this paper, we extend this result to the $ S $-arithmetic version.

Keywords: Quantitative Oppenheim conjecture, $ S $-arithmetic space, action of quadratic form-preserving groups, low-dimensional $ p $-adic symmetric space, Euclidean building.

Mathematics Subject Classification: Primary: 22F30, 22E45, 20F65; Secondary: 20E08, 37P55.

Citation: Jiyoung Han. Quantitative Oppenheim conjecture for $ S $-arithmetic quadratic forms of rank $ 3 $ and $ 4 $. Discrete & Continuous Dynamical Systems - A. doi: 10.3934/dcds.2020359

P. Abramenko and K. S. Brown, Buildings. Theory and Applications, Graduate Texts in Mathematics, 248, Springer, New York, 2008. doi: 10.1007/978-0-387-78835-7.
J. S. Athreya and G. A. Margulis, Values of random polynomials at integer points, J. Mod. Dyn., 12 (2018), 9-16. doi: 10.3934/jmd.2018002.
P. Bandi, A. Ghosh and J. Han, A generic effective Oppenheim theorem for systems of forms, J. Number Theory, 218 (2020), 311-333. doi: 10.1016/j.jnt.2020.07.002.
Y. Benoist, Five lectures on lattices in semisimple Lie groups, Géométries à Courbure Négative ou Nulle, Groupes Discrets et Rigidités, Sémin. Congr., Soc. Math. France, Paris, 18 (2009), 117-176.
A. Borel and G. Prasad, Values of isotropic quadratic forms at $S$-integral points, Compos. Math., 83 (1992), 347-372.
J. Bourgain, A quantitative Oppenheim theorem for generic diagonal quadratic forms, Israel J. Math., 215 (2016), 503-512. doi: 10.1007/s11856-016-1385-7.
S. G. Dani and G. A. Margulis, Limit distributions of orbits of unipotent flows and values of quadratic forms, I. M. Gel'fand Seminar, Adv. Soviet Math., Part 1, Amer. Math. Soc., Providence, RI, 16 (1993), 91-137.
A. Eskin, G. Margulis and S. Mozes, Upper bounds and asymptotics in a quantitative version of the Oppenheim conjecture, Ann. of Math., 147 (1998), 93-141. doi: 10.2307/120984.
A. Eskin, G. Margulis and S. Mozes, Quadratic forms of signature $(2, 2)$ and eigenvalue spacings on rectangular $2$-tori, Ann. of Math., 161 (2005), 679-725. doi: 10.4007/annals.2005.161.679.
A. Ghosh, A. Gorodnik and A. Nevo, Optimal density for values of generic polynomial maps, preprint, arXiv: 1801.01027.
A. Ghosh and D. Kelmer, A quantitative Oppenheim theorem for generic ternary quadratic forms, J. Mod. Dyn., 12 (2018), 1-8. doi: 10.3934/jmd.2018001.
A. Gorodnik, Oppenheim conjecture for pairs consisting of a linear form and a quadratic form, Trans. Amer. Math. Soc., 356 (2004), 4447-4463. doi: 10.1090/S0002-9947-04-03473-7.
J. Han, S. Lim and K. Mallahi-Karai, Asymptotic distribution of values of isotropic quadratic forms at $S$-integral points, J. Mod. Dyn., 11 (2017), 501-550. doi: 10.3934/jmd.2017020.
D. Kelmer and S. Yu, Values of random polynomials in shrinking targets, preprint, arXiv: 1812.04541.
D. Kleinbock and G. Tomanov, Flows on $S$-arithmetic homogeneous spaces and applications to metric Diophantine approximation, Comment. Math. Helv., 82 (2007), 519-581. doi: 10.4171/CMH/102.
Y. Lazar, Values of pairs involving one quadratic form and one linear form at $S$-integral points, J. Number Theory, 181 (2017), 200-217. doi: 10.1016/j.jnt.2017.06.003.
G. A. Margulis, Formes quadratiques indéfinies et flots unipotents sur les espaces homogènes, C. R. Acad. Sci. Paris Sér. I Math., 304 (1987), 249-253.
H. Oh, Uniform pointwise bounds for matrix coefficients, Duke Math. J., 113 (2002), 133-192.
A. Oppenheim, The Minima of Indefinite Quaternary Quadratic Forms, Thesis (Ph.D.), The University of Chicago, 1930.
V. Platonov and A. Rapinchuk, Algebraic Groups and Number Theory, Pure and Applied Mathematics, 139, Academic Press, Inc., Boston, MA, 1994.
M. Ratner, Raghunathan's conjectures for Cartesian products of real and $p$-adic Lie groups, Duke Math. J., 77 (1995), 275-382. doi: 10.1215/S0012-7094-95-07710-2.
G. Robertson, Euclidean Buildings (lecture), "Arithmetic Geometry and Noncommutative Geometry", Masterclass, Utrecht, 2010.
O. Sargent, Density of values of linear maps on quadratic surfaces, J. Number Theory, 143 (2014), 363-384. doi: 10.1016/j.jnt.2014.04.020.
O. Sargent, Equidistribution of values of linear forms on quadratic surfaces, Algebra Number Theory, 8 (2014), 895-932. doi: 10.2140/ant.2014.8.895.
W. M. Schmidt, Approximation to algebraic numbers, Enseignement Math., 17 (1971), 187-253.
J.-P. Serre, A Course in Arithmetic, Graduate Texts in Mathematics, No. 7, Springer-Verlag, New York-Heidelberg, 1973.
J.-P. Serre, Trees, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2003.
T. A. Springer, Linear Algebraic Groups, Second edition, Progress in Mathematics, 9, Birkhäuser Boston, Inc., Boston, MA, 1998. doi: 10.1007/978-0-8176-4840-4.
G. Tomanov, Orbits on homogeneous spaces of arithmetic origin and approximations, Adv. Stud. Pure Math., Math. Soc. Japan, Tokyo, 26 (2000), 265-297. doi: 10.2969/aspm/02610265.

Figure 1. The 3-dimensional hyperbolic space $ {\mathbb{H}}^3 $. The measure of the set in (4) is equal to the Lebesgue measure of the grey area on the top of the sphere.
Figure 2. Apartment $ \mathcal A_0 $ of $ \mathcal{B}_3 $. $ K_p\setminus {\mathrm{SO}}(2x_1x_3-x_2^2) $ is embedded in the inverse image of the blue line.
Revisiting area risk classification of visceral leishmaniasis in Brazil

Gustavo Machado (ORCID: orcid.org/0000-0001-7552-6144) 1, Julio Alvarez 2,3, Haakon Christopher Bakka 4, Andres Perez 5, Lucas Edel Donato 6, Francisco Edilson de Ferreira Lima Júnior 6, Renato Vieira Alves 6 & Victor Javier Del Rio Vilas 7

BMC Infectious Diseases volume 19, Article number: 2 (2019)

Visceral leishmaniasis (VL) is a neglected tropical disease of public health relevance in Brazil. To prioritize disease control measures, the Secretaria de Vigilância em Saúde of Brazil's Ministry of Health (SVS/MH) uses retrospective human case counts from VL surveillance data to inform a municipality-based risk classification. In this study, we compared the underlying VL risk, using a spatiotemporal explicit Bayesian hierarchical model (BHM), with the risk classification currently in use by Brazil's Ministry of Health. We aim to assess how well the current risk classes capture the underlying VL risk as modelled by the BHM.

Annual counts of human VL cases and the population at risk for all Brazil's 5564 municipalities between 2004 and 2014 were used to fit a relative risk BHM. We then computed the predicted counts and exceedence risk for each municipality and classified them into four categories to allow comparison with the four risk categories used by the SVS/MH.

Municipalities identified as high-risk by the model partially agreed with the current risk classification by the SVS/MH. Our results suggest that counts of VL cases may suffice as general indicators of the underlying risk, but can underestimate risks, especially in areas with intense transmission.

According to our BHM, the SVS/MH risk classification underestimated the risk in several municipalities with moderate to intense VL transmission. Newly identified high-risk areas should be further evaluated to identify potential risk factors and assess the needs for additional surveillance and mitigation efforts.

Visceral leishmaniasis (VL) in the Americas is a vector-borne neglected zoonosis caused by the intracellular protozoan Leishmania infantum [1, 2]. If left untreated, VL is fatal in more than 90% of cases within two years of the onset of the disease [3]. Every year approximately 200,000–400,000 new cases of VL are registered worldwide [4]. In 2015, 88.8% of VL cases were reported from six countries: Brazil, Ethiopia, India, Somalia, South Sudan and Sudan [4]; Brazil was ranked second, reporting 3289 new cases, 14% of the total reported worldwide, surpassed only by India [5]. In the Americas, Brazil represents 95% of total occurrences [6]. In Latin America, transmission is mediated by the vectors Lutzomyia longipalpis [7], a synanthropic sandfly with a wide geographic distribution in Brazil [10], and Lutzomyia cruzi [8, 9], with the domestic dog as the main animal reservoir in urban and rural areas. Control measures applied against the vector and the reservoir have shown limited success [11].

The Secretaria de Vigilância em Saúde of Brazil's Ministry of Health (SVS/MH) is responsible for the planning, implementation and evaluation of VL surveillance in Brazil. VL surveillance data are used by the SVS/MH to classify municipalities into four VL risk categories. This risk classification is the main pillar for the management of VL control in the country, and is currently based on the average number of reported cases per municipality over 3-year periods, without considering the human population at risk.
Such a simple classification and ranking approach does not account for uncertainty around the average number of cases and variability around risk metrics, and may be unable to fully recognize and address spatial and spatiotemporal dependencies in the data [12]. In this study, we evaluate the spatiotemporal pattern of VL risk in Brazil and generate alternative risk categories to compare with the current SVS/MH risk classification. We aim to provide additional insights into the epidemiology of VL in Brazil, and inform how accurately the current risk categories reflect the underlying VL risk at the municipality level.

Data source and collection

The study area comprised all 5564 municipalities in Brazil as listed by the Instituto Brasileiro de Geografia e Estatística (IBGE) database (IBGE general information http://www.ibge.gov.br/english/). Municipality-specific annual counts of VL cases for the period 2004–2014, and the official risk classification status for the period 2008–2014, were provided by the SVS/MH.

In order to account for the population at risk, we computed the municipality-specific standardized incidence ratios (SIR), \( {SIR}_{it}=\frac{y_{it}}{e_{it}} \), where, for municipality i and year t, yit is the count of VL cases and eit the expected number of cases, calculated by multiplying the population in municipality i in year t (based on 2010 national census data) by the incidence of VL in the country.

At the first level of the BHM, the observed number of human VL cases in municipality i and year t (yit) was assumed to follow a Poisson distribution, yit ~ Poisson(eit θit), where eit is defined above and θit is the unknown municipality-specific annual relative risk. The log of θit was then decomposed additively into spatial and temporal effects and a space-time interaction term, so that

$$ \log\left({\theta}_{it}\right)=\alpha +{\upsilon}_i+{\nu}_i+{\gamma}_t+{\delta}_{it} $$

where α is the intercept, representing the population average risk, υi and νi describe respectively the spatially structured and unstructured variation in VL risk, γt represents the structured temporal effect, and δit is a space-time interaction term given by the Kronecker product γt ⊗ υi. Given the large number of municipalities with zero case counts, we also explored other parameterizations, specifically a zero-inflated Poisson likelihood. We computed the Deviance Information Criterion (DIC) to compare the fit of our models [13].

A non-informative normal distribution with mean 0 and variance \( {\sigma}_{\nu}^2 \) was used as prior distribution for the spatially unstructured random effect νi, while the spatially structured effect υi was assigned a conditional autoregressive structure as previously described [14]. Briefly, υi was assumed to follow a normal distribution with mean conditional on the neighboring municipalities υj, where neighborhood is defined in terms of geographical adjacency, and variance \( {\sigma}_{\upsilon}^2 \) dependent on the number of neighboring municipalities ni,

$$ {\upsilon}_i \mid {\upsilon}_j,\ j\ \text{neighbor of}\ i \sim N\left(\frac{1}{n_i}\sum_{j=1}^{n_i}{\upsilon}_j,\ \frac{\sigma_{\upsilon}^2}{n_i}\right) $$

Finally, γt was assigned a random walk type 1 (RW1), \( {\gamma}_t\sim N\left({\gamma}_{t-1},{\sigma}_{\gamma}^{2}\right) \). Exponential priors (3, 0.01) were assigned to all the standard deviations of the random effects [15]. In addition, we also investigated the sensitivity of our results to other, less informative priors with larger ranges.
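For concreteness, a spatiotemporal model of this form can be written with the R-INLA package roughly as follows. This is a minimal sketch, not the authors' code: the data frame columns (counts, expected, muni, muni2, muni3, year, year.group), the neighbourhood graph object, and the omitted hyperprior settings are all illustrative assumptions.

library(INLA)

# df: one row per municipality-year, with columns
#   counts   = y_it (observed VL cases)
#   expected = e_it (expected cases from population and national incidence)
#   muni, muni2, muni3 = copies of the municipality index (1..5564)
#   year     = year index (1..11); year.group = copy of year for the interaction
# W.graph: adjacency graph of the municipalities (e.g., built with
#          spdep::poly2nb() from a shapefile and exported with nb2INLA())

formula <- counts ~ 1 +
  f(muni,  model = "besag", graph = W.graph) +  # spatially structured effect (CAR)
  f(muni2, model = "iid") +                     # spatially unstructured effect
  f(year,  model = "rw1") +                     # structured temporal effect
  f(muni3, model = "besag", graph = W.graph,    # space-time interaction:
    group = year.group,                         # CAR in space, grouped by year,
    control.group = list(model = "rw1"))        # with RW1 dependence across years

fit <- inla(formula, family = "poisson", data = df, E = expected,
            control.compute   = list(dic = TRUE),   # DIC for model comparison
            control.predictor = list(compute = TRUE))

# Exceedence probability Prob(theta_it > 1 | y) from the marginal of the
# linear predictor (log relative risk):
exc <- sapply(fit$marginals.linear.predictor,
              function(m) 1 - inla.pmarginal(0, m))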
Model posterior parameters were estimated using Integrated Nested Laplace Approximation (INLA), fitted with the R-INLA package [16] in R [17]. Results were visualized using ArcGIS 10.4 (ESRI ArcMap, 2016).

Risk classification

The current SVS/MH risk classification is based upon the most recent three-year moving average of the number of VL human cases registered in each municipality. This classification is updated every June. Municipalities are classified as no transmission (class 0, no cases reported), sporadic transmission (class 1, moving average < 2.4), moderate transmission (class 2, moving average in the interval [2.4–4.4)), and intense transmission (class 3, moving average ≥ 4.4 cases) [18, 19].

In order to compare the current SVS/MH risk classification with the results of the BHM, we computed the posterior estimates of the 'exceedence' probability of risk, Prob(θit > 1 | y) [20,21,22], further grouped into four categories (0, 1, 2, 3) according to whether Prob(θit > 1) was < 0.5, 0.5–0.75, 0.75–0.95, or > 0.95, respectively. Exceedence categories were compared with the four SVS/MH risk classes via the weighted Kappa correlation test.

Finally, the corresponding three-year moving average of the annual number of cases per municipality predicted by the BHM (\( \hat{y_{it}} \)) was used to create a third risk classification in which municipalities were classified as no transmission (class 0, no cases predicted), sporadic transmission (class 1, \( \hat{y_{it}} \) predicted moving average < 2.4), moderate transmission (class 2, \( \hat{y_{it}} \) predicted moving average in the interval 2.4–4.4) and intense transmission (class 3, \( \hat{y_{it}} \) predicted moving average ≥ 4.4), to compare with the SVS/MH classification based on observed cases (yit). The agreement between this classification and that of the SVS/MH was also compared via the weighted Kappa correlation test.

Descriptive results

From January 2004 to December 2014, a total of 37,405 VL cases were registered by the SINAN/SVS/MH Brazil. The annual average case count by municipality is shown in Fig. 1. The annual case count of VL during the study period (2004–2014), for the entire country, ranged between 2947 and 3713 cases (Fig. 2).

Fig. 1 Spatial distribution of the annual average case count of visceral leishmaniasis by municipality, 2004–2014

Fig. 2 Number of cases of visceral leishmaniasis reported in Brazil over 11 years (2004 to 2014)

Five municipalities (0.09%) accounted for almost 20% of the total number of cases reported during the period of study: Fortaleza (state of Ceará) 1865 (4.98%), Campo Grande (state of Mato Grosso do Sul) 1520 (4.06%), Araguaína (state of Tocantins) 1294 (3.45%), Belo Horizonte (state of Minas Gerais) 1176 (3.14%), and Teresina (state of Piauí) 961 (2.57%).

Bayesian hierarchical model

The BHM with Poisson likelihood had the lowest DIC value (Table 1), and included spatial (structured and unstructured) and temporal random effects, and an interaction term. Models were robust to different choices of priors.

Table 1 Composition of the eight different models, with a description of the likelihood; DIC is reported for model diagnostics

The posterior estimates of the spatially structured random effect υi were higher for municipalities located in the central and eastern part of Brazil, while the non-spatially structured ones were scattered throughout the country (Fig. 3).
The average standard deviation was calculated for all municipalities, and that of υi was 2.5 times larger than that of νi (6.96 versus 2.76), suggesting that a higher proportion of the unexplained risk of VL (not attributable to the size of the population at risk) was explained by factors with a spatial structure (Fig. 3). Finally, the proportion of the marginal variances was calculated for each parameter in the final model: the major contributors were the spatial effects ν (32.8%) and υ (57.8%), with less variance explained by the temporal effect γ (1%) and the space-time interaction δ (9.3%).

Fig. 3 Spatial distribution of the exponentiated spatially structured υi (left) and non-structured νi (right) random effects

Comparisons of the risk classifications

The proportion of municipalities that were classified in the same category by both the BHM, via computation of the exceedence probabilities, and the SVS/MH classification was 79.84%, very similar to the results obtained when the SVS/MH classification was compared with results using the predicted cases [78.05%, see Additional file 1: Figure S1 and Additional file 2: Figure S2]. This comparison (Table 2) revealed that the classifications based on the BHM (via exceedence probabilities or predicted cases) allocated a higher proportion of municipalities to categories two and three (moderate and intense transmission). Specifically, the classification based on the exceedence probabilities categorized between two and four times more municipalities as category three than the SVS/MH risk classification. Conversely, the current SVS/MH risk classification identified almost four times more municipalities as class one than the classification based on the posterior estimates of the exceedence probabilities. The average agreement between both classifications over the seven years was considered good (weighted Kappa = 0.69) [further information on yearly agreement is provided in Additional file 3: Table S1]. A good agreement (weighted Kappa = 0.63) on average was also obtained when the SVS/MH classification was compared with the one based on the predicted number of cases (\( \hat{y_{it}} \)) [see Additional file 4: Table S2 for yearly agreement]. However, if the lower risk category (0) was excluded from the comparison, the agreement was much lower (0.17 and 0.12 when the SVS/MH classification was compared to the exceedence probabilities and predicted cases from the BHM, respectively), revealing that most of the discordant results were obtained in municipalities with some risk as determined by both proposed classifications [Table 2 and Additional file 1: Figure S1 to Additional file 2: Figure S2].

Table 2 Comparison of the number of municipalities allocated to the different risk levels depending on the classification followed (BHM or SVS/MH classification)

We explored the spatial distribution of the comparison among all classifications; as an illustration, the 2014 pattern, in which the municipalities under intense transmission (class 3) according to the SVS/MH, BHM-exceedence and BHM-predicted classifications are mapped (Fig. 4) [see Additional file 5: Figure S3, Additional file 6: Figure S4, Additional file 7: Figure S5, Additional file 8: Figure S6, Additional file 9: Figure S7, Additional file 10: Figure S8 for the 2008 to 2013 maps], shows that discordant municipalities were located throughout the country.
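For reference, the classification rules compared here reduce to fixed cut points. A schematic R helper (hypothetical code, not from the paper; the handling of values exactly at the exceedence boundaries is a guess) makes the mapping explicit:

# SVS/MH classes from the 3-year moving average of case counts
# (0 = no transmission, 1 = sporadic, 2 = moderate, 3 = intense);
# the same cut points apply to the BHM-predicted moving averages
classify_cases <- function(avg_cases) {
  ifelse(avg_cases == 0,   0L,
  ifelse(avg_cases <  2.4, 1L,
  ifelse(avg_cases <  4.4, 2L, 3L)))
}

# BHM classes from the exceedence probability Prob(theta_it > 1 | y)
classify_exceedence <- function(p_exc) {
  findInterval(p_exc, c(0.5, 0.75, 0.95))  # returns 0, 1, 2 or 3
}

classify_cases(c(0, 1.7, 3.0, 6.5))          # 0 1 2 3
classify_exceedence(c(0.2, 0.6, 0.9, 0.99))  # 0 1 2 3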
Fig. 4 Geographic patterns of the municipalities classified as high-risk (class 3) by the SVS/MH classification, the BHM exceedence probabilities, and the number of cases (\( \hat{y_{it}} \)) predicted by the BHM

Discussion

VL is endemic in Brazil, and has been historically distributed across multiple states, especially in the North and Northeast regions of the country. However, recent reports indicate that the disease is expanding within Brazil and is reaching neighboring countries like Argentina and Uruguay [23,24,25]. Recently affected areas in Brazil include states located in the South (such as Rio Grande do Sul) and in the Midwest region [10]. For the study period, the municipalities that presented higher numbers of cases were mostly located in the states of Tocantins, Minas Gerais, Mato Grosso do Sul, Ceará and Piauí (Fig. 1), supporting the results observed in previous studies that had also identified the above states as high-risk areas [26,27,28,29,30]. For the 11 years studied here, less than 10% of the municipalities reported at least one case of VL in any given year (mean number of municipalities with one or more VL cases during 2004–2014 = 437, min = 380, max = 492). However, VL incidence varied widely among those affected municipalities.

The inclusion of both spatially structured and unstructured random effects in the model allowed a better understanding of how much of the risk was not directly explained by the population at risk across the country. The exponentiated posterior estimates for the spatially structured random effect term were above one in multiple regions, including Central-Western and Northeast Brazil and especially the north of Roraima state (Fig. 3, left). High values of υi indicate a positive association between the spatially structured effects and VL in Brazil, signaling the presence of additional risk factors, not directly related to the population at risk, that have a spatial component. This spatially dependent risk may be in part related to the local density of infected reservoirs (dogs), in line with previous studies that described a positive spatial dependency between the occurrence of human and canine VL cases [31]. Therefore, larger concentrations of infected dogs per inhabitant in certain municipalities could lead to increased risk, since dogs are considered the main reservoir of the disease in Latin America and in Brazil in particular [27, 32, 33].

Increased risk may also be explained by other factors. For example, in some areas with high VL incidence, like Teresina (Northeastern Brazil), a correlation between VL incidence and more limited urban infrastructure and poorer living conditions has been previously described [26, 34, 35]. Future analyses could expand on our models by incorporating covariates, those describing local development being one example. Changes in the environment, such as deforestation due to expansion of the road networks, have also been shown to have a major effect on the risk of VL and other vector-borne diseases [36]. Indeed, the expanding habitat of the vector may be associated to some extent with the increase in VL incidence in areas traditionally considered non-endemic in Brazil, especially in the South and Midwest regions, a situation that may become more concerning in the future [25].

The nearly 80% agreement between the SVS/MH classification and the BHM-exceedence and BHM-predicted risk classifications when all risk categories are considered suggests that the current strategy for the classification of municipalities may provide an acceptable approach in a significant proportion of the municipalities in the country.
However, when results from municipalities classified in categories 1–3 (i.e., 'some risk') by the three approaches were compared, the agreement dropped largely [Table 2, Additional file 1: Figure S1 and Additional file 2: Figure S2], and major disagreements were identified particularly regarding the category of higher risk (class 3) as classified by the BHM; these were evident throughout the study period [Additional file 3: Table S1, Additional file 4: Table S2, Fig. 4, and Additional file 5: Figure S3, Additional file 6: Figure S4, Additional file 7: Figure S5, Additional file 8: Figure S6, Additional file 9: Figure S7, Additional file 10: Figure S8 for the 2008 to 2013 maps]: a considerable proportion of these high-risk municipalities (between 58% in 2012 and 82% in 2013) were identified as having lower risk according to the SVS/MH classification. The SVS/MH classification seemed to be more sensitive to year-to-year changes (for example, there was a 30% drop in the number of municipalities classified as high risk between 2011 and 2012), which could be due to surveillance artifacts, since the risk of VL would not be expected to change so drastically in such a short time-span. The classification yielded by the BHM, on the other hand, provided a more stable risk landscape over time and space due to the smoothing stemming from the inclusion of spatial effects in the model [Fig. 4 and Additional file 5: Figure S3, Additional file 6: Figure S4, Additional file 7: Figure S5, Additional file 8: Figure S6, Additional file 9: Figure S7, Additional file 10: Figure S8]. This is obvious from a close look at the municipalities classified differently by the two approaches, showing that these were typically located neighboring others with a large spatially structured random effect term (υi). The implications for the control of VL may be relevant if municipalities stop the application of control measures without accounting for the risk in neighboring municipalities (Fig. 4).

Both "moderate" and "intense transmission" municipalities according to the SVS/MH (categories 2 and 3) are subjected to the same disease control measures in terms of resources and active surveillance activities. However, the BHM results suggest that a substantial underestimation may take place when focusing only on numerator data, since every year an average of 131 and 288 additional municipalities were classified as moderate (class 2) and intense (class 3) transmission areas, respectively, using this approach. This highlights the importance of incorporating information on the population at risk as well as the spatial and temporal effects most related to the risk of infectious diseases. The comparison between the SVS/MH classification and those based on the exceedence probabilities or the predicted number of cases (\( \hat{y_{it}} \)) revealed that, even though agreement was good (weighted Kappa min: 0.66, max: 0.69), discordances were not only found in municipalities classified as higher risk [Additional file 3: Table S1, Additional file 4: Table S2]. Our current analyses allow the identification of municipalities with higher VL risk that could have been previously inadequately classified according to the methodology adopted by the SVS/MH. The new classification proposed in this study may help to identify municipalities that, despite not presenting high morbidity, are under a high risk of disease transmission, and should therefore be subjected to improved surveillance.
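The agreement statistics quoted above can be reproduced with a standard weighted kappa. A self-contained R sketch follows (the linear weighting scheme is an assumption, as the paper does not state which weights were used):

# Weighted Cohen's kappa between two ordinal risk classifications (classes 0-3)
weighted_kappa <- function(a, b, k = 4) {
  O <- table(factor(a, levels = 0:(k - 1)),
             factor(b, levels = 0:(k - 1))) / length(a)     # observed proportions
  E <- outer(rowSums(O), colSums(O))                        # expected under independence
  W <- 1 - abs(outer(0:(k - 1), 0:(k - 1), "-")) / (k - 1)  # linear agreement weights
  (sum(W * O) - sum(W * E)) / (1 - sum(W * E))
}

# Hypothetical usage, with one class label per municipality:
# weighted_kappa(svs_class, bhm_class)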
Finally, the limitations of this study are mainly associated with the lack of information on neighboring countries for municipalities located at the edge of the study area (Paraguay, Argentina and Bolivia). In addition, the locations of cases were based on where the notification took place, and may not indicate where the infection actually occurred. However, we suggest that modeling the incidence ratio, the inclusion of spatial and temporal effects, and the smoothing technique we used helped to remove the effects of the variation in case counts used by the current SVS/MH risk classification, and hence provide a better approximation of the municipality-level risk.

Conclusions

The comparison between the VL risk classification currently in use by the SVS/MH and that obtained through a BHM revealed that raw case counts of VL may be sufficient to indicate disease risk in a large proportion of the municipalities in Brazil, but may underestimate the risk in others, particularly those neighboring high-risk areas. Our results identified "hot" areas where the disease clustered, and where control and surveillance efforts could be implemented in order to prevent further spread of VL in the country. Resources to support increased measures in those hot areas could come from the many more areas classified as "1" (sporadic transmission) by the SVS/MH compared to those identified by our models.

Abbreviations

BHM: Bayesian hierarchical model
DIC: Deviance Information Criterion
IBGE: Instituto Brasileiro de Geografia e Estatística
INLA: Integrated Nested Laplace Approximation
RW1: Random walk type 1
SIR: Standardized incidence ratios
SVS/MH: Secretaria de Vigilância em Saúde of Brazil's Ministry of Health
VL: Visceral leishmaniasis

References

1. Harhay MO, Olliaro PL, Costa DL, Costa CHN. Urban parasitology: visceral leishmaniasis in Brazil. Trends Parasitol. 2011;27(9):403–9.
2. Malaviya P, Picado A, Singh SP, Hasker E, Singh RP, et al. Visceral Leishmaniasis in Muzaffarpur District, Bihar, India from 1990 to 2008. PLOS ONE. 2011;6(3):e14751.
3. WHO. WHO neglected tropical disease. 2014. Available from: http://www.who.int/neglected_diseases/diseases/en/.
4. WHO. First WHO report on neglected tropical diseases. 2010. Available from: http://www.who.int/neglected_diseases/2010report/en/.
5. WHO. Number of cases of visceral leishmaniasis reported data by country. 2017. Available from: http://apps.who.int/gho/data/node.main.NTDLEISH?lang=en.
6. PAHO. Informe Epidemiológico das Américas. Leishmanioses. 2017. Available from: http://iris.paho.org/xmlui/handle/123456789/34113.
7. Lainson R, Rangel EF. Lutzomyia longipalpis and the eco-epidemiology of American visceral leishmaniasis, with particular reference to Brazil - A Review. Mem I Oswaldo Cruz. 2005;100(8):811–27.
8. Missawa NA, Veloso MA, Maciel GB, Michalsky EM, Dias ES. Evidence of transmission of visceral leishmaniasis by Lutzomyia cruzi in the municipality of Jaciara, state of Mato Grosso, Brazil. Rev Soc Bras Med Trop. 2011;44(1):76–8.
9. Dos Santos SO, Arias J, Ribeiro AA, Hoffmann MD, De Freitas RA, Malacco MAF. Incrimination of Lutzomyia cruzi as a vector of American Visceral Leishmaniasis. Med Vet Entomol. 1998;12(3):315–7.
10. Souza GD, dos Santos E, Andrade JD. The first report of the main vector of visceral leishmaniasis in America, Lutzomyia longipalpis (Lutz & Neiva) (Diptera: Psychodidae: Phlebotominae), in the state of Rio Grande do Sul, Brazil. Mem I Oswaldo Cruz. 2009;104(8):1181–2.
11. Romero GAS, Boelaert M. Control of Visceral Leishmaniasis in Latin America - A Systematic Review. Plos Neglect Trop D. 2010;4(1).
12. Courtemanche C, Soneji S, Tchernis R. Modeling Area-Level Health Rankings. Health Serv Res. 2015;50(5):1413–31. https://doi.org/10.1111/1475-6773.12352.
13. Spiegelhalter DJ, Best NG, Carlin BR, van der Linde A. Bayesian measures of model complexity and fit. J R Stat Soc B. 2002;64:583–616.
14. Knorr-Held L, Besag J. Modelling risk from a disease in time and space. Stat Med. 1998;17(18):2045–60.
15. Simpson D, Rue H, Martins TG, Riebler A, Sørbye SH. Penalising model component complexity: A principled, practical approach to constructing priors. Statistical Science. 2015; arXiv:1403.4630.
16. Martino S, Rue H. Implementing approximate Bayesian inference using integrated nested Laplace approximation: a manual for the inla program. Department of Mathematical Sciences, NTNU, Norway; 2009.
17. R Development Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2010.
18. Brasil, Ministério da Saúde, Secretaria de Vigilância em Saúde. Manual de Vigilância e Controle da Leishmaniose Visceral. 2007.
19. Ministério da Saúde. Guia de Vigilância em Saúde. 2016. Available from: http://portalarquivos.saude.gov.br/images/pdf/2016/novembro/18/Guia-LV-2016.pdf.
20. Lawson AB. Bayesian disease mapping: hierarchical modeling in spatial epidemiology. New York: CRC Press; 2013.
21. Richardson S, Thomson A, Best N, Elliott P. Interpreting posterior relative risk estimates in disease-mapping studies. Environ Health Persp. 2004;112(9):1016–25.
22. Rotejanaprasert C, Lawson A, Bolick-Aldrich S, Hurley D. Spatial Bayesian surveillance for small area case event data. Stat Methods Med Res. 2016;25(4):1101–17.
23. Salomon OD, Basmajdian Y, Fernandez MS, Santini MS. Lutzomyia longipalpis in Uruguay: the first report and the potential of visceral leishmaniasis transmission. Mem Inst Oswaldo Cruz. 2011;106(3):381–2.
24. Salomon OD, Quintana MG, Bruno MR, Quiriconi RV, Cabral V. Visceral leishmaniasis in border areas: clustered distribution of phlebotomine sand flies in Clorinda, Argentina. Mem Inst Oswaldo Cruz. 2009;104(5):801–4.
25. Peterson AT, Campbell LP, Moo-Llanes DA, Travi B, Gonzalez C, Ferro MC, et al. Influences of climate change on the potential distribution of Lutzomyia longipalpis sensu lato (Psychodidae: Phlebotominae). Int J Parasitol. 2017. https://doi.org/10.1016/j.ijpara.2017.04.007.
26. Neto JC, Werneck GL, Costa CHN. Factors associated with the incidence of urban visceral leishmaniasis: an ecological study in Teresina, Piaui State, Brazil. Cad Saude Publica. 2009;25(7):1543–51.
27. Ashford DA, David JR, Freire M, David R, Sherlock I, Eulalio MC, et al. Studies on control of visceral leishmaniasis: impact of dog control on canine and human visceral leishmaniasis in Jacobina, Bahia, Brazil. Am J Trop Med Hyg. 1998;59(1):53–7.
28. Vieira CP, Oliveira AM, Rodas LA, Dibo MR, Guirado MM, Chiaravalloti NF. Temporal, spatial and spatiotemporal analysis of the occurrence of visceral leishmaniasis in humans in the City of Birigui, state of Sao Paulo, from 1999 to 2012. Rev Soc Bras Med Trop. 2014;47(3):350–8.
29. Margonari C, Freitas CR, Ribeiro RC, Moura ACM, Timbo M, Gripp AH, et al. Epidemiology of visceral leishmaniasis through spatial analysis, in Belo Horizonte municipality, state of Minas Gerais, Brazil. Mem I Oswaldo Cruz. 2006;101(1):31–8.
30. Antonialli SAC, Torres TG, Paranos AC, Tolezano JE. Spatial analysis of American Visceral leishmaniasis in Mato Grosso do Sul State, Central Brazil. J Infection. 2007;54(5):509–14.
31. Teixeira-Neto RG, da Silva ES, Nascimento RA, Belo VS, de Oliveira CD, Pinheiro LC, et al. Canine visceral leishmaniasis in an urban setting of Southeastern Brazil: an ecological study involving spatial analysis. Parasite Vector. 2014;7.
32. de Araujo VEM, Pinheiro LC, Almeida MCD, de Menezes FC, Morais MHF, Reis IA, et al. Relative Risk of Visceral Leishmaniasis in Brazil: A Spatial Analysis in Urban Area. Plos Neglect Trop D. 2013;7(11).
33. Souza VA, Cortez LR, Dias RA, Amaku M, Ferreira Neto JS, Kuroda RB, et al. Space-time cluster analysis of American visceral leishmaniasis in Bauru, Sao Paulo state, Brazil. Cad Saude Publica. 2012;28(10):1949–64.
34. de Almeida AS, Medronho RD, Werneck GL. Identification of Risk Areas for Visceral Leishmaniasis in Teresina, Piaui State, Brazil. Am J Trop Med Hyg. 2011;84(5):681–7.
35. Werneck GL, Costa CHN, Walker AM, David JR, Wand M, Maguire JH. Multilevel modelling of the incidence of visceral leishmaniasis in Teresina, Brazil. Epidemiol Infect. 2007;135(2):195–201.
36. Seva AD, Mao L, Galvis-Ovallos F, Tucker Lima JM, Valle D. Risk analysis and prediction of visceral leishmaniasis dispersion in Sao Paulo State, Brazil. PLoS Negl Trop Dis. 2017;11(2):e0005353. https://doi.org/10.1371/journal.pntd.0005353.

Acknowledgements

We would like to thank the Serviço de Vigilância em Saúde, Ministério da Saúde (SVS-MOH), Brasília, Brazil.

Funding

This study was funded by the Academic Health Center Faculty Research Development Grant Program (FRD #16.36) and the CVM-Department of Population Health and Pathobiology, North Carolina State University (Grant/Award Number: startup fund). The funder had no role in the collation of the data, development of the conceptual framework, analysis of data, interpretation of data, writing of the manuscript, or the decision to submit the paper for publication.

Availability of data and materials

Data on reported cases are available through the Secretaria de Vigilância em Saúde of Brazil's Ministry of Health (SVS/MH) upon request and can also be retrieved from the National Information System of Health of the Ministry of Health (Sistema de Informação de Agravos de Notificação [SINAN] do Ministério da Saúde [MS] - http://portalsinan.saude.gov.br/doencas-e-agravos).

Julio Alvarez and Haakon Christopher Bakka contributed equally to this work.
Affiliations

Department of Population Health and Pathobiology, College of Veterinary Medicine, North Carolina State University, 1060 William Moore Drive, Raleigh, NC, 27607, USA: Gustavo Machado
VISAVET Health Surveillance Center, Universidad Complutense, Avda Puerta de Hierro S/N, 28040, Madrid, Spain: Julio Alvarez
Departamento de Sanidad Animal, Facultad de Veterinaria, Universidad Complutense, Avda Puerta de Hierro S/N, 28040, Madrid, Spain: Julio Alvarez
CEMSE Division, King Abdullah University of Science and Technology, Trondheim, Saudi Arabia: Haakon Christopher Bakka
Department of Veterinary Population Medicine, College of Veterinary Medicine, University of Minnesota, St Paul, MN, 55108, USA: Andres Perez
Secretaria de Vigilância em Saúde, Ministério da Saúde (SVS-MH), Brasília, Brazil: Lucas Edel Donato, Francisco Edilson de Ferreira Lima Júnior & Renato Vieira Alves
School of Veterinary Medicine, University of Surrey, Guildford, Surrey GU2 7A, UK: Victor Javier Del Rio Vilas

Contributions

GM, JA, VJDRVB and AP reviewed the literature and contributed to the conception and design of the study. VJDRVB, LED, FEFLJ and RVA acquired the leishmaniasis data. GM wrote the code for the spatiotemporal and prior sensitivity analysis. GM, JA and HCB reviewed and improved the codes. GM, JA, VJDRVB, AP, LED, FEFLJ, HCB and RVA interpreted and discussed the results, wrote the manuscript, and revised it critically. All authors read and approved the final manuscript.

Correspondence to Gustavo Machado.

Additional files

Additional file 1: Figure S1. Proportion of municipalities classified by the BHM model exceedence probabilities and the SVS/MH classification. (TIFF 26367 kb)
Additional file 2: Figure S2. Proportion of municipalities classified by the BHM model-predicted risk class and the SVS/MH classification. (TIFF 26367 kb)
Additional file 3: Table S1. Weighted Kappa between BHM model exceedence probabilities and the SVS/MH classification. (DOCX 18 kb)
Additional file 4: Table S2. Weighted Kappa between BHM model-predicted risk class and the SVS/MH classification. (DOCX 18 kb)
Additional file 5: Figure S3. The spatial distribution of all classifications (SVS/MH, BHM-exceedence and BHM-predictions) for 2008. (TIF 26986 kb)
Additional file 10:

Machado, G., Alvarez, J., Bakka, H.C. et al. Revisiting area risk classification of visceral leishmaniasis in Brazil. BMC Infect Dis 19, 2 (2019). https://doi.org/10.1186/s12879-018-3564-0

Disease mapping
Impacts of Atlantic multidecadal variability on the tropical Pacific: a multi-model study

Yohan Ruprich-Robert (ORCID: orcid.org/0000-0002-4008-2026) 1, Eduardo Moreno-Chamarro 1, Xavier Levine (ORCID: orcid.org/0000-0003-4970-7026) 1, Alessio Bellucci 2,3, Christophe Cassou 4, Frederic Castruccio (ORCID: orcid.org/0000-0002-8397-7452) 5, Paolo Davini 6, Rosie Eade 7, Guillaume Gastineau 8, Leon Hermanson (ORCID: orcid.org/0000-0002-1062-6731) 7, Dan Hodson (ORCID: orcid.org/0000-0001-7159-6700) 9, Katja Lohmann 10, Jorge Lopez-Parages 4, Paul-Arthur Monerie 9, Dario Nicolì 2, Said Qasmi 4,11, Christopher D. Roberts (ORCID: orcid.org/0000-0002-2958-6637) 12, Emilia Sanchez-Gomez 4, Gokhan Danabasoglu 5, Nick Dunstone 7, Marta Martin-Rey 13, Rym Msadek 4, Jon Robson (ORCID: orcid.org/0000-0002-3467-018X) 9, Doug Smith (ORCID: orcid.org/0000-0001-5708-694X) 7 & Etienne Tourigny (ORCID: orcid.org/0000-0003-4628-1461) 1

npj Climate and Atmospheric Science volume 4, Article number: 33 (2021)

Subjects: Atmospheric dynamics

Atlantic multidecadal variability (AMV) has been linked to the observed slowdown of global warming over 1998–2012 through its impact on the tropical Pacific. Given the global importance of tropical Pacific variability, better understanding this Atlantic–Pacific teleconnection is key for improving climate predictions, but the robustness and strength of this link are uncertain. Analyzing a multi-model set of sensitivity experiments, we find that models differ by a factor of 10 in simulating the amplitude of the Equatorial Pacific cooling response to observed AMV warming. The inter-model spread is mainly driven by different amounts of moist static energy injection from the tropical Atlantic surface into the upper troposphere. We reduce this inter-model uncertainty by analytically correcting models for their mean precipitation biases and we quantify that, following an observed 0.26 °C AMV warming, the equatorial Pacific cools by 0.11 °C with an inter-model standard deviation of 0.03 °C.

Over the 1980–2012 period, the eastern tropical Pacific sea surface temperature (SST) is characterized by a cooling trend that was one of the main causes of the global surface warming slowdown observed during 1998–2012 [1,2,3].
This regional cooling contrasts with a direct radiatively forced response expected from the increase in anthropogenic greenhouse gases [4] and it is associated with an intensification of the western tropical Pacific easterlies, reflecting changes in the Walker Circulation [5,6,7]. Such changes have been partly attributed to variations in the tropical Atlantic SST through atmospheric teleconnections [8,9]. During the same 1980–2012 period, the tropical Atlantic SSTs continued warming, likely due to a combination of anthropogenic-related radiative forcing and internal climate variability [10,11]. In particular, the leading mode of decadal variability of the North Atlantic SST—namely the Atlantic multidecadal variability (AMV [12,13]; Fig. 1a, b)—shifted from a cold to a warm phase around 1995–1996, exaggerating the North Atlantic warming trend induced by anthropogenic greenhouse gases [14,15]. Over the longer 1920–2014 period, warm AMV conditions were also associated with cold SST anomalies in the central and eastern tropical Pacific (Fig. 1c), supporting the existence of a consistent link between the AMV and the tropical Pacific climate [16,17]. Yet, these observed Pacific changes cannot be unequivocally attributed to the AMV due to the presence of external forcing and internally driven variability outside of the North Atlantic, as well as because of the limited historical record with respect to the timescales considered and observational uncertainties. Coupled global climate model (CGCM) simulations offer the possibility to tackle these limitations.

Fig. 1: Observed AMV and related anomalies. a Spatial structure of the AMV-SST anomalies imposed in the numerical simulations. b Time evolution of the observed AMV (dataset: ERSSTv4). c Observed 2-m air temperature difference between positive and negative AMV years (i.e., red minus blue years in (b); dataset: HadCRUT4). Due to the sparseness of the observations in the tropical Pacific before the ~1920s [68], the composite in (c) is computed only from 1920 onwards, i.e., excluding data marked by the gray shading in (b). Areas where data were not available for the whole composite period are masked.

Using a hierarchy of numerical models, Li et al. [9] demonstrated that the tropical Pacific response to the Atlantic forcing can be decomposed into two phases: Phase-1, an initial Atlantic forcing through diabatic heating, and Phase-2, an Indo-Pacific Walker Circulation feedback (cf. Fig. 3 in Li et al. [9]). In Phase-1, the warm tropical Atlantic SST anomalies in summer (hereafter seasons are relative to the Northern Hemisphere) intensify deep convection and lead to upper tropospheric mass divergence over the tropical Atlantic. This divergence is compensated by upper tropospheric mass convergence and descent over the Central tropical Pacific, which intensifies the surface Trade winds over the western tropical Pacific [8,18]. In Phase-2, the so-called Indo-Pacific feedback reinforces the Trade winds, piling up warm water in the Pacific Warm Pool, where atmospheric deep convection increases. This results in an upper tropospheric mass divergence over the Warm Pool that enhances the Central tropical Pacific descent, acting as positive feedback on the anomalies generated by the Atlantic forcing in Phase-1 [9,19]. Following El Niño Southern Oscillation (ENSO) dynamics, an increase in summer easterlies in the western tropical Pacific eventually favors colder conditions than normal in the eastern and central Pacific during the following winter [20].

Fig. 2: Simulated AMV impacts.
Multi-model mean and 10-year averaged differences between the AMV+ and AMV− simulation ensemble means in terms of 2-m air temperature, for the boreal (a) winter and (b) summer seasons. Stippling indicates regions where less than 80% of the models agree on the sign of the differences. Dashed black lines indicate in (a) the NIÑO3.4 region and in (b) the TROP latitudinal band and its constituent regions: TropInd, TropPac, and TropAtl (cf. indices definition in "Methods").

Given the global importance of the tropical Pacific variability and the predictability arising from the North Atlantic at decadal timescales [21,22], this Atlantic–Pacific teleconnection is a potential source of seasonal to decadal climate predictability that needs to be further assessed in models. However, the robustness and the strength of this connection remain unknown and need to be quantified. Here, we present a multi-model assessment of this Atlantic–Pacific connection using 21 ensemble simulations from 13 CGCMs (Supplementary Tables 1 and 2) that largely comply with the CMIP6/DCPP-C protocol [23]. Following this protocol, the same observed AMV SST anomalies (Fig. 1a) are imposed in the North Atlantic of each CGCM to investigate the worldwide teleconnections associated with the observed AMV (see "Methods"). We note that in those idealized AMV simulations, extra heat is added to (or removed from) the climate system to maintain a stationary AMV signal in the North Atlantic for 10 years. This artificial heat prevents a realistic simulation of the relationship between AMV and the global mean surface temperature.

Uncertainty in the Pacific response to AMV forcing

We start by discussing the multi-model mean (MMM; cf. "Methods") winter response of the AMV experiments. Associated with the imposed 0.2 °C tropical North Atlantic warming, the MMM shows a 0.05 °C cooling in the tropical South Atlantic and a 0.1 °C cooling in the central equatorial Pacific (Fig. 2a). The latter extends eastward and poleward in both hemispheres, contrasting with warm anomalies in the western part of the subtropical Pacific basins. In the Indian Ocean, the MMM shows a broad warming response with maximum anomalies localized west of India. The summertime SST anomalies are similar to the winter ones but of weaker amplitude over the central equatorial Pacific (Fig. 2b). Overall, the MMM shows good agreement with observations over the whole tropical Atlantic (even south of the Equator, where models are not constrained) as well as north of 10°S in other tropical regions (Fig. 1c). This similarity supports the important driving role of the AMV in the observed changes over the Pacific during the historical period [8,9,17,19,24,25,26,27]. In addition, the negative response of the tropical Pacific SST to the imposed North Atlantic warming in the AMV experiments implies a dynamical adjustment of the Pacific.

Fig. 3: Origins of the inter-model spread response to the observed AMV forcing. Inter-model relationships between several indices. Markers represent the 10-year averaged ensemble-mean differences between AMV+ and AMV− simulations from individual experiments, and the three colors code for the different AMV forcing strengths: 1×AMV, 2×AMV, and 3×AMV strength in blue, orange, and magenta, respectively. a Winter NIÑO3.4 SST index versus winter tropical North Atlantic SST (averaged over 5°N–20°N/60°W–10°E). b Winter NIÑO3.4 SST index versus summer TropPac descent (sum of the net vertical mass transport at 500 hPa; a positive value indicates descent).
c Summer TropPac descent versus the sum of TropInd and TropAtl ascent. d Summer TropInd ascent versus TropAtl ascent. e Summer TropInd ascent versus the atmospheric vertical temperature contrast over the WarmPool region (defined as the 20°S–20°N/90°E–160°W region), and (f) summer TropPac SST versus TROP temperature at 200 hPa. R indicates the inter-model correlation (see "Methods"). The dashed line in (c) marks full mass compensation within the tropics; RAtl and RInd indicate the inter-model correlations between summer TropPac and TropAtl ascent and between summer TropPac and TropInd ascent, respectively. The box plots in (d) indicate the minimum/maximum values, the 20th/80th percentiles, and the median of the indices distributions.

We now investigate the tropical Pacific response as simulated by each individual model, using the NIÑO3.4 SST index as a proxy (cf. indices definition in "Methods" and Fig. 2a). In winter, all experiments simulate a La Niña-like cooling in response to an AMV warming, except the EC-Earth3P_1Sig experiment, which shows a weak NIÑO3.4 warming of +0.01 °C (Fig. 3a; see also Supplementary Fig. 8). Though models mostly agree on the sign of the tropical Pacific response, the magnitude of their response varies by an order of magnitude, from +0.01 °C to −0.23 °C, with an MMM of −0.12 °C for a similar ~0.2 °C tropical North Atlantic warming. This large inter-model spread in response to AMV forcing highlights considerable uncertainties in our ability to predict the climate at seasonal to decadal timescales [28,29].

Origins of the inter-model spread

Different tropical Pacific responses among models in winter can be explained by intrinsic model differences in simulating Pacific climate dynamics, such as the ones linked to ENSO [30]. Yet, it is known that ENSO is strongly influenced by tropical Pacific conditions in the previous summer [31,32,33,34]. In particular, tropical Pacific heat content anomalies and their driving surface winds are known to be predictors of ENSO several months ahead [35,36]. Therefore, different tropospheric responses to the Atlantic SST forcing during summer can also explain model differences in winter [20,37]. Here, we find that the winter NIÑO3.4 inter-model spread is mainly associated with the inter-model spread in descent anomalies over the central tropical Pacific during summer (R = −0.9, where R is the inter-model correlation, see "Methods"; Fig. 3b) and the associated surface winds. This indicates that the inter-model spread in the winter equatorial Pacific mainly arises from different tropospheric responses to the AMV forcing during summer. This inference is supported by the weaker inter-model correlation between the winter NIÑO3.4 SST and winter Pacific descent responses (R = −0.64, not shown).

To further understand the inter-model spread, we explore the origins of the tropical Pacific tropospheric descent anomalies in summer. Figure 3c shows that those subsiding anomalies are nearly fully mass-compensated by ascending anomalies in other tropical regions. The 20°S–20°N tropical band (TROP) is further decomposed into a broad Indian Ocean domain (TropInd), the central Pacific Ocean (TropPac), and a broad Atlantic domain (TropAtl; cf. indices definition in "Methods" and Fig. 2b). Through an analysis of variance (see "Methods"), we find that only 19% of the inter-model variance in TropPac descent anomalies is associated with the inter-model variance in TropAtl ascent anomalies, but 69% with the TropInd ascent ones.
These two sources of spread are consistent with the two-phase mechanism detailed in the Introduction to explain the tropical Pacific response to Atlantic warming. In particular, it is consistent with the amplification of the Pacific response through the adjustment feedback of the Indo-Pacific Walker Circulation [9]. The key finding here is that there is no significant inter-model correlation between the TropInd and TropAtl anomalies (R = 0.2, Fig. 3d). This indicates that models simulate different Indo-Pacific Walker Circulation adjustments (Phase-2 Indo-Pacific feedback) for similar Atlantic–Pacific atmospheric bridges (Phase-1 Atlantic forcing). Hence, this implies that the simulated feedback associated with the Indo-Pacific Walker Circulation adjustment is model-dependent, and that the differences in this feedback are the source of most of the inter-model spread in the tropical Pacific response to the AMV forcing.

We find two possible mechanisms to explain the different Indo-Pacific Walker Circulation adjustments among models in the AMV experiments. As detailed below, either different TropInd ascent or different TropPac SST responses can be the original driver of the different circulation responses. Further targeted experiments would be required to determine which mechanism dominates here. However, both mechanisms point to the temperature response of the upper tropical troposphere as the key process for understanding the inter-model differences.

The inter-model spread in TropInd ascent anomalies is tightly connected to the tropospheric lapse rate over the Warm Pool (Fig. 3e). Indeed, the larger the lapse rate (less warming in the upper troposphere compared to the surface), the less stable the troposphere is, and the more convectively active the tropical troposphere becomes. The inter-model spread in TropInd ascent anomalies is therefore linked to different responses in the upper-tropospheric warming over the Warm Pool among models (Supplementary Fig. 4a–c). Because in the tropics the upper-tropospheric temperature is constrained by wave dynamical adjustment to be nearly horizontally uniform [38,39] (Supplementary Fig. 4d), this implies that the TropInd ascent responses and the Indo-Pacific Walker Circulation responses are controlled by the different upper-tropospheric warming among models.

The different upper-tropospheric warming can lead to different SST warming among models through a "top-down" mechanism, by a decrease of the surface latent heat flux [40]. There is indeed an inter-model correlation of R = 0.86 between the TROP upper-tropospheric temperature and the TropPac SST responses (Fig. 3f). This "top-down" warming effect eventually modulates the amplitude of the TropPac descent and of the Indo-Pacific Walker Circulation adjustment.

Therefore, for both the TropInd ascent and the TropPac SST mechanisms, the warmer the TROP upper troposphere is in response to an AMV warming, the weaker the Indo-Pacific feedback and the weaker the tropical wintertime Pacific cooling are. Then, in order to understand the inter-model spread in the wintertime Pacific response to AMV, one needs to understand the inter-model spread in TROP upper-tropospheric temperature anomalies. As the thermal stratification of the tropical troposphere is primarily controlled by deep convection, upper-tropospheric temperature anomalies in the tropics can generally be traced to regional variations in the atmospheric boundary layer.
To investigate the origins of those anomalies, we study the moist static energy at the surface, using the equivalent potential temperature [41] (\(\theta_{\mathrm{E}}\); see "Methods") as an estimate. In order to take into account the different contributions of highly active and less active convective regions to the injection of moist static energy from the surface to the upper troposphere, we weight the surface \(\theta_{\mathrm{E}}\) by the local precipitation (Pr, in mm d−1), following the approach of Sobel et al. [42] (see "Methods"):

$$\mathrm{P}\theta_{\mathrm{E}} = a \times \mathrm{Pr} \times \theta_{\mathrm{E}} / \langle a \times \mathrm{Pr} \rangle \qquad (1)$$

where the \(\langle \cdot \rangle\) symbols indicate the sum over TROP and \(a\) is the grid-cell surface area. The inter-model correlation between the changes of upper-tropospheric temperature and our weighted \(\theta_{\mathrm{E}}\) variable (\(\mathrm{P}\theta_{\mathrm{E}}\)) summed over the tropical band is R = 0.96 (Fig. 4a), confirming the physical link between \(\mathrm{P}\theta_{\mathrm{E}}\) and the upper-tropospheric conditions in the tropics.

Fig. 4: Impacts of different injections of moist static energy into the upper troposphere. a Summer TROP temperature at 200 hPa versus the TROP surface equivalent potential temperature weighted by precipitation. b Summer TROP temperature at 200 hPa versus the \(\theta_{\mathrm{E}}\)-anomaly component of the TropAtl surface equivalent potential temperature weighted by precipitation (\(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\)). c Multi-model mean weighted equivalent potential temperature anomalies (\(\mathrm{P}\theta_{\mathrm{E}}\)) summed over TROP and its contributions from \(\theta_{\mathrm{E}}\) anomalies (\(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\)), precipitation anomalies (\(\mathrm{P}'\theta_{\mathrm{E,C}}\)), and the precipitation–\(\theta_{\mathrm{E}}\) anomaly covariance (\(\mathrm{P}'\theta_{\mathrm{E}}'\)). d TROP \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) contributions from TropInd, TropPac, and TropAtl. In (c) and (d), the length of the vertical black lines indicates the inter-model standard deviation associated with the multi-model mean value, and R indicates the inter-model correlation of each component with the upper-tropospheric temperature shown in (a). e Spatial distribution of the \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) inter-model spread over TropAtl and its contributions from inter-model differences in (f) climatological precipitation, (g) the \(\theta_{\mathrm{E}}\) response to AMV, and (h) their covariance. The dashed line defines the eastern border of the East Pacific region used in Fig. 6. The two MetUM-GOML simulations are excluded from the analyses in (c)–(h).

To further understand the origins of the inter-model spread, we decompose the \(\mathrm{P}\theta_{\mathrm{E}}\) anomalies into a term linked to precipitation anomalies only, i.e., \(\mathrm{P}'\theta_{\mathrm{E,C}}\), a term linked to \(\theta_{\mathrm{E}}\) anomalies only, i.e., \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\), and a covariance term, i.e., \(\mathrm{P}'\theta_{\mathrm{E}}'\) (see "Methods").
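As a minimal illustration of Eq. (1), the following Python sketch computes \(\mathrm{P}\theta_{\mathrm{E}}\) summed over a tropical-band grid; the function and the toy inputs are our own assumptions for illustration, not the study's analysis code:

```python
# Minimal sketch of Eq. (1): precipitation-weighted equivalent potential
# temperature over the tropical band. Inputs are assumed to be 2-D lat-lon
# arrays already restricted to 20S-20N; names and values are illustrative.
import numpy as np

def p_theta_e_sum(pr, theta_e, area):
    """Return <a*Pr*theta_E>/<a*Pr>, the tropical-band P.theta_E."""
    return np.sum(area * pr * theta_e) / np.sum(area * pr)

pr = np.random.default_rng(0).uniform(0.0, 10.0, (20, 90))              # mm/day
theta_e = 340.0 + np.random.default_rng(1).normal(0.0, 5.0, (20, 90))   # K
area = np.ones((20, 90))                                                # grid-cell areas

print(p_theta_e_sum(pr, theta_e, area))  # ~340 K, weighted toward rainy grid points
```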
We find that most of the inter-model spread in upper-tropospheric temperature anomalies comes from differences in the injection of surface moist static energy anomalies into the upper troposphere by the mean model vertical motions (the \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) term; Fig. 4c). Furthermore, we find that the warm anomalies of the upper tropical troposphere are generated quasi-equally by anomalies occurring in the TropAtl and TropInd regions and, to a lesser extent, in TropPac (Fig. 4d). However, their inter-model spread is primarily driven by the TropAtl and TropPac sectors (black lines), with inter-model correlations between the upper-tropospheric temperature anomalies and \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) summed over those regions equal to R = 0.96 and R = 0.87, respectively. Because the forcing comes from the Atlantic in the present experiments, we assume that it is the spread in the TropAtl \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) that controls the spread in the tropical upper-tropospheric temperature, and that the latter is amplified by the TropPac \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) response.

In summary, the analysis of \(\mathrm{P}\theta_{\mathrm{E}}\) indicates that the inter-model spread in the tropical upper-tropospheric temperature anomalies can be explained by different injections of moist static energy from the TropAtl surface into the upper troposphere (Fig. 4b). This is eventually responsible for the modulation of the Indo-Pacific Walker Circulation feedback among models. Hence, we identify two summertime variables centered over the TropAtl region that contribute to the inter-model spread in the tropical Pacific response: (1) the divergence of mass in the upper troposphere over TropAtl and (2) the injection of moist static energy anomalies from the TropAtl surface into the upper troposphere by the mean convective activity (\(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\)). Building a bi-linear regression model with those two variables as predictors (see "Methods"), we capture as much as 73% of the inter-model variance in the wintertime NIÑO3.4 SST response (Fig. 5a, b), with TropAtl ascent and \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) accounting for 39% and 61% of the total regression model variance, respectively.

Fig. 5: Assessing the wintertime tropical Pacific response from two summertime Atlantic predictors. a Inter-model relationship between the wintertime NIÑO3.4 SST index and the outputs of a bi-linear regression model built with the summertime TropAtl ascent and \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) anomalies (see "Methods"). The linear regression between the statistical model and NIÑO3.4 is shown by the black line.
b Whisker box plots indicating the minimum/maximum values, the 20th/80th percentiles, and the median of the inter-model distribution of several indices: the wintertime NIÑO3.4 SST index (black), and the outputs of the regression model fed with the summertime TropAtl ascent together with (blue) the TropAtl \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\), (red) the TropAtl \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\) (i.e., \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) computed using the observed precipitation climatology), and (green) the TropAtl \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E,cor}}'\) (i.e., \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\) corrected for precipitation climatological biases over the East Pacific in late winter). The latter two box plots account for uncertainties coming from the observation estimates (see "Methods").

Bias corrections and reduction of the uncertainty

Next, we investigate the origins of the model response differences over TropAtl, aiming to narrow the uncertainty of our numerical estimate of the tropical Pacific response to the observed AMV forcing. We start by further decomposing the \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) variable over the TropAtl region to evaluate whether its inter-model spread comes from differences among models in climatological precipitation (\(\mathrm{P}_{\mathrm{C}}^\ast[\theta_{\mathrm{E}}']\)), in \(\theta_{\mathrm{E}}\) anomalies (\([\mathrm{P}_{\mathrm{C}}]\theta_{\mathrm{E}}'^\ast\)), or in a combination of both (\(\mathrm{COV}\); see "Methods"). We find that all terms contribute to the inter-model spread, but that their respective importance is spatially dependent (Fig. 4e–h). Of particular interest, this analysis demonstrates that the different climatological precipitation among models (Fig. 4f) is partly responsible for the inter-model spread. Because the model climatological precipitations are biased relative to observations, the simulated \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) are also biased, which leads to erroneous estimates of the response to the observed AMV forcing.

To minimize this error, we apply a bias correction to the \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) of each model by computing it using the observed climatological precipitation instead of the model one: \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\). This suppresses the spread of \(\mathrm{P}_{\mathrm{C}}^\ast[\theta_{\mathrm{E}}']\), but it introduces a new source of spread coming from observational uncertainties (see "Methods"). Overall, this bias correction decreases the inter-model variance of \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) over TropAtl by 58%. Feeding our bi-linear NIÑO3.4 regression model with \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\) instead of \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\), we quantify that correcting for the model mean precipitation biases helps to reduce the inter-model response variance over the tropical Pacific by 35% (Fig. 5b).

Over the eastern Pacific (i.e., the western part of the wide TropAtl sector, as shown in Fig. 4g), it is mainly the different \(\theta_{\mathrm{E}}\) responses among models that drive the inter-model spread in \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) and, a fortiori, in \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\) (Fig. 4g; see also Supplementary Fig. 11).
\(\theta_{\mathrm{E}}\) anomalies there are largely associated with surface temperature changes, but their sign and amplitude are model-dependent (Supplementary Fig. 9), leading to compensating anomalies in the MMM (Fig. 2b). In the following, we demonstrate that the spread in \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\) over the eastern Pacific is explained by the different model climatological precipitations during February–March–April (Fig. 6g, h).

Fig. 6: Impacts of the tropical East Pacific ITCZ mean biases. a–f Monthly evolution of the differences between the AMV+ and AMV− ensemble means zonally averaged over the East Pacific region (i.e., the western part of TropAtl) shown in (g), for the multi-model means of the five models simulating the northernmost climatological position of the ITCZ during February–March–April over the East Pacific (Group 1: ECMWF-HR, IPSL-CM6, HadGEM3, ECMWF-LR, CESM1) and of the five simulating the southernmost (Group 2: EC-Earth3, CNRM-CM6, CNRM-CM5, EC-Earth3P, CMCC); cf. x-axis in (h). a–f show the differences in terms of SST, precipitation, and net surface fluxes, respectively (surface fluxes are defined as positive from the atmosphere to the ocean). Arrows in (e) and (f) represent the surface wind anomalies. In (a)–(g), contours indicate the climatological precipitation, and stippling means that not all models in the group agree on the sign of the anomalies. Months are indicated by their first letter, and a 3-month running mean is applied. g Inter-model map regression of the climatological precipitation on the \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'^\ast\) index (shading; units: mm d−1 per inter-model standard deviation of \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'^\ast\)) and multi-model mean climatological precipitation (contours; units: mm d−1). \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'^\ast\) was computed from five observation estimates, and only their averaged regression is shown here (see "Methods"). h Inter-model relationship between the summertime \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'^\ast\) summed over the East Pacific and the late-winter centroid of the climatological precipitation over the East Pacific, defined as the latitude at which there is the same amount of zonally averaged precipitation to the north and to the south. The linear regression between the two indices is shown by the black line. Only the averaged values computed from the five \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'^\ast\) obtained from the different observation estimates are shown here. The vertical green solid lines indicate the precipitation centroids from the observation estimates, and the horizontal green dashed lines indicate the \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'^\ast\) values associated with these observed centroids, assuming the same statistical relationship as the inter-model one. R indicates the inter-model correlation averaged over the five estimates of \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'^\ast\). The MetUM-GOML simulations are excluded from all the analyses of this figure.

During summer, all models simulate westerly anomalies north of 5°N, associated with a northward shift of the Inter-tropical Convergence Zone (ITCZ) over the East Pacific in response to the AMV warming (Fig. 6c–f).
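For reference, the late-winter precipitation centroid introduced in the Fig. 6h caption, and used in the model grouping below, could be computed as in this minimal sketch; a regular latitude grid and an already zonally averaged February–March–April precipitation profile are our assumptions, and the inputs are toys:

```python
# Minimal sketch of the precipitation centroid of Fig. 6h: the latitude that
# splits the zonally averaged precipitation into equal northern and southern
# amounts. A regular latitude grid is assumed; inputs are illustrative.
import numpy as np

def precipitation_centroid(lat, pr_zonal):
    weights = pr_zonal * np.cos(np.deg2rad(lat))  # account for grid-cell area
    cum = np.cumsum(weights)
    return np.interp(0.5 * cum[-1], cum, lat)     # latitude of the half-sum

lat = np.linspace(-20.0, 20.0, 81)
pr = np.exp(-0.5 * ((lat - 6.0) / 4.0) ** 2)      # toy ITCZ peaked near 6N
print(precipitation_centroid(lat, pr))            # ~6 degrees north
```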
Yet, by dividing the models into two sub-groups based on their late-winter climatological precipitation, we show that this shift is more pronounced for the models simulating a more northward position of the climatological ITCZ in late winter. Those models simulate an SST cooling around 7°N on the southern flank of the ITCZ in summer (Fig. 6a, b), where the other models simulate a warming, which explains the inter-model spread in \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\). For the models with the largest ITCZ shift, we find that the westerly anomalies follow the seasonal migration of the precipitation anomalies; these are present north of the Equator from winter onward, when their cooling effect on the ocean is greatest (Fig. 6c, e). This suggests that a preconditioning of the summertime cooling around 7°N occurs during the previous seasons, through a feedback between ITCZ position, SST, wind, and surface flux anomalies [43]. Yet, all models tend to simulate a northward shift of the ITCZ in winter (Fig. 6c, d), but only some of them simulate such a preconditioning. This inter-model disagreement stems from the different model climatological precipitation in late winter. For models simulating an ITCZ located north of the Equator, the northward shift of the ITCZ increases the mean south-westerlies and their associated turbulent heat fluxes around the Equator, which tends to cool the SST locally. On the other hand, for models simulating an ITCZ located south of the Equator, the northward shift of the ITCZ weakens the mean north-easterlies and their cooling effect on the equatorial SST.

Given the high correlation between the climatological precipitation in February–March–April and the summertime \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\) response over the East Pacific (R = −0.87; Fig. 6h), we use this information to further correct our estimate of the tropical Pacific response to AMV. Associated with the observed February–March–April climatological precipitation, we estimate summertime East Pacific \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\) values ranging from −0.09 °C to −0.13 °C (cf. green lines in Fig. 6h). Substituting these \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\) values for each model's East Pacific contribution to the TropAtl \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\), we obtain the TropAtl \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E,cor}}'\), which we consider as our best estimate of the \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) response to the observed AMV forcing (see "Methods"). Feeding our bi-linear regression model with \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E,cor}}'\) instead of \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\), we quantify that correcting for both the summertime and the late-winter precipitation mean biases reduces by 65% the inter-model variance in our analytical estimate of the NIÑO3.4 response (Fig. 5b).

We also investigated the potential origin of the inter-model spread over Atlantic–Africa (i.e., the eastern part of the TropAtl region, cf. Fig. 4e–h). We found that the inter-model spread in \([\mathrm{P}_{\mathrm{C}}]\theta_{\mathrm{E}}'^\ast\) anomalies is associated with different signs of the SST and surface specific humidity responses around the eastern equatorial Atlantic (Supplementary Figs. 3 and 4).
However, we did not identify the physical processes controlling the different model behaviors (cf. Supplementary Discussion).

Using 21 coordinated simulations from 13 different CGCMs, we show that, in response to an AMV warming, all models simulate tropical Pacific changes reminiscent of La Niña conditions. This result confirms the influence of the Atlantic on climate variability at the global scale, and it supports the idea that the AMV has contributed to the 1998–2012 global warming slowdown through its impacts on the tropical Pacific. However, the strength of the connection varies by a factor of 10 between the models. The tropical Pacific response to the Atlantic forcing is driven by changes in (1) the Atlantic–Pacific Walker Circulation and (2) the amount of moist static energy injected from the Atlantic surface into the upper troposphere. The latter is responsible for most of the uncertainty in our current numerical model estimates of the Pacific response to the observed AMV, mainly because of the different mean precipitation climatologies. Partially correcting for the mean model precipitation biases, we reduce this uncertainty, and we specifically quantify that the NIÑO3.4 response to an observed 0.26 °C AMV warming ranges from −0.05 °C to −0.16 °C, with a median value of −0.11 °C and an inter-model standard deviation of 0.03 °C.

We acknowledge that this estimate is still subject to model limitations. In particular, we reduce the uncertainty by correcting a posteriori for model precipitation biases. Any possible interactions between those biases and the surface equivalent potential temperature responses to AMV would still affect our estimate. Therefore, our analysis highlights the importance of reducing mean climate model biases in order to properly simulate and predict the global AMV impacts.

Although this study focuses on decadal-timescale signals, the discussed mechanisms take place at monthly timescales. Our study thus shows the potential for improving climate predictions from seasonal to decadal timescales through a better representation of the impacts of the Atlantic on the tropical Pacific [28,29,44]. The discussed mechanisms very likely also act to shape the Pacific mean state and its differences among models [45,46,47], which are partly responsible for the inter-model spread in climate projections [48]. Based on our findings, we suggest that the analysis of the injection of moist static energy from the Atlantic surface into the upper troposphere can be used as an interpretative framework to understand the inter-model uncertainties around future climate simulations.

Finally, we note that several observational and model-based studies [49,50,51] suggest the existence of a two-way interaction between the Atlantic and the Pacific at decadal timescales: an AMV warming driving a Pacific cooling, which eventually drives an Atlantic cooling. Due to the experimental protocol used in the present article, we could only focus on the representation by models of the Atlantic impacts on the Pacific. To continue exploring the sources of climate predictability at multi-annual timescales and their current limits due to model uncertainty, a multi-model study similar to this one should be conducted, but investigating the Pacific impacts on the Atlantic.

The 21 experiments from 13 different CGCMs used in this study are listed in Supplementary Tables 1 and 2; they represent a total of 12,320 simulated years.
Following the DCPP-C protocol [23], two sets of ensemble simulations have been performed for each experiment, in which time-invariant SST anomalies corresponding to the warm (AMV+) and cold (AMV−) phases of the observed AMV were imposed over the North Atlantic using SST nudging. To capture the potential response and adjustment of the other oceanic basins to the AMV anomalies, the simulations were integrated for 10 years with fixed external forcing conditions. Large ensemble simulations were performed in order to robustly estimate the climate impacts of the AMV (from 10 to 50 members depending on the model, cf. Supplementary Table 2). An extensive description of the experimental protocol is provided in the Technical note for AMV DCPP-C simulations: https://www.wcrp-climate.org/wgsip/documents/Tech-Note-1.pdf.

Over the North Atlantic (Equator–65°N/80°W–0°), the spatial correlation between the SST anomalies in each simulation and the observed AMV target varies between 0.66 and 0.86, with a multi-model average value of 0.79, indicating that all simulations are constrained by similar SST conditions in the North Atlantic. We note that the idealized AMV simulations underestimate the amplitude of the observed AMV target by ~20%. This is because we do not impose a very strong nudging in the experimental protocol, in order to allow ocean–atmosphere coupling and variability at high frequency (as recommended by the CMIP6/DCPP-C protocol [23]), which tends to dissipate the heat anomalies imposed at the surface. Further evaluation of the experimental protocol is provided in the Supplementary Information.

Some simulations deviate from the AMV DCPP-C protocol. The CESM1 simulations used an observed AMV pattern computed from the ERSSTv3b dataset [52] instead of ERSSTv4 [53]. CNRM-CM6-1-HR, EC-Earth3P-HR, EC-Earth3P-LR, ECMWF-IFS-HR, and ECMWF-IFS-LR used a constant 1950 or 1990 (instead of 1850) external forcing background (cf. Supplementary Table 2). The impact of the external forcing background on the results was tested with the CNRM-CM5 model, for which AMV simulations have been performed with both the 1850 and 1990 backgrounds. We did not find evidence that these protocol differences affect the results discussed in this article. In addition, the MetUM-GOML-HR and MetUM-GOML-LR simulations used a 1000-m mixed-layer ocean model and a 1990 external forcing background. Those models offer insights into the role played by ocean dynamics in the documented climate responses when compared to the models with full ocean dynamics.

Finally, the imposed AMV forcing strength is not the same for all simulations. As detailed in the column "AMV strength" of Supplementary Table 2, the imposed AMV anomalies vary between 1, 2, and 3 times the observed AMV standard deviation. Assuming linearity in the AMV responses, we weight each simulation by dividing its output by the AMV forcing strength in order to compare the results from all the AMV experiments. This is done for all the figures in the article. This enables us to create a larger multi-model ensemble and to evaluate more precisely the origins of the inter-model spread. Scaled outputs from experiments performed with the same model but with different AMV strengths are often indistinguishable, which suggests that the linearity assumption is a reasonable approximation for the analyses of this study. Yet, we highlight the different AMV strengths with different colors in the figures (1×AMV: blue; 2×AMV: orange; 3×AMV: magenta).
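A minimal sketch of this scaling step, assuming each experiment's ensemble-mean anomaly field is stored together with its imposed AMV strength (the experiment names and values below are invented for illustration):

```python
# Minimal sketch: divide each experiment's output by its AMV forcing strength
# (in standard deviations of the observed AMV) so experiments are comparable.
import numpy as np

experiments = {
    "CNRM-CM5_3xAMV": {"anom": np.full((4, 4), -0.30), "strength": 3.0},
    "CESM1_1xAMV": {"anom": np.full((4, 4), -0.12), "strength": 1.0},
}

scaled = {name: e["anom"] / e["strength"] for name, e in experiments.items()}
print(scaled["CNRM-CM5_3xAMV"][0, 0])  # -0.10: now comparable to the 1xAMV run
```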
MMM and inter-model correlations (R)

The MMM is computed by averaging the ensemble mean of each simulation, regardless of the number of ensemble members (i.e., there is no weighting). The outputs of each simulation are scaled by their AMV forcing strength prior to computing the MMM (as described above). For models for which several sets of experiments have been performed (with different magnitudes of AMV anomalies and/or different external forcing backgrounds), we average all the experiments of each model together prior to computing the MMM, so as not to bias the results toward an over-represented model (e.g., CNRM-CM5 or EC-Earth3P). Because of the absence of ocean dynamics in the two MetUM-GOML models, those models are not taken into account in the computation of the MMM.

Similarly to the computation of the MMM, the inter-model correlation R is computed after averaging all the ensemble means from the same model (if more than one experiment was performed), in order to give the same weight to all models. We also computed the inter-model correlation based on all the ensemble means from all the simulations (i.e., without averaging experiments from the same model prior to the computation of the correlation), but no significant differences between the two correlations were found for the relationships investigated in this article. Because of the absence of ocean dynamics in the two MetUM-GOML models, those models are not taken into account in the computation of the inter-model correlations.

Region definitions

To assess the tropical Pacific response, we use the NIÑO3.4 index, defined as the SST averaged over 5°S–5°N and 170°W–120°W (Fig. 2a). Based on the summer MMM anomalies of precipitation and vertical velocity at 500 hPa (Supplementary Fig. 3e, f), we decomposed the 20°S–20°N tropical band (TROP) into three main regions: a broad Indian region spanning from 30°E to 135°E, a central Pacific region spanning from 135°E to 120°W, and a broad Atlantic region spanning from 120°W to 30°E (Fig. 2b). We label those regions TropInd, TropPac, and TropAtl, respectively. In addition, an East Pacific and an Atlantic–Africa region (embedded in TropAtl) are used in Figs. 4 and 6; they cover 120°W–80°W/20°S–20°N and 80°W–30°E/20°S–20°N, respectively.

Taking advantage of the quasi-mass compensation of the vertical motion in the TROP region (Fig. 3c), we estimate the origins of the inter-model spread in TropPac descent through an analysis of variance: \(S_{\mathrm{TropPac}}^2 \sim S_{\mathrm{TropAtl}+\mathrm{TropInd}}^2 = S_{\mathrm{TropAtl}}^2 + S_{\mathrm{TropInd}}^2 + \mathrm{COV}\), where \(S_{\mathrm{TropPac}}^2\), \(S_{\mathrm{TropAtl}}^2\), and \(S_{\mathrm{TropInd}}^2\) are the inter-model variances in TropPac, TropAtl, and TropInd descent anomalies, respectively; \(\mathrm{COV}\) is the covariance term between the TropAtl and TropInd descent anomalies; and \(S_{\mathrm{TropAtl}+\mathrm{TropInd}}^2\) is the inter-model variance of the descent anomalies averaged over the whole TROP region excluding the TropPac region. We find that \(S_{\mathrm{TropAtl}}^2\), \(S_{\mathrm{TropInd}}^2\), and \(\mathrm{COV}\) explain 19%, 69%, and 12% of \(S_{\mathrm{TropAtl}+\mathrm{TropInd}}^2\), respectively.
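This decomposition follows from Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y); a minimal sketch with invented per-model values:

```python
# Minimal sketch of the variance decomposition above; one value per model,
# arbitrary units, all numbers invented for illustration.
import numpy as np

trop_atl = np.array([0.2, 0.5, 0.3, 0.8, 0.4])   # TropAtl ascent anomalies
trop_ind = np.array([0.9, 1.5, 0.7, 1.8, 1.1])   # TropInd ascent anomalies
total = trop_atl + trop_ind                      # ~TropPac descent, by mass compensation

var_total = np.var(total, ddof=1)                # S^2_{TropAtl+TropInd}
var_atl = np.var(trop_atl, ddof=1)               # S^2_{TropAtl}
var_ind = np.var(trop_ind, ddof=1)               # S^2_{TropInd}
cov = 2.0 * np.cov(trop_atl, trop_ind)[0, 1]     # COV term (includes the factor 2)

# The three fractions below sum to 1 by construction.
print(var_atl / var_total, var_ind / var_total, cov / var_total)
```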
Equivalent potential temperature \(\theta_{\mathrm{E}}\)

Theoretically, the equivalent potential temperature can be defined as \(\theta_{\mathrm{E}} \sim \theta \exp\left(\frac{L_{\mathrm{C}} q_{\mathrm{S}}}{C_{\mathrm{P}} T}\right)\), where \(\theta\) is the dry potential temperature, \(L_{\mathrm{C}}\) is the latent heat of condensation, \(q_{\mathrm{S}}\) is the saturation mixing ratio, \(C_{\mathrm{P}}\) is the specific heat of dry air, and \(T\) is the temperature. This formula explicitly shows that \(\theta_{\mathrm{E}}\) is similar to the potential temperature of a dry air mass (which remains constant during adiabatic processes), but corrected for the energy associated with the air mass moisture, assuming that all the energy released by condensation/evaporation remains in the air mass (pseudo-adiabatic process). Here we used the NCL function "pot_temp_equiv_tlcl" (https://www.ncl.ucar.edu/Document/Functions/Contributed/pot_temp_equiv_tlcl.shtml) to compute \(\theta_{\mathrm{E}}\). This function is based on Eq. (39) from Bolton [54], which gives more accurate results than the theoretical formula given above but requires the computation of the temperature at the lifted condensation level. That temperature is estimated with the NCL function "tlcl_rh_bolton" (https://www.ncl.ucar.edu/Document/Functions/Contributed/tlcl_rh_bolton.shtml), which is based on Eq. (22) from Bolton [54].

Weighted equivalent potential temperature \(\mathrm{P}\theta_{\mathrm{E}}\) as a proxy for the upper-tropospheric temperature

Over the oceans, the mean tropospheric temperature profile is often considered to be in a moist-adiabatic convective equilibrium with the mean SST [55] (Supplementary Fig. 4f), as the SST directly controls the energy content of the atmospheric boundary layer. Yet, convective adjustment can act directly only in regions of frequent precipitation, which are mostly over warm SST regions. In regions of no convection, the surface has no direct means of influencing the free troposphere, and the SST anomalies cannot shape the tropospheric temperature profile. Hence, there is no evident physical reason for considering mean tropical SST variations as a proxy for upper-tropospheric temperature anomalies. To take into account the different contributions of the local SST to the upper-tropospheric temperature, Sobel et al. [42] introduced a more appropriate proxy by weighting the SST with the local precipitation before computing the tropical average. We follow this method here, but we generalize it in order to account for the effect of deep convection over land on the upper-tropospheric temperature anomalies [56], and we compute the precipitation-weighted equivalent potential temperature \(\mathrm{P}\theta_{\mathrm{E}}\), cf. Eq. (1).

We denote by \(\langle f \rangle\) the sum of the values of a given field \(f\) over all tropical grid points within 20°S–20°N. We define \(f = f_{\mathrm{C}} + f'\), where \(f'\) is the departure of \(f\) from \(f_{\mathrm{C}}\), the time-averaged ensemble mean of the AMV− experiments (the AMV− experiment being considered as the reference state). We also define \(f = [f] + f^\ast\), where \(f^\ast\) is the departure of \(f\) from its multi-model mean \([f]\).
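A minimal sketch of the theoretical \(\theta_{\mathrm{E}}\) formula above (not of the more accurate Bolton-based NCL functions actually used in the study); the constants are approximate and the inputs are invented illustrations:

```python
# Minimal sketch: theta_E ~ theta * exp(L_C * q_s / (C_P * T)).
# T in K, p in Pa, q_s in kg/kg; constants are rounded textbook values.
import numpy as np

def theta_e_approx(T, p, q_s, p0=1.0e5):
    L_C = 2.5e6                        # latent heat of condensation (J/kg)
    C_P = 1004.0                       # specific heat of dry air (J/kg/K)
    theta = T * (p0 / p) ** 0.286      # dry potential temperature
    return theta * np.exp(L_C * q_s / (C_P * T))

print(theta_e_approx(T=300.0, p=1.0e5, q_s=0.02))  # ~354 K for a warm ocean surface
```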
Decompositions of \(\mathrm{P}\theta_{\mathrm{E}}\)

In the article, \(\mathrm{P}\theta_{\mathrm{E}} = \frac{a \times \mathrm{Pr} \times \theta_{\mathrm{E}}}{\langle a \times \mathrm{Pr} \rangle}\) is first decomposed into a term linked to precipitation anomalies only, \(\mathrm{P}'\theta_{\mathrm{E,C}} = \frac{a \times \mathrm{Pr}' \times \theta_{\mathrm{E,C}}}{\langle a \times \mathrm{Pr}' \rangle} - \frac{a \times \mathrm{Pr} \times \theta_{\mathrm{E}}}{\langle a \times \mathrm{Pr} \rangle} \times \frac{\langle a \times \mathrm{Pr}' \rangle}{\langle a \times \mathrm{Pr} \rangle}\), a term linked to \(\theta_{\mathrm{E}}\) anomalies only, \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}' = \frac{a \times \mathrm{Pr}_{\mathrm{C}} \times \theta_{\mathrm{E}}'}{\langle a \times \mathrm{Pr}_{\mathrm{C}} \rangle}\), and a covariance term, \(\mathrm{P}'\theta_{\mathrm{E}}' = \frac{a \times \mathrm{Pr}' \times \theta_{\mathrm{E}}'}{\langle a \times \mathrm{Pr}' \rangle}\). In a second step, \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) is decomposed into a term linked to the climatological precipitation differences among models, \(\mathrm{P}_{\mathrm{C}}^\ast[\theta_{\mathrm{E}}'] = \frac{a \times \mathrm{Pr}_{\mathrm{C}}^\ast \times [\theta_{\mathrm{E}}']}{\langle a \times \mathrm{Pr}_{\mathrm{C}}^\ast \rangle}\), a term linked to the different \(\theta_{\mathrm{E}}\) responses to AMV among models, \([\mathrm{P}_{\mathrm{C}}]\theta_{\mathrm{E}}'^\ast = \frac{a \times [\mathrm{Pr}_{\mathrm{C}}] \times \theta_{\mathrm{E}}'^\ast}{\langle a \times [\mathrm{Pr}_{\mathrm{C}}] \rangle}\), and a covariance term, \(\mathrm{COV} = \frac{a \times \mathrm{Pr}_{\mathrm{C}}^\ast \times \theta_{\mathrm{E}}'^\ast}{\langle a \times \mathrm{Pr}_{\mathrm{C}}^\ast \rangle}\).

Bi-linear regression model

The coefficients of the bi-linear regression model are computed using the wintertime NIÑO3.4 SST index as the predictand and, as the two predictors, the summertime vertical ascent summed over TropAtl and the summertime \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\) summed over TropAtl. The two MetUM-GOML simulations are excluded from the computation of the regression model coefficients and, as for the inter-model correlations, all models are given the same weight.
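A minimal sketch of such a bi-linear fit via ordinary least squares; the per-model predictor and predictand values below are invented for illustration, not the actual experiment diagnostics:

```python
# Minimal sketch of a two-predictor linear regression fitted by least squares.
import numpy as np

tropatl_ascent = np.array([0.8, 1.2, 0.5, 1.6, 1.0, 0.7])      # summer TropAtl ascent
pc_theta_e = np.array([0.10, 0.18, 0.05, 0.22, 0.15, 0.08])    # summer TropAtl PC.thetaE'
nino34 = np.array([-0.08, -0.15, -0.03, -0.21, -0.12, -0.06])  # winter NINO3.4 response

# Design matrix with an intercept column; coefs = [b0, b1, b2]
X = np.column_stack([np.ones_like(tropatl_ascent), tropatl_ascent, pc_theta_e])
coefs, *_ = np.linalg.lstsq(X, nino34, rcond=None)

predicted = X @ coefs
explained = 1.0 - np.var(nino34 - predicted) / np.var(nino34)  # fraction of variance
print(coefs, explained)
```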
Bias corrections and observational uncertainties

Two bias corrections are applied to \(\mathrm{P}_{\mathrm{C}}\theta_{\mathrm{E}}'\). First, we compute this variable using the observed climatological precipitation (\(\mathrm{Pr}_{\mathrm{Obs}}\)) instead of the model climatological precipitation: \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}' = \frac{a \times \mathrm{Pr}_{\mathrm{Obs}} \times \theta_{\mathrm{E}}'}{\langle a \times \mathrm{Pr}_{\mathrm{Obs}} \rangle}\). Then, we decompose the TropAtl \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\) into its regional components coming from the Atlantic–Africa region, the East Pacific region, and their covariance term. The East Pacific component is then substituted by a value estimated from the observed climatological precipitation and the inter-model relationship between the JJAS East Pacific \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\) and the February–March–April climatological precipitation centroid over the East Pacific (Fig. 6h). Following this substitution, we sum again the different components of the TropAtl \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\) to obtain \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E,cor}}'\). In order to account for observational uncertainties [57,58], we compute for each model five \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E}}'\) and \(\mathrm{P}_{\mathrm{Obs}}\theta_{\mathrm{E,cor}}'\) values using different observation estimates.

Observational and reanalysis datasets

The SSTs from the ERSSTv4 [53] and ERSSTv3b [52] datasets were used to extract the observed AMV pattern imposed in the simulations. The HadCRUT4 [59] dataset was used to compute the observed AMV composites shown in Fig. 1c. The CMAP [60,61], GPCPv2.3 [62], TRMMv7 at 0.5° spatial resolution [63,64], MSWEPv2.6 [65], and ERA-Interim [66] datasets were used for the mean bias corrections of the model precipitation (cf. Figs. 5 and 6). Depending on data availability, we used different periods to compute the observed mean-state estimates: 1979–2017 for CMAP, GPCPv2.3, and MSWEPv2.6; 1998–2011 for TRMMv7; and 1979–2018 for ERA-Interim.

The data generated and analyzed during the current study are available from the corresponding author Y.R.-R. on reasonable request. The code developed to analyze the data of the current study is available on request from the corresponding author.

References

1. Meehl, G. A., Arblaster, J. M., Fasullo, J. T., Hu, A. & Trenberth, K. E. Model-based evidence of deep-ocean heat uptake during surface-temperature hiatus periods. Nat. Clim. Change 1, 360–364 (2011).
2. Kosaka, Y. & Xie, S. P. Recent global-warming hiatus tied to equatorial Pacific surface cooling. Nature 501, 403–407 (2013).
3. Trenberth, K. E. & Fasullo, J. T. An apparent hiatus in global warming? Earth's Future 1, 19–32 (2013).
4. IPCC. Climate Change 2013—The Physical Science Basis (Cambridge University Press, 2014). https://doi.org/10.1017/CBO9781107415324.
5. England, M. H. et al. Recent intensification of wind-driven circulation in the Pacific and the ongoing warming hiatus. Nat. Clim. Change 4, 222–227 (2014).
6. Delworth, T. L., Zeng, F., Rosati, A., Vecchi, G. A. & Wittenberg, A. T. A link between the hiatus in global warming and North American drought. J. Clim. 28, 3834–3845 (2015).
7. Douville, H., Voldoire, A. & Geoffroy, O. The recent global warming hiatus: what is the role of Pacific variability? Geophys. Res. Lett. 42, 880–888 (2015).
8. McGregor, S. et al. Recent Walker circulation strengthening and Pacific cooling amplified by Atlantic warming. Nat. Clim. Change 4, 888–892 (2014).
9. Li, X., Xie, S. P., Gille, S. T. & Yoo, C. Atlantic-induced pan-tropical climate change over the past three decades. Nat. Clim. Change 6, 275–279 (2016).
10. Terray, L. Evidence for multiple drivers of North Atlantic multi-decadal climate variability. Geophys. Res. Lett. 39, 6–11 (2012).
11. Sutton, R. T. et al. Atlantic multidecadal variability and the U.K. ACSIS program. Bull. Am. Meteorol. Soc. 99, 415–425 (2018).
12. Schlesinger, M. E. & Ramankutty, N. An oscillation in the global climate system of period 65–70 years. Nature 367, 723–726 (1994).
13. Knight, J. R., Allan, R. J., Folland, C. K., Vellinga, M. & Mann, M. E. A signature of persistent natural thermohaline circulation cycles in observed climate. Geophys. Res. Lett. 32, 1–4 (2005).
14. Booth, B. B. B., Dunstone, N. J., Halloran, P. R., Andrews, T. & Bellouin, N. Aerosols implicated as a prime driver of twentieth-century North Atlantic climate variability. Nature 484, 228–232 (2012).
15. Ting, M., Kushnir, Y., Seager, R. & Li, C. Forced and internal twentieth-century SST trends in the North Atlantic. J. Clim. 22, 1469–1481 (2009).
16. Zhang, R. & Delworth, T. L. Impact of the Atlantic multidecadal oscillation on North Pacific climate variability. Geophys. Res. Lett. 34, 2–7 (2007).
17. Chafik, L. et al. Global linkages originating from decadal oceanic variability in the subpolar North Atlantic. Geophys. Res. Lett. 43, 10,909–10,919 (2016).
18. Kucharski, F., Kang, I. S., Farneti, R. & Feudale, L. Tropical Pacific response to 20th century Atlantic warming. Geophys. Res. Lett. 38, 1–5 (2011).
19. Ruprich-Robert, Y. et al. Assessing the climate impacts of the observed Atlantic multidecadal variability using the GFDL CM2.1 and NCAR CESM1 global coupled models. J. Clim. 30, 2785–2801 (2017).
20. Polo, I., Martin-Rey, M., Rodriguez-Fonseca, B., Kucharski, F. & Mechoso, C. R. Processes in the Pacific La Niña onset triggered by the Atlantic Niño. Clim. Dyn. 44, 115–131 (2015).
21. Dunstone, N. J., Smith, D. M. & Eade, R. Multi-year predictability of the tropical Atlantic atmosphere driven by the high latitude North Atlantic Ocean. Geophys. Res. Lett. 38, 1–6 (2011).
22. Yeager, S. G. et al. Predicting near-term changes in the Earth system: a large ensemble of initialized decadal prediction simulations using the Community Earth System Model. Bull. Am. Meteorol. Soc. 99, 1867–1886 (2018).
23. Boer, G. J. et al. The Decadal Climate Prediction Project (DCPP) contribution to CMIP6. Geosci. Model Dev. 9, 3751–3777 (2016).
24. Kajtar, J. B., Santoso, A., McGregor, S., England, M. H. & Baillie, Z. Model under-representation of decadal Pacific trade wind trends and its link to tropical Atlantic bias. Clim. Dyn. 50, 1471–1484 (2018).
25. McGregor, S., Stuecker, M. F., Kajtar, J. B., England, M. H. & Collins, M. Model tropical Atlantic biases underpin diminished Pacific decadal variability. Nat. Clim. Change 8, 493–498 (2018).
26. Levine, A. F. Z., Frierson, D. M. W. & McPhaden, M. J. AMO forcing of multidecadal Pacific ITCZ variability. J. Clim. 31, 5749–5764 (2018).
27. Park, J. H., Kug, J. S., An, S.-I. & Li, T. Role of the western hemisphere warm pool in climate variability over the western North Pacific. Clim. Dyn. 53, 2743–2755 (2019).
28. Keenlyside, N. S., Ding, H. & Latif, M. Potential of equatorial Atlantic variability to enhance El Niño prediction. Geophys. Res. Lett. 40, 2278–2283 (2013).
29. Chikamoto, Y. et al. Skilful multi-year predictions of tropical trans-basin climate variability. Nat. Commun. 6 (2015).
30. Bellenger, H., Guilyardi, E., Leloup, J., Lengaigne, M. & Vialard, J. ENSO representation in climate models: from CMIP3 to CMIP5. Clim. Dyn. 42, 1999–2018 (2014).
31. Meinen, C. S. & McPhaden, M. J. Observations of warm water volume changes in the equatorial Pacific and their relationship to El Niño and La Niña. J. Clim. 13, 3551–3559 (2000).
32. Wang, C. & Picaut, J. Understanding ENSO physics—a review. In Earth's Climate: The Ocean–Atmosphere Interaction, Geophys. Monogr. Ser. 147, 21–48 (2004).
33. McPhaden, M. J. A 21st century shift in the relationship between ENSO SST and warm water volume anomalies. Geophys. Res. Lett. 39, 1–5 (2012).
34. Lengaigne, M. et al. Mechanisms controlling warm water volume interannual variations in the equatorial Pacific: diabatic versus adiabatic processes. Clim. Dyn. 38, 1031–1046 (2012).
35. Bosc, C. & Delcroix, T. Observed equatorial Rossby waves and ENSO-related warm water volume changes in the equatorial Pacific Ocean. J. Geophys. Res. Oceans 113, 1–14 (2008).
36. Neske, S. & McGregor, S. Understanding the warm water volume precursor of ENSO events and its interdecadal variation. Geophys. Res. Lett. 45, 1577–1585 (2018).
37. Martín-Rey, M., Rodríguez-Fonseca, B. & Polo, I. Atlantic opportunities for ENSO prediction. Geophys. Res. Lett. 42, 6802–6810 (2015).
38. Held, I. M. & Hou, A. Y. Nonlinear axially symmetric circulations in a nearly inviscid atmosphere. J. Atmos. Sci. 37, 515–533 (1980).
39. Sobel, A. H. & Bretherton, C. S. Modeling tropical precipitation in a single column. J. Clim. 13, 4378–4392 (2000).
40. Xiang, B., Wang, B., Lauer, A., Lee, J. Y. & Ding, Q. Upper tropospheric warming intensifies sea surface warming. Clim. Dyn. 43, 259–270 (2014).
41. Emanuel, K. A., Neelin, J. D. & Bretherton, C. S. On large-scale circulations in convecting atmospheres. Q. J. R. Meteorol. Soc. 120, 1111–1143 (1994).
42. Sobel, A. H., Held, I. M. & Bretherton, C. S. The ENSO signal in tropical tropospheric temperature. J. Clim. 15, 2702–2706 (2002).
43. Jia, F., Wu, L., Gan, B. & Cai, W. Global warming attenuates the tropical Atlantic-Pacific teleconnection. Sci. Rep. 6, 1–7 (2016).
44. Luo, J. J., Liu, G., Hendon, H., Alves, O. & Yamagata, T. Inter-basin sources for two-year predictability of the multi-year La Niña event in 2010–2012. Sci. Rep. 7, 1–7 (2017).
45. Wang, C., Zhang, L., Lee, S. K., Wu, L. & Mechoso, C. R. A global perspective on CMIP5 climate model biases. Nat. Clim. Change 4, 201–205 (2014).
46. Kucharski, F., Syed, F. S., Burhan, A., Farah, I. & Gohar, A. Tropical Atlantic influence on Pacific variability and mean state in the twentieth century in observations and CMIP5. Clim. Dyn. 44, 881–896 (2014).
47. Zhang, L., Wang, C., Song, Z. & Lee, S.-K. Remote effect of the model cold bias in the tropical North Atlantic on the warm bias in the tropical southeastern Pacific. J. Adv. Model. Earth Syst. 6, 1016–1026 (2014).
48. Cai, W. et al. Pantropical climate interactions. Science 363, eaav4236 (2019).
49. d'Orgeville, M. & Peltier, W. R. On the Pacific decadal oscillation and the Atlantic multidecadal oscillation: might they be related? Geophys. Res. Lett. 34, 3–7 (2007).
50. Nigam, S., Sengupta, A. & Ruiz-Barradas, A. Atlantic–Pacific links in observed multidecadal SST variability: is the Atlantic multidecadal oscillation's phase reversal orchestrated by the Pacific decadal oscillation? J. Clim. 33, 5479–5505 (2020).
51. Meehl, G. A. et al. Atlantic and Pacific tropics connected by mutually interactive decadal-timescale processes. Nat. Geosci. 14, 36–42 (2021).
52. Smith, T. M., Reynolds, R. W., Peterson, T. C. & Lawrimore, J. Improvements to NOAA's historical merged land–ocean surface temperature analysis (1880–2006). J. Clim. 21, 2283–2296 (2008).
53. Huang, B. et al. Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4). Part I: upgrades and intercomparisons. J. Clim. 28, 911–930 (2015).
54. Bolton, D. The computation of equivalent potential temperature. Mon. Weather Rev. 108, 1046–1053 (1980).
55. Stone, P. H. & Carlson, J. H. Atmospheric lapse rate regimes and their parameterization. J. Atmos. Sci. 36, 415–423 (1979).
56. Byrne, M. P. & O'Gorman, P. A. Land–ocean warming contrast over a wide range of climates: convective quasi-equilibrium theory and idealized simulations. J. Clim. 26, 4000–4016 (2013).
57. Herold, N., Alexander, L. V., Donat, M. G., Contractor, S. & Becker, A. How much does it rain over land? Geophys. Res. Lett. 43, 341–348 (2016).
58. Herold, N., Behrangi, A. & Alexander, L. V. Large uncertainties in observed daily precipitation extremes over land. J. Geophys. Res. Atmos. 122, 668–681 (2017).
59. Morice, C. P., Kennedy, J. J., Rayner, N. A. & Jones, P. D. Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: the HadCRUT4 data set. J. Geophys. Res. Atmos. 117, 1–22 (2012).
60. Xie, P., Arkin, P. A. & Janowiak, J. E. CMAP: the CPC Merged Analysis of Precipitation. Adv. Glob. Change Res. 28, 319–328 (2007).
61. Climate Prediction Center, National Centers for Environmental Prediction, National Weather Service, NOAA. CPC Merged Analysis of Precipitation (CMAP) (U.S. Department of Commerce, 1995).
62. Adler, R. et al. The Global Precipitation Climatology Project (GPCP) monthly analysis (new version 2.3) and a review of 2017 global precipitation. Atmosphere 9, 138 (2018).
63. Huffman, G. J., Adler, R. F., Bolvin, D. T. & Nelkin, E. J. The TRMM Multi-satellite Precipitation Analysis (TMPA). In Satellite Rainfall Applications for Surface Hydrology, pp. 3–22 (Springer Netherlands, 2010). https://doi.org/10.1007/978-90-481-2915-7_1.
64. Tropical Rainfall Measuring Mission (TRMM). TRMM Radar Rainfall Statistics L3 1 month (5×5) and (0.5×0.5) degree V7 (2011).
65. Beck, H. E. et al. MSWEP V2 global 3-hourly 0.1° precipitation: methodology and quantitative assessment. Bull. Am. Meteorol. Soc. 100, 473–500 (2019).
66. Berrisford, P. et al. The ERA-Interim Archive Version 2.0 (2011).
67. UCAR/NCAR/CISL/TDD. The NCAR Command Language (Version 6.6.2) (2019). https://doi.org/10.5065/D6WD3XH5.
68. Deser, C., Alexander, M. A., Xie, S.-P. & Phillips, A. S. Sea surface temperature variability: patterns and mechanisms. Ann. Rev. Mar. Sci. 2, 115–143 (2010).

Acknowledgements

Y.R.-R. was funded by the European Union's Horizon 2020 Research and Innovation Program in the framework of the Marie Skłodowska-Curie grant INADEC (Grant agreement 800154). E.M.-C. acknowledges funding from the European Commission's Horizon 2020 project PRIMAVERA (Grant Agreement 641727). X.L. has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement H2020-MSCA-COFUND-2016-754433. A.B. and D.N. acknowledge funding from the European Commission's Horizon 2020 project EUCP (Grant agreement 776613). F.C. and G.D. were supported by the US National Science Foundation (NSF) under the Collaborative Research EaSM2 Grant OCE-1243015 to NCAR and by the US National Oceanic and Atmospheric Administration (NOAA) Climate Program Office under the Climate Variability and Predictability Program Grant NA13OAR4310138. NCAR is a major facility sponsored by the US NSF under Cooperative Agreement 1852977. Acknowledgment is made for the use of ECMWF's computing and archive facilities in this research; in particular, P.D. thanks ECMWF for providing computing time in the framework of the special project SPITDAVI. R.E., N.D., L.H., and D.S. were supported by the Met Office Hadley Centre Climate Programme funded by BEIS and Defra, and by the European Commission Horizon 2020 EUCP project (GA 776613). J.L.-P. was funded by the European Union's Horizon 2020 Research and Innovation Program in the framework of the PRIMAVERA project (Grant Agreement 641727). J.R. and D.H. were funded by NERC via NCAS and the ACSIS project (NE/N018001/1), and J.R. was also funded by the NERC SMURPHS project (NE/N006054/1).
M.M.-R. was funded by the European Union's Horizon 2020 Research and Innovation Program in the framework of the Marie Skłodowska-Curie grant FESTIVAL (Grant agreement 797236). E.T. has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 748750 (SPFireSD project). The analysis and plots of this paper were performed with the NCAR Command Language (Version 6.6.2; 2019) [67].

Author affiliations:
Barcelona Supercomputing Center, Barcelona, Spain: Yohan Ruprich-Robert, Eduardo Moreno-Chamarro, Xavier Levine & Etienne Tourigny
Fondazione Centro EuroMediterraneo sui Cambiamenti Climatici, Bologna, Italy: Alessio Bellucci & Dario Nicolì
Consiglio Nazionale delle Ricerche, Istituto di Scienze dell'Atmosfera e del Clima, Bologna, Italy: Alessio Bellucci
CECI, Université de Toulouse, CNRS, Cerfacs, Toulouse, France: Christophe Cassou, Jorge Lopez-Parages, Said Qasmi, Emilia Sanchez-Gomez & Rym Msadek
National Center for Atmospheric Research, Boulder, CO, USA: Frederic Castruccio & Gokhan Danabasoglu
Consiglio Nazionale delle Ricerche, Istituto di Scienze dell'Atmosfera e del Clima, Torino, Italy: Paolo Davini
Met Office, Exeter, UK: Rosie Eade, Leon Hermanson, Nick Dunstone & Doug Smith
LOCEAN, Sorbonne Université/IRD/CNRS/MNHN, Paris, France: Guillaume Gastineau
Department of Meteorology, National Centre for Atmospheric Science, University of Reading, Reading, UK: Dan Hodson, Paul-Arthur Monerie & Jon Robson
Max Planck Institute for Meteorology, Hamburg, Germany: Katja Lohmann
CNRM, Université de Toulouse, Météo-France, CNRS, Toulouse, France: Said Qasmi
ECMWF, Reading, UK: Christopher D. Roberts
Instituto de Ciencias del Mar, CSIC, Barcelona, Spain: Marta Martin-Rey

Y.R.-R. designed the study, performed the analysis, and wrote the initial article. E.M.-C. and X.L. contributed to the data analysis and to the interpretation of the results. Y.R.-R., A.B., C.C., F.C., P.D., R.E., G.G., L.H., D.H., J.L.-P., P.-A.M., D.N., S.Q., C.R., and E.S.-G. performed the simulations used in the study. All authors contributed to the manuscript preparation and to the discussions that led to the final version of the article. Correspondence to Yohan Ruprich-Robert.

Ruprich-Robert, Y., Moreno-Chamarro, E., Levine, X. et al. Impacts of Atlantic multidecadal variability on the tropical Pacific: a multi-model study. npj Clim Atmos Sci 4, 33 (2021). https://doi.org/10.1038/s41612-021-00188-5
Many familiar physical quantities can be specified completely by giving a single number and the appropriate unit. For example, "a class period lasts 50 min" or "the gas tank in my car holds 65 L" or "the distance between two posts is 100 m." A physical quantity that can be specified completely in this manner is called a scalar quantity. Scalar is a synonym of "number." Time, mass, distance, length, volume, temperature, and energy are examples of scalar quantities.

Scalar quantities that have the same physical units can be added or subtracted according to the usual rules of algebra for numbers. For example, a class ending 10 min earlier than 50 min lasts (50 min – 10 min) = 40 min. Similarly, a 60-cal serving of corn followed by a 200-cal serving of donuts gives (60 cal + 200 cal) = 260 cal of energy. When we multiply a scalar quantity by a number, we obtain the same scalar quantity but with a larger (or smaller) value. For example, if yesterday's breakfast had 200 cal of energy and today's breakfast has four times as much energy as it had yesterday, then today's breakfast has 4(200 cal) = 800 cal of energy. Two scalar quantities can also be multiplied or divided by each other to form a derived scalar quantity. For example, if a train covers a distance of 100 km in 1.0 h, its average speed is 100.0 km/1.0 h = 100 km/h, or about 27.8 m/s, where the speed is a derived scalar quantity obtained by dividing distance by time.

Many physical quantities, however, cannot be described completely by just a single number with a physical unit. For example, when the U.S. Coast Guard dispatches a ship or a helicopter for a rescue mission, the rescue team must know not only the distance to the distress signal, but also the direction from which the signal is coming so they can get to its origin as quickly as possible. Physical quantities specified completely by giving a number of units (magnitude) and a direction are called vector quantities. Examples of vector quantities include displacement, velocity, position, force, and torque. In the language of mathematics, physical vector quantities are represented by mathematical objects called vectors. We can add or subtract two vectors, and we can multiply a vector by a scalar or by another vector, but we cannot divide by a vector. The operation of division by a vector is not defined.

Source: University Physics Volume 1, OpenStax CNX https://courses.lumenlearning.com/suny-osuniversityphysics/chapter/2-1-scalars-and-vectors

Basically: a scalar has only magnitude, whereas a vector has magnitude and direction.

Application: I might have gone for a 4 km walk (scalar), but whether I walked in a straight line, took turns, or went 2 km out and turned around to walk 2 km back would tell me a lot more information (vector).

Looking ahead: We will talk about this again in Section 1.3 on vectors and in Sections 1.4 and 1.5 on dot products and cross products.

1st Law: Newton's first law states that: "A body at rest will remain at rest unless acted on by an unbalanced force. A body in motion continues in motion with the same speed and in the same direction unless acted upon by an unbalanced force."

This law, also sometimes called the "law of inertia", means that bodies maintain their current velocity unless a force is applied to change that velocity.
If an object is at rest with zero velocity, it will remain at rest until some force begins to change that velocity; likewise, if an object is moving at a set speed and in a set direction, it will remain at that same velocity until some force begins to change it.

Net Forces: It is important to note that the net force is what will cause a change in velocity. The net force is the sum of all forces acting on the body. For example, we can imagine gently pushing on a large rock and observing that the rock does not move. This is because a friction force, equal in magnitude and opposite in direction to our gentle pushing force, opposes it. The sum of these two forces is equal to zero, therefore the net force is zero and the change in velocity is zero.

Rotational Motion: Newton's first law also applies to moments and rotational velocities. A body will maintain its current rotational velocity until a net moment is exerted to change that rotational velocity. This can be seen in things like toy tops, flywheels, stationary bikes, and other objects that will continue spinning once started until brakes or friction stop them.

Source: Engineering Mechanics, Jacob Moore et al., http://mechanicsmap.psu.edu/websites/1_mechanics_basics/newtons_first_law/firstlaw.html

2nd Law: Newton's second law states that: "When a net force acts on any body with mass, it produces an acceleration of that body. The net force will be equal to the mass of the body times the acceleration of the body."

[latex]\vec{F}=m\vec{a}[/latex]

You will notice that the force and the acceleration in the equation above have an arrow above them. This means that they are vector quantities, having both a magnitude and a direction. Mass on the other hand is a scalar quantity having only a magnitude. Based on the above equation, you can infer that the magnitude of the net force acting on the body will be equal to the mass of the body times the magnitude of the acceleration, and that the direction of the net force on the body will be equal to the direction of the acceleration of the body.

Rotational Motion: Newton's second law also applies to moments and rotational velocities. The revised version of the second law equation states that the net moment acting on the object will be equal to the mass moment of inertia of the body about the axis of rotation (I) times the angular acceleration of the body.

[latex]\vec{M}=I\vec{\alpha}[/latex]

You should again notice that the moment and the angular acceleration of the body have arrows above them, indicating that they are vector quantities with both a magnitude and direction. The mass moment of inertia on the other hand is a scalar quantity having only a magnitude. The magnitude of the net moment will be equal to the mass moment of inertia times the magnitude of the angular acceleration, and the direction of the net moment will be equal to the direction of the angular acceleration. (A short numeric sketch of both equations appears below.)

Source: Engineering Mechanics, Jacob Moore et al., http://mechanicsmap.psu.edu/websites/1_mechanics_basics/newtons_second_law/secondlaw.html

3rd Law: Newton's Third Law states "For any action, there is an equal and opposite reaction." By "action" Newton meant a force, so for every force one body exerts on another body, that second body exerts a force of equal magnitude but opposite direction back on the first body. Since all forces are exerted by bodies (either directly or indirectly), all forces come in pairs, one acting on each of the bodies interacting.
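Here is the numeric sketch promised above for the second law and its rotational analog, in a few lines of Python (all values are made up for illustration):

# Newton's second law, F = m*a: a 10 kg rock pushed by a 25 N net force
# (hypothetical numbers).
m = 10.0                 # mass, kg
F_net = 25.0             # net force, N, along the direction of motion
print(F_net / m)         # acceleration, 2.5 m/s2

# Rotational analog, M = I*alpha: a flywheel with mass moment of inertia
# 0.5 kg*m2 under a 2.0 N*m net moment (hypothetical numbers).
I_wheel = 0.5            # mass moment of inertia, kg*m2
M_net = 2.0              # net moment, N*m
print(M_net / I_wheel)   # angular acceleration, 4.0 rad/s2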
Though there may be two equal and opposite forces acting on a single body, it is important to remember that for each of the forces a Third Law pair acts on a separate body. This can sometimes be confusing when there are multiple Third Law pairs at work. Below are some examples of situations where multiple Third Law pairs occur.

Source: Engineering Mechanics, Jacob Moore et al., http://mechanicsmap.psu.edu/websites/1_mechanics_basics/newtons_third_law/thirdlaw.html

Basically: These 3 laws form the foundation of statics and dynamics. They make our problems interesting! In statics, we don't do a lot with rotation.
1st law: the motion of an object won't change unless there is a force to cause the change.
2nd law: Combination of all forces = mass * acceleration.
3rd law: A system of interacting objects can be split up into parts, where forces are used to model the other part. Forces are equal (same size) and opposite (their directions cancel out – one up, one down).

Application:
1st law: a rock rolling down the hill will keep going unless it hits a tree.
2nd law: the amount of force on the rock and how massive (heavy) it is will determine how much it is accelerating (or decelerating).
3rd law: the rock is pushing on the ground with the same amount of force as the ground is pushing on the rock, but in the opposite direction.

Looking ahead: You'll see these concepts again in Ch 7 on Inertia (1st law), Sections 2.3 and 4.3 on equilibrium equations (2nd law), and Section 4.2 on system free-body diagrams (3rd law).

Giving numerical values for physical quantities and equations for physical principles allows us to understand nature much more deeply than qualitative descriptions alone. To comprehend such vast ranges of values, we must also have accepted units in which to express them. We shall find that even in the potentially mundane discussion of meters, kilograms, and seconds, a profound simplicity of nature appears: all physical quantities can be expressed as combinations of only seven base physical quantities.

We define a physical quantity either by specifying how it is measured or by stating how it is calculated from other measurements. For example, we might define distance and time by specifying methods for measuring them, such as using a meter stick and a stopwatch. Then, we could define average speed by stating that it is calculated as the total distance traveled divided by time of travel.

Measurements of physical quantities are expressed in terms of units, which are standardized values. For example, the length of a race, which is a physical quantity, can be expressed in units of meters (for sprinters) or kilometers (for distance runners). Without standardized units, it would be extremely difficult for scientists to express and compare measured values in a meaningful way.

Two major systems of units are used in the world: SI units (for the French Système International d'Unités), also known as the metric system, and English units (also known as the customary or imperial system). English units were historically used in nations once ruled by the British Empire and are still widely used in the United States. English units may also be referred to as the foot–pound–second (fps) system, as opposed to the centimeter–gram–second (cgs) system.

SI Units: Base and Derived Units

In any system of units, the units for some physical quantities must be defined through a measurement process. These are called the base quantities for that system and their units are the system's base units.
All other physical quantities can then be expressed as algebraic combinations of the base quantities. Each of these physical quantities is then known as a derived quantity and each unit is called a derived unit. The choice of base quantities is somewhat arbitrary, as long as they are independent of each other and all other quantities can be derived from them. Typically, the goal is to choose physical quantities that can be measured accurately to a high precision as the base quantities. The reason for this is simple. Since the derived units can be expressed as algebraic combinations of the base units, they can only be as accurate and precise as the base units from which they are derived.

Based on such considerations, the International Standards Organization recommends using seven base quantities, which form the International System of Quantities (ISQ). These are the base quantities used to define the SI base units. The following table lists these seven ISQ base quantities and the corresponding SI base units.

ISQ Base Quantity | SI Base Unit
Length | meter (m)
Mass | kilogram (kg)
Time | second (s)
Electrical current | ampere (A)
Thermodynamic temperature | kelvin (K)
Amount of substance | mole (mol)
Luminous intensity | candela (cd)

You are probably already familiar with some derived quantities that can be formed from the base quantities. For example, the geometric concept of area is always calculated as the product of two lengths. Thus, area is a derived quantity that can be expressed in terms of SI base units using square meters (m × m = m2). Similarly, volume is a derived quantity that can be expressed in cubic meters (m3). Speed is length per time; so in terms of SI base units, we could measure it in meters per second (m/s). Volume mass density (or just density) is mass per volume, which is expressed in terms of SI base units such as kilograms per cubic meter (kg/m3). Angles can also be thought of as derived quantities because they can be defined as the ratio of the arc length subtended by two radii of a circle to the radius of the circle. This is how the radian is defined. Depending on your background and interests, you may be able to come up with other derived quantities, such as the mass flow rate (kg/s) or volume flow rate (m3/s) of a fluid, electric charge (A·s), mass flux density (kg/(m2·s)), and so on. We will see many more examples throughout this text. For now, the point is that every physical quantity can be derived from the seven base quantities, and the units of every physical quantity can be derived from the seven SI base units.

Source: University Physics Volume 1, OpenStax CNX, https://courses.lumenlearning.com/suny-osuniversityphysics/chapter/1-2-units-and-standards/

While most Canadian companies use SI, much manufacturing still uses English units, so it's important for you to be familiar with them. What is a big number in feet? What is small? It's important to know. The most important advice is to stay in one unit system. So if you are doing a homework problem that has a mixture, convert to one system to be consistent. Challenge yourself to try the one you aren't comfortable with.
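To see concretely how derived units reduce to the base units, here is a minimal Python sketch (our own illustration, not any standard library): each unit is tracked as a tuple of exponents over the base triple (m, kg, s), and multiplying or dividing quantities adds or subtracts exponents.

# Represent a unit as exponents of the SI base units (m, kg, s); multiplying
# quantities adds exponents, dividing subtracts them.
def mul(u, v):
    return tuple(a + b for a, b in zip(u, v))

def div(u, v):
    return tuple(a - b for a, b in zip(u, v))

METER    = (1, 0, 0)
KILOGRAM = (0, 1, 0)
SECOND   = (0, 0, 1)

speed  = div(METER, SECOND)               # (1, 0, -1)  = m/s
accel  = div(speed, SECOND)               # (1, 0, -2)  = m/s2
newton = mul(KILOGRAM, accel)             # (1, 1, -2)  = kg*m/s2
pascal = div(newton, mul(METER, METER))   # (-1, 1, -2) = N/m2
print(newton, pascal)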
Here is a table of the most common quantities that we'll use in this class:

Quantity | SI Unit | English Unit
Length | m (meter), km (kilometer), mm (millimeter) | ft (foot), mi (mile), in (inch)
Mass | kg (kilogram) | slug
Force | N (newton) | lb (pound)
Pressure | Pa (pascal) = N/m2 | psi (pound per square inch) = lb/in2

Very helpful additional information about units is at this webpage: https://www.physics.nist.gov/cuu/Units/index.html

Basically: Units give us a standard so we can use the same language to describe a concept.

Application: In 1999, after the NASA Mars Climate Orbiter took 286 days to reach Mars, a conversion error between N and lb caused the $125 million satellite to be lost, forever. If you want to design ANYTHING, you need to be sure everyone involved is using the same unit system.

Looking Ahead: The next section (1.1.4) will look at converting the units back and forth between the two systems.

It is often necessary to convert from one unit to another. For example, if you are reading a European cookbook, some quantities may be expressed in units of liters and you need to convert them to cups. Or perhaps you are reading walking directions from one location to another and you are interested in how many miles you will be walking. In this case, you may need to convert units of feet or meters to miles.

Let's consider a simple example of how to convert units. Suppose we want to convert 80 m to kilometers. The first thing to do is to list the units you have and the units to which you want to convert. In this case, we have units in meters and we want to convert to kilometers. Next, we need to determine a conversion factor relating meters to kilometers. A conversion factor is a ratio that expresses how many of one unit are equal to another unit. For example, there are 12 in. in 1 ft, 1609 m in 1 mi, 100 cm in 1 m, 60 s in 1 min, and so on. In this case, we know that there are 1000 m in 1 km. Now we can set up our unit conversion. We write the units we have and then multiply them by the conversion factor so the units cancel out, as shown:

[latex]80 m\times\frac{1 km}{1000 m}=0.080 km[/latex]

Note that the unwanted meter unit cancels, leaving only the desired kilometer unit. You can use this method to convert between any type of unit. Now, the conversion of 80 m to kilometers is simply the use of a metric prefix, as we saw in the preceding section, so we can get the same answer just as easily by noting that

[latex]80m=8.0\times10^1m=8.0\times10^{-2}km=0.080km[/latex]

since "kilo-" means [latex]10^3[/latex] and 1 = −2 + 3. However, using conversion factors is handy when converting between units that are not metric or when converting between derived units, as the following examples illustrate.

Source: University Physics Volume 1, OpenStax CNX, https://courses.lumenlearning.com/suny-osuniversityphysics/chapter/1-3-unit-conversion/

Going back and forth between SI and English will become a very useful skill. If you can memorize km to mi, ft to m, and inches to ft, you'll be able to communicate better with coworkers. Here are common conversions you'll need for this course:

Quantity | SI | English | Convert
Length | 1 km = 1000 m; 1 m = 1000 mm | 1 mi = 5,280 ft; 1 ft = 12 in | 1 m = 3.28 ft; 1.61 km = 1 mi
Mass | kg | slug | 1 slug = 14.6 kg
Force | N | lb | 1 lb = 4.448 N
Pressure | Pa | psi | 1 psi = 6895 Pa

All of the other units that we will encounter will be a mix of these units (intensity w = N/m or lb/ft).
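Let's check a few conversions from the table with a short Python sketch (the factors are the ones listed above; the constant and example values are just for illustration):

# Multiply by conversion factors (ratios equal to 1) so units cancel.
M_PER_FT  = 1.0 / 3.28   # from 1 m = 3.28 ft
KM_PER_MI = 1.61         # from 1.61 km = 1 mi
N_PER_LB  = 4.448        # from 1 lb = 4.448 N

print(6.0 * M_PER_FT)    # a 6 ft ladder is about 1.83 m
print(26.2 * KM_PER_MI)  # a 26.2 mi marathon is about 42.2 km
print(110 * N_PER_LB)    # a 110 lb weight is about 489 N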
One additional correspondence that is commonly quoted is 2.2 lb = 1 kg, though this only works on Earth because it mixes a force (lb) with a mass (kg) (see the next section). For a full table, MechanicsMap has a pdf available: http://mechanicsmap.psu.edu/websites/UnitConversion.pdf

Units outside the SI that are accepted for use with the SI:

Name | Symbol | Value in SI units
minute (time) | min | 1 min = 60 s
hour | h | 1 h = 60 min = 3600 s
day | d | 1 d = 24 h = 86 400 s
degree (angle) | ° | 1° = (π/180) rad
minute (angle) | ′ | 1′ = (1/60)° = (π/10 800) rad
second (angle) | ″ | 1″ = (1/60)′ = (π/648 000) rad
liter | L | 1 L = 1 dm3 = 10−3 m3
metric ton (a) | t | 1 t = 103 kg
neper (c) | Np | 1 Np = 1
bel (b) | B | 1 B = (1/2) ln 10 Np
electronvolt (d) | eV | 1 eV = 1.602 18 × 10−19 J, approximately
unified atomic mass unit (e) | u | 1 u = 1.660 54 × 10−27 kg, approximately
astronomical unit (f) | au | 1 au = 149 597 870 700 m, exactly

(a) In many countries, this unit is called "tonne."
(b) The bel is most commonly used with the SI prefix deci: 1 dB = 0.1 B.
(c) Although the neper is coherent with SI units and is accepted by the CIPM, it has not been adopted by the General Conference on Weights and Measures (CGPM, Conférence Générale des Poids et Mesures) and is thus not an SI unit.
(d) The electronvolt is the kinetic energy acquired by an electron passing through a potential difference of 1 V in vacuum. The value must be obtained by experiment, and is therefore not known exactly.
(e) The unified atomic mass unit is equal to 1/12 of the mass of an unbound atom of the nuclide 12C, at rest and in its ground state. The value must be obtained by experiment, and is therefore not known exactly.
(f) The astronomical unit of length was redefined by the XXVIII General Assembly of the International Astronomical Union (Resolution B2, 2012).

Source: https://www.physics.nist.gov/cuu/Units/outside.html

Basically: Different industries use different standards. English is common in the US. SI is standard in many other places; however, it is not generally used in aerospace.

Looking Ahead: Always, always, always check what unit you're using. So many students lose points on homework and the test because they aren't paying attention to units.

Weight is the force exerted by gravity. While all objects with mass exert an attractive force of gravity on all other objects with mass, that force is usually negligible unless the mass of one of the objects is very large. For an object near the surface of the Earth, we can, to a very good degree of approximation, assume that the only force of gravity on the object is from the Earth. We usually label the force of gravity on an object as Fg. All objects near the surface of the Earth will experience a weight, as long as they have a mass. If an object has a mass, m, and is located near the surface of the Earth, it will experience a force (its weight) that is given by:

[latex]\vec F_g=m\vec g[/latex]

where g is the Earth's "gravitational field" vector and points towards the centre of the Earth. Near the surface of the Earth, the magnitude of the gravitational field is approximately g = 9.81 m/s2. The gravitational field is a measure of the strength of the force of gravity from the Earth (it is the gravitational force per unit mass). The magnitude of the gravitational field is weaker as you move further from the centre of the Earth (e.g. at the top of a mountain, or in Earth's orbit).
The gravitational field is also different on different planets; for example, at the surface of the moon, it is approximately gm = 1.62 m/s2 (six times less) – thus the weight of an object is six times less at the surface of the moon (but its mass is still the same). As we will see, the magnitude of the gravitational field from any spherical body of mass M (e.g. a planet) is given by:

[latex]g(r)=G\frac{M}{r^2}[/latex]

where G = 6.67 × 10−11 N·m2/kg2 is Newton's constant of gravity, and r is the distance from the centre of the object.

It is worth emphasizing that mass and weight are different (they have different dimensions). Mass is an intrinsic property of an object, whereas weight is a force of gravity that is exerted on that object because it has mass and is located next to another object with mass (e.g. the Earth). On Earth, when we measure our weight, we usually do so by standing on a spring scale, which is designed to measure a force by compressing a spring. We are thus measuring mg, which can easily be related to our mass since, on Earth, weight and mass are related by a factor of g = 9.81 m/s2; this is usually what leads to the confusion between mass and weight.

Source: Introductory Physics, Ryan Martin et al., https://openlibrary.ecampusontario.ca/catalogue/item/?id=4c3c2c75-0029-4c9e-967f-41f178bebbbb page 106

In the English language, the words 'mass' and 'weight' are used interchangeably. A person might say, "I weigh 50 kg", but in statics language, that's wrong! Or more accurately, that language isn't precise enough for statics. An object's mass is the same whether it is on the moon, Mars, or Earth. However, its weight changes because the gravitational field with which that planet pulls on it changes (see the above description comparing the moon and Earth).

g = 9.81 m/s2 (SI) and g = 32.2 ft/s2 (English)
Weight = mass × g

Units of mass are kg (SI) or slugs (English), whereas units of weight/force are N (SI) or lb (English). Because 'slugs' is such an odd, unfamiliar unit, picture real slugs to help you remember to say "my mass is 50 kg (or 3.43 slug)" or "I weigh 490 N (or 110 lb)". While most Canadian companies use SI units, it's important to be familiar with English, so you should learn slugs. You don't want to be excluded from a conversation at your future job. Note that lbm (pound-mass) is not used in this book, though some textbooks use it as a mass value. When lb is used, it is assumed to be lbf (pound-force).

Basically: Mass and force are two different quantities. Mass is in kg (SI) or slug (English) and weight is in N (SI) and lb (English).

Application: Mass stays the same, but weight changes from the Earth to the moon.

Looking ahead: This will become very important when we look at forces in Section 4.1.

Right triangle: a triangle containing a 90° angle.
Pythagorean theorem: a relation among the three sides of a right triangle which states that the square of the hypotenuse is equal to the sum of the squares of the other two sides (legs). The Pythagorean theorem can be used to find the length of the missing side in a right triangle.
▪ c is the longest side of the triangle (hypotenuse).
▪ The other two sides (legs), a and b, can be exchanged.

Source: Key Concepts of Intermediate Level Math, Meizhong Wang and the College of New Caledonia, https://openlibrary.ecampusontario.ca/catalogue/item/?id=d8bdc88b-5439-4652-b4bb-2948f0d5c625, page 136.
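Before putting the Pythagorean theorem to work, here is a quick Python check of the weight–mass relationship from the previous section (a minimal sketch; the 50 kg value mirrors the example above):

g_earth = 9.81        # m/s2
g_moon  = 1.62        # m/s2
KG_PER_SLUG = 14.6
N_PER_LB    = 4.448

m = 50.0                        # mass in kg: the same everywhere
print(m / KG_PER_SLUG)          # about 3.4 slug
print(m * g_earth)              # weight on Earth: about 490 N
print(m * g_earth / N_PER_LB)   # about 110 lb
print(m * g_moon)               # weight on the Moon: about 81 N, mass unchanged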
A special case of the right triangle is called a 3-4-5 triangle, or a Pythagorean triple. The two short sides are 3 and 4, and the hypotenuse is 5! Many of your homework problems will use this coincidence so you can save on the math by remembering 3-4-5 triangles: 3² + 4² = 5², i.e., 9 + 16 = 25. Wow!

Basically: The Pythagorean theorem will help you find a lot of information throughout this course. The longest side c satisfies c² = a² + b².

Application: If I have a 6 ft ladder leaned up against a wall whose base is 2 ft from the wall, the Pythagorean theorem helps you to calculate the vertical height reached by the ladder (b² = 6² – 2²).

Looking Ahead: You'll use this to help find geometrical aspects of the problems, especially when we get into trusses in Ch 5.

The Six Basic Trigonometric Functions

Trigonometric functions allow us to use angle measures, in radians or degrees, to find the coordinates of a point on any circle—not only on a unit circle—or to find an angle given a point on a circle. They also define the relationship among the sides and angles of a triangle. To define the trigonometric functions, first consider the unit circle centered at the origin and a point P=(x,y) on the unit circle. Let θ be an angle with an initial side that lies along the positive x-axis and with a terminal side that is the line segment OP. We can then define the values of the six trigonometric functions for θ in terms of the coordinates x and y.

Let P=(x,y) be a point on the unit circle centered at the origin O. Let θ be an angle with an initial side along the positive x-axis and a terminal side given by the line segment OP. The trigonometric functions are then defined as

$$\sin\theta=y\;\;\;\csc\theta=\frac{1}{y}\\\cos\theta=x\;\;\;\sec\theta=\frac{1}{x}\\\tan\theta=\frac{y}{x}\;\;\;\cot\theta=\frac{x}{y}$$

If x=0, secθ and tanθ are undefined. If y=0, then cotθ and cscθ are undefined.

We can see that for a point P=(x,y) on a circle of radius r with a corresponding angle θ, the coordinates x and y satisfy:

[latex]\cos \theta =\frac {x}{r}[/latex] [latex]x=r\cos\theta[/latex]
[latex]\sin \theta =\frac {y}{r}[/latex] [latex]y=r\sin\theta[/latex]

The values of the other trigonometric functions can likewise be expressed in terms of x, y, and r. The values of sine and cosine at the major angles in the first quadrant determine the values of sine and cosine at the corresponding angles in the other quadrants, and the values of the other trigonometric functions are calculated easily from the values of sinθ and cosθ.

Trigonometric Identities

A trigonometric identity is an equation involving trigonometric functions that is true for all angles θ for which the functions are defined. We can use the identities to help us solve or simplify equations.

Source: Calculus Volume 1, Gilbert Strang & Edwin "Jed" Herman, https://openstax.org/books/calculus-volume-1/pages/1-3-trigonometric-functions

We often refer to this as SOH-CAH-TOA:
Sine = Opposite / Hypotenuse >> S = O/H >> SOH
Cosine = Adjacent / Hypotenuse >> C = A/H >> CAH
Tangent = Opposite / Adjacent >> T = O/A >> TOA

I remember that cos is close – the side that's close to the angle is cosine. (It kind of rhymes and 'close' is a more familiar word than 'adjacent'.)

Basically: Trigonometric functions will help you to solve problems. You'll use SOH-CAH-TOA in many statics problems, whether to componentize a vector or resolve a force.
Application: A 6 ft ladder leaning up against a house is at a 60 degree angle. We can find the vertical height where the ladder reaches the house by using height = 6 ft × sin 60° (sin = opp/hyp).

Looking Ahead: Chapter 4 (forces) and Chapter 5 (trusses) will use calculation of angles a lot.
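Both ladder examples are easy to check with a couple of lines of Python (a minimal sketch using the standard math module):

import math

# Pythagorean theorem: 6 ft ladder whose base is 2 ft from the wall.
print(math.sqrt(6.0**2 - 2.0**2))        # vertical height, about 5.66 ft

# SOH-CAH-TOA: 6 ft ladder at a 60 degree angle to the ground.
print(6.0 * math.sin(math.radians(60)))  # vertical height, about 5.20 ft

# The 3-4-5 triple in one call:
print(math.hypot(3, 4))                  # 5.0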
Energy-efficient precoding in multicell networks with full-duplex base stations

Zhichao Sheng, Hoang Duong Tuan, Ho Huu Minh Tam, Ha H. Nguyen (ORCID: orcid.org/0000-0001-6481-0422) & Yong Fang

EURASIP Journal on Wireless Communications and Networking, volume 2017, Article number: 48 (2017)

This paper considers multi-input multi-output (MIMO) multicell networks, where the base stations (BSs) are full-duplex transceivers, while uplink and downlink users are equipped with multiple antennas and operate in a half-duplex mode. The problem of interest is to design linear precoders for BSs and users to optimize the network's energy efficiency. Given that the energy efficiency objective is not a ratio of concave and convex functions, the commonly used Dinkelbach-type algorithms are not applicable. We develop a low-complexity path-following algorithm that only invokes one simple convex quadratic program at each iteration and converges at least to a local optimum. Numerical results demonstrate the performance advantage of our proposed algorithm in terms of energy efficiency.

Energy saving has become a pressing ecological/economical concern in dealing with global warming. From this perspective, it is important to reduce the amount of carbon emissions associated with operating modern and sophisticated communication networks [1, 2]. Energy saving also helps to reduce the operational cost since energy consumption constitutes a significant portion of the network expenditure. Green cellular networks (see, e.g., [3, 4]), which aim at optimizing energy efficiency (EE) for communications in terms of bits per joule per hertz, have drawn considerable research interest in recent years (see, e.g., [5–8] and the references therein). In fact, EE has been recognized as the new figure-of-merit in assessing the quality and efficiency of future communication networks (see, e.g., [9–11]). For multicell networks, EE requires new approaches for interference management as compared to the more traditional performance metrics [12–17], which mainly aim at maximizing the spectral efficiency (SE) in terms of bits per second per hertz.

Full-duplex (FD) communication, which allows simultaneous transmission and reception (over the same frequency band) to and from multiple downlink users (DLUs) and multiple uplink users (ULUs), respectively, has emerged as one of the key techniques for the fifth-generation (5G) networks [18–23]. Nevertheless, a challenging issue in realizing FD communication is that the interference is very severe, not only because of the residual FD self-interference (SI) but also the cross interference between the uplink and the downlink transmissions.

In this paper, we consider the design of linear precoders to optimize energy efficiency under quality-of-service (QoS) constraints in FD multi-input multi-output (MIMO) multicell networks. Specifically, the BSs are equipped with multiple antennas and operate in the FD mode. There are two separate groups of multi-antenna users (UEs) in each cell, the ULUs and the DLUs, and both groups operate in the half-duplex (HD) mode. To the authors' best knowledge, such a precoder design problem has not been thoroughly addressed, even for MIMO cooperative multicell networks with half-duplex base stations. It is pointed out that, since the rate function of the users is nonconcave, the QoS requirements in terms of the users' minimum rates constitute difficult nonconvex constraints, which are addressed very recently in [24] in a different optimization problem.
On the other hand, the EE objective is not a ratio of concave and convex functions, as would be required to apply the Dinkelbach-type algorithm [25], which is the main tool for obtaining computational solutions of EE optimization problems (see, e.g., [26–28] and the references therein). To get around this nonconvexity issue, references [29] and [30] consider specific zero-forcing precoders to completely cancel the interference so that the user's rate function becomes concave and the QoS constraints become convex, while the EE objective becomes a ratio of concave and convex functions. This then allows the application of the Dinkelbach-type algorithm. It should be noted, however, that although the QoS constraints on zero-forcing precoders become convex, the EE optimization problems are still very difficult and there are no polynomial-time algorithms available to solve them. Furthermore, EE optimization for zero-forcing precoders only applies to the case that the number of antennas at each BS is much larger than the total number of users' antennas. As for the FD cooperative multicell networks considered in this paper, the interference cannot be completely canceled out due to the presence of self-interference [19, 21, 22]; hence, the Dinkelbach-type algorithm is not applicable.

Motivated by the above observations, the aim of this paper is to develop a novel solution approach that directly tackles the nonconvexity of the concerned EE optimization problem. The proposed algorithm is a path-following computational procedure, which invokes a simple convex quadratic program at each iteration.

The rest of the paper is structured as follows. Section 2 provides the problem formulation. Section 3 develops its computational solution. Section 4 is devoted to numerical examples. Section 5 concludes the paper.

Notation. All variables are denoted by mathematical sans serif letters. Vectors and matrices are boldfaced. \(\mathbf{I}_{n}\) denotes the identity matrix of size n×n, while \(\mathbf{1}_{n\times m}\) is the all-one matrix of size n×m. The notation (·)^H stands for the Hermitian transpose, |A| denotes the determinant of a square matrix A, and Trace(A) denotes the trace of a matrix A. The inner product 〈X,Y〉 is defined as Trace(X^H Y), and therefore, the Frobenius squared norm of a matrix X is ||X||^2 = Trace(XX^H). The notation A≽B (A≻B, respectively) means that A−B is a positive semidefinite (definite, respectively) matrix. \(\mathbb{E}[\cdot]\) denotes the expectation operator and ℜ{·} denotes the real part of a complex number.

System model and optimization problem formulations

We consider an MIMO cooperative network consisting of I cells. As illustrated in Fig. 1, the BS of cell i∈{1,…,I} serves a group of D DLUs in the downlink (DL) channel and a group of U ULUs in the uplink (UL) channel. Each BS operates in the FD mode and is equipped with \({N}\triangleq N_{1}+N_{2}\) antennas, where \(N_{1}\) antennas are used to transmit and the remaining \(N_{2}\) antennas to receive signals. In cell i, DLU \((i,j_{\mathsf{D}})\) and ULU \((i,j_{\mathsf{U}})\) operate in the HD mode and each is equipped with \(N_{r}\) antennas. Similar to other works on precoding and interference suppression (see, e.g., [6, 12, 13, 15–17, 29] and references therein), it is assumed in this paper that there are high-performance channel estimation mechanisms in place and a central processing unit is available to collect and disseminate the relevant channel state information (CSI).
Fig. 1: Illustration of FD multicell network

In the DL, a complex-valued vector \(\mathbf{s}_{i,j_{\mathsf{D}}}\in\mathbb{C}^{d_{1}}\) contains the symbols intended for DLU \((i,j_{\mathsf{D}})\), where \(\mathbb{E}\left[\mathbf{s}_{i,j_{\mathsf{D}}}(\mathbf{s}_{i,j_{\mathsf{D}}})^{H}\right]=\mathbf{I}_{d_{1}}\), \(d_{1}\) is the number of concurrent data streams, and \(d_{1}\leq\min\{N_{1},N_{r}\}\). Denote by \(\mathsf{V}_{i,j_{\mathsf{D}}}\in\mathbb{C}^{{N}_{1}\times d_{1}}\) the complex-valued precoding matrix for DLU \((i,j_{\mathsf{D}})\). Similarly, in the UL, \(\mathbf{s}_{i,j_{\mathsf{U}}}\in\mathbb{C}^{d_{2}}\) contains the symbols sent by ULU \((i,j_{\mathsf{U}})\), where \(\mathbb{E}\left[\mathbf{s}_{i,j_{\mathsf{U}}}(\mathbf{s}_{i,j_{\mathsf{U}}})^{H}\right]=\mathbf{I}_{d_{2}}\), \(d_{2}\) is the number of concurrent data streams, and \(d_{2}\leq\min\{N_{2},N_{r}\}\). The precoding matrix of ULU \((i,j_{\mathsf{U}})\) is denoted as \(\mathsf{V}_{i,j_{\mathsf{U}}}\in\mathbb{C}^{N_{r}\times d_{2}}\). Define

$$ \begin{array}{lll} {\mathcal{I}} &\triangleq& \{1,2,\dots,I\}; \quad {\mathcal{D}} \triangleq \{1_{\mathsf{D}},2_{\mathsf{D}},\dots,D_{\mathsf{D}}\};\\ {\mathcal{U}} &\triangleq& \{1_{\mathsf{U}},2_{\mathsf{U}},\dots,U_{\mathsf{U}}\};\quad {\mathcal{S}}_{1} \triangleq {\mathcal{I}}\times {\mathcal{D}}; \quad {\mathcal{S}}_{2} \triangleq {\mathcal{I}}\times {\mathcal{U}};\\ \mathsf{V} &\triangleq& [\!\mathsf{V}_{i,j}]_{(i,j)\in {\mathcal{S}}_{1}\cup{\mathcal{S}}_{2}}. \end{array} $$

In the DL channel, the received signal at DLU \((i,j_{\mathsf{D}})\) is expressed as:

$$\begin{array}{@{}rcl@{}} y_{i,j_{\mathsf{D}}} &\triangleq& \underbrace{\mathbf{H}_{i,i,j_{\mathsf{D}}}\mathsf{V}_{i,j_{\mathsf{D}}} \mathbf{s}_{i,j_{\mathsf{D}}}}_{{\mathsf{desired\,\, signal}}} +\underbrace{\sum_{(m,\ell_{\mathsf{D}})\in{\mathcal S}_{1}\setminus (i,j_{\mathsf{D}})} \mathbf{H}_{m,i,j_{\mathsf{D}}}\mathsf{V}_{m,\ell_{\mathsf{D}}} \mathbf{s}_{m,\ell_{\mathsf{D}}}}_{{\mathsf{DL\,\, interference}}}\\ &&+\underbrace{\sum_{\ell_{\mathsf{U}}\in{\mathcal U}}\mathbf{H}_{i,j_{\mathsf{D}},\ell_{\mathsf{U}}}\mathsf{V}_{i,\ell_{\mathsf{U}}}\mathbf{s}_{i,\ell_{\mathsf{U}}}}_{{\mathsf{UL\,\,intracell\,\, interference}}} + \mathbf{n}_{i,j_{\mathsf{D}}}, \end{array} $$

where \(\mathbf{H}_{m,i,j_{\mathsf{D}}}\in\mathbb{C}^{N_{r}\times{N}_{1}}\) and \(\mathbf{H}_{i,j_{\mathsf{D}},\ell_{\mathsf{U}}}\in\mathbb{C}^{N_{r}\times N_{r}}\) are the channel matrices from BS m to DLU \((i,j_{\mathsf{D}})\) and from ULU \((i,\ell_{\mathsf{U}})\) to DLU \((i,j_{\mathsf{D}})\), respectively. Also, \(\mathbf{n}_{i,j_{\mathsf{D}}}\) is the additive white Gaussian noise (AWGN) sample, modeled as a circularly symmetric complex Gaussian random variable with variance \(\sigma_{\mathsf{D}}^{2}\). Suppose that each BS i employs a dirty-paper coding (DPC)-based transmission strategy (see, e.g., [31]) in broadcasting signals to its users. Then the corresponding DL throughput for user \((i,j_{\mathsf{D}})\) is [32, eq.
(4)]

$$ f_{i,j_{\mathsf{D}}}(\mathsf{V}) \triangleq \ln\left|\mathbf{I}_{N_{r}}+{\mathcal L}_{i,j_{\mathsf{D}}}(\mathsf{V}_{i,j_{\mathsf{D}}}){\mathcal L}_{i,j_{\mathsf{D}}}^{H}(\mathsf{V}_{i,j_{\mathsf{D}}}) \Psi_{i,j_{\mathsf{D}}}^{-1}(\mathsf{V})\right|, $$

where \({\mathcal{L}}_{i,j_{\mathsf{D}}}(\mathsf{V}_{i,j_{\mathsf{D}}}) \triangleq \mathbf{H}_{i,i,j_{\mathsf{D}}}\mathsf{V}_{i,j_{\mathsf{D}}}\) and

$$\begin{array}{@{}rcl@{}} {\mathcal{L}}_{i,j_{\mathsf{D}}}(\mathsf{V}_{i,j_{\mathsf{D}}}){\mathcal L}^{H}_{i,j_{\mathsf{D}}}(\mathsf{V}_{i,j_{\mathsf{D}}}) &=& \mathbf{H}_{i,i,j_{\mathsf{D}}}\mathsf{V}_{i,j_{\mathsf{D}}}\mathsf{V}_{i,j_{\mathsf{D}}}^{H}\mathbf{H}_{i,i,j_{\mathsf{D}}}^{H}, \end{array} $$

$$\begin{array}{@{}rcl@{}} \Psi_{i,j_{\mathsf{D}}}(\mathsf{V}) &\triangleq &\sum_{(m,\ell_{\mathsf{D}})\in{\mathcal S}_{1}\setminus\{(i,\ell_{\mathsf{D}}), \ell=j, \ldots,D\}} \mathbf{H}_{m,i,j_{\mathsf{D}}}\mathsf{V}_{m,\ell_{\mathsf{D}}}\mathsf{V}_{m,\ell_{\mathsf{D}}}^{H}\mathbf{H}_{m,i,j_{\mathsf{D}}}^{H} \\ &&+\sum_{\ell_{\mathsf{U}}\in{\mathcal{U}}}\mathbf{H}_{i,j_{\mathsf{D}},\ell_{\mathsf{U}}}\mathsf{V}_{i,\ell_{\mathsf{U}}}\mathsf{V}_{i,\ell_{\mathsf{U}}}^{H}\mathbf{H}_{i,j_{\mathsf{D}},\ell_{\mathsf{U}}}^{H}+ \sigma_{\mathsf{D}}^{2}\mathbf{I}_{N_{r}}. \end{array} $$

Note that DPC-based broadcasting is a capacity-achieving transmission, which enables user \((i,j_{\mathsf{D}})\) to view the term \(\sum_{k_{\mathsf{D}}<j_{\mathsf{D}}}\mathbf{H}_{i,i,j_{\mathsf{D}}}\mathsf{V}_{i,k_{\mathsf{D}}}\mathbf{s}_{i,k_{\mathsf{D}}}\) as known non-causally and thus to remove it from the interference in (3) [33, Lemma 1]. This term is still present under conventional broadcast, for which the interference mapping \(\Psi_{i,j_{\mathsf{D}}}(\mathsf{V})\) in (3) becomes

$$\begin{array}{@{}rcl@{}} \Psi_{i,j_{\mathsf{D}}}(\mathsf{V}) &\triangleq &\sum_{(m,\ell_{\mathsf{D}})\in{\mathcal S}_{1}\setminus\{(i,j_{\mathsf{D}})\}} \mathbf{H}_{m,i,j_{\mathsf{D}}}\mathsf{V}_{m,\ell_{\mathsf{D}}}\mathsf{V}_{m,\ell_{\mathsf{D}}}^{H}\mathbf{H}_{m,i,j_{\mathsf{D}}}^{H} \\ &&+\sum_{\ell_{\mathsf{U}}\in{\mathcal U}}\mathbf{H}_{i,j_{\mathsf{D}},\ell_{\mathsf{U}}}\mathsf{V}_{i,\ell_{\mathsf{U}}}\mathsf{V}_{i,\ell_{\mathsf{U}}}^{H}\mathbf{H}_{i,j_{\mathsf{D}},\ell_{\mathsf{U}}}^{H}+ \sigma_{\mathsf{D}}^{2}\mathbf{I}_{N_{r}}. \end{array} $$

It is pointed out that our development below is still applicable to the case of conventional broadcast.
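For illustration only (this sketch is not part of the paper), the DL throughput formula can be evaluated numerically for randomly drawn channels and precoders; the dimensions, scaling, and noise level below are arbitrary choices, not the paper's simulation setup:

import numpy as np

rng = np.random.default_rng(0)
Nr, N1, d1 = 2, 4, 2              # arbitrary small dimensions
sigma_D2 = 1.0                    # noise variance sigma_D^2

def crandn(*shape):
    # i.i.d. circularly symmetric complex Gaussian entries
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = crandn(Nr, N1)                # direct channel to the user
V = 0.5 * crandn(N1, d1)          # its precoder
L = H @ V                         # the L(V) term in the formula

# Interference-plus-noise covariance Psi: one interfering DL precoder and
# one UL intracell interferer, all drawn at random for illustration.
A = crandn(Nr, N1) @ (0.5 * crandn(N1, d1))
B = crandn(Nr, Nr) @ (0.5 * crandn(Nr, 1))
Psi = A @ A.conj().T + B @ B.conj().T + sigma_D2 * np.eye(Nr)

sign, logdet = np.linalg.slogdet(np.eye(Nr) + L @ L.conj().T @ np.linalg.inv(Psi))
print(logdet)                     # the throughput in nats per channel use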
In the UL channel, the received signal at BS i can be expressed as

$${\kern-16.5pt} \begin{aligned} y_{i} \triangleq& \underbrace{\sum_{\ell_{\mathsf{U}}\in{\mathcal U}} \mathbf{H}_{i,\ell_{\mathsf{U}},i}\mathsf{V}_{i,\ell_{\mathsf{U}}} \mathbf{s}_{i,\ell_{\mathsf{U}}}}_{{\mathsf{desired\,\, signal}}} +\underbrace{\sum_{m\in{\mathcal I}\setminus \{i\}}\sum_{\ell_{\mathsf{U}}\in{\mathcal U}}\mathbf{H}_{m,\ell_{\mathsf{U}},i}\mathsf{V}_{m,\ell_{\mathsf{U}}} \mathbf{s}_{m,\ell_{\mathsf{U}}}}_{{\mathsf{UL\,\, interference}}}\\ &\!\!+\underbrace{\mathbf{H}_{i}^{{\mathcal S}{\mathcal I}}\sum_{\ell_{\mathsf{D}}\in{\mathcal D}}\mathsf{V}_{i,\ell_{\mathsf{D}}} \tilde{\mathbf{s}}_{i,\ell_{\mathsf{D}}}}_{{\mathsf{residual\,\, SI}}} +\!\underbrace{\sum_{m\in{\mathcal I}\setminus\{i\}}\!\mathbf{H}_{m,i}^{{\mathcal B}}\sum_{j_{\mathsf{D}}\in{\mathcal D}}\!\mathsf{V}_{m,j_{\mathsf{D}}}\mathbf{s}_{m,j_{\mathsf{D}}}}_{{\mathsf{DL\,\,intercell\,\, interference}}}+ \mathbf{n}_{i}, \end{aligned} $$

where \(\mathbf{H}_{m,\ell_{\mathsf{U}},i}\in\mathbb{C}^{{N}_{2}\times N_{r}}\) and \(\mathbf{H}^{{\mathcal B}}_{m,i}\in\mathbb{C}^{{N}_{2}\times{N}_{1}}\) are the channel matrices from ULU \((m,\ell_{\mathsf{U}})\) to BS i and from BS m to BS i, respectively. The channel matrix \(\mathbf{H}_{i}^{{\mathcal S}{\mathcal I}}\in\mathbb{C}^{{N}_{2}\times{N}_{1}}\) represents the residual self-loop channel from the transmit antennas to the receive antennas at BS i after all real-time interference cancelations in both analog and digital domains [22, 34] are accounted for (a more detailed discussion on modelling the SI channel can be found in [22, 34]). The additive Gaussian noise vector \(\tilde{\mathbf{s}}_{i,\ell_{\mathsf{D}}}\) with \(\mathbb{E}\left[\tilde{\mathbf{s}}_{i,j_{\mathsf{D}}}(\tilde{\mathbf{s}}_{i,j_{\mathsf{D}}})^{H}\right]=\sigma_{\text{SI}}^{2}\mathbf{I}_{d_{1}}\) models the effects of the analog circuit's non-ideality and the limited dynamic range of the analog-to-digital converter (ADC) [19, 23, 34, 35]. The SI level \(\sigma_{\text{SI}}^{2}\) is the ratio of the average SI powers before and after the SI cancelation process. Lastly, \(\mathbf{n}_{i}\) is the AWGN sample, modeled as a circularly symmetric complex Gaussian random variable with variance \(\sigma_{\mathsf{U}}^{2}\).

By treating the entries of the self-loop channel \(\mathbf{H}_{i}^{{\mathcal{S}}{\mathcal{I}}}\) in (7) as independent circularly symmetric complex Gaussian random variables with zero mean and unit variance, the power of the residual SI in (7) is

$${} {{\begin{aligned} \sigma_{\text{SI}}^{2}\mathbb{E}\left\{\mathbf{H}_{i}^{{\mathcal{S}}{\mathcal I}}\left(\sum_{\ell_{\mathsf{D}}\in{\mathcal D}}\mathsf{V}_{i,\ell_{\mathsf{D}}}\mathsf{V}_{i,\ell_{\mathsf{D}}}^{H}\right)(\mathbf{H}_{i}^{{\mathcal S}{\mathcal I}})^{H}\right\} = \sigma_{\text{SI}}^{2}\left(\sum_{\ell_{\mathsf{D}}\in{\mathcal D}}||\mathsf{V}_{i,\ell_{\mathsf{D}}}||^{2}\right) \mathbf{I}_{N_{2}}. \end{aligned}}} $$

It is important to point out that the above power expression only depends on the BS transmit power, and it cannot be changed by the precoder matrices \(\mathsf{V}_{i,\ell_{\mathsf{D}}}\). Given that the minimum mean square error–successive interference cancelation (MMSE-SIC) detector is the most popular detection method in uplink communications, this type of receiver is also adopted in this paper.
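As a quick numerical check of the residual-SI power expression above (a sketch with arbitrary small dimensions, not from the paper), one can average H Q H^H over random draws of the self-loop channel and compare the result with Trace(Q)·I, where Q is the sum of the DL precoder products:

import numpy as np

rng = np.random.default_rng(1)
N1, N2, d1 = 3, 2, 2          # arbitrary small dimensions

def crandn(*shape):
    # i.i.d. circularly symmetric complex Gaussian entries, unit variance
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

V = crandn(N1, d1)            # one DL precoder, kept fixed
Q = V @ V.conj().T            # plays the role of the sum of V V^H terms

trials = 10000
acc = np.zeros((N2, N2), dtype=complex)
for _ in range(trials):
    H = crandn(N2, N1)        # random self-loop channel draw
    acc += H @ Q @ H.conj().T

print(np.round(acc / trials, 2))    # approximately Trace(Q) * I
print(round(np.trace(Q).real, 2))   # = sum of ||V||^2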
Under the MMSE-SIC receiver, the achievable uplink throughput at BS i is given as [36]

$$\begin{array}{*{20}l} f_{i}(\mathsf{V}) &\triangleq \ln\left|\mathbf{I}_{{N}_{2}} + {\mathcal L}_{i}(\mathsf{V}_{{\mathsf{U}} i}){\mathcal L}_{i}^{H}(\mathsf{V}_{{\mathsf{U}} i})\Psi_{i}^{-1}(\mathsf{V})\right|, \end{array} $$

where

$$ \begin{aligned} \mathsf{V}_{{\mathsf{U}} i} &\triangleq (\mathsf{V}_{i,\ell_{\mathsf{U}}})_{\ell_{\mathsf{U}}\in{\mathcal U}},\\ {\mathcal L}_{i}(\mathsf{V}_{{\mathsf{U}} i}) &\triangleq \left[ \mathbf{H}_{i,1_{\mathsf{U}},i}\mathsf{V}_{i,1_{\mathsf{U}}}, \mathbf{H}_{i,2_{\mathsf{U}},i}\mathsf{V}_{i,2_{\mathsf{U}}}, \dots, \mathbf{H}_{i,U_{\mathsf{U}},i}\mathsf{V}_{i,U_{\mathsf{U}}} \right], \end{aligned} $$

$$ \begin{aligned} {\mathcal{L}}_{i}(\mathsf{V}_{{\mathsf{U}} i}){\mathcal L}_{i}^{H}(\mathsf{V}_{{\mathsf{U}} i}) &= \sum_{\ell=1}^{U}\mathbf{H}_{i,\ell_{\mathsf{U}},i}\mathsf{V}_{i,\ell_{\mathsf{U}}}\mathsf{V}_{i,\ell_{\mathsf{U}}}^{H}\mathbf{H}_{i,\ell_{\mathsf{U}},i}^{H}, \end{aligned} $$

and

$${\kern-16.5pt} \begin{aligned} \Psi_{i}(\mathsf{V}) \triangleq& \sum_{m\in{\mathcal I}\setminus \{i\}}\sum_{\ell_{\mathsf{U}}\in{\mathcal U}}\mathbf{H}_{m,\ell_{\mathsf{U}},i}\mathsf{V}_{m,\ell_{\mathsf{U}}} \mathsf{V}_{m,\ell_{\mathsf{U}}}^{H}\mathbf{H}_{m,\ell_{\mathsf{U}},i}^{H}+\sigma_{\text{SI}}^{2}\left(\sum_{\ell_{\mathsf{D}}\in{\mathcal D}}||\mathsf{V}_{i,\ell_{\mathsf{D}}}||^{2}\right) \mathbf{I}_{N_{2}}\\ &+\sum_{m\in{\mathcal I}\setminus\{i\}}\mathbf{H}_{m,i}^{{\mathcal B}}\!\left(\sum_{j_{\mathsf{D}}\in{\mathcal D}}\mathsf{V}_{m,j_{\mathsf{D}}}\mathsf{V}_{m,j_{\mathsf{D}}}^{H}\!\!\right)\!(\mathbf{H}_{m,i}^{{\mathcal B}})^{H} +\sigma_{\mathsf{U}}^{2}\mathbf{I}_{{N}_{2}}. \end{aligned} $$

Following [37], the consumed power \(P^{\text{tot}}_{i}\) of cell i can be modeled as

$$ P^{\text{tot}}_{i}(\mathsf{V}) = \zeta P^{t}_{i}(\mathsf{V}) + P^{\text{BS}} + UP^{\text{UE}}, $$

where \(P^{t}_{i}(\mathsf{V})\triangleq \underset{{j_{\mathsf{D}}\in{\mathcal D}}}{\sum} ||\mathsf{V}_{i,j_{\mathsf{D}}}||^{2}+\underset{j_{\mathsf{U}}\in{\mathcal U}}{\sum} ||\mathsf{V}_{i,j_{\mathsf{U}}}||^{2}\) is the total transmit power of the BS and UEs in cell i and ζ is the reciprocal of the drain efficiency of the power amplifier. Also, \(P^{\text{BS}}=N_{1}P_{b}\) and \(P^{\text{UE}}=N_{r}P_{u}\) are the circuit powers of the BS and each UE, respectively, where \(P_{b}\) and \(P_{u}\) represent the per-antenna circuit powers of the BS and UEs, respectively. Consequently, the energy efficiency of cell i is defined by

$$ \frac{\underset{j_{\mathsf{D}}\in{\mathcal D}}{\sum}f_{i,j_{\mathsf{D}}}(\mathsf{V})+f_{i}(\mathsf{V})}{P^{\text{tot}}_{i}(\mathsf{V})}. $$
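To make the power model and EE definition above concrete, here is a toy numeric sketch (all values below, including ζ, the circuit powers, and the rates, are invented for illustration):

ZETA = 2.5            # hypothetical 1/(drain efficiency) of the power amplifier
P_b, P_u = 1.0, 0.1   # hypothetical per-antenna circuit powers (W)
N1, Nr, U = 4, 2, 2   # hypothetical antenna counts and number of ULUs

P_BS = N1 * P_b       # BS circuit power
P_UE = Nr * P_u       # per-UE circuit power
P_t = 3.0             # hypothetical total transmit power of BS and UEs (W)

P_tot = ZETA * P_t + P_BS + U * P_UE
sum_rates = 6.0       # hypothetical sum of DL and UL throughputs
print(sum_rates / P_tot)   # the cell's energy efficiency, about 0.5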
In this paper, we consider the following precoder design to optimize the network's energy efficiency: $$\begin{array}{*{20}l} \max_{\mathsf{V}}\ \min_{i\in {\mathcal I}}\frac{\underset{j_{\mathsf{D}}\in{\mathcal D}}{\sum}f_{i,j_{\mathsf{D}}}(\mathsf{V})+f_{i}(\mathsf{V})} {P^{\text{tot}}_{i}(\mathsf{V})} & \mathrm{s.t.} \end{array} $$ (15a) $$\begin{array}{*{20}l} \sum_{j_{\mathsf{D}}\in{\mathcal D}}|| \mathsf{V}_{i,j_{\mathsf{D}}}||^{2} \le P^{\max}_{\text{BS}}, \ i\in{\mathcal I},& \end{array} $$ (15b) $$\begin{array}{*{20}l} ||\mathsf{V}_{i,j_{\mathsf{U}}}||^{2} \le P^{\max}_{\text{UE}}, \ (i,j_{\mathsf{U}})\in{\mathcal S}_{2},& \end{array} $$ (15c) $$\begin{array}{*{20}l} f_{i,j_{\mathsf{D}}}(\mathsf{V}) \ge r_{i,j_{\sf D}}^{\min}, \ (i,j_{\mathsf{D}})\in{\mathcal S}_{1} & \end{array} $$ (15d) $$\begin{array}{*{20}l} f_{i}(\mathsf{V}) \ge r_{i}^{{\mathsf{U}},\min}, \ i\in{\mathcal I},& \end{array} $$ (15e) where (15b)-(15c) limit the transmit powers for each BS and ULU, while (15d)-(15e) are the QoS constraints for both downlink and uplink transmissions. On the other hand, the problem of optimizing the energy efficiency in DL transmission only is formulated as follows: $$\begin{array}{*{20}l} {}\max_{\mathsf{V}^{\text{DL}}=[\mathsf{V}_{i,j_{{\mathsf{D}}}}]_{(i,j_{{\mathsf{D}}})\in{\mathcal S}_{1}}}\min_{i\in {\mathcal I}}\frac{\underset{j_{\mathsf{D}} \in{\mathcal D}}{\sum}f_{i,j_{\mathsf{D}}}^{\text{DL}}(\mathsf{V}^{\text{DL}})} {\zeta\underset{j_{\mathsf{ D}}\in{\mathcal D}}{\sum}||\mathsf{V}_{i,j_{\mathsf{ D}}}||^{2} + P^{\text{BS}}}\ \mathrm{s.t.}\,\,\,(15b),& \end{array} $$ $$\begin{array}{*{20}l} f_{i,j_{\mathsf{ D}}}^{\text{DL}}(\mathsf{V}^{\text{DL}})\ge r_{i,j_{\mathsf{ D}}}^{\min}, \ (i,j_{\mathsf{ D}})\in{\mathcal S}_{1}& \end{array} $$ where $$\begin{array}{@{}rcl@{}} f_{i,j_{\mathsf{D}}}^{\text{DL}}(\mathsf{V}^{\text{DL}})&\triangleq&\ln\left|\mathbf{I}_{N_{r}}+\mathbf{H}_{i,i,j_{\mathsf {D}}}\mathsf{V}_{i,j_{\mathsf{D}}}\mathsf{V}_{i,j_{\mathsf{D}}}^{H} \mathbf{H}_{i,i,j_{\mathsf{D}}}^{H}\right.\\ &&\times (\sum_{(m,\ell_{\mathsf{ D}})\in{\mathcal S}_{1}\setminus\{(i,\ell_{\mathsf {D}}), \ell=j, \ldots,D\}} \mathbf{H}_{m,i,j_{\mathsf{ D}}}\mathsf{V}_{m,\ell_{\mathsf{ D}}}\\ &&\times \left.\left.\mathsf{V}_{m,\ell_{\mathsf{ D}}}^{H}\mathbf{H}_{m,i,j_{\mathsf{ D}}}^{H}+\sigma_{\mathsf{ D}}^{2}\mathbf{I}_{N_{r}} \right)^{-1}\right|.
\end{array} $$ Likewise, the problem of optimizing the energy efficiency in the UL transmission only is $$\begin{array}{*{20}l} \max_{\mathsf{V}^{\text{UL}}=[\mathsf{V}_{i,\ell_{{\mathsf{ U}}}}]_{(i,\ell_{{\mathsf{ U}}})\in{\mathcal S}_{2}}}\min_{i\in {\mathcal I}}\frac{f_{i}^{\text{UL}}(\mathsf{V}^{\text{UL}})} {\zeta\underset{\ell_{\sf U}\in{\mathcal U}}{\sum} ||\mathsf{V}_{i,\ell_{\mathsf{U}}}||^{2} + UP^{\text{UE}}}\ \ \mathrm{s.t.}\,\,\,\, {(15c)},& \end{array} $$ $$\begin{array}{*{20}l} f_{i}^{\text{UL}}(\mathsf{V}^{\text{UL}})\geq r_{i}^{{\mathsf{ U}},\min}, \ i\in{\mathcal I},& \end{array} $$ $${} {{\begin{aligned} f_{i}^{\text{UL}}(\mathsf{V}^{\text{UL}})&\triangleq \ln\left|{\vphantom{\left(\underset{m\in{\mathcal I}\setminus \{i\}}{\sum}\sum_{\ell_{\mathsf{ U}}\in{\mathcal U}}\mathbf{H}_{m,\ell_{\mathsf{ U}},i}\mathsf{V}_{m,\ell_{\mathsf{ U}}} \mathsf{V}_{m,\ell_{\mathsf{U}}}^{H}\mathbf{H}_{m,\ell_{\mathsf{ U}},i}^{H}+\sigma_{\mathsf{ U}}^{2}\mathbf{I}_{{N}_{2}}\right)}}\mathbf{I}_{{N}_{2}} +\sum_{\ell=1}^{U}\mathbf{H}_{i,\ell_{\mathsf{ U}},i}\mathsf{V}_{i,\ell_{\mathsf{U}}}\mathsf{V}_{i,\ell_{\mathsf{ U}}}^{H}\mathbf{H}_{i,\ell_{\mathsf{U}},i}^{H}\right.\\ &\left.\quad\!\times\left(\underset{m\in{\mathcal I}\setminus \{i\}}{\sum}\sum_{\ell_{\mathsf{ U}}\in{\mathcal U}}\!\mathbf{H}_{m,\ell_{\mathsf{ U}},i}\mathsf{V}_{m,\ell_{\mathsf{ U}}} \mathsf{V}_{m,\ell_{\mathsf{U}}}^{H}\mathbf{H}_{m,\ell_{\mathsf{ U}},i}^{H}+\sigma_{\mathsf{ U}}^{2}\mathbf{I}_{{N}_{2}}\!\!\right)^{-1}\right|. \end{aligned}}} $$ As discussed before, for the downlink EE optimization problem (16), references [29] and [30] apply zero-forcing precoders so that all the interference terms in (5) are completely canceled, making \(f_{i,j_{\sf D}}^{\text {DL}}(\mathsf {V}^{\text {DL}})=\ln \left |\mathbf {I}_{N_{r}}+\mathbf {H}_{i,i,j_{\mathsf {D}}}\mathsf {V}_{i,j_{\mathsf { D}}}\mathsf {V}_{i,j_{\mathsf { D}}}^{H} \mathbf {H}_{i,i,j_{\mathsf { D}}}^{H}/\sigma _{{\mathsf { D}}}^{2}\right |\). Then by making the variable change \(\mathsf {X}_{i,j_{\mathsf { D}}}=\mathsf {V}_{i,j_{\mathsf { D}}}\mathsf {V}_{i,j_{\mathsf { D}}}^{H}\), the EE optimization for zero-forcing precoders becomes: $$\begin{array}{*{20}l} &\!\!\!\!\!\!\!\!\!\!\!\!\max_{\mathsf{X}^{\text{DL}}=[\mathsf{X}_{i,j_{{\mathsf{D}}}}]_{(i,j_{{\mathsf{ D}}})\in{\mathcal S}_{1}}}\min_{i\in {\mathcal I}}\\ &\times\frac{\underset{j_{\sf D}\in{\mathcal D}}{\sum} \ln\left|\mathbf{I}_{N_{r}}+\mathbf{H}_{i,i,j_{\mathsf{ D}}}\mathsf{X}_{i,j_{\mathsf{ D}}}^{H} \mathbf{H}_{i,i,j_{\mathsf{ D}}}^{H}/\sigma_{{\mathsf{ D}}}^{2}\right|} {\zeta\underset{j_{\sf D}\in{\mathcal D}}{\sum}\text{Trace}(\mathsf{X}_{i,j_{\mathsf {D}}}) + P^{\text{BS}}}\quad \mathrm{s.t.}\quad \end{array} $$ $$\begin{array}{*{20}l} &\sum_{j_{\mathsf {D}}\in{\mathcal D}}\text{Trace}(\mathsf{X}_{i,j_{\mathsf{ D}}}) \le P^{\max}_{\text{BS}}, \ i\in{\mathcal I}, \end{array} $$ $$\begin{array}{*{20}l} &\ln\left|\mathbf{I}_{N_{r}}+\mathbf{H}_{i,i,j_{\mathsf{ D}}}\mathsf{X}_{i,j_{\mathsf{ D}}}^{H} \mathbf{H}_{i,i,j_{\mathsf{D}}}^{H}/\sigma_{{\mathsf{ D}}}^{2}\right|\ge r_{i,j_{\mathsf{ D}}}^{\min}, \ (i,j_{\mathsf{D}})\in{\mathcal S}_{1}, \end{array} $$ $$\begin{array}{*{20}l} &\mathsf{X}^{\text{DL}}\in Z_{zf}, \end{array} $$ where the last linear constraint (20d) is to explicitly specify a zero-forcing precoder. Since the numerator in the objective (20a) is concave in \(\mathsf {X}_{i,j_{\sf D}}\), the problem expressed in (20) is maximin optimization of concave-convex function ratios. 
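This concave-over-convex ratio structure is exactly what the Dinkelbach-type algorithm discussed next exploits. As a toy illustration of that algorithm's idea in a single scalar variable (our own example, unrelated to the paper's setup):

import numpy as np

# Maximize f(x)/g(x) with f concave, g convex: Dinkelbach iterates
# gamma <- f(x*)/g(x*), where x* solves max_x [f(x) - gamma*g(x)].
xs = np.linspace(0.01, 5.0, 2001)     # crude grid "solver" for the inner step
f = np.log(1.0 + xs)                  # a rate-like concave numerator
g = 0.5 * xs + 1.0                    # a power-like convex denominator

gamma = 0.0
for _ in range(20):
    k = np.argmax(f - gamma * g)      # inner maximization (grid search here)
    gamma = f[k] / g[k]               # ratio update; gamma is nondecreasing
print(gamma)                          # about 0.557, the maximum of f/g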
To solve such a problem, references [29] and [30] use the Dinkelbach-type algorithm [25]. Specifically, the optimal value of (20) is found as the maximum of γ for which the optimal value of the following convex program is nonnegative:

$${} {{\begin{aligned} \max_{\mathsf{X}^{\text{DL}}=[\mathsf{X}_{i,j_{{\mathsf{D}}}}]_{(i,j_{{\mathsf{D}}})\in{\mathcal S}_{1}}}\min_{i\in {\mathcal I}} \left[\sum_{j_{\mathsf{D}}\in{\mathcal D}} \ln\left|\mathbf{I}_{N_{r}}+\mathbf{H}_{i,i,j_{\mathsf{D}}}\mathsf{X}_{i,j_{\mathsf{D}}}^{H} \mathbf{H}_{i,i,j_{\mathsf{D}}}^{H}/\sigma_{{\mathsf{D}}}^{2}\right|\right.\\\left. -\gamma\left(\zeta\sum_{j_{\mathsf{D}}\in{\mathcal D}}\text{Trace}(\mathsf{X}_{i,j_{\mathsf{D}}}) + P^{\text{BS}}\right)\right] \quad \mathrm{s.t.}\ \ \ (20b), (20c), (20d). \end{aligned}}} $$

It should be noted that, although being convex for fixed γ, the program (21) is still computationally difficult. This is because the concave objective function and the convex constraints (20c) in (21) involve log-det functions. In fact, no polynomial-time algorithms are known to find the solution. Another issue is that the zero-forcing constraint (20d) in (21) would rule out the effectiveness of the optimization, unless the total number (N·I) of the BSs' antennas is much larger than the total number \((I\cdot D\cdot N_{r})\) of the DLUs' antennas.

The optimal value of (15) is still the maximum of γ>0 such that the optimal value of the following program is nonnegative:

$${\kern-15.9pt} {{\begin{aligned} \max_{\mathsf{V}}\ \min_{i\in {\mathcal I}}\ \left[\underset{j_{\mathsf{D}}\in{\mathcal D}}{\sum}f_{i,j_{\mathsf{D}}}(\mathsf{V})+f_{i}(\mathsf{V})-\gamma P^{\text{tot}}_{i}(\mathsf{V})\right]\quad\mathrm{s.t.}\quad \text{(15b)-(15e)}. \end{aligned}}} $$

However, problem (22) is a very difficult nonconvex optimization even for a fixed γ>0 because its objective function is obviously nonconcave while its constraints (15e) are highly nonconvex. In fact, one can see that (22) for a fixed γ is not easier than the original nonconvex optimization problem (15). In the next section, we will develop a path-following procedure for computing the solution of (15) that avoids the setting (22).

Path-following quadratic programming

With the newly introduced variable \(\mathsf{t}=(\mathsf{t}_{1},\ldots,\mathsf{t}_{I})\), \(\mathsf{t}_{i}>0, i=1,2,\ldots,I\), and under the convex quadratic constraints

$$ \zeta\left(\underset{j_{\mathsf{D}}\in{\mathcal D}}{\sum}|| \mathsf{V}_{i,j_{\mathsf{D}}}||^{2}+\sum_{j_{\mathsf{U}}\in{\mathcal U}} ||\mathsf{V}_{i,j_{\mathsf{U}}}||^{2}\right)+P^{\text{BS}}+UP^{\text{UE}}\leq \mathsf{t}_{i}, \ i\in{\mathcal I}, $$

problem (15) is equivalently expressed by

$$\begin{array}{@{}rcl@{}} &\underset{\mathsf{V},\mathsf{t}}{\max}\ {\mathcal P}(\mathsf{V},\mathsf{t})\triangleq \min_{i\in {\mathcal I}}\frac{\sum_{j_{\mathsf{D}}\in{\mathcal D}}f_{i,j_{\mathsf{D}}}(\mathsf{V})+f_{i}(\mathsf{V})} {\mathsf{t}_{i}}&\\ & \mathrm{s.t.}\quad \text{(15b), (15c), (15d), (15e), (23)}. \end{array} $$

Define

$$ \begin{array}{lll} {\mathcal M}_{i,j_{\mathsf{D}}}(\mathsf{V}) &\triangleq&{\mathcal L}_{i,j_{\mathsf{D}}}(\mathsf{V}_{i,j_{\mathsf{D}}}){\mathcal L}_{i,j_{\mathsf{D}}}^{H}(\mathsf{V}_{i,j_{\mathsf{D}}}) +\Psi_{i,j_{\mathsf{D}}}(\mathsf{V})\\ & \succeq& \Psi_{i,j_{\mathsf{D}}}(\mathsf{V}), \end{array} $$

$$ \begin{array}{lll} {\mathcal M}_{i}(\mathsf{V}) &\triangleq& {\mathcal L}_{i}(\mathsf{V}_{{\mathsf{U}} i}){\mathcal L}_{i}^{H}(\mathsf{V}_{{\mathsf{U}} i})+\Psi_{i}(\mathsf{V})\\ & \succeq& \Psi_{i}(\mathsf{V}).
\end{array} $$

At \(\mathbf{V}^{(\kappa)}\triangleq \left[\mathbf{V}^{(\kappa)}_{i,j}\right]_{(i,j)\in {\mathcal S}_{1}\cup{\mathcal S}_{2}}\), which is feasible to (15b)-(15e), define the following quadratic functions in V:

$${} {{\begin{aligned} \Theta^{(\kappa)}_{i,j_{\mathsf{D}}}(\mathsf{V}) &\triangleq a_{i,j_{\mathsf{D}}}^{(\kappa)}+2\Re\left\{\langle \Psi_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)}){\mathcal L}_{i,j_{\mathsf{D}}}(\mathbf{V}^{(\kappa)}_{i,j_{\mathsf{D}}}),{\mathcal L}_{i,j_{\mathsf{D}}}(\mathsf{V}_{i,j_{\mathsf{D}}})\rangle\right\}\\ &\quad-\langle \Psi_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)}) -{\mathcal M}_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)}), {\mathcal M}_{i,j_{\mathsf{D}}}(\mathsf{V})\rangle\\ &=a_{i,j_{\mathsf{D}}}^{(\kappa)}+2\Re\left\{\text{Trace}\left((\mathbf{V}^{(\kappa)}_{i,j_{\mathsf{D}}})^{H}\mathbf{H}_{i,i,j_{\mathsf{D}}}^{H} \Psi_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)})\mathbf{H}_{i,i,j_{\mathsf{D}}}\mathsf{V}_{i,j_{\mathsf{D}}}\right)\right\}\\ &\quad-\sum_{(m,\ell_{\mathsf{D}})\in{\mathcal S}_{1}\setminus\{(i,\ell_{\mathsf{D}}), \ell=j+1, \ldots,D\}}\! \text{Trace} \left(\mathsf{V}_{m,\ell_{\mathsf{D}}}^{H}\mathbf{H}_{m,i,j_{\mathsf{D}}}^{H}\left(\Psi_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)})\right.\right.\\ &\quad\left.\left.\!-{\mathcal M}_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)})\right)\mathbf{H}_{m,i,j_{\mathsf{D}}}\mathsf{V}_{m,\ell_{\mathsf{D}}}\right)\\ &\quad-\sum_{\ell_{\mathsf{U}}\in{\mathcal U}}\text{Trace}\left(\mathsf{V}_{i,\ell_{\mathsf{U}}}^{H} \mathbf{H}_{i,j_{\mathsf{D}},\ell_{\mathsf{U}}}^{H} \left(\Psi_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)})\right.\right.\\ &\quad\left.\left.-{\mathcal M}_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)})\right) \mathbf{H}_{i,j_{\mathsf{D}},\ell_{\mathsf{U}}} \mathsf{V}_{i,\ell_{\mathsf{U}}}\right)\\ &\quad-\sigma_{\mathsf{D}}^{2}\text{Trace}\left(\Psi_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)}) -{\mathcal M}_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)})\right) \end{aligned}}} $$

and

$${} {{\begin{aligned} \Theta^{(\kappa)}_{i}(\mathsf{V}) &\triangleq a_{i}^{(\kappa)}+2\Re\left\{\langle \Psi_{i}^{-1}(\mathbf{V}^{(\kappa)}){\mathcal L}_{i}(\mathbf{V}^{(\kappa)}_{{\mathsf{U}} i}),{\mathcal L}_{i}(\mathsf{V}_{{\mathsf{U}} i})\rangle\right\}\\ &\quad-\langle \Psi_{i}^{-1}(\mathbf{V}^{(\kappa)})-{\mathcal M}_{i}^{-1}(\mathbf{V}^{(\kappa)}),{\mathcal M}_{i}(\mathsf{V}) \rangle\\ &= a_{i}^{(\kappa)}+2\sum_{\ell=1}^{U}\!\Re\!\left\{\text{Trace}\left((\mathbf{V}^{(\kappa)}_{i,\ell_{\mathsf{U}}})^{H}\mathbf{H}_{i,\ell_{\mathsf{U}},i}^{H} \Psi_{i}^{-1}(\mathbf{V}^{(\kappa)})\mathbf{H}_{i,\ell_{\mathsf{U}},i}\mathsf{V}_{i,\ell_{\mathsf{U}}}\right)\!\right\}\\ &\quad-\sum_{m\in{\mathcal I}}\sum_{\ell_{\mathsf{U}}\in{\mathcal U}}\text{Trace}\left(\mathsf{V}_{m,\ell_{\mathsf{U}}}^{H}\mathbf{H}_{m,\ell_{\mathsf{U}},i}^{H} (\Psi_{i}^{-1}(\mathbf{V}^{(\kappa)})\right.\\ &\left.\quad-{\mathcal M}_{i}^{-1}(\mathbf{V}^{(\kappa)}))\mathbf{H}_{m,\ell_{\mathsf{U}},i}\mathsf{V}_{m,\ell_{\mathsf{U}}} \right)\\ &\quad-\sigma_{\text{SI}}^{2}\text{Trace}\left(\Psi_{i}^{-1}(\mathbf{V}^{(\kappa)})-{\mathcal M}_{i}^{-1}(\mathbf{V}^{(\kappa)})\right)\sum_{\ell_{\mathsf{D}}\in{\mathcal D}}||\mathsf{V}_{i,\ell_{\mathsf{D}}}||^{2}\\ &\quad-\sum_{m\in{\mathcal I}\setminus\{i\}}\sum_{j_{\mathsf{D}}\in{\mathcal D}}\text{Trace}\left(\mathsf{V}_{m,j_{\mathsf{D}}}^{H}(\mathbf{H}_{m,i}^{{\mathcal B}})^{H} (\Psi_{i}^{-1}(\mathbf{V}^{(\kappa)})\right.\\
&\left.\quad-{\mathcal M}_{i}^{-1}(\mathbf{V}^{(\kappa)}))\mathbf{H}_{m,i}^{{\mathcal B}}\mathsf{V}_{m,j_{\mathsf{D}}}\right)\\ &\quad-\sigma_{\mathsf{U}}^{2}\text{Trace}\left(\Psi_{i}^{-1}(\mathbf{V}^{(\kappa)})-{\mathcal M}_{i}^{-1}(\mathbf{V}^{(\kappa)}) \right). \end{aligned}}} $$ These functions are concave because \(\Psi_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)}) -{\mathcal M}_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)})\succeq 0\) and \(\Psi_{i}^{-1}(\mathbf{V}^{(\kappa)})-{\mathcal M}_{i}^{-1}(\mathbf{V}^{(\kappa)})\succeq 0\). Also, $${} \begin{aligned} 0>a_{i,j_{\mathsf{D}}}^{(\kappa)}&= f_{i,j_{\mathsf{D}}}(\mathbf{V}^{(\kappa)})- \langle \Psi_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)}){\mathcal L}_{i,j_{\mathsf{D}}}(\mathbf{V}^{(\kappa)}_{i,j_{\mathsf{D}}}),{\mathcal L}_{i,j_{\mathsf{D}}}(\mathbf{V}^{(\kappa)}_{i,j_{\mathsf{D}}})\rangle\\ 0>a_{i}^{(\kappa)} &= f_{i}(\mathbf{V}^{(\kappa)})-\langle \Psi_{i}^{-1}(\mathbf{V}^{(\kappa)}){\mathcal L}_{i}(\mathbf{V}^{(\kappa)}_{{\mathsf{U}} i}),{\mathcal L}_{i}(\mathbf{V}^{(\kappa)}_{{\mathsf{U}} i})\rangle, \end{aligned} $$ which follows from the inequality1 $$ \ln|\mathbf{I}+\mathbf{X}|\leq \text{Trace}(\mathbf{X}),\quad \forall\ \mathbf{X}\succeq 0. $$ The following result, established in [24], shows that the highly nonlinear and nonconcave functions \(f_{i,j_{\mathsf{D}}}(\cdot)\) and \(f_{i}(\cdot)\) in problem (15) can be globally and locally approximated by concave quadratic functions: $$\begin{array}{*{20}l} f_{i,j_{\mathsf{D}}}(\mathbf{V}^{(\kappa)})=\Theta^{(\kappa)}_{i,j_{\mathsf{D}}}(\mathbf{V}^{(\kappa)})&\, \text{and} \, f_{i,j_{\mathsf{D}}}(\mathsf{V})\geq\Theta^{(\kappa)}_{i,j_{\mathsf{D}}}(\mathsf{V})\quad\forall\ \mathsf{V}, \end{array} $$ $$\begin{array}{*{20}l} f_{i}(\mathbf{V}^{(\kappa)})=\Theta^{(\kappa)}_{i}(\mathbf{V}^{(\kappa)})&\,\text{and}\, f_{i}(\mathsf{V})\geq\Theta^{(\kappa)}_{i}(\mathsf{V})\quad\forall\ \mathsf{V}. \end{array} $$ It follows from the above result that the nonconvex QoS constraints (15d) and (15e) can be inner-approximated by the following convex quadratic constraints: $$ \Theta^{(\kappa)}_{i,j_{\mathsf{D}}}(\mathsf{V}) \geq r_{i,j_{\mathsf{D}}}^{\min}, \ (i,j_{\mathsf{D}})\in{\mathcal S}_{1}; \quad \Theta^{(\kappa)}_{i}(\mathsf{V}) \geq r_{i}^{{\mathsf{U}},\min}, \ i\in{\mathcal I}. $$ These constraints also yield $${} {{\begin{aligned} \Re\left\{\langle \Psi_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)}){\mathcal L}_{i,j_{\mathsf{D}}}(\mathbf{V}^{(\kappa)}_{i,j_{\mathsf{D}}}),{\mathcal L}_{i,j_{\mathsf{D}}}(\mathsf{V}_{i,j_{\mathsf{D}}})\rangle\right\}&\geq\\ -a_{i,j_{\mathsf{D}}}^{(\kappa)}+\langle \Psi_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)}) -{\mathcal M}_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)}), {\mathcal M}_{i,j_{\mathsf{D}}}(\mathsf{V})\rangle&\geq 0, \ (i,j_{\mathsf{D}})\in{\mathcal S}_{1}, \end{aligned}}} $$ $${} {{\begin{aligned} \Re\left\{\langle \Psi_{i}^{-1}(\mathbf{V}^{(\kappa)}){\mathcal L}_{i}(\mathbf{V}^{(\kappa)}_{{\mathsf{U}} i}),{\mathcal L}_{i}(\mathsf{V}_{{\mathsf{U}} i})\rangle\right\}&\geq&\\ -a_{i}^{(\kappa)}+\langle \Psi_{i}^{-1}(\mathbf{V}^{(\kappa)})-{\mathcal M}_{i}^{-1}(\mathbf{V}^{(\kappa)}),{\mathcal M}_{i}(\mathsf{V}) \rangle&\geq& 0, \ i\in{\mathcal I}.
\end{aligned}}} $$ Therefore, by using the inequality2 $${} {{\begin{aligned} \frac{x}{t_{i}}\geq 2\frac{\sqrt{x^{(\kappa)}}\sqrt{x}}{t_{i}^{(\kappa)}}-\frac{x^{(\kappa)}}{(t_{i}^{(\kappa)})^{2}}t_{i}\quad\forall x>0, x^{(\kappa)}>0, t_{i}>0, t_{i}^{(\kappa)}>0, \end{aligned}}} $$ it follows that $${} \begin{array}{rll} \displaystyle\frac{\Re\left\{\langle \Psi_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)}){\mathcal L}_{i,j_{\mathsf{D}}}(\mathbf{V}^{(\kappa)}_{i,j_{\mathsf{D}}}), {\mathcal L}_{i,j_{\mathsf{D}}}(\mathsf{V}_{i,j_{\mathsf{D}}})\rangle\right\}}{\mathsf{t}_{i}} &\geq&\varphi_{i,j_{\mathsf{D}}}^{(\kappa)}(\mathsf{V}_{i,j_{\mathsf{D}}},\mathsf{t}_{i}),\\ \frac{\Re\left\{\langle \Psi_{i}^{-1}(\mathbf{V}^{(\kappa)}){\mathcal L}_{i}(\mathbf{V}^{(\kappa)}_{{\mathsf{U}} i}),{\mathcal L}_{i}(\mathsf{V}_{{\mathsf{U}} i})\rangle\right\}}{\mathsf{t}_{i}}&\geq&\varphi^{(\kappa)}_{i}(\mathsf{V}_{{\mathsf{U}} i},\mathsf{t}_{i}), \end{array} $$ where $${} {{\begin{aligned} \varphi_{i,j_{\mathsf{D}}}^{(\kappa)}(\mathsf{V}_{i,j_{\mathsf{D}}},\mathsf{t}_{i})&\triangleq 2b_{i,j_{\mathsf{D}}}^{(\kappa)}\sqrt{\Re\left\{\langle \Psi_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)}){\mathcal L}_{i,j_{\mathsf{D}}}(\mathbf{V}^{(\kappa)}_{i,j_{\mathsf{D}}}),{\mathcal L}_{i,j_{\mathsf{D}}}(\mathsf{V}_{i,j_{\mathsf{D}}})\rangle\right\}}- c_{i,j_{\mathsf{D}}}^{(\kappa)}\mathsf{t}_{i}\\ \varphi^{(\kappa)}_{i}(\mathsf{V}_{{\mathsf{U}} i},\mathsf{t}_{i})&\triangleq 2b_{i}^{(\kappa)}\sqrt{\Re\left\{\langle \Psi_{i}^{-1}(\mathbf{V}^{(\kappa)}){\mathcal L}_{i}(\mathbf{V}^{(\kappa)}_{{\mathsf{U}} i}),{\mathcal L}_{i}(\mathsf{V}_{{\mathsf{U}} i})\rangle\right\}} -c_{i}^{(\kappa)}\mathsf{t}_{i}, \end{aligned}}} $$ with $$ \begin{aligned} b_{i,j_{\mathsf{D}}}^{(\kappa)}&=\frac{\sqrt{\Re\left\{\langle \Psi_{i,j_{\mathsf{D}}}^{-1}\left(\mathbf{V}^{(\kappa)}\right){\mathcal{L}}_{i,j_{\mathsf{D}}}\left(\mathbf{V}^{(\kappa)}_{i,j_{\mathsf{D}}}\right),{\mathcal L}_{i,j_{\mathsf{D}}}\left(\mathbf{V}^{(\kappa)}_{i,j_{\mathsf{D}}}\right)\rangle\right\}} }{\mathsf{t}_{i}^{(\kappa)}}\\ &\quad>0,\ c_{i,j_{\mathsf{D}}}^{(\kappa)}=\left(b_{i,j_{\mathsf{D}}}^{(\kappa)}\right)^{2}>0,\\ b_{i}^{(\kappa)}&=\frac{\sqrt{\Re\left\{\langle \Psi_{i}^{-1}\left(\mathbf{V}^{(\kappa)}\right){\mathcal L}_{i}\left(\mathbf{V}^{(\kappa)}_{{\mathsf{U}} i}\right),{\mathcal L}_{i}\left(\mathbf{V}^{(\kappa)}_{{\mathsf{U}} i}\right)\rangle\right\}}}{\mathsf{t}_{i}^{(\kappa)}}\\ &\quad>0,\ c_{i}^{(\kappa)}=\left(b_{i}^{(\kappa)}\right)^{2}>0. \end{aligned} $$ Note that the functions \(\varphi_{i,j_{\mathsf{D}}}^{(\kappa)}\) and \(\varphi^{(\kappa)}_{i}\) are concave [38]. Furthermore, define the functions $$ \begin{aligned} g^{(\kappa)}_{i,j_{\mathsf{D}}}(\mathsf{V},\mathsf{t}_{i})&\triangleq \frac{a_{i,j_{\mathsf{D}}}^{(\kappa)}}{\mathsf{t}_{i}}+2\varphi_{i,j_{\mathsf{D}}}^{(\kappa)}(\mathsf{V}_{i,j_{\mathsf{D}}},\mathsf{t}_{i})\\ &\quad- \frac{\langle \Psi_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)}) -{\mathcal M}_{i,j_{\mathsf{D}}}^{-1}(\mathbf{V}^{(\kappa)}), {\mathcal M}_{i,j_{\mathsf{D}}}(\mathsf{V})\rangle}{\mathsf{t}_{i}},\\ g^{(\kappa)}_{i}(\mathsf{V},\mathsf{t}_{i}) &\triangleq \frac{a_{i}^{(\kappa)}}{\mathsf{t}_{i}} +2\varphi^{(\kappa)}_{i}(\mathsf{V}_{{\mathsf{U}} i},\mathsf{t}_{i})\\ &\quad-\frac{\langle \Psi_{i}^{-1}(\mathbf{V}^{(\kappa)})-{\mathcal M}_{i}^{-1}(\mathbf{V}^{(\kappa)}),{\mathcal M}_{i}(\mathsf{V}) \rangle}{\mathsf{t}_{i}}, \end{aligned} $$ which are concave.
This can be justified by observing that the first terms of these two functions, \(a_{i,j_{\mathsf{D}}}^{(\kappa)}/\mathsf{t}_{i}\) and \(a_{i}^{(\kappa)}/\mathsf{t}_{i}\), are concave since \(a_{i,j_{\mathsf{D}}}^{(\kappa)}<0\) and \(a_{i}^{(\kappa)}<0\) by (29), while their second terms have been shown to be concave above, and their third terms are concave according to [39]. We now address the nonconvex problem (15) by successively solving the following convex quadratic program (QP): $${} \begin{aligned} \max_{\mathsf{V},\mathsf{t}}\ {\mathcal P}^{(\kappa)}&(\mathsf{V},\mathsf{t})\triangleq \min_{i\in{\mathcal I}} \left[\sum_{j_{\mathsf{D}}\in{\mathcal D}}g^{(\kappa)}_{i,j_{\mathsf{D}}}(\mathsf{V},\mathsf{t}_{i}) +g^{(\kappa)}_{i}(\mathsf{V},\mathsf{t}_{i})\right]\\&\mathrm{s.t.}\quad {(15b), (15c), (23), (33)}. \end{aligned} $$ Note that (41) involves \(n=2(N_{1}\cdot d_{1}\cdot I\cdot D+N_{r}\cdot d_{2}\cdot I\cdot U)+I\) scalar real variables and \(m=I\cdot D+3\cdot I+I\cdot U\) quadratic constraints, so its computational complexity is \(\mathcal{O}(n^{2}m^{2.5}+m^{3.5})\). Let \((\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})\) be a feasible point to (24). The optimal solution \((\mathbf{V}^{(\kappa+1)},\mathbf{t}^{(\kappa+1)})\) of the convex program (41) is feasible to the nonconvex program (24), and it is better than \((\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})\), i.e., $$ {\mathcal P}(\mathbf{V}^{(\kappa+1)}, \mathbf{t}^{(\kappa+1)})\geq {\mathcal P}(\mathbf{V}^{(\kappa)}, \mathbf{t}^{(\kappa)}), $$ as long as \((\mathbf{V}^{(\kappa+1)},\mathbf{t}^{(\kappa+1)})\neq(\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})\). Consequently, once initialized from a feasible point \((\mathbf{V}^{(0)},\mathbf{t}^{(0)})\) to (24), the κ-th QP iteration (41) generates a sequence \(\{\mathbf{V}^{(\kappa)}\}\) of feasible and improved points for the nonconvex program (24), which converges to an optimal solution of (15). Under the stopping criterion $${} \left|\left({\mathcal P}(\mathbf{V}^{(\kappa+1)}, \mathbf{t}^{(\kappa+1)})- {\mathcal P}(\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})\right)/{\mathcal P}(\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})\right| \leq \epsilon $$ for a given tolerance ε>0, the QP iterations will terminate after finitely many iterations. The proof of the above proposition is based on the theory of sequential optimization [40]. For completeness, it is provided in the Appendix. □ The proposed path-following quadratic programming procedure that solves problem (15) is summarized in Algorithm 1. Before closing this section, it is pointed out that a feasible initial point \((\mathbf{V}^{(0)},\mathbf{t}^{(0)})\) to (24) can be found by solving $$ \max_{\mathsf{V}}\min_{(i,j_{\mathsf{D}})\in{\mathcal S}_{1}}\ \left\{\frac{f_{i,j_{\mathsf{D}}}(\mathsf{V})}{r_{i,j_{\mathsf{D}}}^{\min}},\frac{f_{i}(\mathsf{V})}{r_{i}^{{\mathsf{U}},\min}}\right\}\ :\ \protect{(15b), (15c)}, $$ with iterations $$ \max_{\mathsf{V}}\min_{(i,j_{\mathsf{D}})\in{\mathcal S}_{1}}\ \left\{\frac{\Theta^{(\kappa)}_{i,j_{\mathsf{D}}}(\mathsf{V})}{r_{i,j_{\mathsf{D}}}^{\min}},\frac{\Theta^{(\kappa)}_{i}(\mathsf{V})}{r_{i}^{{\mathsf{U}},\min}}\right\}\ :\ {(15b), (15c)}, $$ which terminate as soon as $${} f_{i,j_{\mathsf{D}}}(\mathbf{V}^{(\kappa)})/r_{i,j_{\mathsf{D}}}^{\min} \geq 1 \ \text{and} \ f_{i}(\mathbf{V}^{(\kappa)})/r_{i}^{{\mathsf{U}},\min}\geq 1, \ \forall (i,j_{\mathsf{D}})\in{\mathcal S}_{1}. $$

Numerical results

For the purpose of illustrating the performance advantage (in terms of the EE) of the proposed FD precoder design, the FD BSs can be reconfigured to operate in the HD mode with \(N=N_{1}+N_{2}\) antennas at each BS. In particular, each BS operating in the HD mode serves all the DLUs in the downlink and all the ULUs in the uplink, albeit in two separate resource blocks (e.g., time or frequency).
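Although the full implementation of Algorithm 1 solves a convex QP in the matrix variables (V, t) at each step, its path-following logic (maximize a concave lower bound that is tight at the current iterate, then repeat until stopping criterion (43) is met) can be illustrated on a toy scalar problem. The following is a minimal sketch in R, not the authors' multi-antenna implementation: it maximizes a single-link energy-efficiency ratio f(x)/t with f(x) = ln(1 + x), subject to a power-style constraint, using exactly the lower bound of footnote 2 (with √f(x) in the role of √x). All numerical values are placeholders.

    ## Toy path-following iteration: maximize f(x)/t s.t. zeta*x + Pc <= t, 0 <= x <= Pmax
    zeta <- 1; Pc <- 2; Pmax <- 10; eps <- 1e-3
    f <- function(x) log(1 + x)  # concave "rate" term

    step <- function(x_k, t_k) {
      ## lower bound from footnote 2: f(x)/t >= 2*b*sqrt(f(x)) - c*t,
      ## with b = sqrt(f(x_k))/t_k, c = b^2; the bound is tight at (x_k, t_k)
      b_k <- sqrt(f(x_k)) / t_k
      c_k <- b_k^2
      ## at the optimum t sits on its lower bound t = zeta*x + Pc,
      ## leaving a one-dimensional concave maximization in x
      h <- function(x) 2 * b_k * sqrt(f(x)) - c_k * (zeta * x + Pc)
      x_new <- optimize(h, c(0, Pmax), maximum = TRUE)$maximum
      list(x = x_new, t = zeta * x_new + Pc)
    }

    x <- Pmax; t <- zeta * x + Pc    # feasible initial point
    P_old <- f(x) / t
    for (k in 1:100) {               # surrogate-maximization iterations
      s <- step(x, t); x <- s$x; t <- s$t
      P_new <- f(x) / t
      if (abs((P_new - P_old) / P_old) <= eps) break  # stopping criterion (43)
      P_old <- P_new
    }
    cat(sprintf("x* = %.3f, EE* = %.4f after %d iterations\n", x, f(x) / t, k))

Because each surrogate is a tight lower bound of the true objective, every iteration improves the ratio monotonically, mirroring the convergence argument given in the Appendix for the full problem.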
We then apply Algorithm 1 to solve the EE optimization problems (16) and (18). Suppose that \(\mathbf{V}^{\mathrm{opt,DL}}\) and \(\mathbf{V}^{\mathrm{opt,UL}}\) are their optimal solutions. Accordingly, we compare the optimal value of (15) with $$ \min_{i\in {\mathcal I}}\frac{\left(\underset{j_{\mathsf{D}}\in{\mathcal D}}{\sum}f_{i,j_{\mathsf{D}}}(\mathbf{V}^{\mathrm{opt,DL}})+f_{i}^{\text{UL}}(\mathbf{V}^{\mathrm{opt,UL}}) \right)/2} {\zeta\left(\underset{j_{\mathsf{D}}\in{\mathcal D}}{\sum}||\mathbf{V}^{\mathrm{opt,DL}}_{i,j_{\mathsf{D}}}||^{2} +\underset{j_{\mathsf{U}}\in{\mathcal U}}{\sum}|| \mathbf{V}^{\mathrm{opt,UL}}_{i,j_{\mathsf{U}}}||^{2}\right) +P^{\text{BS}}+ UP^{\text{UE}}}, $$ where the fraction 1/2 in the numerator accounts for the fact that two time slots are used in HD downlink and uplink communications, and \(P^{\text{BS}}=(N_{1}+N_{2})P_{b}\). The channel matrix between a BS and a user at a distance d is generated according to the path loss model for line-of-sight (LOS) communications as \(10^{-PL_{\text{LOS}}/20}\tilde{H}\), where \(PL_{\text{LOS}}=103.8+20.9\log_{10}d\) and each entry of \(\tilde{H}\) is an independent circularly symmetric Gaussian random variable with zero mean and unit variance [41]. The channel matrix from a ULU to a DLU at a distance d is assumed to follow the non-line-of-sight (NLOS) path loss model as \(10^{-PL_{\text{NLOS}}/20}\tilde{H}\) with \(PL_{\text{NLOS}}=145.4+37.5\log_{10}d\) [41]. For the FD mode, the number \(N_{1}\) of transmit antennas and the number \(N_{2}\) of receive antennas at a BS are 4 and 2, respectively. The numbers of concurrent downlink and uplink data streams are assumed to be equal to the number of antennas at a DLU/ULU, i.e., \(d_{1}=d_{2}=N_{r}\). The precoding matrices \(\mathsf{V}_{i,j_{\mathsf{D}}}\) and \(\mathsf{V}_{i,j_{\mathsf{U}}}\) in (2) and (7) are of dimensions \(N_{1}\times N_{r}\) and \(N_{r}\times N_{r}\), respectively. The rate constraints in (15d) and (15e) are set as \(r_{i,j_{\mathsf{D}}}^{\min}=2\) bps/Hz and \(r_{i}^{{\mathsf{U}},\min}=2\) bps/Hz, respectively. The circuit powers per antenna at the BS and the UE are \(P_{b}=1.667\) W and \(P_{u}=50\) mW, respectively [37]. To arrive at the final figures, 100 simulation runs are carried out and the results are averaged. Table 1 lists other 3GPP LTE network parameters that are used in all simulations [41]. For simplicity, the drain efficiency ζ of the power amplifier is assumed to be 100 % for both the downlink and uplink transmissions.

Table 1 Simulation parameters used in all numerical examples

Effect of SI in a single-cell network with fixed users

The example network in Fig. 2 is used to study the energy efficiency performance of Algorithm 1. By considering a single-cell network with fixed-location users, one can focus on the effect of SI while isolating those of the intracell and intercell interferences. Figure 3 shows the energy efficiency results for the two cases of \(N_{r}=1\) and \(N_{r}=2\). It is clear that the EE under the FD mode degrades as \(\sigma^{2}_{\text{SI}}\) increases. In particular, FD EE is more than double the HD EE when \(\sigma_{\text{SI}}^{2} \le -100\) dB. Figures 4 and 5 further illustrate the data rates of the FD and HD modes for \(N_{r}=2\) and \(N_{r}=1\), respectively. It can be seen that, due to the adverse effect of SI, the data rates of DL and UL in the FD mode degrade with increasing \(\sigma^{2}_{\text{SI}}\). For the case of \(N_{r}=2\), Fig. 4 shows that the data rates in the FD mode are higher than those of the HD mode when \(\sigma_{\text{SI}}^{2} \le -120\) dB. Similarly, it is clear from Fig. 5 that the data rates in the FD mode with \(N_{r}=1\) are superior to those in the HD mode when \(\sigma_{\text{SI}}^{2} \le -110\) dB.

Fig. 2 A single-cell network with 2 DLUs and 2 ULUs
Fig. 3 Energy efficiency of the single-cell network for FD and HD modes
Fig. 4 Data rates of DL and UL for FD and HD modes with \(N_{r}=2\) in the single-cell network

Effect of intracell interference in a single-cell network

The example network in Fig. 6 is examined to study the EE performance of Algorithm 1 when the intracell interference changes but \(\sigma_{\text{SI}}^{2}\) is fixed at \(\sigma_{\text{SI}}^{2}=-110\) dB. The location of the DLU is fixed at point B, but the location of the ULU is varied. For each position of the ULU at a point A on a circle of radius 90 m, the EE quantity is found by Algorithm 1. By keeping the small-scale fading parameter unchanged, a small angle \(\widehat{\text{AOB}}\) in Fig. 6 results in a small path loss and accordingly a large intracell interference level.

Fig. 6 A single-cell network with one fixed DLU and one moving ULU. The DLU location is fixed at point B, whereas the ULU is located at any point A on the circle of radius 90 m

It can be observed from Fig. 7 that the FD EE is always much higher than the HD EE if the intracell interference is sufficiently small. The largest gain in EE is achieved when the ULU-DLU distance is maximum (i.e., \(\widehat{\text{AOB}}=\pi\)), in which case the intracell interference is smallest. In addition, Figs. 8 and 9 plot the data rates of the FD and HD modes for \(N_{r}=2\) and \(N_{r}=1\), respectively. For the case of \(N_{r}=2\), the data rates of FD DL at \(\widehat{\text{AOB}}=0\) are smaller than those of the HD DL. This is expected since, when the ULU is very close to the DLU at \(\widehat{\text{AOB}}=0\), the intracell interference is strongest. When the ULU-DLU distance becomes larger, the data rates of FD are significantly higher than the data rates of HD. In the case of \(N_{r}=1\), the data rates of FD DL are only smaller than those of HD DL at \(\widehat{\text{AOB}}=0\), while the data rates of FD UL are higher than those of HD UL at every position of the ULU.

Fig. 7 Energy efficiency of the single-cell network under FD and HD modes

Multi-cell networks

In the last simulation scenario, we compare the FD EE and HD EE for a three-cell network as depicted by Fig. 10. The positions of the ULU and DLU in each cell are fixed at distances 2r/3=66.67 m and r/2=50 m from their serving BS, respectively. Figure 11 shows that the EE decreases with the increasing level of SI.

Fig. 10 A three-cell network with 1 DLU and 1 ULU. The cell radius is r=100 m
Fig. 11 Effect of SI on the energy efficiency in a three-cell network

The convergence behavior of Algorithm 1 is demonstrated in Fig. 12 for the network in Fig. 10, where the error tolerance for convergence is set as \(\epsilon=10^{-3}\). As can be seen, the proposed algorithm monotonically improves the objective value after every iteration. Table 2 shows that convergence occurs within about 32 iterations. Note also that each iteration only involves one simple convex QP, which can be solved very efficiently by any available convex solver such as CVX [42]. The data rates of the minimum cell energy efficiency for \(N_{r}=2\) and \(N_{r}=1\) are provided in Figs. 13 and 14, respectively. For the case of \(N_{r}=2\), the data rates of FD DL are slightly smaller than those of HD DL, but the gap between FD UL and HD UL data rates becomes larger as \(\sigma^{2}_{\text{SI}}\) increases.
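For reproducibility, the channel-generation step described in the simulation setup above can be sketched in R. This is a minimal sketch; the distance units fed to the path-loss formulas are an assumption, since the text does not state them explicitly.

    ## Channel generation following the LOS/NLOS path-loss models quoted above [41]
    gen_channel <- function(n_rx, n_tx, d, los = TRUE) {
      PL <- if (los) 103.8 + 20.9 * log10(d) else 145.4 + 37.5 * log10(d)
      ## entries of H_tilde: i.i.d. circularly symmetric complex Gaussian CN(0, 1)
      H_tilde <- matrix(complex(real = rnorm(n_rx * n_tx),
                                imaginary = rnorm(n_rx * n_tx)) / sqrt(2),
                        nrow = n_rx, ncol = n_tx)
      10^(-PL / 20) * H_tilde   # apply large-scale path loss
    }
    ## Example: LOS channel from a BS (N1 = 4 transmit antennas) to a DLU with Nr = 2
    H <- gen_channel(n_rx = 2, n_tx = 4, d = 50)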
In the case of \(N_{r}=1\), although the data rates of FD DL are higher than those of the HD DL, they decrease with increasing \(\sigma^{2}_{\text{SI}}\). In contrast, the data rates of FD UL are smaller than those of the HD UL.

Fig. 12 Convergence of Algorithm 1 for the FD energy efficiency with \(\epsilon=10^{-3}\)
Fig. 13 Data rates of DL and UL for FD and HD modes with \(N_{r}=2\) in a three-cell network
Table 2 The average number of iterations required by Algorithm 1

We have designed novel linear precoders for base stations and users in order to maximize the energy efficiency of a multicell network in which full-duplex BSs simultaneously transmit to and receive from their half-duplex users. The precoders are found by a low-complexity iterative algorithm that requires solving only one simple convex quadratic program at each iteration. It has also been proved that the proposed path-following algorithm is guaranteed to monotonically converge. Simulation results have been presented in various network scenarios to demonstrate the performance advantage of the proposed precoders in terms of energy efficiency.

1 Function ln|I+X| is concave in X≽0, so its first-order approximation at 0, which is Trace(X), is its upper bound [38].

2 Function \(x^{2}/t\) is convex on x>0 and t>0, so its first-order approximation at \((\bar{x},\bar{t})\), which is \(2\bar{x}\mathsf{x}/\bar{t}-\bar{x}^{2}\mathsf{t}/\bar{t}^{2}\), is its lower bound [38].

By (31) and (32), any (V,t) feasible to the convex program (41) is also feasible to the nonconvex program (24). As \((\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})\) is feasible to (41), it follows that $$ \begin{aligned} {\mathcal P}(\mathbf{V}^{(\kappa+1)},\mathbf{t}^{(\kappa+1)})&\geq {\mathcal P}^{(\kappa)}(\mathbf{V}^{(\kappa+1)},\mathbf{t}^{(\kappa+1)})\\ &> {\mathcal P}^{(\kappa)}(\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})= {\mathcal P}(\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)}) \end{aligned} $$ as long as \((\mathbf{V}^{(\kappa+1)},\mathbf{t}^{(\kappa+1)})\neq(\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})\), hence showing (42). Since the sequence \(\{(\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})\}\) is bounded by constraint (15c), by the Bolzano–Weierstrass theorem there is a convergent subsequence \(\{(\mathbf{V}^{(\kappa_{\nu})},\mathbf{t}^{(\kappa_{\nu})})\}\), i.e., $${\lim}_{\nu\rightarrow +\infty}\left[{\mathcal P}(\mathbf{V}^{(\kappa_{\nu+1})},\mathbf{t}^{(\kappa_{\nu+1})})- {\mathcal P}(\mathbf{V}^{(\kappa_{\nu})},\mathbf{t}^{(\kappa_{\nu})})\right]=0. $$ For every κ, there is ν such that κ_ν ≤ κ and κ+1 ≤ κ_{ν+1}. It follows from (42) that $$ \begin{array}{lll} 0& \leq &\underset{\kappa\rightarrow+\infty}{\lim}\left[{\mathcal P}(\mathbf{V}^{(\kappa+1)},\mathbf{t}^{(\kappa+1)})-{\mathcal P}(\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})\right]\\ &\leq&\underset{\nu\rightarrow+\infty}{\lim}\left[{\mathcal P}(\mathbf{V}^{(\kappa_{\nu+1})},\mathbf{t}^{(\kappa_{\nu+1})})-{\mathcal P}(\mathbf{V}^{(\kappa_{\nu})},\mathbf{t}^{(\kappa_{\nu})})\right]\\ &=&0, \end{array} $$ and hence $$ {\lim}_{\kappa\rightarrow+\infty}\left[{\mathcal P}(\mathbf{V}^{(\kappa+1)},\mathbf{t}^{(\kappa+1)})-{\mathcal P}(\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})\right]=0. $$ For a given tolerance ε > 0, the iterations will therefore terminate after finitely many iterations under the stopping criterion $$\begin{array}{*{20}l}{} \left|\left({\mathcal P}(\mathbf{V}^{(\kappa+1)},\mathbf{t}^{(\kappa+1)})- {\mathcal P}(\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})\right)/{\mathcal P}(\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})\right| \leq \epsilon.
\end{array} $$ Each accumulation point \((\bar{\mathbf{V}},\bar{\mathbf{t}})\) of the sequence \(\{(\mathbf{V}^{(\kappa)},\mathbf{t}^{(\kappa)})\}\) satisfies the minimum principle necessary condition for optimality [40].

References

A Fehske, G Fettweis, J Malmodin, G Biczok, The global footprint of mobile communications: the ecological and economic perspective. IEEE Commun. Mag. 49(8), 55–62 (2011). G Auer, et al., How much energy is needed to run a wireless network? IEEE Wireless Commun. Mag. 18(5) (2011). Y Chen, S Zhang, S Xu, G Li, Fundamental trade-offs on green wireless networks. IEEE Commun. Mag. 49(6), 30–37 (2011). Z Hasan, H Boostanimehr, VK Bhargava, Green cellular networks: a survey, some research issues and challenges. IEEE Commun. Surv. Tutorials 13(4), 524–540 (2011). YS Soh, TQS Quek, M Kountouris, H Shin, Energy efficient heterogeneous cellular networks. IEEE J. Sel. Areas Commun. 31(5) (2013). Z Xu, C Yang, GY Li, Y Liu, S Xu, Energy-efficient CoMP precoding in heterogeneous networks. IEEE Trans. Signal Process. 62(4), 1005–1017 (2014). E Bjornson, L Sanguinetti, J Hoydis, M Debbah, Optimal design of energy-efficient multi-user MIMO systems: is massive MIMO the answer? IEEE Trans. Wireless Commun. 14(6), 3059–3075 (2015). Y Xin, D Wang, J Li, H Zhu, J Wang, X You, Area spectral efficiency and area energy efficiency of massive MIMO cellular systems. IEEE Trans. Vehic. Technol. 65(5), 3243–3254 (2016). RLG Cavalcante, S Stanczak, M Schubert, A Eisenblätter, U Turke, Toward energy-efficient 5G wireless communications technologies. IEEE Sig. Process. Mag. 13(11), 24–34 (2014). I C-L, C Rowell, S Han, Z Xu, G Li, Z Pan, Toward green and soft: a 5G perspective. IEEE Commun. Mag. 13(2), 66–73 (2014). S Buzzi, I C-L, TE Klein, HV Poor, C Yang, A Zappone, A survey of energy-efficient techniques for 5G networks and challenges ahead. IEEE J. Sel. Areas Commun. 34(4), 697–709 (2016). CTK Ng, H Huang, Linear precoding in cooperative MIMO cellular networks with limited coordination clusters. IEEE J. Sel. Areas Commun. 28(9), 1446–1454 (2010). SS Christensen, R Agarwal, E Carvalho, JM Cioffi, Weighted sum-rate maximization using weighted MMSE for MIMO-BC beamforming design. IEEE Trans. Wirel. Commun. 7(12), 4792–4799 (2008). K Wang, X Wang, W Xu, X Zhang, Coordinated linear precoding in downlink multicell MIMO-OFDMA networks. IEEE Trans. Sig. Process. 60(8), 4264–4277 (2012). B Song, RL Cruz, BD Rao, Network duality for multiuser MIMO beamforming networks and applications. IEEE Trans. Commun. 55(3), 618–630 (2007). DWH Cai, TQS Quek, CW Tan, SH Low, Max-min SINR coordinated multipoint downlink transmission - duality and algorithms. IEEE Trans. Sig. Process. 60(10), 5384–5395 (2012). Y Huang, CW Tan, BD Rao, Joint beamforming and power control in coordinated multicell: max-min duality, effective network and large system transition. IEEE Trans. Wirel. Commun. 12(6), 2730–2742 (2013). Y-S Choi, H Shirani-Mehr, Simultaneous transmission and reception: algorithm, design and system level performance. IEEE Trans. Wirel. Commun. 12(12), 5992–6010 (2013). A Sabharwal, et al., In-band full-duplex wireless: challenges and opportunities. IEEE J. Sel. Areas Commun. 32(9), 1637–1652 (2014). M Heino, et al., Recent advances in antenna design and interference cancellation algorithms for in-band full duplex relays. IEEE Commun. Mag. 5, 91–101 (2015). E Everett, A Sahai, A Sabharwal, Passive self-interference suppression for full-duplex infrastructure nodes. IEEE Trans. Wirel. Commun. 13(2), 680–694 (2014).
M Duarte, A Sabharwal, V Aggarwal, R Jana, KK Ramakrishnan, CW Rice, NK Shankaranarayanan, Design and characterization of a full-duplex multiantenna system for WiFi networks. IEEE Trans. Veh. Technol. 63(3), 1160–1177 (2014). L Anttila, et al., Modeling and efficient cancellation of nonlinear self-interference in MIMO full-duplex transceivers, in Proc. of Globecom (IEEE, Austin, 2014), pp. 777–783. HHM Tam, HD Tuan, DT Ngo, Successive convex quadratic programming for quality-of-service management in full-duplex MU-MIMO multicell networks. IEEE Trans. Comm. 64 (2016). W Dinkelbach, On nonlinear fractional programming. Manag. Sci. 13(7), 492–498 (1967). DWK Ng, ES Lo, R Schober, Energy-efficient resource allocation in multi-cell OFDMA systems with limited feedback capacity. IEEE Trans. Wirel. Commun. 11(10), 3618–3631 (2012). A Zappone, E Jorswieck, Energy efficiency in wireless networks via fractional programming theory. Found. Trends Commun. Inf. Theory 11(3–4), 185–396 (2015). A Zappone, L Sanguinetti, G Bacci, E Jorswieck, M Debbah, Energy-efficient power control: a look at 5G wireless technologies. IEEE Trans. Sig. Process. 64(4), 1668–1683 (2016). O Tervo, LN Tran, M Juntti, Optimal energy-efficient transmit beamforming for multi-user MISO downlink. IEEE Trans. Sig. Process. 63(10), 5574–5588 (2015). QD Vu, LN Tran, R Farrell, EK Hong, Energy-efficient zero-forcing precoding design for small-cell networks. IEEE Trans. Commun. 64(2), 790–804 (2016). H Weingarten, Y Steinberg, S Shamai, The capacity region of the Gaussian multiple-input multiple-output broadcast channel. IEEE Trans. Inf. Theory 52(9), 3936–3964 (2006). L-N Tran, M Juntti, M Bengtsson, B Ottersten, Weighted sum rate maximization for MIMO broadcast channels using dirty paper coding and zero-forcing methods. IEEE Trans. Commun. 61(6), 2362–2372 (2013). W Yu, JM Cioffi, Sum capacity of Gaussian vector broadcast channels. IEEE Trans. Inf. Theory 50(9), 1875–1892 (2004). M Duarte, C Dick, A Sabharwal, Experiment-driven characterization of full-duplex wireless systems. IEEE Trans. Wirel. Commun. 11(12), 4296–4307 (2012). D Korpi, et al., Full-duplex transceiver system calculations: analysis of ADC and linearity challenges. IEEE Trans. Wirel. Commun. 13(7), 3821–3836 (2014). D Tse, P Viswanath, Fundamentals of wireless communication (Cambridge University Press, New York, 2005). C Xiong, GY Li, S Zhang, Y Chen, S Xu, Energy-efficient resource allocation in OFDMA networks. IEEE Trans. Commun. 60(12), 3767–3778 (2012). H Tuy, Convex analysis and global optimization, 2nd edn. (Springer-Verlag, Berlin Heidelberg, 2016). B Dacorogna, P Marechal, The role of perspective functions in convexity, polyconvexity, rank-one convexity and separate convexity. J. Convex Anal. 15(2), 271–284 (2008). BR Marks, GP Wright, A general inner approximation algorithm for nonconvex mathematical programs. Oper. Res. 26(4), 681–683 (1978). 3GPP TS 36.814 V9.0.0, 3GPP technical specification group radio access network, evolved universal terrestrial radio access (E-UTRA): further advancements for E-UTRA physical layer aspects (Release 9) (2010). M Grant, S Boyd, CVX: Matlab software for disciplined convex programming, version 2.1 (2014). http://cvxr.com/cvx. Accessed 1 May 2016. The work is supported by NSF of China (61271213, 61673253) and the Ph.D. Programs Foundation of Ministry of Education of China (20133108110014).
Author information: School of Communication and Information Engineering, Shanghai University, Shanghai, China (Zhichao Sheng & Yong Fang); Faculty of Engineering and Information Technology, University of Technology Sydney, Broadway, Sydney, NSW 2007, Australia (Zhichao Sheng, Hoang Duong Tuan & Ho Huu Minh Tam); Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada (Ha H. Nguyen). Correspondence to Ha H. Nguyen.

Sheng, Z., Tuan, H.D., Tam, H.H. et al. Energy-efficient precoding in multicell networks with full-duplex base stations. J Wireless Com Network 2017, 48 (2017). https://doi.org/10.1186/s13638-017-0831-5

Keywords: Cooperative multicell network; Full-duplexing transceiver; Precoder design; Path-following convex quadratic programming. Collection: Full-Duplex Radio: Theory, Design, and Applications
Spatial heterogeneity can undermine the effectiveness of country-wide test and treat policy for malaria: a case study from Burkina Faso

Denis Valle (ORCID: orcid.org/0000-0002-9830-8876), Justin Millar & Punam Amratia

Malaria Journal volume 15, Article number: 513 (2016)

Abstract

Considerable debate has arisen regarding the appropriateness of the test and treat malaria policy broadly recommended by the World Health Organization. While presumptive treatment has important drawbacks, the effectiveness of the test and treat policy can vary considerably across regions, depending on several factors such as baseline malaria prevalence and rapid diagnostic test (RDT) performance. To compare presumptive treatment with test and treat, generalized linear mixed effects models were fitted to data from 6510 children under five years of age from Burkina Faso's 2010 Demographic and Health Survey. The statistical model results revealed substantial regional variation in baseline malaria prevalence (i.e., pre-test prevalence) and RDT performance. As a result, a child with a positive RDT result in one region can have the same malaria infection probability as a demographically similar child with a negative RDT result in another region. These findings indicate that a test and treat policy might be reasonable in some settings, but may be undermined in others due to the high proportion of false negatives. High spatial variability can substantially reduce the effectiveness of a national-level test and treat malaria policy. In these cases, region-specific guidelines for malaria diagnosis and treatment may need to be formulated. Based on the statistical model results, proof-of-concept, web-based tools were created that can aid in the development of these region-specific guidelines and may improve current malaria-related policy in Burkina Faso.

Background

Presumptive treatment for malaria has historically been the norm throughout much of sub-Saharan Africa (SSA). However, multiple problems plague presumptive treatment of malaria. First, considerable overlap in symptoms exists between malaria and other diseases (e.g., pneumonia [1]). As a result, people who require treatment for other diseases might be inadvertently treated for malaria and sent home, with important consequences in terms of morbidity (and potentially mortality) and cost to the individual [2–4]. Second, there is substantial concern that presumptive treatment can promote more rapid emergence of anti-malarial drug resistance [5, 6]. Finally, as SSA countries transition to using artemisinin-based combination therapy (ACT) as their primary drugs for malaria treatment, presumptive treatment might be financially unsustainable given the higher costs of ACT [4, 7]. As a result of these concerns, the World Health Organization (WHO) has strongly promoted test and treat [8] as the primary malaria treatment policy for SSA countries. In this policy, "every suspected malaria case should be tested" and "every confirmed case should be treated with a quality-assured anti-malarial medicine" [8]. Because microscopy (the gold standard method in the majority of SSA countries [9]) is often unavailable in remote rural settings, the implementation of test and treat has relied heavily on rapid diagnostic tests (RDTs) for parasitological diagnosis [4, 5]. Recent studies have shown that the use of RDTs can lead to substantially improved targeting of anti-malarials when compared with clinical diagnosis alone [10, 11].
In addition, despite the increased costs associated with using RDTs, test and treat might still be a cost-effective approach if it reduces the waste associated with providing ACT to uninfected individuals and if it decreases patient costs associated with repeated visits to health facilities ([2] but see [12]). Despite the benefits associated with using RDTs, it has been repeatedly documented that providers and patients often ignore negative RDT results because of the potentially high stakes associated with a false-negative result [13–15]. Indeed, while presumptive treatment certainly leads to over-treatment, test and treat can lead to both under- and over-treatment because of false-negative and false-positive RDT results, respectively. A recent Cochrane review indicates that RDTs have good overall performance (i.e., sensitivity greater than 90 % and specificity greater than 95 % on average) but finds considerable variation between studies [16]. In particular, RDT performance varied substantially due to differences in the study population (e.g., treatment-seeking individuals versus a random sample of the population), the reference standard [e.g., microscopy or polymerase chain reaction (PCR)], type of RDT and RDT manufacturer, and environmental conditions (e.g., extreme temperature or humidity may damage RDT lots) [4, 9, 17]. As an example of the potential limitations of RDTs, the widely used HRP2-based RDTs (e.g., Paracheck®) are Plasmodium falciparum specific and thus fail to detect other Plasmodium species, will fail to detect P. falciparum if parasites have a mutation or deletion of the HRP-2 gene, and may detect the HRP-2 protein long after parasitaemia has been cleared from the host [18]. The relative merits of test and treat compared to presumptive treatment will depend on multiple factors, including performance characteristics of the diagnostic tests (sensitivity and specificity), baseline infection prevalence, and costs (both direct and indirect) associated with false-positives and false-negatives. In this article, Demographic and Health Survey (DHS) data from Burkina Faso were used to show that there is substantial spatial heterogeneity, even within regions of the same country, in relation to baseline infection prevalence and diagnostic test performance. As a result, a countrywide test and treat policy may yield unacceptably high levels of false-negative results in some regions, suggesting that presumptive treatment might be a better alternative in these settings. Proof-of-concept, web-based tools were developed that can aid policy makers in developing region-specific malaria diagnostic and treatment policies.

Methods

This analysis was based on DHS data collected in Burkina Faso in 2010 (available at [19]). These data were collected through a two-stage sampling design, where the first stage consisted of sampling clusters (total of 574 zones de dénombrement) with probability proportional to population size, and the second stage involved sampling households within each cluster with equal probability, based on a complete listing of all the households. Children between 6 and 59 months old were tested for malaria using microscopy and a P. falciparum HRP2 protein-based RDT (Paracheck®) after obtaining consent from the caregiver. RDT results were available 15 min after blood collection, while microscopy slides were later evaluated at the Centre National de Recherche et de Formation sur le Paludisme (CNRFP), the reference laboratory for malaria in Burkina Faso.
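The two-stage design described above can be mimicked in a short R sketch. The size of the national sampling frame, the cluster sizes, and the per-cluster household take are hypothetical placeholders; note also that R's weighted sampling without replacement is only approximately proportional to size.

    ## Stage 1: draw 574 clusters (zones de denombrement) with probability
    ## proportional to size from a hypothetical national frame
    set.seed(1)
    frame  <- data.frame(zone = 1:20000, n_households = rpois(20000, lambda = 400))
    picked <- sample(frame$zone, size = 574, prob = frame$n_households)
    ## Stage 2: equal-probability sample of households within each selected cluster
    households <- lapply(picked, function(z) sample(frame$n_households[z], size = 25))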
Two independent technicians read each blood slide and, in the case of discrepancy, a third microscopist evaluated the slide [20, 21]. In total, microscopy and RDT results were collected from 6510 children. Additional details regarding data collection and diagnostic tests can be found in [20, 21]. Statistical models Two malaria biomarkers [microscopy (M) and RDT (R)] were modelled as a function of individual level covariates (X) using probit mixed regression models, where the intercept and slope parameters were allowed to vary for each region. Only relatively simple covariates were included in the statistical models, such as fever in the previous two weeks (no = 0, yes = 1), age (categorized into five age groups: 6–11, 12–23, 24–35, 36–47, 48–60 months), gender (0 = girl, 1 = boy), and urban (no = 0, yes = 1). These covariates were chosen because they have been shown elsewhere to be important malaria predictors and because they can be readily assessed with minimal training [22, 23]. The model for each biomarker had the same overall specification. Let a binary response variable (either microscopy or RDT result) for the i-th individual in the j-th region be denoted by \(y_{ij}\). Assume that: $$y_{ij} \sim Bernoulli\left( {\Phi \left( {\beta_{0j} + \beta_{1j} x_{ij1} + \beta_{2j} x_{ij2} + \cdots } \right)} \right)$$ where \(\Phi\) is the standard normal cumulative distribution function, \(x_{ij1} , \ldots ,x_{ijP}\) are covariates, and \(\beta_{0j} , \ldots ,\beta_{Pj}\) are regression parameters. Standard priors and hyper-priors for these parameters were adopted, namely: $$\beta_{pj} \sim N\left( {\alpha_{p} , \tau_{p}^{2} } \right)$$ $$\tau_{p} \sim Unif\left( {0,100} \right)$$ $$\varvec{\alpha}\sim N\left( {0,{\varvec{\Sigma}}} \right)$$ where \({\varvec{\Sigma}}\) is a diagonal matrix with diagonal elements \([100,1,1, \ldots ,1].\) Notice that \(\alpha_{p}\) summarizes the effect of covariate p across all regions while \(\tau_{p}^{2}\) measures the variability of the region-specific effects \(\beta_{pj}\). Although these models are fairly standard and can be fitted in a frequentist or Bayesian framework [24], a Bayesian framework was chosen to better represent parameter uncertainty in outputs. Regression slope estimates for which the 95 % credible interval did not include zero were judged to be statistically significant. These individual models were combined using Bayes theorem. Let (M|X) denote the model for microscopy results (M) given covariates (X). Furthermore, let (R|M,X) denote the model for the RDT results (R) given microscopy results (M) and covariates (X). Then, the probability of malaria infection (assuming microscopy is the gold standard) given RDT results (R) and covariate values (X) is given by: $$\begin{aligned} &p\left( {M = 1|R,X} \right) \\ \nonumber &= \frac{{p\left( {R|M = 1,X} \right)p\left( {M = 1|X} \right)}}{{p\left( {R|M = 0,X} \right)p\left( {M = 0|X} \right) + p\left( {R|M = 1,X} \right)p\left( {M = 1|X} \right)}} \end{aligned}$$ The approach described above was chosen because of the interpretability of the results (e.g., the RDT models reveal how RDT sensitivity and specificity are influenced by covariates) and because it enables interactions to emerge naturally (e.g., the effect of age on RDT performance may vary as a function of infection status). Additional details regarding these statistical models are provided in Additional file 1. All models were fitted, and figures were created, using customized R code [25]. 
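As a concrete illustration of the three component models and their combination via Bayes theorem (Eq. 5), the specification above can be sketched in R using lme4's probit mixed model. This is only a frequentist stand-in for the authors' Bayesian fit (which used customized code), and the data frame dhs and its variable names are hypothetical.

    library(lme4)
    ## Region-specific intercepts and slopes, probit link, as in Eqs. (1)-(4);
    ## age_grp is assumed to be a factor with the five age groups
    form_m <- micro ~ fever + age_grp + boy + urban +
      (1 + fever + age_grp + boy + urban | region)     # model M|X
    form_r <- rdt ~ fever + age_grp + boy + urban +
      (1 + fever + age_grp + boy + urban | region)     # models R|M,X

    m_mx  <- glmer(form_m, family = binomial(link = "probit"), data = dhs)
    m_r1x <- glmer(form_r, family = binomial(link = "probit"),
                   data = subset(dhs, micro == 1))     # R|M=1,X
    m_r0x <- glmer(form_r, family = binomial(link = "probit"),
                   data = subset(dhs, micro == 0))     # R|M=0,X

    ## Post-test infection probability p(M=1|R,X) from Eq. (5);
    ## newdata must use region levels seen during fitting
    post_prob <- function(newdata, rdt_positive = TRUE) {
      pM1 <- predict(m_mx,  newdata, type = "response")   # p(M=1|X)
      pR1 <- predict(m_r1x, newdata, type = "response")   # p(R=1|M=1,X)
      pR0 <- predict(m_r0x, newdata, type = "response")   # p(R=1|M=0,X)
      if (!rdt_positive) { pR1 <- 1 - pR1; pR0 <- 1 - pR0 }  # switch to p(R=0|.)
      pR1 * pM1 / (pR1 * pM1 + pR0 * (1 - pM1))
    }

Fitting the RDT model separately within each microscopy stratum reproduces the feature noted in the text: the effect of a covariate on RDT performance is allowed to differ by infection status.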
One potential concern with the model described above is that sample size might not be large enough within each stratum to allow for reliable estimation of the different regression parameters. This was not a significant hindrance to this analysis given that all strata had at least 30 individuals. Finally, a ten-fold cross-validation exercise revealed that the model described above had better out-of-sample predictive performance than two other more standard statistical models (Additional file 2). In order to broaden the application of this analysis, two online tools were developed that enable policy makers and other potential users to interact with our statistical modelling results. These tools were created using 'Shiny' [26], a freely available package in R that enables the creation of interactive web applications without requiring modellers to know HTML, CSS, or JavaScript. These proof-of-concept tools were created to help bridge the gap between our statistical models and actionable policy decisions.

Results

Overall, individuals in urban areas tended to have a much lower risk of malaria, and this risk tended to increase significantly with age (Fig. 1a). Furthermore, there was considerable heterogeneity between regions regarding malaria risk differences in rural and urban areas (Fig. 1b). In relation to RDT performance, the models indicate that the probability of a true-positive RDT result (RDT sensitivity) was not consistently influenced by any of the covariates (Fig. 1c). On the other hand, the probability of a false-positive RDT result was significantly higher for older children (two- to four-year-old children) and was lower for individuals in urban areas (Fig. 1e). Both RDT sensitivity and specificity varied considerably from region to region ('intercept' in Fig. 1d, f).

Fig. 1 Parameter estimates (circles) and 95 % credible intervals (vertical lines). Results for the 3 different statistical models are displayed in each row of panels: microscopy (M|X; a, b), RDT sensitivity (R|M = 1,X; c, d), and one minus RDT specificity (R|M = 0,X; e, f). Age groups 1, 2, 3, and 4 refer to children 12–23, 24–35, 36–47, and 48–59 months old, respectively. Left panels depict the average of the random slope parameter p in model k (\(\alpha_{p}^{\left( k \right)}\)). Statistically significant parameters (i.e., 95 % credible intervals do not overlap with zero) are highlighted with black lines, while non-significant results are depicted with grey lines. Right panels depict regional heterogeneity in effect sizes, represented by the variance of random parameter p in model k (\(\tau_{p}^{2\left( k \right)}\)). A detailed description of our statistical model is provided in Additional file 1. Se and Sp stand for sensitivity and specificity, respectively.

What do the findings described above imply in relation to the post-test probability of infection \(p\left( {M |R,X} \right)\)? RDT results substantially changed the probability of infection (red and blue circles in Fig. 2) compared to the pre-test probability of infection (i.e., baseline prevalence; black circles in Fig. 2), which indicates that RDTs are very informative regarding the likelihood of infection. However, individuals with a negative RDT result still have a 20–70 % chance of being infected in rural settings (blue solid circles in Fig. 2a–c). For instance, the infection probability of a four-year-old child with an RDT-negative result in rural areas of the Hauts Basins region (blue solid circles in Fig.
2c) can be equal to the infection probability of an RDT-positive child of the same age from an urban area in the same region (red solid circles in Fig. 2f). Three regions in Fig. 2 were selected to illustrate the patterns between infection status and the main covariates, but an online tool was also developed to enable users to explore all factors included in our analysis for all regions (available through the website [27]).

Fig. 2 Probability of malaria infection for three regions in Burkina Faso (Sud-Ouest, Sahel, and Hauts Basins). Results are shown as a function of age group, urban/rural setting, and RDT result, for boys with no fever history in the previous 2 weeks. Age groups 0, 1, 2, 3, and 4 refer to children 6–11, 12–23, 24–35, 36–47, 48–59 months old, respectively. Pre-test probability of infection is shown in black, post-test probability of infection for RDT-negative individuals (RDT −) is shown in blue, and post-test probability of infection for RDT-positive individuals (RDT +) is shown in red. A large vertical distance between the red and blue solid circles indicates that RDT results are very informative regarding infection status.

In addition to variation between urban and rural areas within a particular region, this analysis also identified substantial differences between regions. For instance, RDT results seem more informative in the Sud-Ouest region than in the Sahel region (compare the vertical distance between the blue and red solid circles in Fig. 2a, b, d, e). Similarly, although urban areas tend to have lower infection probability than rural areas, there seems to be substantial heterogeneity among urban areas. For example, the probability of infection in urban areas of the Hauts Basins region is much lower than that in urban areas in Sud-Ouest and Sahel (compare Fig. 2d–f). These results suggest a complicated relationship between RDT outcomes and post-test probability of infection, indicating that a single malaria diagnosis and treatment policy, even just for urban areas in Burkina Faso, may not be effective. For instance, a test and treat policy might not be suitable for urban areas in the Sahel region, given the high probability of infection (>0.3) of RDT-negative children, whereas it might be a suitable option for urban areas in the Hauts Basins region. The relevance of spatial heterogeneity for malaria diagnosis and treatment policy can be illustrated with a hypothetical example. For instance, if policy makers decided that it is unacceptable to use a diagnostic test with a probability of false-negative results above 30 % (for example), then presumptive treatment would be recommended for all rural areas in Burkina Faso and for urban settings in the Sahel and Est regions. In another online tool that we have developed (available through the website [28]), readers can explore the geographic implications regarding recommended presumptive treatment for different thresholds of the probability of false-negative results.

Discussion

The statistical model for malaria microscopy confirms several of the relationships often described in malaria epidemiology in endemic regions, such as the increase in malaria prevalence with age [29] and generally higher prevalence in rural settings when compared to urban areas [30]. However, the impact of some of these factors (e.g., age and urban vs rural settings) on RDT performance is less commonly explored [but see 23, 31, 32]. The modelling results suggest that RDT specificity is higher for children in urban settings and younger children (Fig. 1e).
This might be due to these children being less likely to have had a past malaria infection, since false positives will typically arise in individuals with past malaria infections due to the relatively long time that the target antigen persists in the blood [33, 34]. Alternatively, these RDT results might be correct but microscopy might have failed to detect these infected individuals, which is likely to be a common phenomenon for individuals with low parasitaemia levels [35–37]. Finally, substantial geographical variation was observed regarding RDT sensitivity and specificity. Multiple reasons could explain this, including differences in the proportion of individuals with past exposure to malaria, exposure of RDTs to excessive heat and humidity during storage and distribution, intra-species diversity of the target antigen, and the parasite stage-specific expression of the antigen [4, 17, 38]. Most importantly, these multiple statistical models were combined to highlight the substantial spatial heterogeneity in malaria risk given RDT results. Under a countrywide test and treat policy, in some regions children with very similar infection probabilities might be denied treatment in a rural setting (due to a negative RDT result) but receive treatment in an urban setting (due to a positive RDT result). To avoid this, region-specific guidelines for malaria diagnosis and treatment might need to be developed, a process that can potentially be aided by the tool developed here. Unlike past approaches that have relied on a threshold for the pre-test probability of clinical malaria [39], the proposed tool focuses on the probability of false-negative results to determine the areas in which a test and treat policy might be warranted. Obviously, there are a number of other considerations that should be taken into account when determining policies for malaria diagnosis and treatment. For instance, the results of this study, in conjunction with direct and indirect cost information, would be very informative for policy makers if used in a decision-theoretic framework (a toy sketch of such a rule is given below). In particular, local heterogeneities (e.g., longer travel times and lower wages in rural settings) may play an important role in determining what is most cost effective in each setting. Nevertheless, this analysis clearly illustrates important shortcomings of a uniform test and treat policy across Burkina Faso. An important caveat to this study is that the data contain children that were sampled regardless of their symptoms, and there was no information on symptoms on the day of the survey. Although presence of fever during the previous two weeks was controlled for (a variable that was surprisingly unimportant), baseline malaria prevalence and RDT performance are likely to be different for individuals that seek help in health facilities due to the presence of symptoms. Thus, using data on syndromic individuals would certainly be more appropriate when designing policies for malaria diagnosis and treatment at health facilities. Unfortunately, data over large geographical regions on treatment-seeking individuals that are tested with multiple diagnostic methods (i.e., microscopy and RDT) are scarce. Additional studies are warranted to determine if similar spatial heterogeneities are found when using syndromic data in Burkina Faso and in other malaria-endemic regions.
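A toy version of the decision-theoretic rule alluded to above can be written in a few lines of R; the cost values are hypothetical placeholders rather than estimates, and a real analysis would need locally measured direct and indirect costs.

    ## Treat whenever the expected cost of withholding treatment exceeds the
    ## expected cost of treating (placeholder costs)
    treat_rule <- function(p_infect, cost_missed = 10, cost_overtreat = 1) {
      ## expected cost of withholding: p * cost of a missed infection;
      ## expected cost of treating:    (1 - p) * cost of an unnecessary ACT course
      ifelse(p_infect * cost_missed > (1 - p_infect) * cost_overtreat,
             "treat", "withhold")
    }
    treat_rule(c(0.05, 0.30, 0.70))
    # [1] "withhold" "treat"    "treat"

Here p_infect would be the post-test infection probability produced by the fitted models, so the same rule automatically yields different recommendations across regions and urban/rural settings.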
Nevertheless, the results in this article are relevant for determining where a mass screen and treat strategy is likely to be more effective than a mass drug administration approach, a topic that has received renewed interest from policy makers and researchers and for which evidence is still relatively scarce in moderate to high malaria transmission areas ([40] but see [31, 41]). Finally, although this study focuses on children under five years, similar ideas are likely to be applicable for comparing intermittent screening and treatment (IST) as an alternative to diagnosis by symptoms only or intermittent preventive treatment for pregnant women [42–45]. One potential concern regarding this study is the use of microscopy as the gold standard method. Although routine microscopy can yield highly variable results, it is important to note that microscopy was conducted by the parasitology laboratory of the CNRFP, the reference laboratory for malaria in Burkina Faso. Staff at this centre participate in various quality control programmes in parasitology (e.g., proficiency testing programmes to comply with the College of American Pathologists and WHO-AFRO checklists) and are subject to rigorous internal and external quality control [21, 46]. Furthermore, DHS surveys across the majority of the SSA countries have relied on expert microscopy, and these microscopy results have been extensively used by the scientific community as the gold standard (e.g., to create national malaria prevalence maps). Finally, expert microscopy is commonly used as the gold standard method against which RDT is evaluated [20, 34, 47, 48]. Although even expert microscopy can miss a large proportion of asymptomatic carriers [41], often because of low total parasite density [35–37], this does not explain the high proportion of positive microscopy but negative RDT results (i.e., false-negative results) found here in some regions. Why is there such a high proportion of false-negative results? There may be multiple non-exclusive reasons. RDTs might have been exposed to extreme conditions of temperature and humidity, which may have compromised their performance (as documented for Burkina Faso in [49]). The prevalence of other Plasmodium species might have been high (e.g., a 13.2 % prevalence of Plasmodium malariae was found in a village in rural Burkina Faso in 2010 [50]), undermining an RDT that exclusively targets P. falciparum. Finally, it has been shown that the performance of RDTs (both pLDH-based and HRP-2-based) is reduced in patients with lower parasite density [31, 51, 52]. While patients with false-negative results may be deemed clinically irrelevant given their likely low parasite density [51, 53], this reasoning does not apply here because this study focuses on children under five, and even low parasite densities are likely to result in fever [51] and lower mean haemoglobin levels [54] for this age group. Furthermore, it has been shown that malaria transmission readily occurs at low parasitaemia [55, 56] and that low-parasitaemia infections significantly contribute to malaria transmission due to the high prevalence of these infections [35, 56, 57]. These findings suggest that, even if false-negative results arise from low parasitaemia, it remains important to treat these individuals. It is also important to realize that, for children with very high malaria prevalence (e.g., older children in rural areas), even an RDT with high sensitivity is likely to generate a relatively high probability of false negatives [58].
For instance, if sensitivity is 0.95 and malaria prevalence is 0.8, Bayes theorem indicates that the probability of a false negative is $$\begin{aligned} &p\left( {M = 1|R = 0} \right) \\ &\ge \frac{{p\left( {R = 0|M = 1} \right)p\left( {M = 1} \right)}}{{p\left( {R = 0|M = 1} \right)p\left( {M = 1} \right) + p\left( {R = 0|M = 0} \right)p\left( {M = 0} \right)}} \\ &= \frac{{\left( {1 - 0.95} \right) \times 0.8}}{{\left( {1 - 0.95} \right) \times 0.8 + 1 \times \left( {1 - 0.8} \right)}} = \frac{1}{6} \end{aligned} $$ In other words, among all RDT-negative children, at least one out of six will actually be infected with malaria. In this calculation, RDT specificity \(p\left( {R = 0|M = 0} \right)\) was assumed to be 100 %, but the probability of false-negative results can be even higher for lower values of RDT specificity. Indeed, the above scenario is very optimistic. For instance, using the same DHS data from Burkina Faso, Samadoulougou et al. [20] have shown that RDT sensitivity was actually closer to 90 % (89.9 %, with a 95 % confidence interval (CI) of 89–90.8) and specificity was very low (50.4 %, 95 % CI of 48.3–52.6). Another important remark is that the 2010 DHS data for Burkina Faso were predominantly collected during the rainy season. As a result, the well-documented seasonal differences in malaria risk and RDT performance in Burkina Faso [32, 41, 44, 51–53, 56, 59–65] were not accounted for in the statistical models. These seasonal differences can be as dramatic as the geographical differences described here. For instance, similar to the observed pattern for the urban vs rural areas in the Hauts Basins region, a negative RDT test has been shown to reduce the probability of malaria to almost zero during the dry season but not in the rainy season [51]. An important implication of not taking seasonality into account in our models is that any policy derived from these tools is likely to be applicable only to the rainy season. Finally, DHS data were collected during approximately the same time period for each region, except for the Centre, Centre-Nord, Nord, and Plateau Central regions. Because of these differences regarding when data were collected in each region, geographical comparisons involving the regions cited above should be interpreted carefully, as they may be confounded with seasonality effects.

Conclusions

Similar to the findings in [39, 51], the analysis presented here suggests that a generalized test-based policy should not be used uniformly across all contexts. In particular, even with improved diagnostic methods (e.g., positive control wells [66]), the 'back-of-the-envelope' calculation above reveals how the probability of false-negative results will still be relatively large when prevalence is high. Unfortunately, because prevalence varies significantly even within the same region (e.g., according to age groups, season, rural vs urban), developing sound and straightforward diagnostic guidelines remains an important challenge. This article has shown how different statistical models can come together to inform context-specific guidelines for malaria diagnosis and treatment, potentially improving the use of resources (e.g., reducing wasted RDTs in regions where false-negative probabilities are very high) and reducing malaria burden. Ultimately, bridging the gap between information users and these statistical models will be critical to foster evidence-based decision making and better resource allocation.

References

Källander K, Nsungwa-Sabiiti J, Peterson S.
Symptom overlap for malaria and pneumonia—policy implications for home management strategies. Acta Trop. 2004;90:211–4. Hume JCC, Barnish G, Mangal T, Armazio L, Streat E, Bates I. Household cost of malaria overdiagnosis in rural Mozambique. Malar J. 2008;7:33. Amexo M, Tolhurst R, Barnish G, Bates I. Malaria misdiagnosis: effects on the poor and vulnerable. Lancet. 2004;364:1896–8. Bell D, Wongsrichanalai C, Barnwell JW. Ensuring quality and access for malaria diagnosis: how can it be achieved? Nat Rev Microbiol. 2006;4:682–95. D'Acremont V, Lengeler C, Mshinda H, Mtasiwa D, Tanner M, Genton B. Time to move from presumptive malaria treatment to laboratory-confirmed diagnosis and treatment in African children with fever. PLoS Med. 2009;6:e252. Bloland PB. Drug resistance in malaria. Geneva: World Health Organization, Department of Communicable Disease Surveillance and Response; 2001. Abeku TA, Kristan M, Jones C, Beard J, Mueller DH, Okia M, et al. Determinants of the accuracy of rapid diagnostic tests in malaria case management: evidence from low and moderate transmission settings in the East African highlands. Malar J. 2008;7:202. World Health Organization. T3: Test. Treat. Track initiative. http://www.who.int/malaria/areas/test_treat_track/en/. Wongsrichanalai C, Barcus MJ, Muth S, Sutamihardja A, Wernsdorfer WH. A review of malaria diagnostic tools: microscopy and rapid diagnostic test (RDT). Am J Trop Med Hyg. 2007;77:119–27. Ansah EK, Epokor M, Whitty CJM, Yeung S, Hansen KS. Cost-effectiveness analysis of introducing RDTs for malaria diagnosis as compared to microscopy and presumptive diagnosis in central and peripheral public health facilities in Ghana. Am J Trop Med Hyg. 2013;89:724–36. Ansah EK, Narh-Bana S, Epokor M, Akanpigbiam S, Quartey AA, Gyapong J, et al. Rapid testing for malaria in settings where microscopy is available and peripheral clinics where only presumptive treatment is available: a randomised controlled trial in Ghana. BMJ. 2010;340:c930. Yukich J, D'Acremont V, Kahama J, Swai N, Lengeler C. Cost savings with rapid diagnostic tests for malaria in low-transmission areas: evidence from Dar es Salaam, Tanzania. Am J Trop Med Hyg. 2010;83:61–8. Cohen J, Dupas P, Schaner S. Price subsidies, diagnostic tests, and targeting of malaria treatment: evidence from a randomized controlled trial. Am Econ Rev. 2015;105:609–45. Bisoffi Z, Sirima BS, Angheben A, Lodesani C, Gobbi F, Tinto H, et al. Rapid malaria diagnostic tests vs. clinical management of malaria in rural Burkina Faso: safety and effect on clinical decisions. A randomized trial. Trop Med Int Health. 2009;14:491–8. Hamer DH, Ndhlovu M, Zurovac D, Fox M, Yeboah-Antwi K, Chanda P, et al. Improved diagnostic testing and malaria treatment practices in Zambia. JAMA. 2007;297:2227–31. Abba K, Deeks JJ, Olliaro P, Naing CM, Jackson SM, Takwoingi Y, Donegan S, Garner P. Rapid diagnostic tests for diagnosing uncomplicated P. falciparum malaria in endemic countries. Cochrane Database Syst Rev. 2011;(7):CD008122. doi:10.1002/14651858.CD008122.pub2. Cheng A, Bell D. Evidence behind the WHO guidelines: hospital care for children: what is the precision of rapid diagnostic tests for malaria? J Trop Pediatr. 2006;52:386–9. Moody A. Rapid diagnostic tests for malaria parasites. Clin Microbiol Rev. 2002;15:66–78. ICF. The Demographic and Health Surveys (DHS) Program. http://www.dhsprogram.com/. Samadoulougou S, Kirakoya-Samadoulougou F, Sarrassat S, Tinto H, Bakiono F, Nebie I, et al.
Paracheck rapid diagnostic test for detecting malaria infection in under five children: a population-based survey in Burkina Faso. Malar J. 2014;13:1. Institut National de la Statistique et de la Démographie, ICF International. Enquête Démographique et de Santé et à Indicateurs Multiples du Burkina Faso 2010. Calverton: INSD and ICF International; 2012. Adigun AB, Gajere EN, Oresanya O, Vounatsou P. Malaria risk in Nigeria: Bayesian geostatistical modelling of 2010 malaria indicator survey data. Malar J. 2015;14:156. Magalhaes RJS, Clements ACA. Mapping the risk of anaemia in preschool-age children: the contribution of malnutrition, malaria, and helminth infections in West Africa. PLoS Med. 2011;8:e1000438. Gelman A, Hill J. Data analysis using regression and multilevel/hierarchical models. Cambridge: Cambridge University Press; 2007. R Core Team. A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2013. RStudio. Shiny: a web application framework for R. http://www.shiny.rstudio.com/. Valle D, Millar J, Amratia P. Probability of malaria infection in Burkina Faso. https://denisvalle.shinyapps.io/burkina_faso_tool/. Valle D, Millar J, Amratia P. Regions for presumptive treatment in Burkina Faso. https://denisvalle.shinyapps.io/burkina_faso_map/. Smith DL, Guerra CA, Snow RW, Hay SI. Standardizing estimates of the Plasmodium falciparum parasite rate. Malar J. 2007;6:131. Pond BS. Malaria indicator surveys demonstrate a markedly lower prevalence of malaria in large cities of sub-Saharan Africa. Malar J. 2013;12:313. Tiono AB, Ouedraogo A, Diarra A, Coulibaly S, Soulama I, Konate AT, et al. Lessons learned from the use of HRP-2 based rapid diagnostic test in community-wide screening and treatment of asymptomatic carriers of Plasmodium falciparum in Burkina Faso. Malar J. 2014;13:30. Maltha J, Guiraud I, Lompo P, Kabore B, Gillet P, Van Geet C, et al. Accuracy of PfHRP2 versus Pf-pLDH antigen detection by malaria rapid diagnostic tests in hospitalized children in a seasonal hyperendemic malaria transmission area in Burkina Faso. Malar J. 2014;13:20. Kyabayinze DJ, Tibenderana JK, Odong GW, Rwakimari JB, Counihan H. Operational accuracy and comparative persistent antigenicity of HRP2 rapid diagnostic tests for Plasmodium falciparum malaria in a hyperendemic region of Uganda. Malar J. 2008;7:221. Swarthout TD, Counihan H, Senga RKK, van den Broek I. Paracheck-Pf accuracy and recently treated Plasmodium falciparum infections: is there a risk of over-diagnosis? Malar J. 2007;6:58. Slater HC, Ross A, Ouedraogo A, White LJ, Nguon C, Walker PGT, et al. Assessing the impact of next-generation rapid diagnostic tests on Plasmodium falciparum malaria elimination strategies. Nature. 2015;528:S94–101. Kattenberg JH, Tahita CM, Versteeg IAJ, Tinto H, Traore Coulibaly M, D'Alessandro U, et al. Evaluation of antigen detection tests, microscopy, and polymerase chain reaction for diagnosis of malaria in peripheral blood in asymptomatic pregnant women in Nanoro, Burkina Faso. Am J Trop Med Hyg. 2012;87:251–6. Hopkins H, Bebell L, Kambale W, Dokomajilar C, Rosenthal PJ, Dorsey G. Rapid diagnostic tests for malaria at sites of varying transmission intensity in Uganda. J Infect Dis. 2008;197:510–8. Baker J, McCarthy J, Gatton M, Kyle DE, Belizario V, Luchavez J, et al. Genetic diversity of Plasmodium falciparum Histidine-Rich Protein 2 (PfHRP2) and its effect on the performance of PfHRP2-based rapid diagnostic tests. J Infect Dis. 2005;192:870–7.
Bisoffi Z, Tinto H, Sirima BS, Gobbi F, Angheben A, Buonfrate D, et al. Should malaria treatment be guided by a point of care rapid test? A threshold approach to malaria management in rural Burkina Faso. PLoS ONE. 2013;8:e58019. WHO. Mass drug administration, mass screening and treatment and focal screening and treatment for malaria. Geneva: World Health Organization; 2015. Tiono AB, Guelbeogo MW, Sagnon NF, Nebie I, Sirima SB, Mukhopadhyay A, et al. Dynamics of malaria transmission and susceptibility to clinical malaria episodes following treatment of Plasmodium falciparum asymptomatic carriers: results of a cluster-randomized study of community-wide screening and treatment, and a parallel entomology study. BMC Infect Dis. 2013;13:535. Tagbor H, Cairns M, Bojang K, Coulibaly SO, Kayentao K, Williams J, et al. A non-inferiority, individually randomized trial of intermittent screening and treatment versus intermittent preventive treatment in the control of malaria in pregnancy. PLoS ONE. 2015;10:e0132247. Tahita MC, Tinto H, Menten J, Ouedraogo J-B, Guiguemde RT, van Geertruyden JP, et al. Clinical signs and symptoms cannot reliably predict Plasmodium falciparum malaria infection in pregnant women living in an area of high seasonal transmission. Malar J. 2013;12:464. Williams JE, Cairns M, Njie F, Quaye SL, Awine T, Oduro A, et al. The performance of a rapid diagnostic test in detecting malaria infection in pregnant women and the impact of missed infections. Clin Infect Dis. 2015;62:837–44. Kyabayinze DJ, Zongo I, Cunningham J, Gatton M, Angutoko P, Ategeka J, et al. HRP2 and pLDH-based rapid diagnostic tests, expert microscopy, and PCR for detection of malaria infection during pregnancy and at delivery in areas of varied transmission: a prospective cohort study in Burkina Faso and Uganda. PLoS ONE. 2016;11:e0156954. Ibrahim F, Dosoo D, Kronmann KC, Ouedraogo I, Anyorigiya T, Abdul H, et al. Good clinical laboratory practices improved proficiency testing performance at clinical trials centers in Ghana and Burkina Faso. PLoS ONE. 2012;7:e39098. Nkrumah B, Acquah SEK, Ibrahim L, May J, Brattig N, Tannich E, et al. Comparative evaluation of two rapid field tests for malaria diagnosis: Partec Rapid Malaria Test and Binax Now Malaria Rapid Diagnostic Test. BMC Infect Dis. 2011;11:143. Ansah EK, Narh-Bana S, Affran-Bonful H, Bart-Plange C, Cundill B, Gyapong M, et al. The impact of providing rapid diagnostic malaria tests on fever management in the private retail sector in Ghana: a cluster randomized trial. BMJ. 2015;350:h1019. Albertini A, Lee E, Coulibaly SO, Sleshi M, Faye B, Mationg ML, et al. Malaria rapid diagnostic test transport and storage conditions in Burkina Faso, Senegal, Ethiopia and the Philippines. Malar J. 2012;11:406. Gneme A, Guelbeogo WM, Riehle MM, Tiono AB, Diarra A, Kabre GB, et al. Plasmodium species occurrence, temporal distribution and interaction in a child-aged population in rural Burkina Faso. Malar J. 2013;12:67. Bisoffi Z, Sirima SB, Menten J, Pattaro C, Angheben A, Gobbi F, et al. Accuracy of a rapid diagnostic test on the diagnosis of malaria infection and of malaria-attributable fever during low and high transmission season in Burkina Faso. Malar J. 2010;9:192. Diarra A, Nebie I, Tiono A, Sanon S, Soulama I, Ouedraogo A, et al. Seasonal performance of a malaria rapid diagnostic test at community health clinics in a malaria-hyperendemic region of Burkina Faso. Parasit Vectors. 2012;5:103. Bisoffi Z, Sirima SB, Meheus F, Lodesani C, Gobbi F, Angheben A, et al.
Strict adherence to malaria rapid test results might lead to a neglect of other dangerous diseases: a cost benefit analysis from Burkina Faso. Malar J. 2011;10:226. McElroy PD, ter Kuile FO, Lal AA, Bloland PB, Hawley WA, Oloo AJ, et al. Effect of Plasmodium falciparum parasitemia density on hemoglobin concentrations among full-term, normal birth weight children in western Kenya, IV. The Asembo Bay cohort project. Am J Trop Med Hyg. 2000;62:504–12. Churcher TS, Bousema T, Walker M, Drakeley C, Schneider P, Ouedraogo AL, et al. Predicting mosquito infection from Plasmodium falciparum gametocyte density and estimating the reservoir of infection. eLife. 2013;2:e00626. Ouedraogo AL, Goncalves BP, Gneme A, Wenger EA, Guelbeogo MW, Ouedraogo A, et al. Dynamics of the human infectious reservoir for malaria determined by mosquito feeding assays and ultrasensitive malaria diagnosis in Burkina Faso. J Infect Dis. 2016;213:90–9. Ouedraogo AL, Bousema T, Schneider P, de Vlas SJ, Ilboudo-Sanogo E, Cuzin-Ouattara N, et al. Substantial contribution of submicroscopical Plasmodium falciparum gametocyte carriage to the infectious reservoir in an area of seasonal transmission. PLoS ONE. 2009;4:e8410. Graz B, Willcox M, Szeless T, Rougemont A. "Test and treat" or presumptive treatment for malaria in high transmission situations? A reflection on the latest WHO guidelines. Malar J. 2011;10:136. Ouedraogo A, Tiono AB, Diarra A, Sanon S, Yaro JB, Ouedraogo E, et al. Malaria morbidity in high and seasonal malaria transmission area of Burkina Faso. PLoS ONE. 2013;8:e50036. im Kampe EO, Muller O, Sie A, Becher H. Seasonal and temporal trends in all-cause and malaria mortality in rural Burkina Faso, 1998–2007. Malar J. 2015;14:300. Beiersmann C, Bountogo M, Tiendrebeogo J, de Allegri M, Louis VR, Coulibaly B, et al. Falciparum malaria in young children of rural Burkina Faso: comparison of survey data in 1999 with 2009. Malar J. 2011;10:296. Ilboudo-Sanogo E, Tiono A, Sagnon N, Cuzin-Ouattara N, Nebie I, Sirima SB, et al. Temporal dynamics of malaria transmission in two rural areas of Burkina Faso with two ecological differences. J Med Entomol. 2010;47:618–24. Geiger C, Agustar HK, Compaore G, Coulibaly B, Sie A, Becher H, et al. Declining malaria parasite prevalence and trends of asymptomatic parasitaemia in a seasonal transmission setting in north-western Burkina Faso between 2000 and 2009–2012. Malar J. 2013;12:27. Schrot-Sanyan S, Gaidot-Pagnier S, Abou-Bacar A, Sirima SB, Candolfi E. Malaria relevance and diagnosis in febrile Burkina Faso travellers: a prospective study. Malar J. 2013;12:270. Tiono AB, Diarra A, Sanon S, Nebie I, Konate AT, Pagnoni F, et al. Low specificity of a malaria rapid diagnostic test during an integrated community case management trial. Infect Dis Ther. 2013;2:27–36. Perkins MD, Bell DR. Working without a blindfold: the critical role of diagnostics in malaria control. Malar J. 2008;7(Suppl 1):S5. DV wrote the first draft, conducted the analyses, and developed the online tools. PA and JM provided critical feedback and substantially edited the manuscript. All authors read and approved the final manuscript. We thank Joanna Tucker-Lima and Paul Psychas for providing feedback on an earlier version of this manuscript. The datasets supporting the conclusions of this article are available in the DHS data portal http://www.dhsprogram.com/data/dataset/Burkina-Faso_Standard-DHS_2010.cfm?flag=0. This research was funded through an Early Career Seed Fund to DV from the University of Florida.
School of Forest Resources and Conservation, University of Florida, 136 Newins-Ziegler Hall, Gainesville, FL, 32611, USA: Denis Valle, Justin Millar & Punam Amratia. Correspondence to Denis Valle. Additional file 1. Description of statistical models. Additional file 2. Model validation. Valle, D., Millar, J. & Amratia, P. Spatial heterogeneity can undermine the effectiveness of country-wide test and treat policy for malaria: a case study from Burkina Faso. Malar J 15, 513 (2016). doi:10.1186/s12936-016-1565-2. Keywords: malaria diagnostics, presumptive treatment, test and treat, spatial heterogeneity.
Controllable ion transport by surface-charged graphene oxide membrane. Mengchen Zhang1, Kecheng Guan1, Yufan Ji1, Gongping Liu1, Wanqin Jin1 & Nanping Xu1. Nature Communications volume 10, Article number: 1253 (2019). Ion transport is crucial for biological systems and membrane-based technology. Atomic-thick two-dimensional materials, especially graphene oxide (GO), have emerged as ideal building blocks for developing synthetic membranes for ion transport. However, the exclusion of small ions in a pressured filtration process remains a challenge for GO membranes. Here we report manipulation of membrane surface charge to control ion transport through GO membranes. The highly charged GO membrane surface repels high-valent co-ions owing to its high interaction energy barrier while concomitantly restraining permeation of electrostatically attracted low-valent counter-ions based on balancing overall solution charge. The deliberately regulated surface-charged GO membranes demonstrate remarkable enhancement of ion rejection with intrinsically high water permeance that exceeds the performance limits of state-of-the-art nanofiltration membranes. This facile and scalable surface charge control approach opens opportunities in selective ion transport for the fields of water transport, biomimetic ion channels and biosensors, ion batteries and energy conversions. A common natural phenomenon, "like charges repel while unlike charges attract", is the rule of ion transport that is essential to our daily life1. In biological membranes, the selectivity filter of an ion channel is enriched with charged residues, which enable biological systems to achieve ultrahigh efficiency while displaying selectivity for transmembrane ion transport2,3,4. These fascinating properties of biological membranes have motivated researchers to exploit synthetic membranes with ionic transport channels, which have received particular attention in the selective removal of salts from water to produce industrial soft water and potable water5,6. GO membranes are expected to share structural features with biological membranes owing to their water-transport pathways through assembled GO laminates, which has generated immense interest from the scientific community to study their transport properties and mechanisms7,8,9,10,11. Computer simulations12 and self-diffusion measurements13,14 have demonstrated that specific ions can selectively permeate through GO membranes, mainly based on the size-sieving effect and the interaction between ions and the GO membrane, although the high-throughput manufacture and industrial implementation of these GO membranes need to be further studied to validate their practicality and scalability. However, in pressure-driven filtration processes, GO membranes have generally failed to selectively transport ions, mainly because their interlayer spacing was too large to sieve ions, especially when the membranes were swollen in water15,16. Numerous attempts to modulate the interlayer spacing of GO membranes have been undertaken by partial reduction17, cross-linking18, and building multilayer architectures19,20. Precisely tuning the GO interlayer spacing within the subnanometer range is challenging21; it remains difficult to achieve high salt rejection in a pressured filtration process, and such tuning furthermore sacrifices the intrinsically fast water permeation through the interlayer channels.
Herein, inspired by the charge interaction principle and the function of biological ion channels, we demonstrate a strategy of creating surface charges on a GO membrane to realize controllable ion transport without impeding water filtration through the GO membrane. Tunable charges attached on the surface of pre-stacked GO laminates exhibited dominant electrostatic repulsion against doubly charged co-ions (with charge of the same sign as the membrane surface charge) that outweighed the weak electrostatic attraction toward singly charged counter-ions (with charge of the opposite sign to the membrane surface charge). By simply manipulating the charge interactions between the membrane surface and the ions in water, transport of ions from typical AB2- or A2B-type salts can be prevented while the water remains free to permeate through the membrane (Fig. 1a). Design of surface-charged graphene oxide (GO) membrane. a Schematic of the design of surface-charged GO membranes by coating polyelectrolytes on the surface of GO laminates to realize controllable ion transport. Coating polycations such as polydiallyl dimethyl ammonium (PDDA), polyethylene imine (PEI), and polyallylamine hydrochloride (PAH) led the GO membrane to exclude AB2-type salts based on the positively charged membrane surface, which exhibits a dominant electrostatic repulsion against divalent cations A2+ that is favored over the electrostatic attraction toward monovalent anions B−; coating polyanions such as polystyrene sulfonate (PSS), polyacrylic acid (PAA), and sodium alginate (SA) led the GO membrane to exclude A2B-type salts based on the negatively charged membrane surface, which exhibits a dominant electrostatic repulsion against divalent anions B2− that is favored over the electrostatic attraction toward monovalent cations A+. b Schematic of the preparation of surface-charged GO membranes. GO laminates were first prepared by filtrating GO aqueous suspension on a porous polyacrylonitrile (PAN) substrate via a pressure-assisted filtration–deposition method, followed by dip-coating a dilute polyelectrolyte solution on the surface of the pre-stacked GO laminates to form the surface-charged GO membranes. c Photograph of a large-area surface-charged GO membrane (GO deposition amount of 5 mg with 0.1 wt% PDDA polyelectrolyte surface coating) with a diameter of 15 cm (effective area: ~180 cm2). d Scanning electron microscopic cross-sectional views of surface-charged GO membranes on top of a porous PAN substrate (GO deposition amount of 0.5 mg with 0.1 wt% PEI polyelectrolyte surface coating; membrane diameter of 4.7 cm with effective area of ~17.35 cm2). e Surface charge densities of surface-charged GO membranes calculated from the measured membrane zeta potentials based on Gouy–Chapman theory. Insets are molecular structures of the surface polyelectrolytes with ionized functional groups. Manipulation of GO membrane surface charge. First, we prepared stabilized GO laminates using low O/C ratio (0.186) GO materials on top of a porous surface-hydrolyzed polyacrylonitrile (h-PAN) substrate with strong interfacial adhesion22, followed by the attachment of tunable surface charges via dip-coating an array of selected polyelectrolytes (polycations: polydiallyl dimethyl ammonium (PDDA), polyethylene imine (PEI), polyallylamine hydrochloride (PAH); polyanions: polystyrene sulfonate (PSS), polyacrylic acid (PAA), sodium alginate (SA)) on the surface of pre-stacked GO laminates to create surface-charged GO membranes (Fig. 1b and Supplementary Fig. 1).
Representative photos of a typical membrane and its morphology are displayed in Fig. 1c, d, showing a uniform, large-area (~15 cm in diameter) membrane with a thin, defect-free charged GO layer of ~100 nm thickness (Supplementary Figs 2 and 3). The as-prepared surface-charged GO membrane preserved the laminar structure of the GO laminate (Supplementary Fig. 4) with an ultrathin polyelectrolyte layer (Supplementary Figs 5, 6 and Table 1) firmly integrated on the surface via hydrogen bonding and/or electrostatic attraction (Supplementary Figs 7, 8 and 10a). X-ray photoelectron spectroscopy (XPS; Supplementary Fig. 8) and infrared (IR; Supplementary Fig. 9) spectra of the surface-charged GO membranes showed newly introduced functional groups on the GO membrane surface derived from the top polyelectrolytes. Protonation of amine groups or deprotonation of sulfonic/carboxyl/hydroxyl groups in water accounted for the charge properties of the membrane surface, which could be finely tuned by the intensity and amount of these ionizable functional groups. As quantified by membrane surface charge density23, PDDA, with stronger protonation than PEI and PAH, produced the most positively charged GO-PDDA membrane with a surface charge density of +1.8 mC m−2; likewise, the most negatively charged GO-PSS membrane, with a surface charge density of −2.32 mC m−2, resulted from the attached PSS, which deprotonates more strongly than PAA and SA (Fig. 1e). An identical order of membrane surface charge density with respect to the zeta potential of the attached polyelectrolytes (Supplementary Fig. 10) suggests that the charge properties of polyelectrolytes can be easily translated onto the GO membrane surface via a simple coating method. Ion transport behavior through surface-charged GO membrane. To explore the roles of membrane surface charges, we investigated ion transport behavior through the surface-charged GO membranes by evaluating salt permeability and water/salt selectivity23 in filtration measurements using model saline solutions containing MgCl2 or Na2SO4 (Fig. 2a, b). Tuning the membrane surface charge from highly positive to highly negative, MgCl2 permeability showed a continuous increase while Na2SO4 permeability showed an approximately linear reduction. Consequently, an extraordinarily high H2O/MgCl2 selectivity of 2.2 × 105 was achieved in the highly positively charged GO membrane (GO-PDDA), but it underwent an exponential decay of more than an order of magnitude as the membrane became negatively charged (GO-PSS). Conversely, the relatively low H2O/Na2SO4 selectivity of the highly positively charged GO membrane (GO-PDDA) was boosted more than 20-fold, reaching 5.4 × 105, as the membrane surface charge was tuned to a highly negative value (GO-PSS). These exactly opposite trends led to the hypothesis that a positively charged GO membrane would tend to prevent transport of an AB2 salt containing the divalent cation (A2+), whereas a negatively charged GO membrane would show a tendency to exclude an A2B salt containing the divalent anion (B2−). The permeation of salts appeared to be dominated by electrostatic repulsion of the charged membrane surface against high-valent co-ions, although there was also an electrostatic attraction between membrane surface charges and a low-valent counter-ion. Accordingly, we speculated that ion transport through the surface-charged GO membrane is also related to the ion valence, which might determine the electrostatic interactions with the membrane surface as well.
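The surface charge densities quoted above follow from the measured zeta potentials through the Gouy–Chapman relation reproduced in the Methods. A minimal Python sketch is given below; the zeta potentials used are round illustrative numbers rather than the measured values, and the expression is taken with the sign of ζ so that σ carries the sign of the surface charge:

```python
import math

F, R, T = 96485.0, 8.3145, 298.0   # C mol^-1, J mol^-1 K^-1, K
EPS = 6.933e-10                    # permittivity of water, F m^-1 (Methods value)

def surface_charge_density_mC(zeta_mV, c_mol_L):
    """Gouy-Chapman surface charge density (mC m^-2) from zeta potential,
    for a symmetric 1:1 electrolyte of concentration c (e.g., 1 mM KCl)."""
    zeta = zeta_mV * 1e-3                                 # mV -> V
    c = c_mol_L * 1000.0                                  # mol L^-1 -> mol m^-3
    kappa = math.sqrt(2.0 * F * F * c / (EPS * R * T))    # inverse Debye length, m^-1
    x = F * zeta / (2.0 * R * T)
    return EPS * kappa * zeta * (math.sinh(x) / x) * 1e3  # C m^-2 -> mC m^-2

# In 1 mM KCl the Debye length 1/kappa is ~9.6 nm; zeta potentials of a few
# tens of mV then yield charge densities of a few mC m^-2, as in Fig. 1e:
print(surface_charge_density_mC(+25.0, 0.001))  # positively charged surface (GO-PDDA-like)
print(surface_charge_density_mC(-30.0, 0.001))  # negatively charged surface (GO-PSS-like)
```

Because sinh(x)/x grows with |ζ|, the charge density rises slightly faster than linearly with the zeta potential, which is why strongly protonating or deprotonating polyelectrolytes translate into disproportionately larger surface charge densities.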
We therefore conducted a set of filtrations by varying the valence ratio of cation and anion (Z+/Z−) of the salts (Fig. 2c, d). The results indicated that salt permeation through the surface-charged GO membranes was closely dependent on the valence ratio of cation and anion. For membrane surfaces with positive charges (e.g., GO-PDDA) that repelled the cation while attracting the anion, salt permeation became less suppressed as Z+/Z− changed from 2/1 to 1/2, leading to water/salt selectivity ranked in the order MgCl2 > MgSO4 > Na2SO4. In contrast, the relationship between salt permeability and water/salt selectivity versus Z+/Z− displayed the reverse order in negatively charged membranes (e.g., GO-PSS), where the cation was attracted while the anion was repelled. There is a competition between the electrostatic repulsion of co-ions by the membrane surface and the electrostatic attraction of counter-ions toward the membrane surface. In the case of either Z+/Z− < 1 (e.g., Na2SO4) for positively charged membranes or Z+/Z− > 1 (e.g., MgCl2) for negatively charged membranes, the attraction of high-valent counter-ions could dominate over the repulsion of low-valent co-ions, thereby facilitating salt permeation through the membrane. Ion transport mechanism of surface-charged graphene oxide (GO) membranes. a MgCl2 permeability and H2O/MgCl2 selectivity and b Na2SO4 permeability and H2O/Na2SO4 selectivity of surface-charged GO membranes with various membrane zeta potentials obtained by streaming potential measurements and surface charge densities calculated from the Gouy–Chapman equation. Orange squares are MgCl2 permeability and orange circles are H2O/MgCl2 selectivity; green squares are Na2SO4 permeability and green circles are H2O/Na2SO4 selectivity; solid lines are best fits for the data. c, d Salt permeability and water/salt selectivity of c positively charged and d negatively charged GO membranes for MgCl2, MgSO4, and Na2SO4 salts with varied Z+/Z− (ratio of the valence of cation and anion) values. e–g Surface element integration model predictions of Derjaguin–Landau–Verwey–Overbeek (DLVO) interaction energies between a charged ion and the charged membrane surface by adding Van der Waals attraction and electrostatic repulsion. e Orange: net DLVO interaction energy (solid line), electrostatic repulsion (dashed line), and Van der Waals attraction (dotted line) between Mg2+ and the GO-PDDA membrane; yellow: net DLVO interaction energy (solid line), electrostatic repulsion (dashed line), and Van der Waals attraction (dotted line) between Na+ and the GO-PDDA membrane; f Navy: net DLVO interaction energy (solid line), electrostatic repulsion (dashed line), and Van der Waals attraction (dotted line) between SO42− and the GO-PSS membrane; blue: net DLVO interaction energy (solid line), electrostatic repulsion (dashed line), and Van der Waals attraction (dotted line) between Cl− and the GO-PSS membrane; g Navy: net DLVO interaction energy (solid line), electrostatic repulsion (dashed line), and Van der Waals attraction (dotted line) between SO42− and the GO-PSS membrane; green: net DLVO interaction energy (solid line), electrostatic repulsion (dashed line), and Van der Waals attraction (dotted line) between SO42− and the GO membrane. h Calculation formulas and schematic of the surface element integration (SEI) model in the calculation of DLVO interactions.
Error bars represent standard deviations for three measurements. Interestingly, the NaCl transport behaviors were almost unchanged between positively and negatively charged GO membranes (Supplementary Fig. 11), owing to a balanced electrostatic interaction with monovalent co-ions and counter-ions. The variation of MgSO4 permeability in positively and negatively charged GO membranes reflects an additional size discrimination effect on the ionic transport (a detailed discussion can be found in the note for Supplementary Fig. 11). In addition to the salt valence ratio, which controls ion transport based on the dominant electrostatic exclusion effect in dilute salt solution, the salt concentration is another factor affecting ion transport via an electrostatic screening effect. The MgCl2 (or Na2SO4) permeability of the positively (or negatively) charged GO membrane was enhanced by increasing the salt concentration in the feed solution (Supplementary Fig. 12). A GO membrane with a given surface charge density possesses a certain capacity for repelling or attracting ions. Once excessive ions have been introduced, e.g., by feeding a high salt concentration, the charge screening effect lessens the exclusion effect, which ultimately contributes to a higher ion transport rate through the membrane. Ion transport mechanism through surface-charged GO membrane. To support our hypothesis, we further examined the underlying ion transport mechanism taking place in the surface-charged GO membranes with the aid of theoretical calculations. The repulsive force of the charged membrane surface against co-ions can be reflected by the interaction energy between ions and the membrane surface (Fig. 2e and Supplementary Fig. 13), which was calculated using the Derjaguin–Landau–Verwey–Overbeek (DLVO) theory24,25, involving estimation of Van der Waals and electrostatic double-layer interaction energies. Van der Waals attraction was determined using Hamaker's microscopic approach, and electrostatic repulsion was derived from the solution of the Poisson–Boltzmann equation. We employed a surface element integration (SEI) model for the DLVO interaction calculation (Fig. 2f), which considers the total interaction energy between an ion and the membrane surface by integrating the interaction energy per unit area. We compared the net DLVO interaction energy curves (Fig. 2e–g) for (i) the highly positively charged GO-PDDA membrane against divalent co-ions Mg2+ versus monovalent Na+; (ii) the highly negatively charged GO-PSS membrane against divalent co-ions SO42− versus monovalent Cl−; and (iii) the highly negatively charged GO-PSS membrane versus the negatively charged pristine GO membrane against SO42−. Surprisingly, the charged membrane surface exhibited a large interaction energy barrier for high-valent co-ion transport, a barrier that dropped sharply for the low-valent co-ion in either the positive charge case (i) or the negative charge case (ii). It is reasonable to assume that the attraction of the charged membrane surface for the counter-ions would follow the same rule. Thus, these results demonstrate the possibility of controlling the ion transport through a designed surface-charged membrane aimed at a given type of salt. For example, for a given AB2-type salt, a positively charged membrane would be expected to exhibit a much stronger repulsion of A2+ than attraction of B−, thereby tending to repel A2+ co-ions from the membrane. To balance the charge in solution, B− counter-ions would be simultaneously excluded.
This accounts for the fact that the observed AB2-type salt (e.g., MgCl2) permeability is severalfold lower than that of A2B-type salt (e.g., Na2SO4) in positively charged GO membranes. Similarly, A2B-type salt (e.g., Na2SO4) permeation is restricted by negatively charged GO membranes. In addition, increasing the interaction energy barrier by enhancing the surface charge density, as in case (iii), is an efficient way to improve the exclusion of salts containing divalent co-ions. Therefore, a membrane with highly positive charges restrains the transport of MgCl2, whereas a membrane with highly negative charges generates a significant energy barrier for the transport of Na2SO4. Nanofiltration performance of surface-charged GO membrane. The controllable ion transport achieved in surface-charged GO membranes encouraged us to apply them in nanofiltration. The charged GO membranes, with highly tunable exclusion of divalent salts while allowing free permeation of monovalent salts (Supplementary Fig. 14), perfectly fit the spectrum of the nanofiltration process, which is used to remove polyvalent salts while retaining beneficial mineral salts in applications such as the production of potable water5,6. Nanofiltration is also regarded as a loose-structure and low-pressure alternative to reverse osmosis for high-throughput and low-energy desalination applications26. We measured the separation performance of pristine GO and surface-charged GO membranes in the nanofiltration process (Fig. 3a, b). The pristine GO membranes allowed fast water transport but failed to sieve salts out of water, exhibiting a sharp permeation cut-off at hydrated radii of ~4.7 Å (Supplementary Fig. 15), which was determined by the intrinsic interlayer distance of GO laminates swollen in water16,21,27. In real water treatment applications, appropriate permeability (water permeance) as well as enhanced selectivity (salt rejection) are critically required for high efficiency of desalination processes26. Remarkable improvement in salt rejection was achieved by altering the surface charges of the GO membrane. The positively charged GO-PEI membrane exhibited MgCl2 rejection up to ~95%, which is 2.3 times higher than that of the optimized GO membrane (~42%). Similarly, the highest Na2SO4 rejection of ~86% for the pristine GO membrane could be further increased to ~96% in the negatively charged GO-PAA membrane. Note that a critical GO membrane thickness (~100 nm in our case) is needed to provide an ultra-smooth, defect-free platform for uniform deposition of the polyelectrolyte layer, so that well-distributed surface charges can be achieved to perform an effective electrostatic exclusion function for the membrane. Notably, the water permeance showed almost no decrease in these surface-charged GO membranes, indicating that the fast water permeation channels within the GO laminates are well preserved. By contrast, the use of conventional hybrid approaches, such as layer-by-layer assembly or mixed-matrix blending to incorporate polyelectrolytes into GO laminates, severely compromised these fast water transport channels, resulting in 4–5 times lower water permeance than the pristine GO membrane of similar thickness (Supplementary Figs 16–18). Membrane performance comparison.
a Water permeance and MgCl2 rejection of pristine graphene oxide (GO) and positively charged GO-PEI membranes as a function of membrane thickness (GO deposition amounts of 0.1, 0.2, 0.5, 0.8, and 1.0 mg with 0.1 wt% polyethylene imine (PEI) surface coating) under 2 bar filtration at a feed concentration of 50 ppm. b Water permeance and Na2SO4 rejection of pristine GO and negatively charged GO-PAA membranes as a function of membrane thickness (GO deposition amounts of 0.1, 0.2, 0.5, 0.8, and 1.0 mg with 0.1 wt% polyacrylic acid (PAA) surface coating) under 2 bar filtration at a feed concentration of 50 ppm. Dashed lines: pristine GO membranes; solid lines: surface-charged GO membranes. Yellow and green upward arrows indicate the remarkable improvements of surface-charged GO membranes in salt rejection. Error bars represent standard deviations for three measurements. c MgCl2 rejection with water permeance of positively charged GO membranes (GO-PDDA marked as orange hexagon, GO-PEI marked as orange up-triangle, GO-PAH marked as orange left-triangle, GO-PDDA in long-time measurement marked as orange spotted hexagon, TiO2-intercalated GO-PDDA marked as orange star). d Na2SO4 rejection with water permeance of negatively charged GO membranes (GO-SA marked as green hexagon, GO-PAA marked as green up-triangle, GO-PSS marked as green left-triangle, GO-PSS in long-time measurement marked as green spotted hexagon, TiO2-intercalated GO-PSS marked as green star) in this work, as well as comparison with two-dimensional-material membranes (marked as gray squares), thin film nanocomposite (TFN) and/or thin film composite (TFC) membranes (marked as gray circles), and commercial polymeric nanofiltration membranes (marked as gray regions, e.g., NF270, NF90, NF200 membranes from Dow; DK, DL series of membranes from GE; ESNA series of membranes from Hydranautics). For references, see Supplementary Tables 2 and 3 in detail. Based on the attractive salt exclusion capability derived from the surface charges of the GO membrane, we further improved the water permeance (from ~15 to ~56 L m−2 h−1 bar−1) without substantially reducing salt rejection by intercalating nanoparticles into the surface-charged GO laminate (Supplementary Figs 19 and 20)28,29. This promising result suggests that the surface charge control approach could also allow an independent optimization of the GO nanochannels to further boost water transport. In addition, the attachment of polyelectrolytes allows regulation of the surface hydrophilicity of the GO membrane to better tune the sorption behavior of water or other components in the feed (Supplementary Fig. 21), which can contribute to accelerated water permeation (Supplementary Fig. 22). Distinct from existing GO-based membranes with carefully tuned transport channels, which often suffer a trade-off between salt rejection and water permeance, our surface-charged GO membranes have achieved remarkable advances in salt rejection without compromising fast water permeance. The rationally designed surface-charged GO membranes exhibit MgCl2 rejection of 93.2% with water permeance of 51.2 L m−2 h−1 bar−1 and Na2SO4 rejection of 93.9% with water permeance of 56.8 L m−2 h−1 bar−1, which are far beyond the performance limits of GO membranes (Fig. 3c, d).
Such excellent performance is superior to that of most state-of-the-art nanofiltration membranes, including two-dimensional (2D)-material membranes (marked as squares), thin film nanocomposite and/or thin film composite membranes (marked as circles), and commercial polymeric nanofiltration membranes (marked as gray regions) (Supplementary Tables 2 and 3). Also, the surface-charged GO membranes with controllable ion transport properties clearly overcome the salt permeability and water/salt selectivity trade-off observed for many polymeric membranes (Supplementary Fig. 23)30. We also employed our surface-charged GO membranes under aggressive high-pressure and long-term operation conditions that reflect the practical stability and feasibility of these membranes. Promisingly, we observed that the separation performance of our surface-charged GO membranes remained almost stable at a high pressure of 6 bar and over a long period of 120 h (Supplementary Fig. 24). Moreover, we demonstrated the scalability of this facile surface charge controlling strategy by fabricating a 15 cm-diameter surface-charged GO membrane via the same approach, whose effective area (176.7 cm2) is 10–60 times larger than those of reported GO-based membranes (Supplementary Table 4). Four small pieces of membrane cut from different locations of the large membrane exhibited desirable salt retention capability with water permeance of ~10.5 L m−2 h−1 bar−1 and MgCl2 rejection of ~90% (Supplementary Fig. 25). In summary, our work demonstrates a methodology for manipulating surface charge to realize controllable ion transport through a GO membrane, in which desirable electrostatic interactions with charged ions were successfully created and finely tailored by the attachment of ionizable functional groups with various protonation/deprotonation abilities on the surface of pre-stacked GO laminates. The proposed ion transport mechanism clarified the controlling factors of the interaction energy barrier that arises between charged ions and the charged membrane surface. The surface polyelectrolyte layer with tunable charge properties offered desirable interactions with charged ions to control the ionic transport, while the underlying GO laminate with 2D graphene capillaries provided fast water transport nanochannels. By rational design of the membrane surface charge and the transport channels, the resulting surface-charged GO membranes exhibited outstanding rejection of salts and ultrahigh water permeance in a nanofiltration process, whose performance was far beyond the performance limit of state-of-the-art nanofiltration membranes. The approach of tuning surface charges to control ion transport, demonstrated here, establishes a platform that could be of interest in a variety of applications, such as water transport, studies of biomimetic ion channels and biosensors, ion batteries, and energy conversions. Membrane preparations. GO powder was prepared by the modified Hummers' method and then dissolved in deionized water followed by ultrasonication for 30 min to obtain a stable and homogeneous GO aqueous suspension. GO membranes were prepared by a pressure-assisted filtration–deposition method. A flat-sheet PAN ultrafiltration membrane with a molecular weight cut-off of 100,000 Da was used as the substrate. GO nanosheets in the aqueous suspension deposited uniformly on the substrate to form well-assembled GO laminates under a pressure of 2 bar using a self-designed filtration device.
The GO membranes with different thicknesses were obtained by depositing different amounts (0.1, 0.2, 0.5, 0.8 and 1.0 mg) of GO nanosheets. The surface-charged GO membranes were further prepared by a simple dip-coating method. PDDA, PEI, PAH, PSS, PAA, and SA were selected on the basis of their different charge intensities and water sorption abilities. The polyelectrolyte coating concentration can be easily tuned (0.05, 0.1, 0.2, 0.3 wt%). Specifically, 0.1 wt% polyelectrolyte aqueous solutions were poured into the device and kept still for 30 min before being poured out. The final membranes were rinsed with deionized water and dried at room temperature. The resultant membranes are referred to as GO-PDDA, GO-PEI, GO-PAH, GO-PSS, GO-PAA, and GO-SA membranes, respectively. Membrane characterizations. The membrane surface morphologies and thicknesses were imaged and measured by field-emission scanning electron microscopy (S4800, Hitachi, Japan) at a voltage of 5 kV and a current of 10 μA. The membrane surface phase and height profiles with roughness data were measured by atomic force microscopy (XE-100, Park Systems, Korea) over 5 × 5 µm2 areas in non-contact mode. The zeta potentials of polyelectrolyte aqueous solutions (0.1 mg mL−1, pH 7) were measured with a zeta potential analyzer (Zetasizer Nano ZS90, Malvern, UK). The surface charge properties of the membranes were analyzed with a SurPASS electrokinetic analyzer (Anton Paar GmbH, Austria) through streaming potential measurements. A 0.001 M KCl solution was used to measure the zeta potential of the membrane, initially at neutral pH. After that, the pH of the solution was gradually increased to pH 11 and then decreased to pH 2.6 by autotitration with 0.1 M NaOH and 0.1 M HCl solutions, respectively. Fourier transform IR (AVATAR-FT-IR-360, Thermo Nicolet, USA) spectra of the membranes in the range of 4000–750 cm−1 were collected to characterize the surface functional groups. XPS (Thermo ESCALAB 250, USA) was employed to determine the surface chemistry of the membranes. The crystal phases of the samples were examined by X-ray diffraction (model D8 Advance, Bruker) with Cu Kα radiation to further calculate the d-spacing of the membranes. A spectroscopic ellipsometer (Complete EASE, M-2000U, J. A. Woollam) with the wavelength ranging from 250 to 1000 nm at an incident angle of 70° was applied to measure the thickness of the polyelectrolyte layers coated on the surface of Si wafers. The B-spline model was used to fit the data. More than four spots on the surface of each Si wafer were measured and the average value was reported as the thickness. The quartz crystal microbalance technique (QCM200 Quartz Crystal Microbalance, Stanford Research Systems, Inc.) was used to evaluate the water sorption ability of the polyelectrolytes. The polyelectrolyte layer was coated onto a gold-coated quartz sensor, and air of controlled humidity was driven into the QCM chamber while dynamic weight data were recorded. The surface hydrophilicity of the membranes was evaluated by measuring the static water contact angle at room temperature using a contact angle measurement system (Drop Shape Analyzer-DSA100, Kruss, Germany). Salt concentrations in the feed and permeate solutions were determined from electrical conductivity measurements (FE38-Standard, METTLER TOLEDO, Switzerland).
Calculation of membrane surface charge density. The membrane charge density (σ, mC m−2) is calculated from the membrane zeta potential according to the Gouy–Chapman equation31 as follows: $$\sigma = -\epsilon\kappa\xi\,\frac{\sinh\left(\frac{F\xi}{2RT}\right)}{\frac{F\xi}{2RT}}$$ where \(\kappa^{-1} = \left(\frac{\epsilon RT}{2F^2C}\right)^{1/2}\) is the Debye length, ξ (mV) is the membrane zeta potential obtained through streaming potential measurements, R = 8.3145 J mol−1 K−1 is the gas constant, F = 96485 C mol−1 is the Faraday constant, T = 298 K is the absolute temperature, and \(\epsilon = 6.933 \times 10^{-10}\,\mathrm{F\,m^{-1}}\) is the permittivity. Membrane performance measurements. Membrane performance was tested in a nanofiltration process under 2 bar using a self-designed filtration device at room temperature. The effective area of the membranes was 17.35 cm2. The water permeance (J, L m−2 h−1 bar−1) was measured with deionized water, and the salt rejection (R, %) was determined using salts (i.e., MgCl2, Na2SO4, NaCl) in the form of 50 ppm aqueous solutions. Each data point was obtained with a fresh membrane sample, and at least three membranes were tested to validate reproducibility. The membranes were first conditioned under nanofiltration operation for 2 h before permeate samples were collected. During the filtration process, we recycled the permeate into the feed tank to maintain a stable salt concentration in the feed. Water permeance and salt rejection are calculated as follows: $$J = \frac{V}{A \times t \times P}$$ $$R = \left(1 - \frac{c_{p}}{c_{f}}\right) \times 100\%$$ where V is the volume of permeate collected (L), A is the membrane effective area (m2), t is the permeation time (h), P is the applied pressure (bar), and cp and cf are the concentrations of the permeate and feed solutions, respectively. The transport of water and salt through the membranes was described in terms of the solution–diffusion model23. The water flux, JW (g cm−2 s−1), can be expressed as follows: $$J_{W} = \frac{D_{W}}{L}\left(C_{W,F}^{m} - C_{W,P}^{m}\right) = \frac{D_{W} C_{W,F}^{m}}{L}\left(1 - \frac{C_{W,P}^{m}}{C_{W,F}^{m}}\right)$$ where DW (cm2 s−1) is the average water diffusion coefficient in the membrane; L (cm) is the thickness of the membrane; and \(C_{W,F}^{m}\) and \(C_{W,P}^{m}\) (g cm−3) are the water concentrations in the membrane on the feed and permeate sides, respectively. On account of the pressure and osmotic pressure differences between the feed and permeate sides of the membrane, Eq. (4) is often written as follows: $$J_{W} = \frac{D_{W} C_{W,F}^{m}}{L}\frac{\bar{V}}{RT}\left(\Delta P - \Delta\pi\right)$$ where \(\bar{V}\) (cm3 mol−1) is the partial molar volume of water, which is typically well approximated by the molar volume of pure water when the water uptake varies little with salt concentration over the salt concentration range of interest; R (83.1 cm3 bar mol−1 K−1) is the gas constant, T (K) is the absolute temperature; ΔP (bar) is the pressure difference across the membrane, and Δπ (bar) is the osmotic pressure difference across the membrane.
The water partition (or solubility) coefficient, KW, is defined as the ratio of the water concentration in the membrane to that in the contiguous solution: $$K_{W} = \frac{C_{W,F}^{m}}{C_{W,F}}$$ For relatively dilute solutions, CW,F is approximately equal to the density of pure water, ρW (g cm−3). Combining Eqs. (5) and (6) yields: $$J_{W} = \frac{P_{W}}{L}\frac{\rho_{W}\bar{V}}{RT}\left(\Delta P - \Delta\pi\right) = \frac{P_{W}}{L}\frac{M_{W}}{RT}\left(\Delta P - \Delta\pi\right) = A\left(\Delta P - \Delta\pi\right)$$ where A is the effective membrane permeance to water, and PW is the membrane permeability to water. As indicated in Eq. (7), A is related to the water permeability PW as follows: $$A = \frac{P_{W}}{L}\frac{M_{W}}{RT}$$ According to the solution–diffusion model, the salt flux through the membrane, JS (g cm−2 s−1), can be given as: $$J_{S} = \frac{P_{S}}{L}\left(C_{S,F} - C_{S,P}\right) = \frac{P_{S}}{L}\Delta C_{S} = B\,\Delta C_{S}$$ where PS (cm2 s−1) is the salt permeability; CS,F and CS,P (g cm−3) are the salt concentrations in the solution on the feed and permeate sides of the membrane, respectively; and ΔCS is the salt concentration difference. It is worth noticing that our focus is on salt and water transport properties through the membrane, so concentration polarization is not considered. In addition, B = PS/L is the salt permeance. ΔCS and Δπ are typically related as Δπ = ΔCSRT. The capability of the membrane to remove salt from a feed solution is often characterized in terms of the salt rejection, R (%), which can be presented as follows within the context of the solution–diffusion model: $$R = \frac{(P_{W}/P_{S})(\bar{V}/RT)(\Delta P - \Delta\pi)}{1 + (P_{W}/P_{S})(\bar{V}/RT)(\Delta P - \Delta\pi)} \times 100\% = \frac{(A/B)(\Delta P - \Delta\pi)}{1 + (A/B)(\Delta P - \Delta\pi)} \times 100\%$$ Consequently, water permeability and salt permeability can be expressed as follows: $$P_{W} = K_{W}D_{W}$$ $$P_{S} = K_{S}D_{S}$$ The ideal water/salt selectivity, αW/S, is defined as the ratio of water permeability to salt permeability: $$\alpha_{W/S} = \frac{P_{W}}{P_{S}} = \frac{K_{W}}{K_{S}} \times \frac{D_{W}}{D_{S}}$$ where KW/KS is the water/salt solubility selectivity, and DW/DS is the water/salt diffusivity selectivity.
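To make the working equations concrete, the Python sketch below evaluates Eqs. (2), (3), and (10). All numerical inputs are illustrative placeholders (the membrane area matches the 17.35 cm2 cell described above), not measured values, and the A/B ratio is simply assumed:

```python
def water_permeance(V_L, area_m2, time_h, pressure_bar):
    """Water permeance J (L m^-2 h^-1 bar^-1), Eq. (2)."""
    return V_L / (area_m2 * time_h * pressure_bar)

def salt_rejection(c_permeate, c_feed):
    """Observed salt rejection R (%), Eq. (3)."""
    return (1.0 - c_permeate / c_feed) * 100.0

def rejection_solution_diffusion(A_over_B, dP_bar, dPi_bar):
    """Solution-diffusion rejection, Eq. (10): R = x/(1+x), x = (A/B)(dP - dPi)."""
    x = A_over_B * (dP_bar - dPi_bar)
    return 100.0 * x / (1.0 + x)

# Illustrative inputs: 17.35 cm^2 cell, 2 bar applied pressure, 50 ppm feed
print(water_permeance(0.05, 17.35e-4, 1.0, 2.0))      # ~14.4 L m^-2 h^-1 bar^-1
print(salt_rejection(3.4, 50.0))                      # 93.2 % rejection
print(rejection_solution_diffusion(8.0, 2.0, 0.002))  # ~94 % (A/B in bar^-1)
```

Equation (10) makes explicit that rejection saturates toward 100% as (A/B)(ΔP − Δπ) grows, which is why a membrane can combine high water permeance with high rejection only if its salt permeance B is kept small.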
Calculation of DLVO interaction. The DLVO interactions between charged ions and surface-charged GO membranes are calculated using the SEI technique24. The SEI technique considers the total interaction energy between a hydrated ion and the planar membrane surface by integrating the interaction energy per unit area: $$U\left( D \right) = \iint E(h)\,dA$$ Here U is the interaction energy between the hydrated ion and the membrane surface, D is the closest distance between them, E is the interaction energy per unit area between the ion and the membrane surface separated by a distance h, and dA is the projected differential surface area of the ion. To provide a facile description of the mathematical formulation, the analysis presented here employs a cylindrical coordinate system, and the expression for the interaction energy becomes: $$U\left( D \right) = \int_0^{2\pi}\int_0^a E\left( h \right)\,r\,dr\,d\theta$$ $$h = D + a - \sqrt{a^2 - r^2}$$ where a is the radius of the hydrated ion and h is the vertical distance between a circular arc (differential surface area r dr dθ) of the hydrated ion and the point on the membrane surface directly below it. In this study, we use the DLVO interaction energy per unit area between the hydrated ion and the membrane surface obtained by adding the Hamaker expression for the van der Waals interaction and the constant-potential electrostatic double-layer interaction energy expression. The total DLVO interaction energy per unit area is thus given as: $$E_{DLVO}\left( h \right) = E_{VDW}\left( h \right) + E_{EDL}\left( h \right) = -\frac{A_{H}}{12\pi h^2} + \frac{\epsilon\epsilon_0\kappa}{2}\left[\left(\psi_{s}^2 + \psi_{m}^2\right)\left(1 - \coth \kappa h\right) + \frac{2\psi_{s}\psi_{m}}{\sinh \kappa h}\right]$$ Here AH is the Hamaker constant, \(\epsilon\) is the dielectric constant of the solvent, \(\epsilon_0\) is the dielectric permittivity of vacuum, ψs is the surface potential of the hydrated ion, ψm is the surface potential of the surface-charged GO membrane, and κ is the inverse Debye screening length. The surface potential of hydrated ions is calculated according to Coulomb's law as follows: $$\psi_{s} = \frac{q}{4\pi\epsilon r}$$ where ψs is the surface potential of the hydrated ion, q is the charge of the ion (which equals 1.602 × 10−19 C for monovalent ions such as Na+ and Cl−, and 3.204 × 10−19 C for divalent ions such as Mg2+ and SO42−), ϵ is the dielectric constant, and r is the hydration radius of the hydrated ion (0.358, 0.332, 0.428, and 0.379 nm for Na+, Cl−, Mg2+, and SO42−, respectively32).
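A numerical version of the SEI integration of Eqs. (15)–(18) is sketched below in Python. Only the ion charges and hydration radii quoted above are taken from the text; the Hamaker constant, membrane surface potential, and Debye parameter in the example call are illustrative placeholders, not the fitted values behind Fig. 2e–g:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F m^-1
EPS_R = 78.3       # relative dielectric constant of water (assumed)

def psi_ion(q_C, r_m):
    """Surface potential of a hydrated ion from Coulomb's law, Eq. (18)."""
    return q_C / (4.0 * math.pi * EPS_R * EPS0 * r_m)

def e_dlvo(h, A_H, psi_s, psi_m, kappa):
    """DLVO energy per unit area, Eq. (17): van der Waals + double layer."""
    e_vdw = -A_H / (12.0 * math.pi * h * h)
    pref = EPS_R * EPS0 * kappa / 2.0
    e_edl = pref * ((psi_s**2 + psi_m**2) * (1.0 - 1.0 / math.tanh(kappa * h))
                    + 2.0 * psi_s * psi_m / math.sinh(kappa * h))
    return e_vdw + e_edl

def u_sei(D, a, A_H, psi_s, psi_m, kappa, n=4000):
    """SEI integral, Eqs. (15)-(16): U(D) = 2*pi * int_0^a E(h(r)) r dr,
    evaluated with a midpoint rule over the ion's projected area."""
    dr, total = a / n, 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        h = D + a - math.sqrt(a * a - r * r)
        total += e_dlvo(h, A_H, psi_s, psi_m, kappa) * r * dr
    return 2.0 * math.pi * total

# Mg2+ (q = 3.204e-19 C, a = 0.428 nm) near a positively charged surface;
# A_H, psi_m and kappa are illustrative placeholders.
psi_mg = psi_ion(3.204e-19, 0.428e-9)
print(u_sei(D=1e-9, a=0.428e-9, A_H=1e-20, psi_s=psi_mg,
            psi_m=0.030, kappa=1.0e9))   # net ion-surface interaction energy (J)
```

Sweeping D over a range of separations with the paper's parameter values should reproduce the qualitative shape of the curves in Fig. 2e–g, with the ψsψm cross term making the barrier grow for high-valent co-ions and shrink for low-valent ones.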
The source data underlying Figs. 1d, e, 2a–g, and 3a–d are provided as a Source Data file. The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
Hille, B. Ion Channels of Excitable Membranes (Sinauer, Sunderland, MA, 2001). Hinds, B. J. et al. Aligned multiwalled carbon nanotube membranes. Science 303, 62–65 (2004). Fornasiero, F. et al. Ion exclusion by sub 2-nm carbon nanotube pores. Proc. Natl. Acad. Sci. USA 105, 17250–17255 (2008). Tunuguntla, R. H. et al. Enhanced water permeability and tunable ion selectivity in subnanometer carbon nanotube porins. Science 357, 792–796 (2017). Shannon, M. A. et al. Science and technology for water purification in the coming decades. Nature 452, 301–310 (2008). Elimelech, M. & Phillip, W. A. The future of seawater desalination: energy, technology, and the environment. Science 333, 712–717 (2011). Nair, R. R. et al. Unimpeded permeation of water through helium-leak-tight graphene-based membranes. Science 335, 442–444 (2012). Liu, G., Jin, W. & Xu, N. Graphene-based membranes. Chem. Soc. Rev. 44, 5016–5030 (2015). Saraswat, V. et al. Invariance of water permeance through size-differentiated graphene oxide laminates. ACS Nano 12, 7855–7865 (2018). Wei, N., Peng, X. & Xu, Z. Breakdown of fast water transport in graphene oxides. Phys. Rev. E 89, 012113 (2014). Devanathan, R. et al. Molecular dynamics simulations reveal that water diffusion between graphene oxide layers is slow. Sci. Rep. 6, 29484 (2016). Cohen-Tanugi, D. & Grossman, J. C. Water desalination across nanoporous graphene. Nano Lett. 12, 3602–3608 (2012). Joshi, R. K. et al. Precise and ultrafast molecular sieving through graphene oxide membranes. Science 343, 752–754 (2014). Sun, P. et al. Selective ion penetration of graphene oxide membranes. ACS Nano 7, 428–437 (2012). Hu, M. & Mi, B. Enabling graphene oxide nanosheets as water separation membranes. Environ. Sci. Technol. 47, 3715–3723 (2013). Zheng, S. et al. Swelling of graphene oxide membranes in aqueous solution: characterization of interlayer spacing and insight into water transport mechanisms. ACS Nano 11, 6440–6450 (2017). Mi, B. Graphene oxide membranes for ionic and molecular sieving. Science 343, 740–742 (2014). Hung, W. S. et al. Cross-linking with diamine monomers to prepare composite graphene oxide-framework membranes with varying d-spacing. Chem. Mater. 26, 2983–2990 (2014). Zhao, Y. et al. Formation of morphologically confined nanospaces via self-assembly of graphene and nanospheres for selective separation of lithium. J. Mater. Chem. A 6, 18859–18864 (2018). Zhao, Y. et al. Tunable nanoscale interlayer of graphene with symmetrical polyelectrolyte multilayer architecture for lithium extraction. Adv. Mater. Interfaces 5, 1701449 (2018). Chen, L. et al. Ion sieving in graphene oxide membranes via cationic control of interlayer spacing. Nature 550, 415–418 (2017). Zhang, M. et al. Effect of substrate on formation and nanofiltration performance of graphene oxide membranes. J. Membr. Sci. 574, 196–204 (2019). Geise, G. M. et al. Water permeability and water/salt selectivity tradeoff in polymers for desalination. J. Membr. Sci. 369, 130–138 (2011). Bhattacharjee, S. & Elimelech, M. Surface element integration: a novel technique for evaluation of DLVO interaction between a particle and a flat plate. J. Colloid Interface Sci. 193, 273–285 (1997). Hoek, E. M. V. et al. Effect of membrane surface roughness on colloid-membrane DLVO interactions. Langmuir 19, 4836–4847 (2003). Mohammad, A. W. et al. Nanofiltration membranes review: recent advances and future prospects. Desalination 356, 226–254 (2015). Yang, Q. et al. Ultrathin graphene-based membrane with precise molecular sieving and ultrafast solvent permeation. Nat. Mater. 16, 1198 (2017). Werber, J. R. et al. The critical need for increased selectivity, not increased water permeability, for desalination membranes. Environ. Sci. Technol. Lett. 3, 112–120 (2016). Zhang, M. et al. Nanoparticles@rGO membrane enabling highly enhanced water permeability and structural stability with preserved selectivity. AIChE J. 63, 5054–5063 (2017). Park, H. B. et al. Maximizing the right stuff: the trade-off between membrane permeability and selectivity. Science 356, eaab0530 (2017). Bowen, W. R. & Cao, X. W. Electrokinetic effects in membrane pores and the determination of zeta-potential. J. Membr. Sci. 140, 267–273 (1998). Tansel, B. et al. Significance of hydrated radius and hydration shells on ionic permeability during nanofiltration in dead end and cross flow modes. Sep. Purif. Technol. 51, 40–47 (2006).
Acknowledgements
This work was financially supported by the National Natural Science Foundation of China (grant nos. 21490585, 21776125, and 51861135203), the Innovative Research Team Program by the Ministry of Education of China (grant no.
IRT17R54), and the Topnotch Academic Programs Project of Jiangsu Higher Education Institutions (TAPP). We thank Shipeng Sun for offering commercial NF membrane samples and Andrew Livingston, Bill Koros and Ho Bum Park for helpful discussions. State Key Laboratory of Materials-Oriented Chemical Engineering, Jiangsu National Synergetic Innovation Center for Advanced Materials, College of Chemical Engineering, Nanjing Tech University, 5 Xinmofan Road, 210009, Nanjing, P.R. China: Mengchen Zhang, Kecheng Guan, Yufan Ji, Gongping Liu, Wanqin Jin & Nanping Xu. W.J. and G.L. conceived the idea. W.J., G.L. and M.Z. designed the experiments and wrote the manuscript. M.Z. performed the experiments. M.Z., K.G. and Y.J. prepared the data graphs. M.Z., K.G., Y.J., G.L., W.J. and N.X. discussed the results and commented on the manuscript. Correspondence to Gongping Liu or Wanqin Jin. Journal peer review information: Nature Communications thanks Shen Jiangnan, Anthony Straub, and the other anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Zhang, M., Guan, K., Ji, Y. et al. Controllable ion transport by surface-charged graphene oxide membrane. Nat Commun 10, 1253 (2019). https://doi.org/10.1038/s41467-019-09286-8
Construction of fatty acid derivatives from rubber seed oil as α-glucosidase inhibitors based on rubber seed oil

Jiahao Liu1, Renwei Zhang1, Kaili Nie1, Changsheng Liu1,2, Li Deng1 & Fang Wang1

Natural free fatty acids show inhibitory effects on α-glucosidase and can hence have potential applications in diabetes treatment. This study indicated that the inhibitory effect of fatty acids showed a significant negative correlation with affinity energy (− 0.87) and melting point (− 0.88). Guided by this relationship, two promotion strategies, hydration and esterification, were put forward to increase the inhibitory effect of fatty acids on α-glucosidase. Hydration imports an extra hydroxy group into the C=C bond of fatty acids, which enhances the interaction with α-glucosidase, while esterification lowers the melting point of fatty acids and thereby promotes the inhibitory effect. Hydroxy fatty acids and fatty acid isopropyl esters possessed higher inhibitory effects than the natural fatty acids. Rubber seed oil was then modified into novel fatty acid derivatives with a higher inhibitory effect on α-glucosidase. The inhibitory IC50 of the hydroxy products and the isopropyl esters was 0.42 ± 0.01 μM and 0.57 ± 0.01 μM, respectively. The result reveals a feasible route to construct fatty acid derivatives with α-glucosidase inhibitory effect from natural oil.

Diabetes has become a major challenge for the global health system and imposes high medication costs on patients (Chatterjee et al. 2017). It is estimated that by 2045 the population of diabetes patients will reach 693 million (Cho et al. 2018), and long-term diabetes leads to chronic damage and dysfunction of various tissues, especially the eyes, kidneys, heart, blood vessels and nerves (Deedwania 2018). α-Glucosidase inhibitors, which prevent postprandial hyperglycemia by delaying the digestion of carbohydrates in the gut, are usually employed for controlling postprandial blood glucose levels (Xu et al. 2019). Many non-sugar products from natural sources perform well in inhibiting the activity of α-glucosidase and have drawn tremendous attention for their abundant sources and good biocompatibility (Mata et al. 2013); examples include terpenes, alkaloids, quinones, flavonoids, phenols and phenylpropanoids, as well as organic acids and esters (Kumar et al. 2011; Yin et al. 2014). At the same time, many seed oils have been found to be effective α-glucosidase inhibitors (Teng and Chen 2016). Kutoh et al. reported that dietary omega-3 polyunsaturated fatty acids and their metabolic derivatives are potential drugs to prevent and treat diabetes (Kutoh et al. 2021). Chen and Kang reported that the hexane extract from oriental melon seeds was an effective inhibitor of α-glucosidase, with a 35.30% inhibition rate at 2 mg/mL (Chen and Kang 2013). Pi et al. found that methanol extracts of Arctium lappa L. showed good inhibition of α-glucosidase with an IC50 of 30 μmol/mL (Pi et al. 2010), and the inhibitory compounds were identified as methyl palmitate, methyl linoleate and methyl linolenate. They further revealed that fatty acids with double bonds had stronger inhibitory ability (Miyazawa et al. 2005). Among the common fatty acids, oleic acid, linoleic acid and linolenic acid possessed higher inhibitory effects towards α-glucosidase than others (Su et al. 2013). Free fatty acids (FFA) and their derivatives therefore appear to be attractive potential α-glucosidase inhibitors for treating diabetes.
Previous studies on the inhibition ability of various natural FFAs towards α-glucosidase showed that FFAs with longer carbon chains and more C=C bonds exhibited a stronger inhibitory effect (Teng and Chen 2016). However, the inhibitory effect of free fatty acids on α-glucosidase still needs to be improved to enhance their application potential in medicine. Hence, developing suitable FFA derivatives with good inhibitory effects is attractive. In this paper, the structure–activity relationship was investigated by analyzing the inhibitory performance of various FFAs against their properties (physical properties and computed molecular affinity energy). Two strategies, hydration and esterification, were then proposed to improve the inhibition of α-glucosidase activity by FFAs. These two promotion strategies were also applied to rubber seed oil, a natural oil, to construct fatty acid derivatives as improved α-glucosidase inhibitors, providing a feasible route to construct fatty acid derivatives with α-glucosidase inhibitory effect from natural oil.

α-Glucosidase from recombinant yeast (50 units/mg), p-nitrophenyl-α-d-glucopyranoside (α-PNPG, 99%), α-linolenic acid (C18:3, 98%), and isopropyl-β-d-thiogalactoside (IPTG) were purchased from Shanghai yuanye Bio-Technology Co. Ltd. Oleic acid (OA, C18:1, 85%) and linoleic acid (LA, C18:2, 85%) were purchased from TCI Co. Ltd. DHA (C22:6, 98%) was purchased from Sigma-Aldrich Co. Ltd. Caprylic acid (C8:0), capric acid (C10:0), lauric acid (C12:0), myristic acid (C14:0), palmitic acid (C16:0), stearic acid (SA, C18:0), methanol, ethanol, propanol, butanol, amyl alcohol, hexanol, octanol, isopropanol, toluene, and all other reagents were of analytical grade. Novozym 435 (Candida antarctica lipase B, CALB, 10,000 U/g) was purchased from Novozymes. Rubber seed oil was purchased from Xishuangbanna Huakun Biotechnology Co. Ltd.

IC50 of α-glucosidase inhibition

The measurement of α-glucosidase inhibition (IC50) followed a reported method (Chen and Kang 2013). Fatty acids and their derivatives were diluted in 0.1 M, pH = 6.9 phosphate buffer with 0.5% (v/v) DMSO as cosolvent to prepare sample solutions from 400 mM down to 0.25 mM. 50 μL of sample solution and 100 μL of 0.5 U α-glucosidase (prepared in the same phosphate buffer) were added into 1.5-mL plastic tubes and incubated at 37 °C for 10 min. Then, 50 μL of 5 mM α-PNPG (prepared in the same buffer) was added into each tube, followed by another 10 min of incubation at 37 °C. Finally, 1 M Na2CO3 was added to terminate the reaction. 200 μL of the solution was transferred to a 96-well microplate and the absorbance was measured at 405 nm:

$$\%\ \text{inhibition} = \left(1 - \frac{A_{\text{sample}}}{A_{\text{control}}}\right) \times 100,$$

where $A_{\text{sample}}$ and $A_{\text{control}}$ are the absorbances of the sample and control reactions, respectively. IC50 is the sample concentration that inhibits 50% of the α-glucosidase activity. The inhibitory score, defined as 1/IC50, measures the inhibitory intensity of each sample on α-glucosidase. Data were subjected to one-way ANOVA in SPSS 23.0 and are shown as mean ± SD (n = 3).

Molecular docking and structure–function analysis

Molecular docking

The structure of α-glucosidase was obtained by homology modeling based on 3aj7 (Thao et al. 2021). Each compound was docked into the active site of α-glucosidase (Glu273, Asp346) with AutoDock Vina (Trott and Olson 2009).
Conformations of each compound were generated to obtain the best affinity energy; the higher the absolute value of the affinity energy, the stronger the protein–ligand interaction.

Correlation analysis and regression model

The bivariate Pearson correlation analysis was performed in SPSS 23.0; all comparisons were based on analysis of variance (ANOVA). The linear regression model was fitted with the data analysis function of Excel.

Hydration of unsaturated fatty acids

Em-OAH hydratase can perform asymmetric addition of water to the C=C bond at the Δ9 position of oleic acid (C18:1, OA), linoleic acid (C18:2, LA) and linolenic acid (C18:3), yielding 10-hydroxystearic acid (10-HSA), 10-hydroxy-cis-12-octadecenoic acid (10-HOA) and 10-hydroxy-cis-12,15-octadecenoic acid, respectively (Zhang et al. 2020); the addition of water to each acid is illustrated in Additional file 1: Fig. S7. Em-OAH hydratase was expressed in E. coli BL21 and obtained by freeze-drying the cell pellet. The enzymatic synthesis of hydroxy fatty acids was performed in 200 mL of 50 mM citrate/phosphate buffer (pH 6.5) containing 2 g FFA, 2 g freeze-dried cells, and 100 μL Tween 80 at 35 °C in a 500-mL conical flask for 20 h (Nie et al. 2020). After the reaction, 1 mL of ethyl acetate was added to the reaction system to extract the hydrated products. After centrifugation at 5000 rpm, the ethyl acetate layer was collected and the samples were analyzed by GC.

Esterification of oleic acid with various alcohols

2 g (7.09 mmol) of oleic acid and an equimolar quantity of alcohol were added to a 50-mL conical flask. 1 g of Novozym 435 (CALB), 10 mL of toluene, and 0.25 g of 4A molecular sieve were added to the flask, which was incubated at 45 °C and 200 rpm for 24 h. CALB and the molecular sieve were removed by filtration, toluene was removed by rotary evaporation to obtain the products, and the samples were analyzed by GC.

The hydrolysis of rubber seed oil and hydration of hydrolyzed fatty acids

10 g of rubber seed oil, 2.3 g of KOH, 4.4 mL of deionized water and 26.4 mL of 95% ethanol were added into a three-neck flask. The reaction was performed at 60 °C for 1 h under nitrogen reflux. At the end of the reaction, 25 mL of distilled water was added, and 30 mL of n-hexane was used to extract the unsaponifiable matter. The aqueous layer was acidified to pH = 1 with 3 M HCl, and the free fatty acids were extracted with n-hexane. The extract was dried over anhydrous sodium sulfate and filtered, and the n-hexane was removed by rotary evaporation. After methyl esterification, the sample was analyzed by gas chromatography to determine the fatty acid composition of rubber seed oil. The measurement of the acid value of rubber seed oil followed a reported method (Khan et al. 2018). The hydration of the hydrolyzed fatty acids from rubber seed oil followed the same method as in 2.3.

Esterification of rubber seed oil with isopropyl alcohol

Response surface methodology, implemented in Design Expert V8.0.5, was applied to optimize the key factors of the esterification reaction between rubber seed oil and isopropyl alcohol, namely temperature (X1), enzyme quantity (X2) and substrate ratio (X3). The range and levels of the test variables are given in Table 1.

Table 1 Factors and levels of Box–Behnken experiments for optimization of esterification reaction conditions

The products were collected and analyzed by GC–MS.
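As a numerical companion to this design, the quadratic yield model reported in the Results below can be maximized directly. This is a minimal sketch using the fitted coefficients from this paper; SciPy and the coded-variable bounds of [− 1, 1] are assumptions of the illustration, not part of the authors' Design Expert workflow.

```python
import numpy as np
from scipy.optimize import minimize

# Quadratic Box-Behnken model from the Results section of this paper.
# X1 = temperature, X2 = enzyme quantity, X3 = substrate ratio,
# all expressed in coded units on [-1, 1].
def predicted_yield(x):
    x1, x2, x3 = x
    return (95.15 + 6.31 * x1 + 4.88 * x2 + 2.90 * x3
            - 1.21 * x1 * x2 - 1.23 * x1 * x3 + 2.27 * x2 * x3
            - 3.35 * x1 ** 2 - 3.61 * x2 ** 2 - 4.63 * x3 ** 2)

# Maximize the predicted yield by minimizing its negative inside the cube.
result = minimize(lambda x: -predicted_yield(x), x0=np.zeros(3),
                  bounds=[(-1.0, 1.0)] * 3)

print("optimal coded levels (X1, X2, X3):", np.round(result.x, 2))
print("predicted yield at optimum: %.2f%%" % predicted_yield(result.x))
```

Mapping the coded optimum back to physical settings (e.g., the 52 °C reported below) requires the factor ranges of Table 1, which are not reproduced here.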
Analytical procedure

GC analysis for hydrolysis products

The products of the hydrolysis reaction were analyzed by GC (Shimadzu Corporation, Japan) equipped with a DB-Wax column (30 m × 0.25 mm × 0.25 μm), using nitrogen as carrier gas. The injector temperature was set at 240 °C. The initial column temperature was 60 °C, maintained for 1 min; the column temperature was then increased to 200 °C at a rate of 25 °C/min and maintained for 1 min, and finally increased to 230 °C at a rate of 3 °C/min and maintained for 12 min. The contents of the products were calculated by area normalization.

GC analysis for hydration products

The products of the hydration reaction were analyzed by GC (Shimadzu Corporation, Japan) equipped with a DB-1ht column (30 m × 0.53 mm × 0.15 μm), using nitrogen as carrier gas. The injector temperature was set at 380 °C. The initial column temperature was 200 °C; the column temperature was then increased to 350 °C at a rate of 35 °C/min and maintained for 2 min, and finally increased to 380 °C at a rate of 20 °C/min and maintained for 13 min. The contents of the products were calculated by area normalization.

GC–MS analysis for products

The products of the esterification were analyzed by GC–MS (Agilent 7890A-5975C) equipped with an HP-5 ms column (30 m × 0.25 mm × 0.1 μm, J&W Scientific Columns, Agilent Technologies), using helium as carrier gas. The injector temperature was set at 290 °C. The initial column temperature was 50 °C, maintained for 1 min; the column temperature was then increased to 290 °C at a rate of 6 °C/min and maintained for 10 min. The MS was operated with an ion source at 280 °C and an ionization energy of 70 eV. Peaks were identified by a library search against NIST98. GC–MS results were quantified by peak area normalization, and all measurements were conducted in triplicate.

Structure–function analysis of FFA inhibition towards α-glucosidase

The inhibitory ability of various natural FFAs towards α-glucosidase has been examined (Additional file 1: Table S2). In order to obtain a quantitative relation between the dependent and independent variables, two discrete variables of the fatty acids (carbon chain length and number of C=C bonds) were developed into three continuous variables (affinity energy, melting point, and calculated log P (Tetko and Tanchuk 2002)), as shown in Table 2. At the same time, the inhibitory score (1/IC50) was defined as the dependent variable. Based on these continuous independent variables and one dependent variable, a quantitative analysis was conducted in SPSS. The SPSS correlation analysis showed that the affinity energy had a significant negative correlation with the inhibitory score, with a correlation coefficient of − 0.87 (p < 0.01). Fatty acids with a higher absolute value of affinity energy exhibited a stronger ligand–protein interaction with α-glucosidase, yielding a stronger inhibitory effect. Rahim et al. reported similar findings: thiazole derivatives with a higher absolute value of affinity energy performed better as α-glucosidase inhibitors (Rahim et al. 2015).
Besides, the melting point showed a significant negative correlation with the inhibitory score, with a correlation coefficient of − 0.88 (p < 0.01), revealing that lowering the melting point can enhance the inhibitory ability. However, the correlation coefficient of log P (0.35, p = 0.30) did not indicate a significant association. Based on the significant correlations among affinity energy, melting point and inhibitory score, a linear regression model was built (Additional file 1: Table S1):

Table 2 The IC50, inhibitory score, affinity energy and melting point of various FFA

$$Y_{\text{inhibitory score}} = -0.18\,x_{\text{affinity energy}} - 0.019\,x_{\text{melting point}}, \quad r^{2} = 0.76\ (**).$$

The coefficients of affinity energy and melting point are − 0.18 and − 0.019, respectively. The SPSS correlation analysis and linear regression model indicated that: (1) enhancing the absolute value of the affinity energy between an FFA and α-glucosidase might improve its inhibitory effect; (2) lowering the melting point of an FFA might strengthen its inhibitory effect; (3) the affinity energy, with a coefficient of − 0.18, tends to have a larger influence than the melting point, with a coefficient of − 0.019.

The improvement of inhibitory ability on α-glucosidase by modified fatty acids

Based on the linear correlation, increasing the absolute value of the affinity energy will increase the inhibitory effect with a coefficient of − 0.18. Affinity energy refers to the interaction between ligand and protein. Compared with common fatty acids, a hydroxy fatty acid contains a hydroxy group, which can form an extra hydrogen bond with the cavity pocket of α-glucosidase and improve the absolute value of the affinity energy (Fig. 1). Compared with SA, the extra hydroxy group of 10-HSA at C10 can form two hydrogen bonds with Tyr310 and Arg309, strengthening the ligand–protein interaction from − 24.7 kJ/mol for SA to − 29.0 kJ/mol for 10-HSA (Fig. 1a, b). The molecular docking results for OA and 10-HOA showed a similar pattern (Fig. 1c, d). The extra hydroxy group of 10-HOA at C10 can form two extra hydrogen bonds with Asp346 and Arg309, improving the affinity energy from − 26.40 kJ/mol for OA to − 27.20 kJ/mol for 10-HOA.

Fig. 1 FFA–α-glucosidase complex binding sites, prepared with LigPlot+. a SA, b HSA, c OA, d HOA

In the hydration reaction, to introduce a hydroxy group and strengthen the ligand–protein interaction, oleic acid and linoleic acid were hydrated by Em-OAH, yielding 95% 10-HSA (Additional file 1: Fig. S1) and 45% 10-HOA, respectively. 10-HOA was purified by acetone freeze crystallization and TLC column chromatography (methanol:dichloromethane = 1:40), giving 94% purity (Additional file 1: Fig. S1). A significant improvement in inhibitory ability was found for 10-HSA and 10-HOA (Table 3). The extra hydroxy group improved the inhibitory scores from 0.090 (SA) and 1.24 (OA) to 0.58 (10-HSA) and 2.58 (10-HOA), respectively. Paul et al. (2010) also reported that a hydroxy fatty acid, 10-hydroxy-8(E)-octadecenoic acid, was a useful inhibitor of α-glucosidase. Our result supports the view that introducing an extra hydroxy group into normal FFAs can improve their inhibitory performance.

Table 3 IC50, inhibitory score, affinity energy and melting point of hydroxy fatty acids

The improvement of inhibitory ability on α-glucosidase by esterification

At the same time, the linear regression indicated that lowering the melting point might bring a higher inhibitory effect, with a coefficient of − 0.019.
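The correlation-and-regression step above (performed by the authors in SPSS and Excel) can be reproduced in a few lines. In this sketch the numeric arrays are illustrative placeholders standing in for the per-fatty-acid values of Table 2, which are not reproduced here; the no-intercept regression mirrors the form of the fitted equation.

```python
import numpy as np
from scipy import stats

# Placeholder values standing in for Table 2 (one entry per fatty acid);
# the actual measured values are not reproduced in this sketch.
affinity = np.array([-20.1, -22.4, -24.7, -26.4, -27.9, -29.3])  # kJ/mol
melting = np.array([63.1, 54.4, 69.3, 13.0, -5.0, -11.0])        # deg C
score = np.array([0.05, 0.08, 0.09, 1.24, 1.60, 2.10])           # 1/IC50

# Bivariate Pearson correlations, as in the SPSS analysis.
for name, x in (("affinity energy", affinity), ("melting point", melting)):
    r, p = stats.pearsonr(x, score)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")

# Two-predictor linear regression without intercept, matching the form
# Y = a * affinity + b * melting of the model in the text.
X = np.column_stack([affinity, melting])
(a, b), *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"coefficients: affinity = {a:.3f}, melting point = {b:.3f}")
```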
FFAs with higher melting points might interact poorly with α-glucosidase, since they crystallize and precipitate at ambient temperature. Esterification adjusts the melting point of FFAs by affecting the intermolecular van der Waals forces (Yao et al. 2008), and different alcohol moieties lead to different melting points. Hence, a series of oleate esters was synthesized and their inhibitory ability was measured. After the esterification, methyl oleate, ethyl oleate, propyl oleate, butyl oleate, amyl oleate, hexyl oleate, octyl oleate, and isopropyl oleate with purities over 95% were obtained. With an increasing number of carbon atoms in the alcohol moiety, the melting point of the oleate esters first decreases sharply from 13.00 °C for OA to − 33.40 °C for isopropyl oleate and then increases to − 2.90 °C for octyl oleate, while the absolute value of the affinity energy of the oleate esters decreases only slightly (Table 4). Among all the esters, isopropyl oleate possessed the lowest melting point, − 33.40 °C, and a relatively good affinity energy of − 25.50 kJ/mol. Isopropyl oleate also showed the best inhibitory score of 1.67, with an IC50 of 0.60 ± 0.01 μM. This improvement fits the earlier finding that a lower melting point contributes to a better inhibitory ability.

Table 4 IC50, inhibitory score, affinity energy and melting point of oleate esters

The improvement of inhibitory ability on α-glucosidase by modified rubber seed oil

The composition of fatty acids of rubber seed oil

Firstly, we illustrated the structure–function relationship of fatty acids for the inhibitory effect towards α-glucosidase. Based on the SPSS correlation analysis, which showed that the affinity energy and the melting point had significant negative correlations with the inhibitory score, with correlation coefficients of − 0.87 (p < 0.01) and − 0.88 (p < 0.01) respectively, we designed and synthesized two types of fatty acid derivatives (hydroxy fatty acids and fatty acid isopropyl esters) with better inhibitory ability. Compared with the natural FFAs, the IC50 values of the hydrated products 10-HSA and 10-HOA were improved. Secondly, the esterified modification can lower the melting point of FFAs and increase the inhibitory performance; among the oleate esters, isopropyl oleate possessed the best inhibitory effect, with an IC50 of 0.60 ± 0.01 μM. Rubber seed oil is a natural oil rich in unsaturated fatty acids, and its fatty acid composition is identified in Table 5.

Table 5 Composition of hydrolyzed fatty acids from rubber seed oil

The hydration of hydrolyzed fatty acids from rubber seed oil

Natural rubber seed oil does not have any inhibitory effect on α-glucosidase. Rubber seed oil was hydrolyzed to obtain free fatty acids, which were then hydrated by the Em-OAH method, and the unreacted fatty acids were removed by freeze crystallization. The hydroxy product contained a 90% hydroxy fatty acid mixture, including 10-HSA, 10-HOA and 10-HLA (Additional file 1: Fig. S3). Compared with the natural rubber seed oil, the inhibitory ability of the hydroxy fatty acids on α-glucosidase was significantly improved, with an IC50 of 0.42 ± 0.01 μM. The results show that rubber seed oil, which has no inhibitory effect on α-glucosidase, can be modified by hydrolysis and hydration to gain the ability to inhibit α-glucosidase.
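IC50 values such as the 0.42 μM reported here are obtained by fitting the dose–response data produced by the assay in the Methods. Below is a minimal sketch of one common way to do this, a four-parameter logistic fit with SciPy; the concentration and inhibition values are illustrative placeholders, not the authors' data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** hill)

# Illustrative placeholder data: concentration (uM) vs. % inhibition.
conc = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0])
inhibition = np.array([8.0, 15.0, 33.0, 52.0, 68.0, 84.0, 92.0])

popt, _ = curve_fit(four_pl, conc, inhibition,
                    p0=[0.0, 100.0, 0.5, 1.0], maxfev=10000)
print("fitted IC50 = %.2f uM" % popt[2])
```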
Esterification of rubber seed oil with isopropanol

Isopropyl esters were obtained by one-step enzymatic transesterification between rubber seed oil and isopropanol. The response surface method was used to optimize the key factors of the transesterification (Additional file 1: Table S3), and the results were analyzed by statistical analysis of variance (ANOVA) and reported in Additional file 1: Table S4. The F-value of 109.46 implies a significant model, and R2 = 0.9984 indicates a well-fitted model. X1, X2, X3, X1², X2² and X3² are significant model terms (p < 0.05). Judging by the sums of squares, the substrate ratio had the least influence on the process, and the reaction temperature was the most influential parameter. Three-dimensional response surface curves are given in Fig. 2; they are drawn for the yield of fatty acid isopropyl esters obtained in the response surface design, in order to study the interactions between the three selected variables and determine the optimal level of each. The quadratic effects of temperature, enzyme quantity and substrate ratio significantly affected the fatty acid isopropyl ester production (p < 0.0001). Based on the response surface, a quadratic polynomial equation was developed:

Fig. 2 Three-dimensional response surfaces. a Effect of substrate ratio and temperature on yield; b effect of enzyme amount and temperature on yield; c effect of enzyme amount and substrate ratio on yield

$$\text{Yield} = 95.15 + 6.31X_{1} + 4.88X_{2} + 2.90X_{3} - 1.21X_{1}X_{2} - 1.23X_{1}X_{3} + 2.27X_{2}X_{3} - 3.35X_{1}^{2} - 3.61X_{2}^{2} - 4.63X_{3}^{2}, \quad R^{2} = 0.99.$$

The optimum technological parameters were determined as a temperature of 52 °C, a substrate ratio of 5.84 and an enzyme quantity of 7.00% based on the Design Expert analysis; the predicted and actual yields were 98.88% and 98.00%, respectively (Additional file 1: Fig. S4). The products were collected and analyzed by GC–MS, and the composition of the products is shown in Table 6. Compared with the natural oil, the inhibitory effect of the isopropyl esters was improved significantly, with an IC50 of 0.57 ± 0.01 μM.

Table 6 Composition ratio of esterification reaction products

The results reveal that hydration and isopropyl esterification are useful strategies to improve the inhibitory effect of FFAs towards α-glucosidase. The hydration and esterification strategies were then applied to rubber seed oil to construct fatty acid derivatives with better α-glucosidase inhibition. Both strategies can convert rubber seed oil, which has almost zero α-glucosidase inhibition, into effective fatty acid derivatives, with IC50 values of 0.42 ± 0.01 μM for the hydroxy products and 0.57 ± 0.01 μM for the isopropyl esters. This paper reveals a feasible route to construct fatty acid derivatives with α-glucosidase inhibitory effect from natural oil.

All data generated or analyzed during this study are included in this published article [and its Additional file].

Chatterjee S, Khunti K, Davies MJ (2017) Type 2 diabetes. Lancet 389(10085):2239–2251

Chen L, Kang YH (2013) In vitro inhibitory effect of oriental melon (Cucumis melo L. var. makuwa Makino) seed on key enzyme linked to type 2 diabetes.
J Funct Foods 5:981–986

Cho NH, Shaw JE, Karuranga S, Huang Y, Fernandes JDD, Ohlrogge AW, Malanda B (2018) IDF Diabetes Atlas: global estimates of diabetes prevalence for 2017 and projections for 2045. Diabetes Res Clin Pract 138:271–281

Deedwania P (2018) Dangers of hypoglycemia in cardiac patients with diabetes: time to switch to safer, newer drugs. 24(12):1063–1072

Khan MAR, Ara MH, Mamun SM (2018) Fatty acid composition and chemical parameters of Liza parsia. J Appl Chem 8(3):1–8

Kumar V, Prakash O, Kumar S, Narwal S (2011) α-Glucosidase inhibitors from plants: a natural approach to treat diabetes. Pharmacogn Rev 5:19–29

Kutoh E, Kuto AN, Wada A, Kurihara R, Kojima R (2021) Regulations of free fatty acids and diabetic parameters in drug naïve subjects with type 2 diabetes treated with canagliflozin monotherapy. Drug Research 76(3):468–483

Mata R, Cristians S, Escandón-Rivera S, Juárez-Reyes K, Rivero-Cruz I (2013) Mexican antidiabetic herbs: valuable sources of inhibitors of α-glucosidases. J Nat Prod 76(3):468–483

Miyazawa M, Yagi N, Taguchi K (2005) Inhibitory compounds of α-glucosidase activity from Arctium lappa L. J Oleo Sci 54:589–594

Nie K, Lu D, Sun B, Fang Y, Ning Z, Wang M (2020) Enzymatic hydration of linoleic acid followed with selective chain cleavage for biofuels and biomaterials production. J Biobased Mater Bio 14:723–731

Paul S, Hou CT, Sun CK (2010) α-Glucosidase inhibitory activities of 10-hydroxy-8(E)-octadecenoic acid: an intermediate of bioconversion of oleic acid to 7,10-dihydroxy-8(E)-octadecenoic acid. New Biotechnol 27:419–423

Pi F, Shinzawa H, Czarnecki MA, Iwahashi M, Suzuki M, Ozaki Y (2010) Self-assembling of oleic acid (cis-9-octadecenoic acid) and linoleic acid (cis-9,cis-12-octadecadienoic acid) in ethanol studied by time-dependent attenuated total reflectance (ATR) infrared (IR) and two-dimensional (2D) correlation spectroscopy. J Mol Struct 974:40–45

Rahim F, Ullah H, Javid MT, Wadood A, Taha M, Ashraf M, Shaukat A, Junaid M, Hussain S, Rehman W (2015) Synthesis, in vitro evaluation and molecular docking studies of thiazole derivatives as new inhibitors of α-glucosidase. Bioorg Chem 62:15–21

Su CH, Hsu CH, Ng LT (2013) Inhibitory potential of fatty acids on key enzymes related to type 2 diabetes. BioFactors 39:415–421

Teng H, Chen L (2016) α-Glucosidase and α-amylase inhibitors from seed oil: a review of liposoluble substances to treat diabetes. Crit Rev Food Sci 57(16):3438–3448

Tetko IV, Tanchuk VY (2002) Application of associative neural networks for prediction of lipophilicity in ALOGPS 2.1 program. J Chem Inf Comput Sci 42:1136

Thao TTP, Bui TQ, Quy PT, Bao NC, Van Loc T, Van Chien T, Chi NL, Van Tuan N, Van Sung T, Nhung NTA (2021) Isolation, semi-synthesis, docking-based prediction, and bioassay-based activity of Dolichandrone spathacea iridoids: new catalpol derivatives as glucosidase inhibitors. RSC Adv 11:11959–11975

Trott O, Olson AJ (2009) AutoDock Vina: improving the speed and accuracy of docking with a new scoring function. J Comput Chem 31(2):455–461

Xu Y, Xie L, Xie J, Liu Y, Chen W (2019) Pelargonidin-3-O-rutinoside as a novel α-glucosidase inhibitor for improving postprandial hyperglycemia. Chem Commun 55:39–42

Zhang Y, Eser BE, Kristensen P, Guo Z (2020) Fatty acid hydratase for value-added biotransformation. Chin J Chem Eng 28:64–76

Yao L, Hammond E, Tong W (2008) Melting points and viscosities of fatty acid esters that are potential targets for engineered oilseed.
J Am Oil Chem Soc 85:77–82

Yin Z, Zhang W, Feng F, Zhang Y, Kang W (2014) α-Glucosidase inhibitors isolated from medicinal plants. Food Sci Hum Wellness 3:136–174

Funding for this study was provided by the National Natural Science Foundation of China (Grant Nos. 21978019, 21978020, 22078013). Thanks to everyone who contributed to this article.

Beijing Bioprocess Key Laboratory and State Key Laboratory of Chemical Resource Engineering, College of Life Science and Technology, Beijing University of Chemical Technology (BUCT), Beijing, 100029, People's Republic of China: Jiahao Liu, Renwei Zhang, Kaili Nie, Changsheng Liu, Li Deng & Fang Wang. Sinovac Biotech Ltd, Beijing, China: Changsheng Liu.

All authors provided substantial contributions to the conception or design of the work, or the acquisition, analysis, or interpretation of data for the work. All authors drafted the work, revised it critically for important intellectual content, and provided final approval of the version to be published. All authors had access to the full data and analyses presented in this manuscript. All authors read and approved the final manuscript.

Correspondence to Changsheng Liu or Li Deng.

All authors have seen and approved the final version of the manuscript being submitted. The article is the authors' original work, has not received prior publication and is not under consideration for publication elsewhere.

Supporting information: Additional file 1.

Liu, J., Zhang, R., Nie, K. et al. Construction of fatty acid derivatives from rubber seed oil as α-glucosidase inhibitors based on rubber seed oil. Bioresour. Bioprocess. 9, 23 (2022). https://doi.org/10.1186/s40643-022-00492-9

Keywords: α-Glucosidase; Rubber seed oil
Results for 'R. D. Burbank'

Le Monde de Mr Descartes, Ou le Traité de la Lumiere, Et des Autres Principaux Objets des Sens [Signed D.R.]. Avec Un Discours du Mouvement Local, & Un Autre des Fiévres, Composez Selon les Principes du Méme Auteur. [REVIEW] R. D. - 1664.

Microtwinning in Epitaxial Nickel-Iron Films. R. D. Burbank & R. D. Heidenreich - 1960 - Philosophical Magazine 5 (52):373-382.

Women Families and the Future. Sexual Relationships and Marriage Worldwide. [Fact Sheet]. V. K. Burbank, C. Williamson, S. Engelbrecht, M. Lambrick, E. J. van Rensburg, R. Wood, W. Bredell, A. L. Williamson & P. F. Horan - 1995 - Ethos: Journal of the Society for Psychological Anthropology 23 (1):33-46.

Caravan Cities. By M. Rostovtzeff. Translated by D. and T. Talbot Rice. Pp. xiv + 232; 25 plates, 5 maps and plans. Oxford: Clarendon Press, and London: H. Milford, 1932. 15s. [REVIEW] D. B. R. - 1933 - Journal of Hellenic Studies 53 (1):125-126.

The Long-Term Sustenance of Sustainability Practices in MNCs: A Dynamic Capabilities Perspective of the Role of R&D and Internationalization. [REVIEW] Subrata Chakrabarty & Liang Wang - 2012 - Journal of Business Ethics 110 (2):205-217.
What allows MNCs to maintain their sustainability practices over the long-term? This is an important but under-examined question. To address this question, we investigate both the development and sustenance of sustainability practices. We use the dynamic capabilities perspective, rooted in resource-based view literature, as the theoretical basis. We argue that MNCs that simultaneously pursue both higher R&D intensity and higher internationalization are more capable of developing and maintaining sustainability practices. We test our hypotheses using longitudinal panel data from 1989 to 2009. Results suggest that MNCs that have a combination of both high R&D intensity and high internationalization are (i) likely to develop more sustainability practices and (ii) are likely to maintain more of those practices over a long-term. As a corollary, MNCs that have a combination of both low R&D and low internationalization usually (i) end up developing little or no sustainability practices and (ii) find it difficult to sustain whatever little sustainability practices they might have developed.

Does Trust Matter for R&D Cooperation? A Game Theoretic Examination. Marie-Laure Cabon-Dhersin & Shyama V. Ramani - 2004 - Theory and Decision 57 (2):143-180.
The game theoretical approach to R&D cooperation does not investigate the role of trust in the initiation and success of R&D cooperation: it either assumes that firms are non-opportunists or that the R&D cooperation is supported by an incentive mechanism that eliminates opportunism. In contrast, the present paper focuses on these issues by introducing incomplete information and two types of firms: opportunist and non-opportunist. Defining trust as the belief of each firm that its potential collaborator will respect the contract, it identifies the trust conditions under which firms initiate R&D alliances and contribute to their success. The higher the spillovers, the higher the level of trust required to initiate R&D cooperation for non-opportunists, while the inverse holds for opportunists.
A Critique of R.D. Alexander's Views on Group Selection. David Sloan Wilson - 1999 - Biology and Philosophy 14 (3):431-449.
Group selection is increasingly being viewed as an important force in human evolution. This paper examines the views of R.D. Alexander, one of the most influential thinkers about human behavior from an evolutionary perspective, on the subject of group selection. Alexander's general conception of evolution is based on the gene-centered approach of G.C. Williams, but he has also emphasized a potential role for group selection in the evolution of individual genomes and in human evolution. Alexander's views are internally inconsistent and underestimate the importance of group selection. Specific themes that Alexander has developed in his account of human evolution are important but are best understood within the framework of multilevel selection theory. From this perspective, Alexander's views on moral systems are not the radical departure from conventional views that he claims, but remain radical in another way more compatible with conventional views.

R&D Cooperation in Emerging Industries, Asymmetric Innovative Capabilities and Rationale for Technology Parks. Vivekananda Mukherjee & Shyama V. Ramani - 2011 - Theory and Decision 71 (3):373-394.
Starting from the premise that firms are distinct in terms of their capacity to create innovations, this article explores the rationale for R&D cooperation and the choice between alliances that involve information sharing, cost sharing or both. Defining innovative capability as the probability of creating an innovation, it examines firm strategy in a duopoly market, where firms have to decide whether or not to cooperate to acquire a fixed cost R&D infrastructure that would endow each firm with a firm-specific innovative capability. Furthermore, since emerging industries are often characterized by high technological uncertainty and diverse firm focus that makes the exploitation of spillovers difficult, this article focuses on a zero spillover context. It demonstrates that asymmetry has an impact on alliance choice and social welfare, as a function of ex-post market competition and fixed costs of R&D. With significant asymmetry no alliance may be formed, while with similar firms the cost sharing alliance is dominant. Finally, it ascertains the settings under which the equilibrium outcome is distinct from that maximizing social welfare, thereby highlighting some conditions under which public investment in a technology park can be justified.

Complex Joint R&D Projects: From Empirical Evidence to Managerial Implications. N. Arranz & J. C. Fdez de Arroyabe - 2009 - Complexity 15 (1):61-70.

R. D. Laing the Philosophy and Politics of Psychotherapy. Andrew Collier - 1977.

Innovation Systems in Malaysia: A Perspective of University–Industry R&D Collaboration. [REVIEW] V. G. R. Chandran, Veera Pandiyan Kaliani Sundram & Sinnappan Santhidran - 2014 - AI and Society 29 (3):435-444.
Collaborative research and development (R&D) activities between public universities and industry are of importance for the sustainable development of the innovation ecosystem. However, policymakers especially in developing countries show little knowledge on the issues.
In this paper, we analyse the level of university–industry collaboration in Malaysia. We further examine the fundamental conditions that hinder university–industry collaboration despite the government's initiatives to improve such linkages. We show that the low collaboration is a result of an R&D gap between the entities. While the universities engage in basic and fundamental R&D, the private sectors involved in incremental innovation that requires less R&D investments. The different nature of the industries' R&D requires closer cooperation between firms namely buyers, suppliers and technical service providers and not the universities. Among others, the lack of an intermediary role, absorptive capacity and collaborative initiative by the industry also contribute to the problem. The study suggests that the collaborative activities can benefit both if deliberate and effective efforts on reducing the R&D mismatch are made between the universities and industry. Likewise, proper institutional arrangements in coordinating these activities are required. This result seems to reflect the nature of many developing countries' national innovation systems, and therefore, lessons from Malaysia may serve as a good case study.

Doing Second-Order R&D. R. Ison - 2014 - Constructivist Foundations 10 (1):130-131.
Open peer commentary on the article "On Climate Change Research, the Crisis of Science and Second-order Science" by Philipp Aufenvenne, Heike Egner & Kirsten von Elverfeldt. Upshot: Bringing second-order understandings to the doing of climate science is to be welcomed. In taking a second-order turn, it is imperative to reflect on reflection, or report authentically our doings and thus move beyond sterile debates about what ought to be or what second-order doings are or are not. The field of doing second-order R&D is not a terra nullius, so exploring the full range and domains of praxis is warranted.

The Effect of R&D Intensity on Corporate Social Responsibility. Robert C. Padgett & Jose I. Galan - 2010 - Journal of Business Ethics 93 (3):407-418.
This study examines the impact that research and development (R&D) intensity has on corporate social responsibility (CSR). We base our research on the resource-based view (RBV) theory, which contributes to our analysis of R&D intensity and CSR because this perspective explicitly recognizes the importance of intangible resources. Both R&D and CSR activities can create assets that provide firms with competitive advantage. Furthermore, the employment of such activities can improve the welfare of the community and satisfy stakeholder expectations, which might vary according to their prevailing environment. As expressions of CSR and R&D vary throughout industries, we extend our research by analysing the impact that R&D intensity has on CSR across both manufacturing and non-manufacturing industries. Our results show that R&D intensity positively affects CSR and that this relationship is significant in manufacturing industries, while a non-significant result was obtained in non-manufacturing industries.

Bridging the Gap Between Innovation and ELSA: The TA Program in the Dutch Nano-R&D Program NanoNed.
[REVIEW] Arie Rip & Harro van Lente - 2013 - NanoEthics 7 (1):7-16.
The Technology Assessment (TA) Program established in 2003 as part of the Dutch R&D consortium NanoNed is interesting for what it did, but also as an indication that there are changes in how new science and technology are pursued: the nanotechnologists felt it necessary to spend part of their funding on social aspects of nanotechnology. We retrace the history of the TA program, and present the innovative work that was done on Constructive TA of emerging nanotechnology developments and on aspects of embedding of nanotechnology in society. One achievement is the provision of tools and approaches to help make the co-evolution of technology and society more reflexive. We briefly look forward by outlining its successor program, TA NanoNextNL, in place since 2011.

The Primacy of Experience in R.D. Laing's Approach to Psychoanalysis. M. Guy Thompson - 2003 - In Roger Frie (ed.), Understanding Experience: Psychotherapy and Postmodernism. Routledge.
This paper explores R. D. Laing's application of existential and phenomenological traditions, specifically Hegel and Heidegger, to his groundbreaking work with psychotic process as well as psychotherapeutic practice more generally.

Gender Diversity in the Boardroom and Risk Management: A Case of R&D Investment. Shimin Chen, Xu Ni & Jamie Y. Tong - 2016 - Journal of Business Ethics 136 (3):599-621.
Increasing gender diversity in the boardroom has been promoted as a way to enhance corporate governance and risk management. This study empirically examines whether boards with more female directors play a role in reducing R&D risk. We first show that female directors help to reduce the positive relationship between R&D investment and future performance volatility. We then report that firms with more gender-diverse boards exhibit a lower adverse effect of R&D on the cost of debt. These results are robust to endogeneity analysis, alternative measures of gender diversity and risky investment, and other sensitivity tests. Overall, our results suggest that female directors improve board effectiveness in risk management with respect to R&D investment.

R. D. Laing and Theology: The Influence of Christian Existentialism on The Divided Self. Gavin Miller - 2009 - History of the Human Sciences 22 (1):1-21.
The radical psychiatrist R. D. Laing's first book, The Divided Self (1960), is informed by the work of Christian thinkers on scriptural interpretation: an intellectual genealogy apparent in Laing's comparison of Karl Jaspers's symptomatology with the theological tradition of 'form criticism'. Rudolf Bultmann's theology, which was being enthusiastically promoted in 1950s Scotland, is particularly influential upon Laing. It furnishes him with the notion that schizophrenic speech expresses existential truths as if they were statements about the physical and organic world. It also provides him with a model of the schizoid position as a form of modern-day Stoicism. Such theological recontextualization of The Divided Self illuminates continuities in Laing's own work, and also indicates his relationship to a wider British context, such as the work of the 'clinical theologian' Frank Lake.

The Sociological Imagination of R. D.
Laing. Susie Scott & Charles Thorpe - 2006 - Sociological Theory 24 (4):331-352.
The work of psychiatrist R. D. Laing deserves recognition as a key contribution to sociological theory, in dialogue with the interactionist and interpretivist sociological traditions. Laing encourages us to identify meaningful social action in what would otherwise appear to be nonsocial phenomena. His interpretation of schizophrenia as a rational strategy of withdrawal reminds us of the threat that others can pose to the self and how social relations are implicated in even the most "private" and "internal" of experiences. He developed a far-reaching critical theory of the self in modern society, which challenges the medicalization and biochemical reduction of human problems. Using the case of shyness as an example, the article seeks to demonstrate the importance of Laing's theories for examining the fragility of the self in relation to contemporary social order.

Protogeometric Pottery. By V. R. d'A. Desborough. Pp. xvi + 330, with 38 plates and 1 map. Oxford: Clarendon Press, 1952. 105s. [REVIEW] Sylvia Benton & V. R. D'A. Desborough - 1956 - Journal of Hellenic Studies 76:124-124.

Social Research in International Agricultural R&D: Lessons From the Small Ruminant CRSP. [REVIEW] Constance M. McCorkle, Michael F. Nolan, Keith Jamtgaard & Jere L. Gilles - 1989 - Agriculture and Human Values 6 (3):42-51.
The uses of the most "social" of the social sciences (sociology and anthropology) in international agricultural research and development (R&D) have often been poorly understood. Drawing upon a decade of work by the Sociology Project of the Small Ruminant Collaborative Research Support Program, this article exemplifies how and where social scientists can and have contributed to major development initiatives, and it illustrates some of the larger lessons to be learned for human values concerns in international agriculture.

Form in Aristotle: Universal or Particular?: R. D. Sykes. R. D. Sykes - 1975 - Philosophy 50 (193):311-331.
In this paper I ask whether in Aristotle's metaphysical system the form of a non-living sensible substance, such as the form of this house, is or is not universal. I argue that his position as it stands is self-contradictory, and then try to give some account of the pressures that led to this central contradiction in Aristotle's metaphysical thought.

Travailler en projets dans la R & D. Contraintes temporelles et transformations du travail de recherche. Lucie Goussard & Tiffon - 2013 - Temporalités 18.
This article deals with the introduction of project-based organization in an R&D unit of the energy sector. It shows that one of the main effects of this "modernization" policy has been to transform the temporal constraints bearing on researchers' work, on at least three levels. First, the administrative component of research work has grown at the expense of the time devoted to scientific production. Second, the intensification of
work entailed by the introduction of multi-project working generates "coordination costs" and a phenomenon of dispersion at work, hardly conducive to the intellectual concentration that research activities require, to the point that some tasks, such as reading or writing, are done out of step, in the out-of-work sphere and on recovery time. As for the adjustment of activities to client demands, it gives rise to a kind of engineering-style reshaping of research. While the projects furthest removed from the concerns of the operational divisions suffer from a lack of credit, in both senses of the term, those commissioned by the divisions closest to the market and the final consumer increasingly resemble short-termist and directly operational studies.

Regional Specialisation for Technological Innovation in R&D Laboratories: A Strategic Perspective. [REVIEW] Santanu Roy & Pratap K. J. Mohapatra - 2002 - AI and Society 16 (1-2):100-111.
The present paper attempts to highlight the strategy of regional specialisation for technological innovation in R&D laboratories. The paper makes a proposition that regional specialisation should be recognised as a strategic initiative for technology development in R&D laboratories. The rationale for this strategic initiative has been substantiated with the help of illustrations from the cases of technology development efforts taken up in different laboratories in the country under the Council of Scientific and Industrial Research (CSIR), India. In this direction, CSIR and other centres of excellence have played a pioneering role in the development of various industrial clusters and artisan concentrations in different parts of the country. The implications of adoption or otherwise of this strategy initiative for technological innovation in R&D laboratories have been discussed.

Review of The Crucible of Experience: R. D. Laing and the Crisis of Psychotherapy. [REVIEW] No Authorship Indicated - 2001 - Journal of Theoretical and Philosophical Psychology 21 (1):94-95.
Reviews the book, The crucible of experience: R. D. Laing and the crisis of psychotherapy by Daniel Burston. Unlike his earlier book, which was more biographical and focused on R. D. Laing's personal experiences, this book is devoted to examining the man's contributions to contemporary psychotherapeutic theory and practice. This, of course, is no easy task as Laing is a notoriously unsystematic thinker, whose work often violated entrenched disciplinary expectations and challenged conventional sensibilities and assumptions. Despite such obvious obstacles, however, Burston does an excellent job laying out Laing's intellectual indebtedness to existentialism and phenomenology, as well as his lasting contributions to existential psychiatry.

The Wing of Madness: The Life and Work of R.D. Laing. [REVIEW] Duff Waring - 1997 - Journal of Mind and Behavior 18 (4):465-472.
By the time of his death in 1989, R.D. Laing was already history. His status as a countercultural legend remained intact, but he had gone from icon to relic. His intellectual and political credibility reached a peak in the late 1960s that he never regained. For many, the publication of The Politics of Experience and The Bird of Paradise in 1967 presaged his critical demise into bad poetry and bellicose shamanism.
Laing himself was keenly aware of his fall from popular grace.

Aristotle on Scientific Knowledge - R. D. McKirahan: Principles and Proofs: Aristotle's Theory of Demonstrative Science. Pp. xiv + 340. Princeton, NJ: Princeton University Press, 1992. Cased, £35. [REVIEW] J. D. G. Evans - 1994 - The Classical Review 44 (1):84-85.

F. R. D. Goodyear: Tacitus. (Greece and Rome, New Surveys in the Classics, 4.) Pp. 44. Oxford: Clarendon Press, 1970. Paper, 35p. R. H. Martin - 1977 - The Classical Review 27 (1):117-117.

Infima in the D.R.E. Degrees. D. Kaddah - 1993 - Annals of Pure and Applied Logic 62 (3):207-263.
This paper analyzes several properties of infima in Dn, the n-r.e. degrees. We first show that, for every n > 1, there are n-r.e. degrees a, b, and c, and an (n + 1)-r.e. degree x such that a < x < b, c and, in Dn, b ∧ c = a. We also prove a related result, namely that there are two d.r.e. degrees that form a minimal pair in Dn, for each n < ω, but that do not form a minimal pair in Dω. Next, we show that every low r.e. degree branches in the d.r.e. degrees. This result does not extend to the low2 r.e. degrees. We also construct a non-low r.e. degree a such that every r.e. degree b ≥ a branches in the d.r.e. degrees. Finally we prove that the nonbranching degrees are downward dense in the d.r.e. degrees.

J. R. Ellis and R. D. Milns: The Spectre of Philip. Pp. xiv + 122. Sydney: University Press, 1970. Paper, $A3. D. M. MacDowell - 1972 - The Classical Review 22 (3):425-425.

On the R.E. Predecessors of D.R.E. Degrees. Shamil Ishmukhametov - 1999 - Archive for Mathematical Logic 38 (6):373-386.
Let d be a Turing degree containing differences of recursively enumerable sets (d.r.e. sets) and R[d] be the class of r.e. degrees less than d in which d is relatively enumerable (r.e.). A. H. Lachlan proved that for any non-recursive d.r.e. degree d, R[d] is not empty. We show that the r.e. degree defined by Lachlan for a d.r.e. set D ∈ d is just the minimum degree in which D is r.e. Then we study, for a given d.r.e. degree d, the class R[d] and show that there
Hackmann - 1988 - British Journal for the History of Science 21 (4):499-499.details The D.R.E. Degrees Are Not Dense.S. Barry Cooper, Leo Harrington, Alistair H. Lachlan, Steffen Lempp & Robert I. Soare - 1991 - Annals of Pure and Applied Logic 55 (2):125-151.details By constructing a maximal incomplete d.r.e. degree, the nondensity of the partial order of the d.r.e. degrees is established. An easy modification yields the nondensity of the n-r.e. degrees and of the ω-r.e. degrees. Two Editions of Virgil R. D. Williams: The Aeneid of Virgil, Books 7–12. Pp. Xxxvi + 516. London: Macmillan, 1973. Cloth, £3. M. Geymonat: P. Vergili Maronis Opera. Pp. Xxviii + 708. Turin: Paravia, 1972. Paper, L. 7,000. [REVIEW]D. A. West - 1976 - The Classical Review 26 (01):34-36.details LACTANTIUS ON STATIUS R. D. Sweeney (Ed.): Lactantius Placidus in Statii Thebaida Commentum I; Anonymi in Statii Achilleida Commentum: Fulgentii Ut Fingitur Planciadis Super Thebaiden Commentariolum . (Bibliotheca Scriptorum Graecorum Et Romanorum Teubneriana). Pp. Lxxxviii + 704. Stuttgart and Leipzig: B. G. Teubner, 1997. Cased, DM 248. ISBN: 3-8154-1823-. [REVIEW]D. E. Hill - 2000 - The Classical Review 50 (01):57-.details I Experientially Remember, Therefore I Exist? A Reply To R. D. Smith.D. Lloyd - 1983 - Philosophy of Education 17 (1):97-102.details Philosophical Analysis and Education. Edited by R. D. Archambault. (London: Routledge and Kegan Paul. 1965. Pp. Xii + 212. Price 28s.). [REVIEW]D. W. Hamlyn - 1966 - Philosophy 41 (157):283-.details I Experientially Remember, Therefore I Exist? A Reply to R. D. Smith.D. I. Lloyd - 1983 - Journal of Philosophy of Education 17 (1):97–102.details Philosophy of Education in Philosophy of Social Science ROSENKRANTZ, R. D. E. T. Jaynes: Papers on Probability, Statistics and Statistical Physics. [REVIEW]D. Costantini - 1984 - Scientia 78 (19):41.details Rosenkrantz, R. D. E. T. Jaynes: Papers On Probability, Statistics And Statistical Physics. [REVIEW]D. Costantini - 1984 - Scientia, Rivista di Scienza 78 (119):41.details ARCHAMBAULT, R. D. .-"Philosophical Analysis and Education". [REVIEW]D. W. Hamlyn - 1966 - Philosophy 41:283.details R. D. Archer-Hind, The Timacus of Plato. [REVIEW]R. L. Nettleship - 1889 - Mind 14:127.details Plato's Works in Ancient Greek and Roman Philosophy HICKS, R. D. - Aristotle de Anima. [REVIEW]G. R. T. Ross - 1908 - Mind 17:535.details Aristotle: Philosophy of Mind in Ancient Greek and Roman Philosophy OSENKRANTZ, R. D.: "Inference, Method and Decision". [REVIEW]R. G. Swinburne - 1978 - British Journal for the Philosophy of Science 29:301.details Bayesian Reasoning in Philosophy of Probability A Constructive Look at the Completeness of the Space $\Mathcal{D} (\Mathbb{R})$.Hajime Ishihara & Satoru Yoshida - 2002 - Journal of Symbolic Logic 67 (4):1511-1519.details We show, within the framework of Bishop's constructive mathematics, that (sequential) completeness of the locally convex space $\mathcal{D} (\mathbb{R})$ of test functions is equivalent to the principle BD-N which holds in classical mathemtatics, Brouwer's intuitionism and Markov's constructive recursive mathematics, but does not hold in Bishop's constructivism. The D.R.E. Degrees Are Not Dense.S. Cooper, Leo Harrington, Alistair Lachlan, Steffen Lempp & Robert Soare - 1991 - Annals of Pure and Applied Logic 55 (2):125-151.details R. A. Crossland: Immigrants From the North. (Cambridge Ancient History, Revised Edition, Vol. I, Ch. Xxvii.) Pp. 61. Cambridge: University Press, 1967. Paper, 6s. 
Net.R. D. Barnett: Phrygia and the Peoples of Anatolia in the Iron Age. (Cambridge Ancient History, Revised Edition, Vol. Ii, Ch. Xxx.) Pp. 32. Cambridge: University Press, 1967. Paper, 3s. 6d. Net. [REVIEW]John Boardman - 1968 - The Classical Review 18 (3):356-356.details Microbes at Work. Micro-Organisms, the D.S.I.R. And Industry in Britain, 1900–1936.Keith Vernon - 1994 - Annals of Science 51 (6):593-613.details The study of micro-organisms in Britain in the early twentieth century was dominated by medical concerns, with little support for non-medical research. This paper examines the way in which microbes came to have a place in industrial contexts in the 1920s and early 1930s. Their industrial capacity was only properly recognized during World War I, with the development of fermentation processes to make required organic chemicals. Post-war research sponsored by chemical and food industries and the D.S.I.R. established the industrial significance (...) of microbes. The primary focus here is the D.S.I.R. work which aimed to pull microbes away from medical concerns and promote the role of microbes in British industry. (shrink) Philosophy of Biology, Miscellaneous in Philosophy of Biology A Non-Splitting Theorem for D.R.E. Sets.Xiaoding Yi - 1996 - Annals of Pure and Applied Logic 82 (1):17-96.details A set of natural numbers is called d.r.e. if it may be obtained from some recursively enumerable set by deleting the numbers belonging to another recursively enumerable set. Sacks showed that for each non-recursive recursively enumerable set A there are disjoint recursively enumerable sets B, C which cover A such that A is recursive in neither A ∩ B nor A ∩ C. In this paper, we construct a counterexample which shows that Sacks's theorem is not in general true when (...) A is d.r.e. rather than r.e. (shrink) Model Theory in Logic and Philosophy of Logic 1 — 50 / 999
Computational Social Networks

Detection of strong attractors in social media networks

Ziyaad Qasem, Marc Jansen, Tobias Hecking & H. Ulrich Hoppe

Computational Social Networks volume 3, Article number: 11 (2016)

Detection of influential actors in social media such as Twitter or Facebook plays an important role in improving the quality and efficiency of work and services in many fields such as education and marketing. The work described here aims to introduce a new approach that characterizes the influence of actors by the strength of attracting new active members into a networked community. We present a model of the influence of an actor that is based on the attractiveness of the actor in terms of the number of other new actors with which he or she has established relations over time. We have used this concept and measure of influence to determine optimal seeds in a simulation of influence maximization using two empirically collected social networks for the underlying graphs. Our empirical results on the datasets demonstrate that our measure stands out as a useful measure to define the attractors, compared to other influence measures.

Social media have become an important information resource for gaining insights into and acquiring knowledge about a wide variety of more or less numerous communities interacting through the internet. Moreover, applying analytic techniques to social media data can support better informed decision-making processes in numerous fields, such as marketing, politics and education. One prominent aspect of such analytics is the characterization and detection of influential actors in social networks. Several studies on social media have suggested different approaches and specific measures to solve the problem of influential actor detection. In this paper, we elaborate on a new approach for the detection of influential actors which is based on quantifying the contribution of an actor to increasing the size of the network by attracting new active members of a specific subcommunity [1]. In comparison to weighted or unweighted indegree measures, our new measure only counts those neighbors who were new to the network when the relationship to the actor in focus was first established. In other words, an actor who has a high value in terms of this measure has been an important "target" node for the attraction of new members to the network, and thus for increasing the overall size of the network. A formal specification of this property (referred to as the "T measure") is given in the first part of the paper. Our approach can be applied to social networks in which timestamps are attached to the edges connecting actors. In the evaluation section of this paper, we apply our approach first to a dataset from the Asterisk open source software developer community (a relatively small community with fewer than 1400 members and far fewer active actors) to test whether the influential actors who are already known from the Asterisk mailing list can also be identified using our approach. Second, we use a bigger dataset based on Twitter communication around #EndTaizSiege and #coup_suffocates_Taiz (related to recent events in Yemen). Here, we compare our approach with other standard measures such as indegree and betweenness, in terms of how well they perform when used to generate seeds for an independent cascade diffusion process.
The objective of studying our T measure in the field of information diffusion is to show that the T measure is effective at defining influential actors who are effective in attracting others to become active in a specific community. The rest of the paper is organized as follows: "Literature review" section presents related research. An overview of our proposed approach is given in "Approach" section, which also provides the basic formal definitions. "Implementation" section introduces the computation of the proposed measure, followed by the description of our datasets and the experimental results in "Experimental results" section. "Information diffusion and T measure" and "Simulation of attraction processes with time-respecting paths" sections deal with the performance of our approach in the influence maximization problem. Finally, conclusions are drawn and an outlook for further research is described in "Conclusion" section.

Literature review

In this section, we review studies of influence in social media such as Twitter and revisit the concept of information diffusion and its relation to the type of influence on which our approach is based.

Influence in social media

In the field of social media analysis, there exists a large body of research on modeling and measuring influence and on detecting influential actors. Here, social networking platforms such as Twitter are of special interest. However, regarding the manifestation and identification of influence there are still open questions. Researchers have studied influence in social media networks, and many approaches rank users according to their influence. Leavitt et al. [2] employ four features to evaluate influence, which are replies, retweets, mentions, and number of followers. They report statistical results related to these measures, but do not present a global influence measure based on all the suggested criteria. In the work of Cha et al. [3], it could be shown that employing different measures can lead to completely different results when it comes to the task of ranking users according to their importance. Results were presented based on Twitter data and three different measures of influence, namely indegree (number of followers of an actor), retweets (number of retweets containing one's actor name), and mentions (number of mentions containing one's actor name). They presented an in-depth comparison of these measures with the conclusion that different measures can be used to identify different types of influential actors. Indegree tends to be highest for news sites and celebrities, and thus is suited to model popularity. However, the number of followers (indegree) does not necessarily go along with a high number of retweets or mentions. The number of retweets is highest for information aggregation services and the number of mentions for celebrities. Consequently, the way in which a network is extracted from social media content and the measure of influence should be considered carefully with respect to the roles and type of influence one aims to uncover. Azaza et al. [4] proposed a new influence assessment approach based on belief theory to combine different types of influence markers on Twitter such as retweets, mentions, and replies. They used a Twitter dataset from the 2014 European Election and deduced the top influential candidates. In our approach, we rely on the retweet relation as a marker of attracting others to become active in a specific community in which a specific topic is dealt with.
Moreover, a retweet relation can be understood as a form of information diffusion and as a means of participating in an event in social media [5]. Other studies propose to define influential actors based on link analysis. Twitter User Rank (TURank) [6] is an algorithm which utilizes ranking algorithms to define authoritative actors on Twitter based on link analysis. TURank introduces an actor–tweet graph where nodes are actors and tweets, and edges are follow and retweet relationships. TwitterRank [7] extended the PageRank algorithm to measure influential actors in Twitter based on link structure and topical similarity. Apart from the pure network information, influence can also be modeled by additionally taking into account the actions of actors (e.g. on Flickr [8]), the similarity of actors [9], and the produced content associated with each actor [10]. Our work aims for a clear formulation of social influence and a methodology to produce an exact ranking of the actors according to that definition. Concretely, we provide a new type of influence in online social networks that emphasizes those actors who attract many outsiders to join their own community in which a specific topic is dealt with. For example, in Twitter those actors spawn many retweets on a certain topic from people who have no previous contributions on that topic. This new type of influence led us to propose a new approach to detect those actors and to compare the results with other standard measures.

Information diffusion

Influence is often related to information diffusion in a network. Information diffusion is the process by which a new idea or innovation spreads over a network by means of the connections among the social network actors [11]. Especially in social media, influential actors can control the diffusion of information through the network to some extent. There is numerous research on information diffusion over social networks. For instance, Gruhl et al. [12] studied and modeled the dynamics of information diffusion in the blogspace environment. Yang et al. [13] proposed a model to capture the attributes of information diffusion which are related to speed, scale, and range. With the proliferation of information diffusion models and their variations, Vallet et al. [14] used graph rewriting to compare the different information diffusion models. Widely used information diffusion models are the independent cascade (IC) [15, 16] and the linear threshold (LT) [17] models. The two models describe different aspects of influence diffusion: the IC model focuses on influence among neighbors in a social network, and the LT model focuses on the threshold behavior in influence diffusion [18]. Kempe et al. [19] proposed to use the IC and LT models to solve the influence maximization problem, which asks for a set of actors whose aggregated influence in the social network is maximized, whereas Pei et al. [20] provided strategies to search for spreaders based on following the information flow rather than simulating the spreading dynamics (model-dependent results). The study of [19] was followed by several works on the same problem (e.g. [18, 21, 22]). Furthermore, the features of measures for identifying spreaders under independent interaction and threshold models are discussed in [23] using empirical diffusion data from LiveJournal. Morone et al. [24] proposed to map the problem of influence maximization in complex networks onto optimal percolation using the Collective Influence (CI) algorithm.
In this paper, we evaluate the performance of our measure T in the information diffusion maximization problem by comparing sets of top actors selected based on the T measure with sets defined by other standard measures. The advantage of our measure is that it considers a new type of influence, which refers to actors who attract others to become active in a particular community. Thus, we use the IC model to evaluate the performance of our measure in comparison with other standard measures.

Approach

Our approach is based on the following premise: the more a certain actor a attracts new actors, the more important actor a is to the social network. Thus, in this approach we evaluate the attractiveness value of each social media actor, which allows us to detect the attractors. In this section, we provide definitions of special terms that underpin the methodology of our approach. This approach is based mainly on the decomposition of data collected from a given social network according to the time period of collection. Let us refer to that period by the term P-period. For instance, if the P-period of a given social network is 30 days, the social network data collection took 30 days.

Definition 1 (P-period) The P-period is the time duration of the data collection process from a social network.

In this paper, the social networks' data are depicted by a graph representation. To distinguish this graph in any context, it is referred to as the P-graph. Thus, we can say that our approach is based on the decomposition of the P-graph into subgraphs depending on the P-period.

Definition 2 (P-graph) A P-graph is a graph constructed from social network data which have been collected during the P-period. Thus, the graph collected during the P-period is described by the P-graph G(V, E), where V is the set of all actors who joined the community during the P-period and E is the set of all connections that have been established between the actors V during the P-period.

Decomposition of a P-graph based on the P-period requires decomposition of the P-period into slices of time so that every subgraph is related to a slice. In our approach, we refer to each slice as a P-slice.

Definition 3 (P-slice) A P-slice is a time slice of the P-period.

If all P-slices are equidistant, then we define a special case of the P-slice as an EP-slice. For example, let the P-period be 30 days and the number of slices be 5 EP-slices. Then, the value of each EP-slice will be as in Table 1. We notice that each P-slice is included in the later ones.

Table 1 EP-slice values for P-period of 30 days

Definition 4 (EP-slice) An EP-slice is a P-slice such that all P-slices are equidistant.

To facilitate the definition of the subgraphs of this approach, we define some terms related to actors according to P-slices.

Definition 5 (P-actors) Let \(s_1,s_2,\ldots s_n\) be the P-slices. For every i such that \(0 < i \le n\), the P-actors \(A_i\) are the set of all actors that joined the social network between 0 and \(s_i\).

Definition 6 (\(P_s\)-actors) Let \(s_1,s_2,\ldots s_n\) be the P-slices. For every i such that \(0 < i \le n\), the \(P_s\)-actors \(A_{s_i}\) are the set of all actors that joined the social network between the P-slices \(s_{i-1}\) and \(s_i\).

P-actors and \(P_s\)-actors with respect to P-slices

Figure 1 shows how the P-actors and \(P_s\)-actors are taken with respect to the P-slices in our approach. The figure displays the P-actors \(A_3\) and \(P_s\)-actors \(A_{s_3}\) as an example. \(A_3\) joined the social network between P-slices \(s_0\) and \(s_3\), whereas \(A_{s_3}\) joined between P-slices \(s_2\) and \(s_3\).
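To make these definitions concrete, the following minimal Python sketch (our own illustration, not code from the paper) derives EP-slice boundaries, P-actors, and \(P_s\)-actors from a raw list of timestamped connections; the input format (source, target, timestamp) is an assumption.

def p_slices(t_start, t_end, n):
    # Split the P-period [t_start, t_end] into n equidistant EP-slices
    # and return the slice boundaries s_1, ..., s_n.
    width = (t_end - t_start) / n
    return [t_start + width * i for i in range(1, n + 1)]

def join_times(edges):
    # First time each actor appears in the network (its "join" time).
    joined = {}
    for a, b, t in edges:
        for v in (a, b):
            if v not in joined or t < joined[v]:
                joined[v] = t
    return joined

def p_actors(joined, s_i):
    # P-actors A_i: all actors that joined between 0 and slice boundary s_i.
    return {v for v, t in joined.items() if t <= s_i}

def ps_actors(joined, s_prev, s_i):
    # P_s-actors A_{s_i}: actors that joined between slices s_{i-1} and s_i.
    return {v for v, t in joined.items() if s_prev < t <= s_i}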
After discussing the terms mentioned above, it is now easy to provide the definitions of the different types of subgraphs which will be used in this approach. These definitions will be helpful on our way to the goal of this approach.

Definition 7 (P-subgraph) The P-subgraph \(G_i(A_i,E_i)\) is the subgraph of the P-graph G which is aggregated until P-slice i. Thus, the aggregated subgraph until P-slice i is described by the P-subgraph \(G_i(A_i,E_i)\), where \(A_i\) is the P-actors \(A_i\) and \(E_i= \{(a,b) : a,b\in A_i\}\).

By this, we focus on the connections by which the actors attracted the new actors; hence, we can easily measure the actors' attractiveness. The next definition discusses this issue in a formal way.

Definition 8 (S-subgraph) The ith S-subgraph \(S_i(A_i,E_{s_i})\) is a subgraph of the P-subgraph \(G_i(A_i,E_i)\) such that \(E_{s_i}= \{(a,b) : a\in A_{i-1} \text{ and } b\in A_{s_i}\} \cap E_i\).

P-subgraphs \(G_{i-1}\) and \(G_{i}\), and S-subgraph \(S_{i}\)

From Definition 8, we notice that the S-subgraph \(S_i\) contains the new connections by which the new actors \(A_{s_i}\) joined the network. The number of these connections refers to the attractiveness value of the actors \(A_{i-1}\). Later in the implementation section, Definition 8 is used to facilitate the calculation of the attractiveness value T. Figure 2 shows the difference between a P-subgraph and an S-subgraph in our approach, where n is the number of P-slices and \(1<i\le n\). P-subgraph \(G_{i-1}\) is the P-subgraph of the P-slice \(s_{i-1}\), and P-subgraph \(G_{i}\) and S-subgraph \(S_{i}\) are of the P-slice \(s_{i}\). What if the P-graph is a directed graph? The P-subgraph would be directed with the same properties as the P-subgraph in Definition 7; however, the definition of the S-subgraph would be slightly different.

Definition 9 (Directed S-subgraph) The ith directed S-subgraph \(S_i(A_i,E_{s_i})\) is a subgraph of the directed P-subgraph \(G_i(A_i,E_i)\) such that \(E_{s_i}= \{(a,b) : (a\in A_{i-1} \text{ and } b\in A_{s_i}) \text{ or } (b\in A_{i-1} \text{ and } a\in A_{s_i})\} \cap E_i\).

Directed P-subgraphs \(G_{i-1}\) and \(G_{i}\), and directed S-subgraph \(S_{i}\)

In Fig. 3, the directed P-subgraph and S-subgraph are shown, where n is the number of P-slices and \(1<i\le n\). The directed P-subgraph \(G_{i-1}\) is the P-subgraph of the P-slice \(s_{i-1}\), and the directed P-subgraph \(G_{i}\) and S-subgraph \(S_{i}\) are of the P-slice \(s_{i}\). In the next section, we introduce the implementation of our approach to evaluate the attractiveness value of each actor in online social media.

Implementation

According to the P-slices, the P-graph in this approach is decomposed into n P-subgraphs \(G_1,G_2,\ldots G_n\) and \((n-1)\) S-subgraphs \(S_{2},S_{3},\ldots S_{n}\), where n is the number of P-slices. To evaluate the attractiveness value of each actor in each P-subgraph, we use the formula in the next definition.

Definition 10 (Attractiveness value T) Let \(s_1,s_2,\ldots s_n\) be the P-slices. For every i such that \(0< i < n\), the attractiveness value of an actor a in P-subgraph \(G_i\) is given by the expression:

$$\begin{aligned} T(a_{G_i})= \left\{ \begin{array}{ll} 0 &{} \text{ if } a \notin A_i \\ \frac{\deg (a_{S_{i+1}})}{|A_{s_{i+1}}|} &{} \text{ if } a \in A_i \end{array} \right. \end{aligned}$$

where \(T(a_{G_i})\) is the attractiveness value of actor a in P-subgraph \(G_i\), \(\deg (a_{S_{i+1}})\) is the degree of the same actor but in S-subgraph \(S_{i+1}\), and \(A_{s_{i+1}}\) is the set of \(P_s\)-actors in S-subgraph \(S_{i+1}\).
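A corresponding sketch of Definitions 8 and 10 (again our own rendering, reusing the helpers from the previous sketch; undirected edges are assumed, and the directed case of Definition 9 would count the indegree instead) computes the per-slice attractiveness values and sums them into the aggregate measure defined next:

def attractiveness_per_slice(edges, joined, boundaries, i):
    # T(a, G_i): degree of a in S-subgraph S_{i+1}, normalized by |A_{s_{i+1}}|.
    # boundaries holds s_1, ..., s_n, so for i = 1 .. n-1 the slice window
    # of S_{i+1} is (s_i, s_{i+1}] = (boundaries[i-1], boundaries[i]].
    lo, hi = boundaries[i - 1], boundaries[i]
    old = {v for v, t in joined.items() if t <= lo}        # P-actors A_i
    new = {v for v, t in joined.items() if lo < t <= hi}   # P_s-actors A_{s_{i+1}}
    deg = {}
    for a, b, t in edges:
        if t <= hi:                    # edge belongs to E_{i+1}
            if a in old and b in new:  # connection attracting a newcomer
                deg[a] = deg.get(a, 0) + 1
            elif b in old and a in new:
                deg[b] = deg.get(b, 0) + 1
    n_new = max(len(new), 1)
    return {v: d / n_new for v, d in deg.items()}

def t_measure(edges, boundaries):
    # T(a, G): sum of per-slice attractiveness values over i = 1 .. n-1.
    joined = join_times(edges)
    total = {}
    for i in range(1, len(boundaries)):
        for v, val in attractiveness_per_slice(edges, joined, boundaries, i).items():
            total[v] = total.get(v, 0.0) + val
    return total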
From Fig. 2, we notice that the attractiveness value of the actor \(a_1\) in P-subgraph \(G_{i-1}\) is equal to 2/3, which results from his/her degree in S-subgraph \(S_i\) divided by the number of \(P_s\)-actors \(A_{s_i}\). Now, we provide the way in which the new measure of attractiveness is evaluated. Let us call the new measure T; it is evaluated as follows:

Definition 11 (Measure T) Let \(s_1,s_2,\ldots s_n\) be the P-slices. For every i such that \(0< i < n\), the T value \(T(a_G)\) of an actor a in P-graph G is given by the expression:

$$\begin{aligned} T(a_G)={\sum \limits _{i=1}^{n-1} T(a_{G_i})} \end{aligned}$$

where \(T(a_{G_i})\) is evaluated according to Eq. 1. To normalize the value of the T measure to be between 0 and 1, we divide Eq. 2 by \((n-1)\) as follows:

$$\begin{aligned} T_n(a_G)=\frac{\sum \limits _{i=1}^{n-1} T(a_{G_i})}{n-1} \end{aligned}$$

Toy example: P-graph G with three P-slices

Figure 4 shows an example of a P-graph G with three P-slices. With respect to the definitions of our approach, we have three P-subgraphs and two S-subgraphs. From Fig. 4, we can derive, for instance:

\(A_{s_2}\), which is the set of the \(P_s\)-actors E, F, G, H, I, J, and K.

P-subgraph \(G_2(A_2,E_2)\), where \(A_2\) is the set of the P-actors A, B, C, D, E, F, G, H, I, J, and K, and \(E_2\) is the set of the connections (B, A), (B, C), (D, C), (E, C), (H, E), (G, C), (F, C), (I, C), and (K, I).

S-subgraph \(S_2(A_2,E_{s_2})\), where \(E_{s_2}\) is the set of the connections (E, C), (H, E), (G, C), (F, C), (I, C), and (K, I).

To calculate the attractiveness value of the actor C in the whole P-graph G, we have to calculate \(T(C_{G_1})\), which equals the indegree value of the actor C in the S-subgraph \(S_2\). In this case, it equals 5. In normalized form, we also evaluate the number of \(P_s\)-actors \(A_{s_2}\), which equals 7. Thus, \(T(C_{G_1})\) equals 5/7. \(T(C_{G_2})\) equals the indegree value of the actor C in the S-subgraph \(S_3\). In this case, it equals 3. In normalized form, \(T(C_{G_2})\) equals 3/6, where 6 is the number of \(P_s\)-actors \(A_{s_3}\). With respect to Eq. 2, the T value of the actor C in the whole P-graph G equals \(T(C_{G_1})\) plus \(T(C_{G_2})\), which is 1.214.

Experimental results

In this section, we describe the types of our datasets and the characteristics of each type. Furthermore, the experimental results on the different datasets are discussed.

Evaluation strategy

An example of the graph representation for the Asterisk dataset

Our approach has been applied to three different datasets. First, we chose the open source software development project Asterisk. Here, the dataset originated from the communications in the developer mailing lists during 2006 and 2007. The Asterisk dataset contains 13,542 messages and 4694 threads that were discussed by 1324 developers. Two actors are linked if they participated in the same mailing thread. Figure 5 shows an example of an actor a participating once in the same mailing thread with actor b and having shared two mailing threads with actor c. According to our approach and the timestamps in the Asterisk dataset, we decomposed the P-period into eight P-slices. According to Definitions 7 and 8, we obtained eight P-subgraphs and seven S-subgraphs. Second, we gathered a dataset from Twitter via the Twitter API from December 31, 2015, to January 06, 2016.
The collected dataset is the data of the hashtag #EndTaizSiege (14,944 actors and 46,552 connections), which comprises a big connected component (containing 84% of the actors), singletons (14%), and smaller components (2%). We worked with the biggest component because our goal is to evaluate the attractiveness of actors; hence, we focus on the biggest component, which is considered as a single interaction domain for actors [3]. Applying our approach leads to decomposing the P-graph constructed from the Twitter dataset into three P-subgraphs and two S-subgraphs based on three P-slices. As a third example, we collected another dataset from Twitter from July 25 to July 30 in 2016. This Twitter dataset relates to the hashtag #coup_suffocates_Taiz (2241 actors and 4419 connections) and comprises a big connected component (containing 1418 actors). We divided the corresponding P-period into three P-slices. As a result, we obtained three P-subgraphs and two S-subgraphs.

An example of graph representation for our Twitter datasets

The directed weighted P-graph of our collected Twitter datasets is constructed based on retweet activities, so that actor a gets an incoming connection from actor b if actor b retweeted a tweet of actor a. The weight of a connection refers to the number of retweet activities between two connected actors. Figure 6 shows an example where actor a retweeted three tweets of actor b, whereas actor c retweeted two tweets of actor a. Boyd et al. [5] argued that the retweet relation can be understood as a form of information diffusion and as a means of participating in an event in social media. Thus, we focus on the retweet relation to evaluate our approach. Furthermore, we consider a retweet activity as attracting an actor to become active in the community.

Retweet activities over time in Twitter dataset #EndTaizSiege

As a matter of fact, the time slicing does not depend on a specific predefined strategy, but it has been estimated in accordance with the size of the dataset, using an equal window size for each slice. For instance, Fig. 7 shows how the P-period of the Twitter dataset #EndTaizSiege has been decomposed into equal window sizes so that we get a fair division of the retweet activities for each time slice. (In our ongoing work, we try to find a general overall strategy for the time period decomposition.)

For the Asterisk mailing lists dataset, we applied our T measure to verify whether it can detect the influential actors. We found that the T measure points to the influential actors in open source software development projects as introduced by Zeini and Hoppe [25]. Actually, in open source projects, it is easy to find out the role of a community member because of the openness of the community archive, including the full email communication and all code modifications. Hence, the positions of the actors in the Asterisk dataset are well known (e.g. Kevin P. Fleming is a senior software engineer). Table 2 shows the top 10 actors with respect to the T, degree, and betweenness measures.

Table 2 Top influential actors according to different influence measures over Asterisk dataset

To study the relation between the T measure and other influence measures in the Asterisk dataset, we used Spearman's rank correlation coefficient \(\rho\). Table 3 shows the different values of the rank correlation. We notice that the correlation between the T measure and the other influence measures is relatively high and significant. Thus, we can conclude that the attractors also have high values of other influence measures.
Table 3 Spearman's rank correlation coefficient over Asterisk dataset

For our Twitter datasets #EndTaizSiege and #coup_suffocates_Taiz, we investigate the relation between the T measure and standard measures by Spearman's rank correlation coefficient \(\rho\). The results are shown in Tables 4 and 5. The rank correlation between the indegree (retweets number) measure and the number of followers is very low (\(\rho\) = 0.08). This goes along with the findings of [3]. Thus, we can state that the popularity of actors in terms of the number of followers is not an important factor affecting retweet activities in Twitter. Furthermore, we found that the rank correlation between the T and indegree (retweets number) measures is strong (\(\rho\) = 0.6) and, consequently, the correlation with the number of followers is low. This is reasonable since the T measure incorporates the indegree. However, in contrast to the indegree, the T measure emphasizes the attraction of new actors by not counting relations to actors who are already active in the community. This explains why these two measures are not more strongly correlated. Furthermore, we notice that the rank correlation between the T and authority measures is high (\(\rho\) = 0.5) but not as high as the correlation between the authority measure and indegree, which leads to the conclusion that the T measure also detects influential actors as the traditional measures do, but puts a different emphasis on the attractors.

Table 4 Spearman's rank correlation coefficient over Twitter dataset #EndTaizSiege

Table 5 Spearman's rank correlation coefficient over Twitter dataset #coup_suffocates_Taiz

Tables 6 and 7 also show the correlation by Kendall's rank correlation coefficient. The results shown here support the results obtained with Spearman's rank correlation coefficient.

Table 6 Kendall's tau rank correlation coefficient over Twitter dataset #EndTaizSiege

Table 7 Kendall's tau rank correlation coefficient over Twitter dataset #coup_suffocates_Taiz

Tables 8 and 9 show the descriptions of the top influential actors in the Twitter datasets #EndTaizSiege and #coup_suffocates_Taiz with respect to the T, indegree, and betweenness measures. A question mark in the table fields refers to an actor who is not a well-known influential actor within the community. We notice here how our T measure points to the well-known influential actors within the community, or to famous news accounts. Unlike for the other measures, the top ten influential actors with respect to the T measure are well known within the community. In our case, the well-known actors have been recognized based on local expertise; they are the most renowned actors in the field of human rights and politics, whose names continually appear in the newspapers and news concerning the current situation in the city of Taiz in Yemen. Their names have not been mentioned explicitly to protect their privacy.

Distribution of T measure along with other standard measures over Twitter dataset #EndTaizSiege

Furthermore, we can note how the T measure is correlated with the other standard measures from Fig. 8, which shows the distribution of the T measure along with the followers number, indegree, outdegree, and betweenness over the Twitter dataset. Figure 8 supports the results that were presented based on Spearman's and Kendall's rank correlation coefficients.
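The rank-correlation analysis in Tables 3-7 is straightforward to reproduce; below is a small sketch (illustrative only, assuming each measure is stored as a dictionary mapping actors to scores) using scipy:

from scipy.stats import spearmanr, kendalltau

def rank_correlations(measure_a, measure_b):
    # Align the two measures on the actors they share, then compute
    # Spearman's rho and Kendall's tau as reported in Tables 3-7.
    actors = sorted(set(measure_a) & set(measure_b))
    x = [measure_a[v] for v in actors]
    y = [measure_b[v] for v in actors]
    rho, _ = spearmanr(x, y)
    tau, _ = kendalltau(x, y)
    return rho, tau

# Example (hypothetical inputs): rank_correlations(t_scores, indegree_scores)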
Table 8 Description of top influential actors according to different influence measures in Twitter dataset #EndTaizSiege

Table 9 Description of top influential actors according to different influence measures in Twitter dataset #coup_suffocates_Taiz

Information diffusion and T measure

To assess how well the T measure is suited to uncover influential actors with respect to information diffusion, we simulate the diffusion of information originating from a small seed set of nodes through the Twitter networks using the well-known independent cascade (IC) model [19]. To compare the performance of actor sets selected by the T measure with other influence measures, we selected sets of top actors based on the T measure and sets identified by measures that are known to be good heuristics for seed set selection, namely degree and betweenness centrality [26]. The IC model is an information diffusion model where the information flows over the network through cascades. Actors in the IC model can have two states, either active or inactive. Active means the actor is influenced by the information, and inactive means the actor is not influenced. The IC model calculation starts with an initial set of activated actors. In step t, an actor a gets a single chance to activate each currently inactive neighbor b. The activation process depends on the propagation probability P of the actors' connection. The propagation probability P of a connection is the probability with which an actor can influence the other actor. In Twitter, we have proposed that actor a is influenced by actor b if he/she retweeted from actor b in proportion to the number of tweets of actor b. So, in our Twitter dataset, the propagation probability P in the IC model is the connection weight divided by the number of tweets of the target actor. The reason why we use the IC model instead of the LT model is that the linear threshold model is receiver oriented. This means an actor becomes active if a certain fraction of its neighbors are active. This does not fit our purpose, where we want to find strong attractors who are likely to attract others. The IC model is sender oriented and, thus, better suited to simulate attraction processes. Algorithm 1 shows the pseudo code of the IC model simulator, which takes the seed set S as a parameter, evaluates the actors activated by each actor v in the set S, and finally returns the total number of actors activated by the whole set S.

Simulation of attraction processes with time-respecting paths

In addition to the statistical comparison between the T measure and other standard network measures, we also report results based on simulated attraction processes. To do so, we adapt the IC model, which is known to simulate the diffusion of information through a network, as described above. Information diffusion and attraction processes have some commonalities but differ in various aspects. In traditional information diffusion models such as the IC model, the network is usually considered as stable in the sense that the set of nodes and the set of edges do not change over time. However, the nodes change their states "inactive" and "active" during the information diffusion process. Attraction, as it is studied in this paper, is similar in the sense that actors who are not part of the community (i.e. have not contributed a tweet) are inactive while others are considered as active.
On the other hand, the original IC model does not account for the fact that the network grows when new actors become attracted to the community. Thus, the IC model was adapted to take into account the creation times of the edges. These time-varying networks have special characteristics regarding the reachability of node pairs, since a walk on the graph can only take edges with increasing timestamps, which is known as the time-respecting property (see [27, 28]). In this aspect, we added a new activation rule to the IC model: an actor who is activated at time t cannot activate those actors who were linked with him/her before time t. To explain this activation rule in more detail, we define the following terms:

(Pathtime) The path time of each link in the network is the P-slice number in which this link has been created.

(Activation time) The activation time of each activated actor is the path time of the link by which this actor has been activated.

Now, we can state that actor a cannot activate actor b if the link from b to a has a path time earlier than the activation time of actor a. Using this activation rule, the simulation can be interpreted as an attraction process where actors who are already part of the community can attract others only if their activity starts after the activator has become active. Previous studies [1] have shown that a seed selection strategy based on indegree yields results similar to a selection strategy based on the T measure. This is also expected with respect to the high correlation between these two measures. However, the benefit that distinguishes the T measure from other measures is that time is explicitly taken into account. The experimental results in the next section support the assumption that the T measure can identify important attractors in time-varying networks, while it boils down to indegree if time is neglected.

IC model under time-respecting paths with different influence measures over Twitter dataset #EndTaizSiege

Here, we considered the dataset #EndTaizSiege, which is related to an organized event in Yemen. Hence, we obtained a highly connected component that is suitable for the application of our approach, which basically aims to identify those actors who contribute to attracting others to participate in a specific organized event. We simulated the information diffusion based on the IC model with time-respecting paths for seed sets of sizes \(n = 1 \ldots 25\) generated from the different influence measures. Figure 9 shows the results of applying the IC model to seeds generated from the T, indegree, and betweenness measures. We notice that the T measure yields the best performance in information diffusion under the IC model with time-respecting paths for seed sizes bigger than 13. Additionally, we statistically verified the simulation results for each seed set using a t test. For n > 13, the differences between the T and indegree measures are significant. For example, results for seed set 14 show that there is a significant difference in the score of the T measure \((M = 1462.1,\ \mathrm{SD} = 85.3802;\ t(19) = 14.4854,\ p < 0.0001)\). Table 10 presents the relevant descriptive statistics.

Table 10 T test verification for simulation results in case of seed sizes n (n > 13) among T and indegree measures in the dataset #EndTaizSiege
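Before turning to the second dataset, the simulator just described can be summarized in a short Python sketch (our own rendering of Algorithm 1 extended with the time-respecting rule; the graph format and the seed activation time of 0 are assumptions, not the authors' exact implementation):

import random

def independent_cascade_tr(graph, seeds):
    # graph: {actor: [(neighbor, prob, path_time), ...]}, where prob is the
    # propagation probability (connection weight / target actor's tweet count)
    # and path_time is the P-slice number in which the link was created.
    activation_time = {s: 0 for s in seeds}  # seeds are active from the start
    frontier = list(seeds)
    while frontier:
        nxt = []
        for a in frontier:
            for b, prob, path_time in graph.get(a, []):
                if b in activation_time:
                    continue  # b is already active
                if path_time < activation_time[a]:
                    continue  # time-respecting rule: link predates a's activation
                if random.random() < prob:  # single activation attempt per pair
                    activation_time[b] = path_time  # per the Activation time definition
                    nxt.append(b)
        frontier = nxt
    return len(activation_time)  # total number of activated actors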
Furthermore, we consider the dataset #coup_suffocates_Taiz. We simulated the diffusion here for seed sets of sizes \(n = 1 \ldots 30\) generated from the different influence measures. Figure 10 shows the results of applying the IC model to seeds generated from the T, indegree, and betweenness measures. We notice that the T measure yields the best performance in information diffusion under the IC model with time-respecting paths for seed sizes bigger than 7. Additionally, we statistically verified the simulation results for each seed set using a t test. For n > 7, the differences between the T and indegree measures are significant. For example, results for seed set 8 show that there is a significant difference in the score of the T measure \((M = 162,\ \mathrm{SD} = 16.946;\ t(19) = 3.272,\ p < 0.01)\). Table 11 presents the relevant descriptive statistics.

IC model under time-respecting paths with different influence measures over Twitter dataset #coup_suffocates_Taiz

Table 11 T test verification for simulation results in case of seed sizes n (n > 7) among T and indegree measures in the dataset #coup_suffocates_Taiz

Conclusion

In this paper, we introduced a new approach to detect influential actors based on a new type of influence. Influential actors detected by our approach are those actors whose tweets spawn many retweets in a way that leads to an increase in the size of the social network. We showed through experimental results how our proposed measure T identifies the influential actors in the Asterisk and Twitter datasets. Furthermore, we examined the relation between the T measure and other influence measures using Spearman's rank correlation. Finally, we showed through experiments and statistical tests that the T measure yields the best performance in the influence maximization problem when time is taken into account. Our current work on extending and improving this approach focuses on differentiating the roles of actors and different types of communication networks based on the T measure. We also plan to extend our approach to multilayer networks. Furthermore, we are going to study an efficient general strategy to define the size of the P-slice, based on the premise that the P-slice is the time in which most tweets receive most of their retweets.
Moreover, we intend to study the role of time slicing in making the T measure clearly outperform existing measures.

References

1. Qasem Z, Jansen M, Hecking T, Hoppe HU. On the detection of influential actors in social media. In: 11th international conference on signal-image technology and internet-based systems. Washington, DC, USA: IEEE Computer Society; 2015. p. 421–27.
2. Leavitt A, Burchard E, Fisher D, Gilbert S. The influentials: new approaches for analyzing influence on twitter. Web Ecol Proj. 2009;4:1–18.
3. Cha M, Haddadi H, Benevenuto F, Gummadi PK. Measuring user influence in twitter: the million follower fallacy. In: International conference on weblogs and social media. ICWSM. 2010;10:10–7.
4. Azaza L, Kirgizov S, Savonnet M, Faiz R. Influence assessment in Twitter multi-relational network. In: 2015 11th international conference on signal-image technology and internet-based systems (SITIS). Washington, DC: IEEE; 2015. p. 436–43.
5. Boyd D, Golder S, Lotan G. Tweet, tweet, retweet: conversational aspects of retweeting on twitter. In: Hawaii international conference on system sciences. Honolulu: IEEE; 2010.
6. Yamaguchi Y, Takahashi T, Amagasa T, Kitagawa H. Turank: Twitter user ranking based on user–tweet graph analysis. In: International conference on web information systems engineering. Berlin: Springer; 2010. p. 240–53.
7. Weng J, Lim EP, Jiang J, He Q. Twitterrank: finding topic-sensitive influential twitterers. In: Proceedings of the third ACM international conference on web search and data mining. London: ACM; 2010. p. 261–70.
8. Anagnostopoulos A, Kumar R, Mahdian M. Influence and correlation in social networks. In: Proceedings of the 14th ACM SIGKDD international conference on knowledge discovery and data mining. London: ACM; 2008. p. 7–15.
9. Crandall D, Cosley D, Huttenlocher D, Kleinberg J, Suri S. Feedback effects between similarity and social influence in online communities. In: Proceedings of the 14th ACM SIGKDD international conference on knowledge discovery and data mining. London: ACM; 2008. p. 160–8.
10. Liu L, Tang J, Han J, Jiang M, Yang S. Mining topic-level influence in heterogeneous networks. In: Proceedings of the 19th ACM international conference on information and knowledge management. London: ACM; 2010. p. 199–208.
11. Rogers EM. Diffusion of innovations. 5th ed. New York: Free Press; 2003.
12. Gruhl D, Guha R, Liben-Nowell D, Tomkins A. Information diffusion through blogspace. In: Proceedings of the 13th international conference on World Wide Web. London: ACM; 2004.
13. Yang J, Counts S. Predicting the speed, scale, and range of information diffusion in twitter. In: International conference on weblogs and social media. ICWSM. 2010;10:355–8.
14. Vallet J, Kirchner H, Pinaud B, Melançon G. A visual analytics approach to compare propagation models in social networks. arXiv:1504.02612. 2015.
15. Goldenberg J, Libai B, Muller E. Talk of the network: a complex systems look at the underlying process of word-of-mouth. Mark Lett. 2001;12:211–23.
16. Goldenberg J, Libai B, Muller E. Using complex systems analysis to advance marketing theory development: modeling heterogeneity effects on new product growth through stochastic cellular automata. Acad Mark Sci Rev. 2001;9:1–18.
17. Granovetter M. Threshold models of collective behavior. Am J Sociol. 1978;83:1420–43.
18. Chen W, Yuan Y, Zhang L. Scalable influence maximization in social networks under the linear threshold model. In: 2010 IEEE international conference on data mining. New Jersey: IEEE; 2010.
19. Kempe D, Kleinberg J, Tardos É. Maximizing the spread of influence through a social network.
In: Proceedings of the ninth ACM SIGKDD international conference on knowledge discovery and data mining. London: ACM; 2003.
20. Pei S, Muchnik L, Andrade Jr JS, Zheng Z, Makse HA. Searching for superspreaders of information in real-world social media. Sci Rep. 2014;4:5547.
21. Kempe D, Kleinberg J, Tardos É. Influential nodes in a diffusion model for social networks. In: Automata, languages and programming. Berlin: Springer; 2005. p. 1127–38.
22. Chen W, Wang Y, Yang S. Efficient influence maximization in social networks. In: Proceedings of the 15th ACM SIGKDD international conference on knowledge discovery and data mining. London: ACM; 2009.
23. Pei S, Makse HA. Spreading dynamics in complex networks. J Stat Mech. 2013;2013:P12002.
24. Morone F, Makse HA. Influence maximization in complex networks through optimal percolation. Nature. 2015;524:65–8.
25. Zeini S, Hoppe U. Community Detection als Ansatz zur Identifikation von Innovatoren in Sozialen Netzwerken. In: Klaus Meißner, Martin Engelien (eds.): Gemeinschaften in Neuen Medien (GeNeMe). Tagungsband. TU Dresden, 2010. p. 131–40. ISBN 978-3-942710-35-0.
26. Mochalova A, Nanopoulos A. On the role of centrality in information diffusion in social networks. In: 21st European conference on information systems. Vienna: ECIS; 2013. p. 101.
27. Holme P, Saramäki J. Temporal networks. Phys Rep. 2012;519:97–125.
28. Casteigts A, Flocchini P, Quattrociocchi W, Santoro N. Time-varying graphs and dynamic networks. In: International conference on ad-hoc networks and wireless. Berlin: Springer; 2011. p. 346–59.

Authors' contributions

This work is the result of a close joint effort in which all authors contributed almost equally to defining and shaping the problem definition, formulas, algorithm design, implementation, computational data analysis, and manuscript. ZQ, as the first author, took the lead in composing the first draft of the manuscript, while MJ, TH and HH edited it. All authors have read and approved the final manuscript.

Author information

Computer Science Institute, University of Applied Science Ruhr West, Lützowstraße 5, 46236, Bottrop, Germany: Ziyaad Qasem & Marc Jansen

Department of Computer Science and Applied Cognitive Science, University of Duisburg-Essen, Lotharstraße 63, 47057, Duisburg, Germany: Tobias Hecking & H. Ulrich Hoppe

Correspondence to Ziyaad Qasem.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article: Qasem, Z., Jansen, M., Hecking, T. et al. Detection of strong attractors in social media networks. Comput Soc Netw 3, 11 (2016). https://doi.org/10.1186/s40649-016-0036-9

Keywords: Detection of attractors; Independent cascade model; Complex networks
The Annals of Statistics
Ann. Statist. Volume 8, Number 4 (1980), 870-882.

Optimum Kernel Estimators of the Mode
William F. Eddy

Abstract: Let $X_1, \cdots, X_n$ be independent observations with common density $f$. A kernel estimate of the mode is any value of $t$ which maximizes the kernel estimate of the density $f_n$. Conditions are given restricting the density, the kernel, and the bandwidth under which this estimate of the mode has an asymptotic normal distribution. By imposing sufficient restrictions, the rate at which the mean squared error of the estimator converges to zero can be decreased from $n^{-\frac{4}{7}}$ to $n^{-1+\varepsilon}$ for any positive $\varepsilon$. Also, by bounding the support of the kernel it is shown that for any particular bandwidth sequence the asymptotic mean squared error is minimized by a certain truncated polynomial kernel.

doi:10.1214/aos/1176345080
MSC Primary: 62F10 (Point estimation). Secondary: 62G05 (Estimation).
Keywords: Location of the maximum; parabolic process; polynomial kernel.
Citation: Eddy, William F. Optimum Kernel Estimators of the Mode. Ann. Statist. 8 (1980), no. 4, 870-882. doi:10.1214/aos/1176345080. https://projecteuclid.org/euclid.aos/1176345080
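For illustration, the kernel mode estimate described in the abstract can be sketched in a few lines of numpy (our own sketch, not from the paper; the Epanechnikov kernel, itself a truncated polynomial kernel, and the bandwidth are illustrative choices):

import numpy as np

def kernel_mode(x, h, grid_size=512):
    # Mode estimate: the maximizer over a grid of the kernel density estimate
    # f_n(t) = (1 / (n h)) * sum_i K((t - X_i) / h).
    grid = np.linspace(x.min(), x.max(), grid_size)
    u = (grid[:, None] - x[None, :]) / h
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)  # Epanechnikov kernel
    f_n = k.sum(axis=1) / (x.size * h)
    return grid[np.argmax(f_n)]

# Example: for x drawn from a standard normal, kernel_mode(x, h=0.3)
# should lie near 0, the true mode.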
Environment-Aware Deployment of Wireless Drones Base Stations with Google Earth Simulator

05/26/2018 - Aaron French, Mohammad Mozaffari, Abdelrahman Eldosouky, Walid Saad (Virginia Polytechnic Institute and State University)

In this paper, a software-based simulator for the deployment of base station-equipped unmanned aerial vehicles (UAVs) in a cellular network is proposed. To this end, the Google Earth Engine platform and its included image processing functions are used to collect geospatial data and to identify obstacles that can disrupt the line-of-sight (LoS) communications between UAVs and ground users. Given such geographical information, three environment-aware optimal UAV deployment scenarios are investigated using the developed simulator. In the first scenario, the positions of UAVs are optimized such that the number of ground users covered by UAVs is maximized. In the second scenario, the minimum number of UAVs needed to provide full coverage for all ground users is determined. Finally, given the load requirements of the ground users, the total flight time (i.e., energy) that the UAVs need to completely serve the ground users is minimized. Simulation results using a real area of the Virginia Tech campus show that the proposed environment-aware drone deployment framework with Google Earth input significantly enhances the network performance in terms of coverage and energy consumption, compared to classical deployment approaches that do not exploit geographical information. In particular, the results show that the proposed approach yields a coverage enhancement by a factor of 2, and a 65% improvement in energy efficiency. The results have also shown the existence of an optimal number of drones that leads to a maximum wireless coverage performance.

I. Introduction

Unmanned aerial vehicles (UAVs), or drones, have recently attracted significant attention as a promising approach to enhance wireless communication performance [1, 2, 3, 4, 5]. When equipped with wireless base station hardware, drones can supplement the coverage provided by existing cellular infrastructure.
The mobility of drones facilitates the creation of line-of-sight (LoS) links with users, ensuring optimal connection strength. This ability, coupled with the reliability and autonomy of drones, makes UAVs attractive to service providers. In particular, UAVs are an effective approach in emergency scenarios such as disaster relief, when unplanned power outages may compound the increased need for communication, and in Internet of Things (IoT) applications [6], where the quantity and low transmit power of devices may necessitate closer-ranged wireless communications. Meanwhile, UAVs can also be used to complement existing terrestrial cellular systems by bringing additional capacity to crowded areas during temporary events. Furthermore, drones can be deployed to provide necessary wireless connectivity to rural areas in which the presence of large-scale ground wireless infrastructure is limited. To effectively deploy drone base stations in wireless networking applications, there is a need for efficient simulators that can simulate different use-case scenarios and ground environments. Though many simulators have been developed for terrestrial base stations [7], [8], only some are suited specifically for the analysis of three-dimensional, ad hoc networks [9]. These are typically implemented as extensions of general network simulators [10] that operate in two dimensions. UAV-enabled networks are highly dynamic and thus require a proper integration of the movement of UAVs into the simulation environment. Moreover, analysis of these networks is made more challenging by the uncertainty of environmental variables affecting propagation, as well as by highly dynamic interference. To account for these UAV features, many models implement probabilistic expressions based on environment type, i.e., rural, urban, or dense urban [11]. Thus, the ability to identify obstacles by processing satellite images can have immense value in that drone simulations can become more deterministic, depending on the accuracy of image processing. While there have been a notable number of works on UAV deployment, they do not consider the potential use of real geographical information for optimal placement of UAVs. For instance, the work in [12] optimizes the altitude of a single UAV for maximizing coverage based on a probabilistic path loss model. Using this model, the authors in [13] studied the coverage maximization problem with a minimum number of drone base stations. In [14], the deployment of an aerial UAV base station for maximizing sum-rate and power gain in a wireless network is studied. These studies use variations of the probabilistic models introduced above, and thus are not suited for simulation of real-world environments. In contrast to previous studies on UAV deployment, we extract environmental information with great precision by using image processing tools in Google Earth Engine. Subsequently, we build a drone deployment simulator that accepts building locations as inputs and adaptively determines the optimal positions of the drones for maximizing wireless connectivity in various scenarios.
Then, we use our simulator to investigate three key UAV deployment scenarios. First, we study the optimal placement of drones for maximizing the number of covered ground users. In the second scenario, we aim to provide full coverage for ground users by using a minimum number of drones. Finally, given the load requirements of users, we analyze the optimal deployment of drones for which the total flight time of drones needed to service the users is minimized. Simulation results reveal that our proposed framework, which exploits building information on an area of Virginia Tech's campus using Google Earth, yields a significant improvement in the coverage and energy efficiency of drone-enabled wireless networks. Moreover, our results show the existence of an optimal number of drones that maximizes the wireless connectivity. The rest of this paper is organized as follows. In Section II, we present the system model and the drone deployment scenarios. In Section III, we describe the developed feature (i.e., obstacle) extraction method from Google Earth. Simulation results are presented in Section IV and conclusions are drawn in Section V.

II. System Model and Drone Deployment Scenarios

Consider a set \(\mathcal{L}\) of L single-antenna wireless users located within a given geographical area. The location of a user \(i\in\mathcal{L}\) is given by \((x_i,y_i)\). In this area, a set \(\mathcal{M}\) of M quadrotor drones are used as flying base stations to provide downlink wireless service to ground users, as shown in Figure 1. The location of a drone \(j\in\mathcal{M}\) is given by \(v_j=(x_{D_j},y_{D_j},h_j)\). Each user i is served by the one drone j that provides the strongest downlink signal-to-interference-plus-noise ratio (SINR) for the user, such that \(j=\arg\max_{j\in\mathcal{M}}\gamma_{ij}\) and \(\gamma_{ij}\ge\gamma_{\mathrm{th}}\), where \(\gamma_{ij}\) is the downlink SINR between user i and drone j and \(\gamma_{\mathrm{th}}\) is the threshold SINR required by the user to successfully receive wireless service. Here, the SINR for user i that connects to drone j is given by:

$$\gamma_{ij}=\frac{\eta P_j d_{ij}^{-\alpha}}{\sum_{u\in \mathcal{I}_{\mathrm{int}}}\eta P_u d_{u}^{-\alpha}+\sigma^2}, \qquad (1)$$

$$d_{ij}=\sqrt{(x_i-x_{D_j})^2+(y_i-y_{D_j})^2+h_j^2}, \qquad (2)$$

where \(\alpha\) is the path loss exponent, \(\sigma^2\) is the noise power, and \(\eta\) is the path loss constant. \(d_{ij}\) is the distance between drone-BS j and a given user i. Also, \(\mathcal{I}_{\mathrm{int}}\) is the set of interfering drone-BSs. We assume that users have fixed locations and that drones can move to certain locations to service the users. Our goal is to optimally deploy the drones, i.e., to calculate the optimal locations for providing the wireless service in each of the following scenarios.

Figure 1: System model for drones' deployment.

II-A. Maximizing the Number of Covered Users

In the first scenario, our goal is to maximize the number of covered users under limited resources (available drones). This scenario captures emergency situations, e.g., flooding or a power outage, or highly unusual wireless service demand, e.g., a fair or a sports event in a stadium. In such cases, the goal of using drones is to provide wireless service to the largest possible number of users. Covering every user in these cases might not be possible due to very high data demand that would require more drones than are available. Determining the number of drones that can be used in these scenarios depends on the number of available drones and the expected coverage in the geographical area. In emergency cases, for example, when more than one geographical area is affected and in need of urgent coverage, drones are to be deployed in these areas according to the percentage of ground users that can be effectively covered with connectivity by the drones.
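As a concrete illustration of the system model above, the following minimal numpy sketch (our own, not the authors' simulator; transmit powers, path loss constant, and noise level are placeholder values, and the interference set I_int is assumed to contain all non-serving drones) evaluates (1)-(2) and counts the covered users:

import numpy as np

def covered_users(users, drones, gamma_th, P=1.0, eta=1.0, alpha=2.5, noise=1e-9):
    # users: (L, 2) array of ground positions; drones: (M, 3) array (x, y, h).
    dx = users[:, None, 0] - drones[None, :, 0]
    dy = users[:, None, 1] - drones[None, :, 1]
    d = np.sqrt(dx**2 + dy**2 + drones[None, :, 2]**2)   # Eq. (2)
    rx = eta * P * d**(-alpha)                           # received powers
    total = rx.sum(axis=1, keepdims=True)
    sinr = rx / (total - rx + noise)                     # Eq. (1)
    best = sinr.max(axis=1)            # each user picks its strongest drone
    return int(np.count_nonzero(best >= gamma_th))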
II-A Maximizing the Number of Covered Users

In the first scenario, our goal is to maximize the number of covered users under limited resources (available drones). This scenario captures emergencies, e.g., flooding or a power outage, or unusually high wireless service demand, e.g., a fair or a sports event in a stadium. In such cases, the goal of using drones is to provide wireless service to the largest possible number of users. Covering every user might not be possible, as very high data demand would require more drones than are available. Determining the number of drones to use depends on the number of available drones and the expected coverage in the geographical area. In emergency cases, for example, when more than one geographical area is affected and in need of urgent coverage, drones are to be deployed across these areas according to the percentage of ground users that can be effectively covered by the drones.

In this scenario, the number of users is fixed to L and the number of drones is fixed to M. The goal is to find the optimal locations v_j, ∀j ∈ M, of the drones that maximize the number of covered users. Let $\mathbb{1}_{ij}$ be an indicator of whether or not user i is connected to drone j:

$$\mathbb{1}_{ij}=\begin{cases}1,&\text{if }j=\operatorname{argmax}_{j\in M}\gamma_{ij}\ \text{and}\ \gamma_{ij}\ge\gamma_{th},\\0,&\text{otherwise}.\end{cases}$$ (3)

The problem can then be formulated as:

$$\max_{v_j,\,\forall j\in M}\ \sum_{i\in L}\sum_{j\in M}\mathbb{1}_{ij}$$ (4)

$$\text{s.t.}\ \sum_{j\in M}\mathbb{1}_{ij}=1,\ \forall i\in L.$$ (5)

The constraint in (5) guarantees that every user is connected to only one drone.

II-B Full Coverage with a Minimum Number of Drones

In this next scenario, every user needs to be covered using the minimum number of drones. Here, unlike the previous scenario, we do not assume limited resources. This scenario typically arises in public safety and pre-disaster awareness situations in which every user needs to be informed of a disaster mitigation plan. For example, in a pre-disaster evacuation, we need to make sure that every user is aware of the upcoming danger in a timely manner. This can help improve community resilience against these types of disasters. Covering every user (i.e., full coverage) can be achieved by deploying drones in the targeted geographical area. However, as deploying drones is costly, we need to ensure full coverage while minimizing the number of drones and, hence, the cost. The goal is to calculate the minimum number of drones required to achieve full coverage of the L users. This is done by calculating the optimal locations v_j, ∀j ∈ M, of the drones that achieve full coverage. We use the same indicator $\mathbb{1}_{ij}$ defined in the previous scenario. The problem can then be formulated as:

$$\min_{M}\ \sum_{j\in M}\sum_{i\in L}\mathbb{1}_{ij}$$ (6)

$$\text{s.t.}\ \sum_{j\in M}\mathbb{1}_{ij}=1,\ \forall i\in L,$$ (7)

$$\sum_{i\in L}\sum_{j\in M}\mathbb{1}_{ij}=L.$$ (8)

The first constraint ensures that every user is connected to only one drone, and the second ensures that all the users are connected to drones.

II-C Minimizing the Flight Time of Drones in Serving Users

In this third scenario, each user needs to download some data using the wireless service, and we are interested in minimizing the hover time (service time) of the drones needed to satisfy this data load for every user. This scenario captures cases in which the consumed energy matters, as drones can provide wireless service for only a limited period of time [15]. One example is when the drones must be deployed in a geographical area far from their source: the drones consume a significant portion of their energy traveling to the destination, and the remaining energy (used to serve the users) must be spent as effectively as possible to satisfy the users' demand. Each of the L users is assumed to have a data load of β_i bits that needs to be satisfied. A drone j can transmit data to a user i at a rate b_{ij} bits/second that depends on γ_{ij}. The time spent by drone j to serve user i is then:

$$t_{ij}=\frac{\beta_i}{b_{ij}}.$$ (9)

The total hover time of a drone j is the sum of the times spent serving all the users connected to it. Let N_j be the set of all users connected to drone j, ∀j ∈ M. Then, the hover time for a drone j ∈ M is given by:

$$t_j=\sum_{i\in N_j}\frac{\beta_i}{b_{ij}}.$$ (10)
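As a sanity check on eqs. (9)-(10), the short sketch below accumulates per-drone hover times from a user-to-drone assignment. The inputs are hypothetical; in the paper, the rate b_ij is derived from the SINR, which is not reproduced here.

```python
def hover_times(loads, rates, assignment):
    """Eqs. (9)-(10): t_j = sum over users i in N_j of beta_i / b_ij.
    loads[i]      -- beta_i, data load of user i in bits (hypothetical input)
    rates[i][j]   -- b_ij, achievable rate of user i from drone j in bit/s
    assignment[i] -- index of the serving drone for user i
    """
    t = {}
    for i, j in enumerate(assignment):
        t[j] = t.get(j, 0.0) + loads[i] / rates[i][j]
    return t  # the objective of eq. (11) is sum(t.values())
```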
The goal in this third scenario is to find the optimal locations of the drones that minimize the overall hover time of all drones, given that the load of each user must be satisfied. The problem can be formulated as:

$$\min_{v_j,\,\forall j\in M}\ \sum_{j\in M}\sum_{i\in N_j}\frac{\beta_i}{b_{ij}}$$ (11)

$$\text{s.t.}\ \sum_{j\in M}\mathbb{1}_{ij}=1,\ \forall i\in L,$$ (12)

$$\sum_{i\in L}\sum_{j\in M}\mathbb{1}_{ij}=L.$$ (13)

The constraints are similar to those of the previous scenario. In this scenario, when every user is connected to a drone, every user belongs to the set N_j of a specific drone j such that:

$$\bigcup_{j\in M}N_j=L.$$ (14)

Then, the problem formulation in (11) minimizes the overall hover time while ensuring that the total load of the users is satisfied.

III Google Earth Engine Simulator for Obstacle Location Extraction

To analyze the aforementioned scenarios using a ground environment-aware approach, we have developed a drone network simulator in MATLAB that takes the locations of buildings as input. To determine these locations, we explore the use of Google Earth Engine, a platform suitable for the analysis and representation of geospatial data. Earth Engine incorporates multiple datasets and image processing tools. The simplest way to use Earth Engine is through its built-in JavaScript IDE, which we use in this work; Python is also supported through an API. The platform is well-suited for our application because of its image processing capabilities, which allow us to estimate and refine network parameters, and its intuitive interface, through which users can supervise the building detection process.

Various building detection algorithms have been developed, with cited precisions ranging from 80-90% [16, 17, 18]. Accurate algorithms rely on a combination of feature extraction techniques and machine learning. For our application, we circumvent the time and resources needed to train such programs by exploiting the "map view" imagery supplied by Google. In this view, satellite imagery is simplified such that features like buildings are rendered in a uniform color. This greatly facilitates automated building identification, under the assumption that Google's own identification techniques are accurate.

To extract building locations from map view, we use edge detection. This is implemented most readily in Earth Engine through Canny edge detection, a reliable and very common algorithm [19, 20]. Canny detection applies separate filters to detect horizontal, vertical, and diagonal edges, and computes the gradient magnitude. Finally, non-maximum magnitudes are suppressed, thinning the detected edges. In general applications of edge detection, image noise must be accounted for through the application of Gaussian filters; even then, error is expected. However, the simple, noiseless images provided by map view are ideal candidates for edge detection, and it yields accurate results. To extract lines from this output, we apply the Hough transform to the Canny image [20]. This step is important for correcting imperfections in the Canny output. The Hough transform uses an accumulator to detect the presence of a line, then implements a voting algorithm to identify its parameters. We then sample and trace each line, noting changes in direction, which correspond to building corners. At this point, we can also manually adjust the location of any vertex. To examine the accuracy of this process, we outlined buildings in map view and overlaid them onto the corresponding satellite imagery, as shown in Figure 2. Evidently, this method approximates buildings fairly well.
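The Earth Engine pipeline just described can be sketched in a few calls of the Python API (the paper used the JavaScript IDE; both expose the same algorithms). The asset name and the threshold values below are placeholders to be replaced with the user's own map-view raster and tuning.

```python
import ee
ee.Initialize()

# Hypothetical asset: a rasterized "map view" tile in which buildings
# are rendered in a uniform color, as described above.
img = ee.Image('users/example/map_view_tile')

# Canny edge detection followed by a Hough transform; the thresholds
# are illustrative and would need tuning against the actual imagery.
edges = ee.Algorithms.CannyEdgeDetector(image=img, threshold=0.7, sigma=1.0)
lines = ee.Algorithms.HoughTransform(edges, gridSize=256,
                                     inputThreshold=50, lineThreshold=100)
```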
Over five test cases, this process correctly outlined about 95% of each building's true area and falsely identified an additional 12%. These figures are consistent with the 80-90% accuracy bounds given in the studies cited above. Additionally, we note that this method tends to overestimate building area. This is permissible, and possibly preferable, for UAV simulations, in which drones should not be deployed within a buffer area around buildings due to the threat of collision. The limitations of approximating buildings geometrically in this manner involve irregular building shapes, specifically ones with rounded sides. Earth Engine only supports polygons; thus, rounded edges must be represented by some number of vertices, adding inherent error. In summary, we have shown that, for building location identification, analysis of Google map data is consistent in accuracy with rigorous processing of satellite imagery but can be performed at reduced computational cost. Thus, while using minimal computational resources, we have identified the locations of buildings, which are used as inputs to our Google Earth-enabled MATLAB simulator to analyze the proposed environment-aware wireless drone base station deployment scenarios.

Figure 2: Results of building identification imposed over satellite imagery for a region of the Virginia Tech campus.

IV Simulation Results

For our simulations, we consider a 200 m × 200 m area over which users are randomly distributed. Users are assumed to be at ground level, at which z = 0. The locations of buildings are known, defined by their vertices {V_1, V_2, ..., V_N}, where each vertex consists of an x and a y coordinate. For these simulations, we consider a three-building configuration derived from Figure 2, which is based on a real area of the Virginia Tech campus. As we did not estimate building height during image processing, we model the buildings' z-coordinates as random variables, constrained between 10 and 20 meters, heights appropriate for five-story buildings. Other simulation parameters are listed in Table I.

To evaluate every arrangement of M drones over N_C candidate points, $\binom{N_C}{M}$ calculations are required. As N_C correlates directly with simulation precision, and hence a large N_C is desirable, the computational complexity can quickly become infeasible. To circumvent this, the following heuristic is implemented. We first discretize the target area into some number N_C of UAV candidate points, where N_C is sufficiently small to enable rapid evaluation. We form the binary power threshold matrix T, in which entry (m, n) indicates whether the user at location (x_n, y_n) receives a power above a given threshold P_t^min from candidate point m. Note that we do not yet account for interference, noise, or line-of-sight; our current goal is to establish starting points for further optimization. We incrementally place drones at the candidate points for which the number of users satisfying this power threshold is maximized; in other words, at the points with the most potential links. We then further discretize the area around each chosen candidate point. Given {V_1, V_2, ..., V_N}, we determine whether a LoS path exists by sampling the line segment connecting user i and each candidate point and checking whether any sample point lies within the bounds of a building, as sketched below. If so, we introduce an additional attenuation factor, η, to that potential channel.
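The LoS sampling test referenced above can be written as follows. This is a minimal sketch assuming building footprints are given as polygon vertex lists with a height; the helper names are ours, not the simulator's (which is implemented in MATLAB).

```python
import numpy as np
from matplotlib.path import Path

def has_los(user, drone, buildings, n_samples=100):
    """Sample the segment from user (x, y, 0) to drone (xD, yD, h) and
    test whether any sample point falls inside a building footprint
    below the building height; buildings = [(vertices, height), ...]."""
    p0 = np.array([user[0], user[1], 0.0])
    p1 = np.asarray(drone, dtype=float)
    for t in np.linspace(0.0, 1.0, n_samples):
        x, y, z = p0 + t * (p1 - p0)
        for vertices, height in buildings:
            if z <= height and Path(vertices).contains_point((x, y)):
                return False  # blocked: apply the extra attenuation factor
    return True
```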
Finally, we consider interference and noise, and simultaneously solve for the optimal locations of the UAVs such that the number of users above a given SINR threshold, γ, is maximized.

Figure 3 shows the percentage of covered users as the SINR threshold needed for connectivity varies (this result corresponds to the deployment scenario in Subsection II-A). Clearly, as the SINR threshold, or equivalently the receivers' sensitivity, increases, the coverage performance of the drones decreases. This is due to the fact that satisfying a higher SINR requirement is more challenging, and thus fewer users can be covered by the drones. For instance, when the SINR threshold increases from 2 dB to 8 dB, the number of covered users decreases by 63% in the proposed approach. In Figure 3, we also compare the performance of the proposed deployment approach with a case in which deployment is based on a probabilistic path loss model. In the probabilistic model, a drone has a LoS link to a ground user with a specific probability, given by [21]:

$$P_{LoS,i}=b_1\left(\frac{180}{\pi}\theta_i-15\right)^{b_2},$$ (15)

where θ_i is the elevation angle (in radians) between drone i and a user located at (x, y), and b_1 and b_2 are constants that depend on the environment. As we can see from Figure 3, our approach outperforms the probabilistic case. In our approach, the locations of buildings are known and deterministic, as they are obtained from the Google Earth Engine; in the probabilistic case, we do not have complete information about the buildings. Therefore, by exploiting additional information about the environment, our deployment approach leads to higher coverage than probabilistic-based deployment. As shown in Figure 3, the number of covered users can be increased by up to a factor of 2 by adopting the proposed environment-aware deployment strategy. As an illustrative example, Figure 4 shows the visual output of drone placement using known building locations.

Figure 3: Percentage of covered users versus SINR threshold.

Figure 4: An illustrative figure for drones' deployment.
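For reference, the benchmark model of eq. (15) is a one-liner; the values b1 = 0.36 and b2 = 0.21 from Table I (taken from [12]) can be plugged in directly.

```python
import math

def p_los(b1, b2, user, drone):
    """Eq. (15): LoS probability from the elevation angle theta_i (rad)
    between a drone at (xD, yD, h) and a ground user at (x, y).
    Valid for elevation angles above 15 degrees."""
    ground = math.hypot(drone[0] - user[0], drone[1] - user[1])
    theta = math.atan2(drone[2], ground)          # elevation angle
    return b1 * (180.0 / math.pi * theta - 15.0) ** b2

# Example with the Table I parameters:
# p_los(0.36, 0.21, (0.0, 0.0), (50.0, 50.0, 100.0))
```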
Figure 5 shows the impact of the number of drones on the coverage performance for various network sizes (this result corresponds to the deployment scenario in Subsection II-B). Clearly, the coverage performance decreases as the number of ground users increases. With more users, it becomes more likely that drone-user communication links will be blocked by obstacles. Consequently, the communication reliability and, hence, the coverage performance degrade. Figure 5 also shows how the number of covered users varies with the number of drones. In this case, there is a tradeoff in deploying more drone base stations for providing wireless connectivity. By increasing the number of drones, the coverage can be improved, as the drones are placed closer to the ground users. However, with more drones, the aggregate interference increases, which reduces the users' SINR. Therefore, there exists an optimal number of drones for which the coverage is maximized. For instance, as we can see from Figure 5, the optimal number of drones for serving 100 users is 6. This figure allows us to determine the minimum number of drones needed to meet a certain coverage requirement. For example, here, full coverage of 50 users can be achieved by optimally deploying 8 drones over the considered geographical area.

Figure 5: Percentage of covered users versus number of drones.

Figure 6 shows the total flight time of drones needed to completely service the users (this result corresponds to the deployment scenario in Subsection II-C). From this figure, we can see that the flight time of the drones increases as the number of buildings (i.e., obstacles) increases. With more obstacles in the environment, drone-to-user communications experience lower SINR due to blockage and shadowing effects. As a result, the transmission rate decreases and the drones must fly longer in order to transmit the required amount of data to each user. From Figure 6, it can be seen that the total flight time of drones increases by 45% in the proposed deployment case when the number of buildings increases from 1 to 4. Hence, servicing users located in a harsh environment requires a longer flight time, more energy consumption, and thus more capable drones. In Figure 6, we also compare the performance of our proposed environment-aware deployment approach with a random deployment case in which drones are randomly deployed over the geographical area. As we can see from Figure 6, the proposed optimal deployment can yield up to a 65% flight time reduction compared to the random deployment case. Therefore, the proposed approach enhances the energy efficiency of the considered drone-enabled wireless network.

Figure 6: Total flight time of drones versus number of buildings (i.e., obstacles).

Table I: Simulation parameters.
Carrier frequency: 2 GHz
Drone transmit power: 1 W
Total noise power spectral density: -170 dBm/Hz
Number of ground users: 200
Bandwidth: 1 MHz
b1, b2 (parameters in probabilistic channel model [12]): 0.36, 0.21
Load per ground user: 10 Mb

V Conclusion

In this paper, we have investigated the problem of environment-aware deployment of drone base stations that provide wireless connectivity to ground users. To this end, we first developed a drone network simulator that uses the Google Earth Engine to extract key information about buildings in the considered geographical area. We then studied the optimal deployment of drones in three practical scenarios. In the first scenario, we determined the optimal locations of drones such that the number of covered ground users is maximized. In the second scenario, we minimized the number of drones needed to ensure full coverage of all users. Finally, we minimized the flight time of drones required to completely service the users by satisfying their load requirements. Our results have shown that the proposed deployment framework significantly enhances drone wireless system performance in terms of coverage and energy efficiency. Moreover, our simulation results have demonstrated the existence of an optimal number of drones for which the wireless coverage is maximized.

References

[1] M. Mozaffari, W. Saad, M. Bennis, Y.-H. Nam, and M. Debbah, "A tutorial on UAVs for wireless networks: Applications, challenges, and open problems," available online: arxiv.org/abs/1803.00680, 2018.
[2] M. Alzenad, A. El-Keyi, F. Lagum, and H. Yanikomeroglu, "3-D placement of an unmanned aerial vehicle base station (UAV-BS) for energy-efficient maximal coverage," IEEE Wireless Communications Letters, vol. 6, no. 4, pp. 434–437, Aug. 2017.
[3] Q. Wu, Y. Zeng, and R. Zhang, "Joint trajectory and communication design for UAV-enabled multiple access," in Proc. of IEEE Global Telecommunications Conference (GLOBECOM), Singapore, Dec. 2017.
[4] I. Bor-Yaliniz and H.
Yanikomeroglu, "The new frontier in RAN heterogeneity: Multi-tier drone-cells," IEEE Communications Magazine, vol. 54, no. 11, pp. 48–55, Nov. 2016. [5] M. Mozaffari, A. T. Z. Kasgari, W. Saad, M. Bennis, and M. Debbah, "Beyond 5G with UAVs: Foundations of a 3D wireless cellular network," available online: arxiv.org/abs/1805.06532, 2018. [6] M. Mozaffari, W. Saad, M. Bennis, and M. Debbah, "Unmanned aerial vehicle with underlaid device-to-device communications: Performance and tradeoffs," IEEE Transactions on Wireless Communications, vol. 15, no. 6, pp. 3949–3963, June 2016. [7] A. R. Khan, S. M. Bilal, and M. Othman, "A performance comparison of open source network simulators for wireless networks," in Proc. of IEEE International Conference on Control System, Computing and Engineering, Penang, Malaysia, Nov. 2012. [8] J. Lessmann, P. Janacik, L. Lachev, and D. Orfanus, "Comparative study of wireless network simulators," in Proc. of International Conference on Networking, Cancun, Mexico, Apr. 2008. [9] S. Kang, M. Aldwairi, and K.-I. Kim, "A survey on network simulators in three-dimensional wireless ad hoc and sensor networks," International Journal of Distributed Sensor Networks, vol. 12, no. 10, p. 1550147716664740, 2016. [10] B. Newton, J. Aikat, and K. Jeffay, "Simulating large-scale airborne networks with ns-3," in Proc. of the ACM Workshop on ns-3, Barcelona, Spain, May 2015. [11] M. Mozaffari, W. Saad, M. Bennis, and M. Debbah, "Efficient deployment of multiple unmanned aerial vehicles for optimal wireless coverage," IEEE Communications Letters, vol. 20, no. 8, pp. 1647–1650, Aug. 2016. [12] A. Al-Hourani, S. Kandeepan, and S. Lardner, "Optimal LAP altitude for maximum coverage," IEEE Wireless Communication Letters, vol. 3, no. 6, pp. 569–572, Dec. 2014. [13] E. Kalantari, H. Yanikomeroglu, and A. Yongacoglu, "On the number and 3D placement of drone base stations in wireless cellular networks," in Proc. of IEEE Vehicular Technology Conference, 2016. [14] M. M. Azari, F. Rosas, K. C. Chen, and S. Pollin, "Joint sum-rate and power gain analysis of an aerial base station," in Proc. of IEEE GLOBECOM Workshops, Dec. 2016. [15] M. Mozaffari, W. Saad, M. Bennis, and M. Debbah, "Wireless communication using unmanned aerial vehicles (UAVs): Optimal transport theory for hover time optimization," IEEE Transactions on Wireless Communications, vol. 16, no. 12, pp. 8052–8066, Dec. 2017. [16] A. Zhang, X. Liu, A. Gros, and T. Tiecke, "Building detection from satellite images on a global scale," available online: arxiv.org/abs/1707.08952, 2017. [17] M. Cote and P. Saeedi, "Automatic rooftop extraction in nadir aerial imagery of suburban regions using corners and variational level set evolution," IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 1, pp. 313–328, Jan. 2013. [18] J. P. Cohen, W. Ding, C. Kuhlman, A. Chen, and L. Di, "Rapid building detection using machine learning," Applied Intelligence, vol. 45, no. 2, pp. 443–457, 2016. [19] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679–698, Nov. 1986. [20] R. O. Duda and P. E. Hart, "Use of the hough transformation to detect lines and curves in pictures," Communications of the ACM, vol. 15, no. 1, pp. 11–15, 1972. [21] A. Hourani, S. Kandeepan, and A. Jamalipour, "Modeling air-to-ground path loss for low altitude platforms in urban environments," in Proc. of IEEE Global Communications Conference (GLOBECOM), Austin, TX, USA, Dec. 2014.
Article | Open Access | Published: 06 March 2019

Coarsening Behavior of Particles in Fe-O-Al-Ca Melts

Linzhu Wang1, Junqi Li1, Shufeng Yang2, Chaoyi Chen1, Huixin Jin1 & Xiang Li3

Scientific Reports volume 9, Article number: 3670 (2019)

Subjects: Reaction kinetics and dynamics

The characteristics of particles, especially their sizes, greatly affect the microstructure and performance of metallic materials. To provide insight into coarsening phenomena of particles in metallic melts, an Fe-O-Al-Ca melt with calcium aluminate particles was selected as a model system. This study uses HT-CSLM, SEM detection, and stereological analysis to probe the behavior of particles and their characteristics, including size, number density, volume fraction, spreading of particle size, inter-surface distance, and spatial distribution. Based on the experimental evidence and collision calculations, we demonstrate that the coarsening of inclusion particles depends not only on Ostwald growth, as studied previously, but also on particle coagulation and floatation. The collision of particles affects the maximum size of the particles during the whole deoxidation process and dominates the coarsening of particles at the later stage of deoxidation in Fe-O-Al-Ca melts without external stirring. The factors influencing collision behavior and floating properties were also analyzed, corresponding to the coarsening behavior and the change of particle characteristics in melts with different amounts of Ca addition. Such a coarsening mechanism may also be useful for predicting the size of particles in other metallic materials.

Introduction

Particles are inevitable products in metallic materials, forming during metal refining, casting, and thermal processing in liquid or solid metals1,2,3,4. It is common knowledge that particles play a significant role in determining the continuous castability and performance of materials5,6,7. Al2O3 particles with high hardness and high melting temperature, generated during the deoxidation process in Al-killed steels, may lead to nozzle clogging8. Another possible side effect is a decrease in machinability and service life, caused by the nucleation and propagation of voids around precipitate particles on weak grain boundaries5,7,9. However, the effect of particles on the properties of metal depends significantly on the particle size distribution, spatial distribution, morphology, and composition of the particles. Many researchers have verified that particles with certain characteristics can act by pinning grain boundaries, inhibiting grain growth, or inducing the formation of precipitated phases, such as AF (acicular ferrite), thus refining the grain and improving the microstructure4,10,11,12,13,14. Extensive investigations have focused on particle-assisted microstructure control1,15,16. A. Mitchell et al. pointed out that particles with an inter-particle distance larger than 10 μm and a diameter smaller than 1 μm have no impact on the macro-performance of metallic products17. The yield strength and tensile strength increase remarkably for steels with particles smaller than 0.3 µm18. Fine MgO-containing particles were found to facilitate equiaxed crystallization and refinement of the microstructure19,20,21.
Yang et al.22 reported that the proportion of AF progressively increased with increasing particle size from 1.0 to 1.8 μm and that the ability of particles to induce AF was greatly reduced when the particle size reached 7.0 μm. Particles containing Ce with a size of about 4–7 μm can serve as heterogeneous nucleation sites for AF formation23. In spite of controversies on the relation between particles of various compositions and the microstructure of metallic materials, some oxides, sulfides, nitrides, and complex compounds (MnS, Ti2O3, TiN, VN, TiO·Al2O3·MnS, etc.) have been shown to promote intra-granular ferrite nucleation or to pin grain boundaries24,25,26. Dispersed particles containing Ca or Mg are found to serve effectively as heterogeneous nuclei for fine ferrite, owing to the relatively weak affinity between individual particles and other characteristics27. Furthermore, Ca-containing alloys are commonly used to improve the continuous castability of liquid steel by modifying solid alumina particles into liquid calcium aluminates28,29,30. Yet, inappropriate addition of calcium can lead to the formation of calcium aluminate particles of large size. Wang et al.31 found that stringer-shaped particles longer than 150–350 μm in linepipe steel were deformed from 10–20 μm calcium aluminates in the cast slab, which deteriorated the low-temperature toughness and hydrogen-induced cracking resistance of the steel.

One of the key factors for decreasing the side effects of particles, or for determining the pinning effect of particles or their ability to serve as cores of precipitated-phase nucleation, is the particle size. The formation of particles starts with nucleation, which plays an important role in determining the structure, shape, and size distribution of the particles. Suito and Ohta32 found that the initial size distribution of particles became narrow in the case of a high nucleation rate, which they considered favorable for obtaining fine particles. After that, the particles are deemed to grow and coarsen by the following steps: diffusion of reactants to the oxide nuclei, Ostwald ripening33, and collision with subsequent coagulation in the liquid metal. Lindberg et al.34 reported that the time for attaining 90% of the equilibrium value of particle volume is 0.2 s. Ohta and Suito35 found that the growth of particles by diffusion is very fast; in their study, Ostwald ripening dominated the growth of particles during deoxidation under no fluid flow. However, in our previous study36, the experimental particle size distribution in an Fe-O-Al-Ca melt corresponded to the theoretical results based on Ostwald ripening at the early stage of deoxidation but not at the later stage. Therefore, it is necessary to study the change of particle size distribution in liquid metal as affected by collision and subsequent coagulation. Collisions between particles and rapid diffusion in the liquid phase increase the number of large particles and enhance particle removal by floatation37. Extensive theoretical studies on the time-dependent particle size distribution and mathematical models have been reported based on collision-coalescence behavior due to turbulent collision38, Stokes collision39,40,41, and Brownian collision42. Furthermore, the attractive capillary force acting on particles has also been investigated with consideration of the chemical compositions, size, and distance between particles, which is one mechanism for coagulation43,44,45,46.
It can be concluded that the particle size distribution is affected by nucleation, growth, coagulation due to collision and attractive force, and the floatation behavior of particles in liquid metal47,48,49,50,51,52. However, most research has focused on the behavior of solid particles32,34,53, and there is limited research on the coarsening of liquid particles in liquid metal. The nucleation and Ostwald ripening of liquid calcium aluminate particles in Fe-O-Al-Ca melts were investigated in our previous study36. In the current study, the coarsening mechanism of particles in Fe-O-Al-Ca melts was studied with consideration of nucleation, coagulation due to collision, and floating properties, and verified by experimental data. This study provides information for understanding the relations between the characteristics, behavior, and coarsening of particles in Fe-O-Al-Ca melts, and will be helpful for predicting and controlling particle size.

Methods

The materials used in the present study and the high-temperature experimental procedures are described in detail in our previous study36. Characteristics of particles were detected by SEM at an accelerating voltage of 15 kV, and the transformation of particle characteristics from two dimensions to three dimensions based on stereological analysis is the same as in our previous study36. The geometric standard deviation of the particle size distribution, ln σ, is calculated by Eq. (1):

$$\ln\sigma=\left[\frac{\sum n_i(\ln r_i-\ln r_{geo})^2}{\sum_{i=1}^{n}n_i}\right]^{1/2}$$ (1)

where r_geo is the geometric mean radius of particles, given by (r_1·r_2·r_3·…·r_n)^{1/n}. The ln σ values are obtained from Eq. (1) using the size and number density of particles measured in the deoxidation experiments.

The inter-surface distance D_ab between two particles can be obtained by measuring the central coordinates and radii of particles with the Image-Pro Plus software; the geometry is illustrated in Fig. 1:

$$D_{ab}=\sqrt{(X_b-X_a)^2+(Y_b-Y_a)^2}-r_a-r_b$$ (2)

where X_i and Y_i are the central coordinates of the particles in the cross section, r_i is the equivalent radius, and D_ab is the inter-surface distance between the two particles.

Figure 1: Illustration of inter-surface distance between particles.

By calculating the inter-surface distances of a given particle to all others, the inter-surface distance between this particle and its nearest neighbor, D_mi, is obtained as the minimum of D_ab, as in Eq. (3); D_mi is defined as the inter-surface distance of a pair of adjacent particles in this paper. The average inter-surface distance of particles in a certain region of a sample, D_AV, is the arithmetic mean of D_mi, calculated as in Eq. (4):

$$D_{mi}=\min(D_{i1},D_{i2},\cdots,D_{ik})$$ (3)

$$D_{AV}=\frac{\sum_{i=1}^{k}D_{mi}}{k}$$ (4)
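For clarity, the statistics of Eqs (1)-(4) can be computed directly from the measured centers and equivalent radii. Below is a minimal Python sketch, assuming the planar measurements are available as arrays; the function names are ours, not from the original analysis.

```python
import numpy as np

def ln_sigma(radii):
    """Eq. (1): geometric standard deviation of the particle radii.
    Each measured particle enters individually, so the n_i weights
    of Eq. (1) are implicit."""
    ln_r = np.log(np.asarray(radii, dtype=float))
    return np.sqrt(np.mean((ln_r - ln_r.mean()) ** 2))

def inter_surface_distances(centers, radii):
    """Eqs (2)-(4): nearest-neighbour inter-surface distance D_mi for
    every particle and their arithmetic mean D_AV."""
    centers = np.asarray(centers, dtype=float)
    radii = np.asarray(radii, dtype=float)
    n = len(radii)
    d_mi = np.empty(n)
    for i in range(n):
        gaps = [np.hypot(*(centers[k] - centers[i])) - radii[i] - radii[k]
                for k in range(n) if k != i]
        d_mi[i] = min(gaps)       # Eq. (3)
    return d_mi, d_mi.mean()      # Eq. (4)
```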
Results

In-situ observation of liquid particle behavior

The behavior of particles in the Fe-O-Al-Ca melt was observed in situ using a high-temperature confocal scanning laser microscope (HT-CSLM), as shown in Fig. 2. Most aggregation and coagulation between liquid calcium aluminates was caused by collision, as in Fig. 2(a–c). Attractive force was hardly observed between most liquid calcium aluminate particles at the gas/molten steel interface, even at very small separation (particles C/D moved toward particle E and then passed away), as shown in Fig. 2(d–f). The same phenomenon was observed by Yin et al.54, who found that liquid calcium aluminate particles could separate freely after touching each other for 1/6 s.

Figure 2: In-situ observation of calcium aluminate inclusions on the surface of the Fe-O-Al-Ca melt by HT-CSLM. (a) Particles A and B at 0 s in field 1; (b) particles A and B at 0.22 s in field 1; (c) particles A and B at 0.66 s in field 1; (d) particles C, D and E at 0 s in field 2; (e) particles C, D and E at 0.55 s in field 2; (f) particles C, D and E at 1.21 s in field 2.

Characteristics of particles

Morphologies and compositions of particles in the steels deoxidized by Al and Ca alloys, analyzed by SEM-EDS, are displayed in Fig. 3. Many calcium aluminate particles were observed in collision and coagulation. A few particles of similar size were found joined together, as in Fig. 3(a). According to the Ca-Al-S low-melting-point diagram55, particles with a mole ratio of Al2O3 to CaO in the range 0.15–1.5 are in a liquid or partially liquid state and are regarded as "liquid particles" in this paper. Some solid particles merged with each other and formed irregular aggregates by high-temperature sintering, which were hard to deform and densify, as in Fig. 3(b). Coagulation between liquid particles was observed, as in Fig. 3(c–e), and these aggregates appear prone to deforming into spherical bodies. This difference is attributed to differences in the inter-diffusion of the constituent elements and in contact area, as a liquid particle tends to spread over the surface of the other54. Therefore, it can be concluded that particles with a large discrepancy in size tend to collide and merge, and that deformation and densification proceed easily for liquid calcium aluminate particles.

Figure 3: SEM-EDS analysis of typical calcium aluminate particles in Fe-O-Al-Ca melts after deoxidation at 1600 °C for 360 s.

To study the effect of the liquid fraction of particles on their characteristics, the percentage of liquid particles in the Fe-O-Al-Ca melts is illustrated in Fig. 4(a) (C1/C2/C3 denote initial Ca additions of 0.25%/0.4%/0.78%; A1/A2 denote initial Al additions of 0.05%/0.25%). The experimental conditions and chemical compositions of the samples were given in our previous study36. The liquid particle percentage increased with increasing calcium addition. The steels with high calcium addition ([%Ca] = 0.78) after deoxidation for 3900 s have a much higher percentage of liquid particles than those with low calcium addition ([%Ca] = 0.25, 0.4).

Figure 4: Percentage of liquid particles, diameter, number and volume fraction of particles in samples. (a) Percentage of liquid particles in each sample observed by SEM-EDS after deoxidation for 3900 s, calculated based on the Ca-Al-S phase diagram55; (b) effect of holding time on particle diameter in three dimensions based on stereological analysis, with error bars representing the maximum and minimum particle sizes; (c) effect of holding time on the number density and volume fraction of particles in three dimensions; (d) change of the average diameter and number density of particles in three dimensions with liquid particle percentage; the red circles and blue squares represent the average diameter and number density of particles, respectively.
A few hundred particles were observed by SEM-EDS in each sample, and the planar particle size distribution was transformed into the size, number density, and volume fraction of particles in three dimensions based on stereological analysis. The average size of particles increased, and their number density and volume fraction decreased significantly, with holding time, as illustrated in Fig. 4(b,c). The error bars in Fig. 4(b) indicate that particles of 18 μm formed in experiment A2C3 ([%Al]i = 0.25 and [%Ca]i = 0.78) during the first 360 s of deoxidation, and particles of 16 μm subsequently formed in experiment A1C3 ([%Al]i = 0.05 and [%Ca]i = 0.78), but no particles larger than 12 μm were observed in experiment A1C1 ([%Al]i = 0.05 and [%Ca]i = 0.25). The size of the largest particle in each experiment was larger in the steel with higher Ca addition during the first 360 s of deoxidation and decreased with holding time due to the rapid floatation of large particles56, as explained in the Discussion section. The change of number density and volume fraction of calcium aluminates in Fig. 4(c) suggests that the ascending velocity of particles in the steels containing more liquid particles (A1C3 and A2C3) was larger than in the steel containing more solid particles (A1C1) at the early stage, owing to the relatively larger size and fractal dimension of the liquid particles in the high-calcium steels (in spite of the larger density of solid calcium aluminate particles, the liquid particles in the high-calcium steels were relatively larger in size and fractal dimension at the initiation of deoxidation, which accelerated their floatation). It has been reported that the ascending velocity of condensed particles with fractal characteristics is smaller than that of isometric three-dimensional spherical particles43, and that it decreases with decreasing Df (fractal dimension)57. Based on the expression for Df by Gmachowski58, liquid aggregates with spherical shape have a larger Df than irregular solid aggregates. With the rapid rise of large particles, the average size of particles increased slightly and their number density decreased slowly after deoxidation for 1800 s. Furthermore, the change of particle characteristics in the steels containing high calcium (with low number density of particles) was smaller than in the low-calcium case (with high number density of liquid particles) at the later stage of deoxidation. This is thought to be caused by the difference in collision rate as affected by the number density, which is verified in the Discussion section. Therefore, as illustrated in Fig. 4(d), the average diameter tended to decrease with an increased percentage of liquid particles, due to the rapid rise of particles in the high-Ca steel at the early stage of deoxidation and less collision at the later stage. The number density of particles changed irregularly, being affected by both aggregation and floatation.

Spreading of particle size

The geometric standard deviation of the particle size distribution, ln σ, which represents the spreading of the particle size distribution in each experiment, was measured in the deoxidation experiments. As illustrated in Fig. 5(a), the ln σ values show an increasing trend with increasing liquid particle percentage after deoxidation for 3900 s, and they decreased with time.
As the spreading of a size distribution becomes narrower with decreasing ln σ, this means that the discrepancy in particle size is greater in the cases with more liquid particles. Ohta and Suito35 reported that ln σ values depend on the nucleation rate at the early stage. It can be seen in Fig. 5(b) that the ln σ values increased in the order Exp. A1C1 < Exp. A1C3 < Exp. A2C3 during the first 600 s of deoxidation, for which the theoretical nucleation rates ln I were 484, 313, and 309, respectively (as reported in our previous study36). This result agrees with the conclusion that the particle size distribution becomes broader in the case of a low nucleation rate35. The ln σ values of the calcium aluminate particles at 3900 s changed non-monotonically with increasing liquid particle percentage, due to the inheritance of the particle size distribution from the early stage of deoxidation and the change of particle number density.

Figure 5: Geometric standard deviation of particles as a function of liquid particle percentage and holding time in Fe-O-Al-Ca melts. (a) ln σ values of particles as a function of liquid particle percentage after deoxidation at 1600 °C for 3900 s; (b) ln σ values of particles as a function of holding time in experiments A1C1 ([%Al]i = 0.05 and [%Ca]i = 0.25), A1C3 ([%Al]i = 0.05 and [%Ca]i = 0.78) and A2C3 ([%Al]i = 0.25 and [%Ca]i = 0.78).

Inter-surface distance between particles

The cumulative frequency curves of D_mi (inter-surface distance of adjacent particles) in Fig. 6(a) change little, and particles with inter-surface distances of 30–100 μm account for the largest proportion. The curves in Fig. 6(a) move toward the right with increasing calcium addition at 3900 s, which means that the particles were at larger inter-surface distances with more calcium addition. Figure 6(b) shows that the proportion of particles with close inter-surface distance (<10 μm) accounted for 40% and 25% after deoxidation for 360 s in experiments A1C1 and A1C3, and was reduced to 20% and 8% after deoxidation for 600 s. The inter-surface distance between the farthest particles increased with time. The average inter-surface distances in Fig. 6(c,d) show that the D_AV values (average inter-surface distance of particles in a certain region of a sample) decreased with increasing liquid particle percentage when the latter was larger than 5%, and increased with time. It is noteworthy that the trend of the particle inter-surface distance is contrary to that of the particle number density, indicating that the larger the number density of particles, the closer the particles are.

Figure 6: Inter-surface distance of particles in Fe-O-Al-Ca melts. (a) Cumulative frequency of the inter-surface distance of adjacent particles D_mi in steels with different amounts of Ca addition after deoxidation for 3900 s; D_mi is obtained from Eqs (2) and (3). (b) Cumulative frequency of D_mi as a function of holding time in experiments A1C1 ([%Al]i = 0.05 and [%Ca]i = 0.25) and A1C3 ([%Al]i = 0.05 and [%Ca]i = 0.78). (c) Average distance of particles D_AV as a function of liquid particle percentage after deoxidation for 3900 s; D_AV is obtained from Eqs (2)–(4). (d) D_AV values as a function of holding time in experiments A1C1 ([%Al]i = 0.05 and [%Ca]i = 0.25), A1C3 ([%Al]i = 0.05 and [%Ca]i = 0.78) and A2C3 ([%Al]i = 0.25 and [%Ca]i = 0.78).
Distribution of particles

The distribution of the area density of particles in the Fe-O-Al-Ca melts, as a function of holding time and the amounts of Al and Ca addition, is displayed in Fig. 7. The segregation of particles was more serious in the melts containing high Ca at the early stage of deoxidation due to the high number density, which enhanced the collision and coagulation of particles and resulted in larger particles. This explains why the size of the largest particle increased in the order A2C3 > A1C3 > A1C1, as shown in Fig. 4(b). With time, the area density of particles in the steel decreases due to the floatation of particles. As the floatation of particles in liquid steel follows Stokes behavior56, larger particles have a higher ascending velocity; thus, in the high-Ca steel, more large particles were removed after deoxidation for 1800 s, resulting in a lower area density of particles at the later stage of deoxidation. This can also be seen in Fig. 7(g–j), which indicates that the particles were distributed more homogeneously and their area density decreased with increasing Ca addition, resulting in relatively fine particles in the melts with high Ca addition, corresponding to Fig. 4(d).

Figure 7: Distribution of area density of particles on the cross section in Fe-O-Al-Ca melts, counted from SEM images.

Discussion

Collision of particles

As the results show, the characteristics of particles in Fe-O-Al-Ca melts depend on their collision and coagulation behavior. The collision rate of particles in liquid steel can be estimated by the population balance model for collisions44,59:

$$\frac{dn_i}{dt}=\frac{1}{2}\sum_{k=1}^{i-1}(1+\delta_{k,i-k})\beta_{k,i-k}n_k n_{i-k}-n_i\sum_{k=1}^{i_M}(1+\delta_{i,k})\beta_{i,k}n_k$$ (5)

where dn_i/dt is the collision rate of particles (mm⁻³·s⁻¹), n_i is the number of size-i particles per unit volume (mm⁻³), and β_{i,j} is the collision frequency between size-i and size-j particles (m³·s⁻¹). δ_{i,k} is the Kronecker delta60: δ_{i,k} = 1 for i = k, and δ_{i,k} = 0 for i ≠ k. When i = 1, the equation simplifies to:

$$\frac{dn_1}{dt}=-n_1\sum_{k=1}^{i_M}(1+\delta_{1,k})\beta_{1,k}n_k$$ (6)

In this experiment, only Brownian collisions and Stokes collisions occur among the particles in the Fe-O-Al-Ca melts; turbulent collisions need not be considered in the absence of external stirring. Therefore, the collision frequency β_{i,j} between size-i and size-j particles can be estimated as:

$$\beta_{ij}=\beta_{ij}^{S}+\beta_{ij}^{B}$$ (7)

$$\beta_{ij}^{S}=\frac{2\pi g(\rho_{Fe}-\rho_{M_xO_y})}{9\mu}(r_i+r_j)^3\,|r_i-r_j|$$ (8)

$$\beta_{ij}^{B}=\frac{2kT(r_i+r_j)^2}{3\mu r_i r_j}$$ (9)

where β^S_{ij} denotes Stokes collisions resulting from the difference in ascending velocity of particles and clusters in the liquid steel (m³·s⁻¹), β^B_{ij} denotes Brownian collisions resulting from the random movements of particles in the melt (m³·s⁻¹), k is the Boltzmann constant, T is the absolute temperature, and μ is the dynamic viscosity of steel (= 0.006 kg/m·s).
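The collision kernels of Eqs (7)-(9) are straightforward to evaluate. The sketch below uses the viscosity quoted in the text, while the densities and temperature are illustrative assumptions; Eq. (8) is coded in the standard form of the Stokes kernel, which the source expression is taken to represent.

```python
import numpy as np

G = 9.81             # gravitational acceleration, m/s^2
MU = 0.006           # dynamic viscosity of steel, kg/(m*s) (from the text)
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 1873.0           # absolute temperature, K (1600 degC)
RHO_FE, RHO_OX = 7000.0, 3000.0  # melt and oxide densities, kg/m^3 (assumed)

def beta_stokes(ri, rj):
    """Eq. (8): Stokes kernel from the difference in ascending velocity."""
    return (2.0 * np.pi * G * (RHO_FE - RHO_OX) / (9.0 * MU)
            * (ri + rj) ** 3 * abs(ri - rj))

def beta_brownian(ri, rj):
    """Eq. (9): Brownian kernel from random particle motion."""
    return 2.0 * K_B * T * (ri + rj) ** 2 / (3.0 * MU * ri * rj)

def beta(ri, rj):
    """Eq. (7): total collision frequency, m^3/s (radii in m)."""
    return beta_stokes(ri, rj) + beta_brownian(ri, rj)
```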
The experimental change rate of particle number density (−ΔN/Δt) increases monotonically with the calculated collision rate in Fig. 8(a). The observed values of −ΔN/Δt are about 1/9 of the calculated collision rate, which indicates that not all particles coagulate after collision. Compared with the total collision rate in the steel containing low calcium, that in the high-calcium case is higher during the first 600 s but becomes lower at the later stage of deoxidation. Moreover, the collision rate of particles decreases with time, which is attributed to the decrease in number density. Figure 8(b) illustrates that the size of the largest particle in each sample increases with increasing collision frequency β_{i,j}, which decreases with time. This verifies that the collision behavior of particles affects their size significantly.

Figure 8: Collision rate of particles in Al-Ca deoxidized steel. (a) Experimental change rate of number density, obtained by measuring the total number of particles in the Fe-O-Al-Ca melts over the holding time, plotted against the calculated collision rate of particles based on the population balance model; (b) collision frequency of particles as a function of the maximum particle size in the Fe-O-Al-Ca melts during deoxidation, based on Eq. (7); (c) collision rate of particles of different sizes in the melts with 0.05% Al addition after deoxidation for 3900 s; (d) collision rate of particles of different sizes in the melts with 0.25% Al addition after deoxidation for 3900 s; (e) relationship between the arithmetic mean diameter of particles and the Dp value, where Dp is the particle size corresponding to the peak value of the curves in Fig. 8(c,d).

Figure 8(c,d) presents the collision rates of particles of different sizes at 3900 s. Zones I and II in Fig. 8(c,d) represent the size ranges in which the number of particles increases and decreases, respectively. Particles of small size (<3 μm) decrease in number and those of large size (>3 μm) increase; the absolute change rate of number density, |ΔN/Δt|, first increases with increasing particle size and then decreases with further increase of particle size, in both zone I and zone II. Comparing the curves in Fig. 8(c,d), it is found that the peak values of the curves, and the particle sizes corresponding to them, increase as the valley values decrease; that is, the more the small particles are consumed, the more large particles form by collision, and the larger the newly formed particles with the largest number density are. Furthermore, the particle size corresponding to the peak value of the curves, Dp, decreases with increasing Ca addition, following the same trend as the arithmetic mean diameter of particles observed in the experiments. Hence, Dp is plotted against the arithmetic mean diameter of particles in Fig. 8(e). The Dp values go up linearly with increasing arithmetic mean diameter, especially at the later stage of deoxidation, while they change little with increasing mean diameter during the first 600 s of deoxidation. This indicates that collision between particles governs the coarsening of the particles at the later stage of deoxidation, but not at the early stage, under unstirred conditions, in agreement with the conclusion of our previous study36.

Influencing factors on collision of particles

The effects of the liquid particle percentage, the average inter-surface distance between particles, and the ln σ values on the collision rate of particles are summarized in Fig. 9. The trend of collision rate with liquid particle percentage is the same as that of number density in Fig. 4(d) and contrary to that of inter-surface distance in Fig. 6(c), indicating that the collision rate is mainly governed by the particle number density and the distance between particles: as the particle number density increases, the inter-surface distance decreases and the collision rate increases.
Figure 9(b) shows an obvious decrease in collision rate as the D_AV values increase from 10 to 45 μm; with further increase of D_AV, the collision rate decreases only slightly. The change of collision rate with ln σ in Fig. 9(c) shows that the collision rate increases with increasing ln σ, meaning that the collision rate of particles with a broad size distribution is higher than that of particles with a narrow one.

Figure 9: Change rule of collision rate of particles in Fe-O-Al-Ca melts. (a) Collision rate of particles as a function of liquid particle percentage; (b) collision rate of particles as a function of the average inter-surface distance between particles; (c) collision rate of particles as a function of the geometric standard deviation of the particle size distribution.

Coarsening mechanism of particles

The coarsening mechanism of particles in Fe-O-Al-Ca melts can be summarized as in Fig. 10. The collision and coagulation of particles start after nucleation and continue during the whole deoxidation process. In the case of a low nucleation rate and small inter-surface distances, collision and coagulation occur more easily; hence, the maximum particle size is larger at the early stage of deoxidation, as shown in Fig. 10(a), in agreement with the experimental result that the largest particles are bigger in the steel containing higher Ca. Nevertheless, as verified in our previous study36, the average size of particles at this stage is mainly governed by Ostwald growth. With the rapid rise of liquid particles of large size and fractal dimension, the inter-surface distance between particles in the high-Ca melts becomes large, exceeding that in the low-Ca melts, as in Fig. 10(b–d). With the consumption of Ca, Al, and O, the coarsening of particles becomes governed mainly by collision, rather than by Ostwald growth, at the later stage of deoxidation. Therefore, the size of particles decreased with increasing Ca addition after deoxidation for 3900 s.

Figure 10: Schematic diagram of particle coarsening in Fe-O-Al-Ca melts. (a) Characteristics of particles in melts containing high Ca at the early stage of deoxidation; (b) characteristics of particles in melts containing high Ca at the later stage of deoxidation; (c) characteristics of particles in melts containing low Ca at the early stage of deoxidation; (d) characteristics of particles in melts containing low Ca at the later stage of deoxidation.

Conclusion

The behavior and characteristics of particles in Fe-O-Al-Ca melts under conditions of no external stirring at 1600 °C were investigated using HT-CSLM and SEM-EDS. Most of the aggregation and coagulation observed between calcium aluminate particles was caused by collision. The three-dimensional characteristics of the particles, i.e., size, number density, volume fraction, spreading of particle size, inter-surface distance, and distribution, obtained by stereological analysis, indicate that their coarsening depends not only on Ostwald growth, as studied previously, but also on collision and coagulation, and on floatation. The collision of particles affects the maximum particle size during the whole deoxidation process and dominates the coarsening of particles at the later stage of deoxidation. The calculated results based on the population balance model indicate that the collision rate of particles increases with an increase in their number density, i.e., with a
decrease in inter-surface distance, and that it is higher for particles with a broad spreading of size distribution, which is affected by the nucleation rate. Particles with relatively larger size and fractal dimension have a higher ascending velocity, so that more fine particles, with large inter-surface distances and low collision rates, remain in the melt. This mechanism explains the collision, the coarsening behavior, and the characteristic changes of particles in melts with different amounts of Ca addition.

Data availability

The data that support the findings of this study are available from Linzhu Wang upon reasonable request.

References

Wu, C. et al. Precipitation phenomena in Al-Zn-Mg alloy matrix composites reinforced with B4C particles. Scientific Reports 7, 9589 (2017).
Gao, Q. et al. Precipitates and Particles Coarsening of 9Cr-1.7W-0.4Mo-Co Ferritic Heat-Resistant Steel after Isothermal Aging. Scientific Reports 7, 5859 (2017).
Godec, M. & Skobir Balantič, D. A. Coarsening behaviour of M23C6 carbides in creep-resistant steel exposed to high temperatures. Scientific Reports 6, 29734 (2016).
Wang, Q., Zou, X., Matsuura, H. & Wang, C. Evolution of Inclusions during 1473K Heating Process of EH36 Shipbuilding Steel. Metall. Mater. Trans. B 49, 18–22 (2018).
Murakami, Y. Effects of Small Defects and Nonmetallic Inclusions on the Fatigue Strength of Metals. JSME 32, 167–180 (1989).
Hossein Nedjad, S. & Farzaneh, A. Formation of fine intragranular ferrite in cast plain carbon steel inoculated by titanium oxide nanopowder. Scripta Mater. 57, 937–940 (2007).
Wang, L., Yang, S., Li, J., Liu, W. & Zhou, Y. Fatigue Life Improving of Drill Rod by Inclusion Control. High Temp. Mater. Processes 35, 661–668 (2016).
Zhang, L., Wang, Y. & Zuo, X. Flow Transport and Inclusion Motion in Steel Continuous-Casting Mold under Submerged Entry Nozzle Clogging Condition. Metall. Mater. Trans. B 39, 534–550 (2008).
Li, W., Wang, P., Lu, L. & Sakai, T. Evaluation of gigacycle fatigue limit and life of high-strength steel with interior inclusion-induced failure. Int. J. Damage Mech. 23, 931–948 (2014).
Kikuchi, N., Nabeshima, S., Kishimoto, Y. & Sridhar, S. Micro-structure Refinement in Low Carbon High Manganese Steels through Ti-deoxidation – Inclusion Precipitation and Solidification Structure. ISIJ Int. 48, 934–943 (2008).
Shim, J. H. et al. Ferrite nucleation potency of non-metallic inclusions in medium carbon steels. Acta Materialia 49, 2115–2122 (2001).
Babu, S. S. & David, S. A. Inclusion Formation and Microstructure Evolution in Low Alloy Steel Welds. ISIJ Int. 42, 1344–1353 (2002).
Zou, X., Sun, J., Matsuura, H. & Wang, C. In Situ Observation of the Nucleation and Growth of Ferrite Laths in the Heat-Affected Zone of EH36-Mg Shipbuilding Steel Subjected to Different Heat Inputs. Metall. Mater. Trans. B 49, 2168–2173 (2018).
Zou, X., Zhao, D., Sun, J., Wang, C. & Matsuura, H. An Integrated Study on the Evolution of Inclusions in EH36 Shipbuilding Steel with Mg Addition: From Casting to Welding. Metall. Mater. Trans. B 49, 481–489 (2018).
Ma, X. et al. A novel Al matrix composite reinforced by nano-AlN(p) network. Scientific Reports 6, 34919 (2016).
Yaws, C. L. Yaws' Thermophysical Properties of Chemicals and Hydrocarbons. (William Andrew, New York, 2008).
Lowe, J. H. M. A. Zero inclusion steels. Clean Steel–Super Clean Steel, 1995 (1995).
D. L. Nonmetallic inclusions in steels. (Science Press, Beijing, 1983).
Isobe, K. Effect of Mg Addition on Solidification Structure of Low Carbon Steel. ISIJ Int. 50, 1972–1980 (2010).
Sakata, K. & Suito, H. Dispersion of fine primary inclusions of MgO and ZrO2 in Fe-10 mass pct Ni alloy and the solidification structure. Metall. Mater. Trans. B 30, 1053–1063 (1999). Park, J. S. & Park, J. H. Effect of Mg-Ti Deoxidation on the Formation Behavior of Equiaxed Crystals During Rapid Solidification of Iron Alloys. Steel Res. Int. 85, 1303–1309 (2014). Gao, X. et al. Effects of MgO Nanoparticle Additions on the Structure and Mechanical Properties of Continuously Cast Steel Billets. Metall. Mater. Trans. A 47, 461–470 (2016). Adabavazeh, Z., Hwang, W. S. & Su, Y. H. Effect of Adding Cerium on Microstructure and Morphology of Ce-Based Inclusions Formed in Low-Carbon Steel. Scientific Reports 7, 46503 (2017). Guo, A. M. et al. Effect of zirconium addition on the impact toughness of the heat affected zone in a high strength low alloy pipeline steel. Mater. Charact. 59, 134–139 (2008). Talas, S. & Cochrane, R. C. Effects of Ti on the morphology of high purity iron alloys. J. Alloy Compd. 396, 224–227 (2005). Díaz-Fuentes, M., Madariaga, I. & Gutiérrez, I. Acicular Ferrite Microstructures and Mechanical Properties in a Low Carbon Wrought Steel. Materials Science Forum 284–286, 245–252 (1998). Bin, W. & Bo, S. In Situ Observation of the Evolution of Intragranular Acicular Ferrite at Ce - Containing Inclusions in 16Mn Steel. Steel Res. Int. 83, 487–495 (2012). Ren, Y., Zhang, Y. & Zhang, L. A kinetic model for Ca treatment of Al-killed steels using FactSage macro processing. Ironmak. Steelmak. 44, 497–504 (2017). Park, J. H. & Kim, D. S. Effect of CaO-Al2O3-MgO slags on the formation of MgO-Al2O3 inclusions in ferritic stainless steel. Metall. Mater. Trans. B 36, 495–502 (2005). Lind, M. & Holappa, L. Transformation of Alumina Inclusions by Calcium Treatment. Metall. Mater. Trans. B 41, 359–366 (2010). Wang, X., Huang, F., Qiang, L. I., Haibo, L. I. & Yang, J. Control of stringer shaped non-metallic inclusions of CaO-Al2O3 system in API X80 linepipe steel plates. Steel Res. Int. 85, 155–163 (2014). Suito, H. & Ohta, H. Characteristics of Particle Size Distribution in Early Stage of Deoxidation. ISIJ Int. 46, 33–41 (2006). Voorhees, P. W. The theory of Ostwald ripening. J. Stat. Phys. 38, 231–252 (1985). Ohta, H. & Suito, H. Effects of Dissolved Oxygen and Size Distribution on Particle Coarsening of Deoxidation Product. ISIJ Int. 46, 42–49 (2006). Wang, L. et al. Nucleation and Ostwald Growth of Particles in Fe-O-Al-Ca Melt. Scientific Reports 8, 1135 (2018). Guo, L., Wang, Y., Li, H. & Ling, H. Floating Properties of Agglomerated Inclusion in Liquid Steel. J. Iron Steel Res. Int. 20, 35–39 (2013). Saffman, P. G. & Turner, J. S. On the collision of drops in turbulent clouds. J. Fluid Mech. 1, 16–30 (1956). Lindborg, U. K. T. A collision model for the growth and separation of deoxidation products. Trans. Metall. Soc. AIME 242, 94 (1968). Zhang, L. & Thomas, B. G. State of the art in the control of inclusions during steel ingot casting. Metall. Mater. Trans. B 37, 733–761 (2006). Zhang, J. & Lee, H. Numerical Modeling of Nucleation and Growth of Inclusions in Molten Steel Based on Mean Processing Parameters. ISIJ Int. 44, 1629–1638 (2004). Binder, K. & Heermann, D. W. In Scaling Phenomena in Disordered Systems. (Springer US, Boston, M A., 1991). Guo, L., Li, H., Wang, Y. & Ling, H. Applying Fractal Theory to Study Agglomeration of Solid Inclusion Particles in Liquid Steel and Floating Characteristics. Physics Examination and Testing 4, 22–26 (2012). Xu, K. & Thomas, B. G. 
Support of this work by the National Science Foundation of China (Nos 51804086, 51574190, 51704085, 51474079, 51774102, 51574095 and 51564003), the Program Foundation for Talents of Guizhou University (No. 2017(05)), the Science and Technology Planning Project of Guizhou (Nos [2017]5626, [2017]5788), a project with the Guizhou Education Department (KY (2015) 334), the Talent Team Cooperation Project with the Guizhou Technology Department ((2015) 4005), the National Natural Science Foundation of Guizhou Province (No. [2018]1060) and the Program Foundation for Talents of the Education Department of Guizhou Province (No. Qian Jiao He [2018]105) is gratefully acknowledged.
School of Materials and Metallurgy, Guizhou University, Guiyang, Guizhou, 550025, China: Linzhu Wang, Junqi Li, Chaoyi Chen & Huixin Jin. School of Metallurgical and Ecological Engineering, University of Science and Technology Beijing, Beijing, 100083, China: Shufeng Yang. College of Materials & Metallurgical Engineering, Guizhou Institute of Technology, Guiyang, 550003, China: Xiang Li. Linzhu Wang and Junqi Li wrote the main manuscript text. Shufeng Yang supervised the investigation and revised the paper. Chaoyi Chen, Huixin Jin and Xiang Li contributed to discussions and analysis of the data. Correspondence to Junqi Li or Shufeng Yang. https://doi.org/10.1038/s41598-019-40110-x
Gambling In Multiplayer Games statistics visualization

This post is about gambling and how to reason about the odds of winning wagers in multiplayer games. Last week my mom found herself in an intriguing gambling situation during a game of Mahjong. There were 3 other players in the game—I'll call them Alice, Betty, and Clara. At one point, Alice proposed a wager: if she won the next game, each player had to pay her \$20; otherwise she would pay each player \$20. When Alice proposed the wager, the number of games won by each player was as follows:

Figure 1. Games won by each player

Alice's bet seems reasonable; she has won over half of the total games played. Her perceived dominance likely prompted her to propose the wager, believing that she had a good chance of winning the next game. The question is, should the other players take Alice's bet? Who does this bet favor, Alice or the other players? I was interested in this problem because it resembles a famous gambling puzzle called the problem of points, worked out in the 17th century through a series of correspondences between Pascal and Fermat. Pascal and Fermat were interested in determining how to divvy up a pot of winnings between two players if the game was suddenly stopped. Their work on this problem is widely regarded as the birth of modern probability theory. Alice's bet is a more complex variant of the problem of points with an added twist. When the Mahjong game is suddenly terminated after the next game, instead of splitting a pot of earnings, each player is interested in estimating the probability that Alice will win the game, so that they can decide whether to take the bet. If I were trying to maximize my chances of winning this wager, I would model the situation as follows. With the data in Figure 1, I had one realization of a sample drawn from a discrete distribution—$N \sim Discrete(\theta)$, where $N \in \{1, … , K\}$ is a categorical random variable over the $K$ players. Let $N_j$ denote the number of games won by the $j$th player and $\theta_j$ denote the probability of the $j$th player winning a game. The maximum likelihood estimator for $\theta_j$ is simply $\hat{\theta}_j = \frac{N_j}{N}$. The number of games won by each player can be thought of as an outcome from a multinomial distribution. The multinomial is a generalization of the binomial distribution and models the probability of $K$ mutually exclusive outcomes—in this case, the number of wins in games of Mahjong. The multinomial distribution is comprised of two components: the number of ways in which a fixed number of wins can be assigned to $K$ players, and the corresponding probability of each of these outcomes. Assuming outcomes are independent events, the probability of an outcome is given by the product of these two components, which is the probability mass function of the multinomial: $$ f(n_1, n_2, \dots, n_K | \theta_1, \theta_2, \dots, \theta_K) = \frac{\left(\sum_{j=1}^{K}n_j\right)!}{\prod_{j=1}^{K}n_j!} \prod_{j=1}^K \theta_j^{n_j} $$ The multinomial is a member of the exponential family, and accordingly is conjugate with another distribution, the Dirichlet. Together, this conjugate pair forms the Dirichlet-multinomial model. This relationship is desirable for modeling in a Bayesian framework because the product of the likelihood and the prior produces a recognizable posterior kernel that is integrable.
The posterior distribution of the multinomial, when normalized, forms a Dirichlet[1]: $$ \begin{eqnarray} \Pr(\theta | N) &\propto& Multi(N|\theta)Dir(\theta|\alpha) \\ &\propto& \left( \prod_{j=1}^K \theta_j^{n_j} \right) \left( \prod_{j=1}^K \theta_j^{\alpha_j-1} \right) \\ &\propto& \prod_{j=1}^K \theta_j^{n_j+\alpha_j - 1} \\ &=& Dir(N+\alpha) \\ \end{eqnarray} $$ The Dirichlet distribution is a generalization of the beta distribution over multinomial parameter vectors $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_K)$, whose entries are sometimes called concentration parameters. The concentration parameters in the model can be thought of as pseudo-counts representing the number of games won by each player. The pseudo-counts regularize the estimate: adding $\alpha$ pseudo-counts to the $N$ observed games makes the posterior mean a weighted average of the prior mean $\theta_j$ and the maximum likelihood estimate $\hat{\theta}_j$: $$ E[\theta_j | N_j] = \frac{\alpha}{N + \alpha}\theta_j + \frac{N}{N+\alpha} \hat{\theta}_j $$ Noting again that the posterior is a Dirichlet allows the derivation of the full probability distribution of the expected values, because the marginal of each $\theta_j$ under the posterior is a beta distribution: $$ Beta(\alpha_j, \alpha - \alpha_j) $$ where $\alpha_j$ and $\alpha$ here denote the $j$th posterior concentration parameter and their total, respectively. With the full posterior of each player in hand, it is possible to calculate the variance, maximum a posteriori estimates, credible intervals, or almost anything that is of interest in this model. The only remaining piece of information I needed for my model was to choose a prior. What is my prior belief that each player will win the next game? To answer this, I consulted my mom, as she has played Mahjong with the other players enough to be able to form a reasonable estimate. She estimated the prior probabilities to be: $\theta_{Alice}=0.20$, $\theta_{Betty}=0.35$, $\theta_{Mom}=0.30$, and $\theta_{Clara}=0.15$. The question remaining was how to weight these priors in the model. How strongly do I believe that my mom's estimates are accurate? In most situations, Bayesian reasoning proceeds by gathering data, building a model, and introducing data that sets the model variables in known states, where probabilities of interest are conditioned on the data. As more and more data is introduced, the model is updated to reflect the new evidence. This recalibration is sometimes referred to as Bayesian updating. In my model, I had to work in reverse. The data was fixed; I had only observed the outcome in Figure 1. In this situation, the data was held constant and the choice of prior was the sole variable affecting inference. As the size and confidence of the prior grows, it pushes the expected values of the posterior toward the probabilities of the prior. To understand how my model was affected by the choice of the prior, I built a visualization:

Figure 2. Posterior distributions of the expected values of winning the next Mahjong game

The above visualization shows the posterior distributions of the expected value for each player winning the next game. The x-axis shows the relative probability that a given player will win the next game and the y-axis shows relative density. The size of the prior is adjustable by moving the slider below the main visualization. Hovering over a player's name in the legend shows the 95% credible interval for the specified player at the selected prior.
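To make this concrete, here is a small numerical sketch of the model (mine, not part of the original post). Alice's 8 wins out of 15 can be inferred from the 53% maximum likelihood figure quoted below; the split of the remaining 7 wins among the other players is an assumption, since Figure 1 is not reproduced here:

```python
import numpy as np
from scipy import stats

counts = np.array([8, 3, 3, 1])             # Alice, Betty, Mom, Clara (assumed split)
prior = np.array([0.20, 0.35, 0.30, 0.15])  # my mom's prior estimates

def posterior_summary(counts, prior, prior_weight):
    alpha = prior_weight * prior            # pseudo-counts of the Dirichlet prior
    post = alpha + counts                   # posterior concentration parameters
    mean = post / post.sum()
    # Each marginal theta_j is Beta(post_j, post.sum() - post_j).
    ci = np.array([stats.beta.interval(0.95, a, post.sum() - a) for a in post])
    return mean, ci

mean, ci = posterior_summary(counts, prior, prior_weight=60)
```

Varying `prior_weight` from 0 upward reproduces the effect of the slider in the visualization: at 0 the means are the maximum likelihood estimates, and as the weight grows the means are pulled toward the prior.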
When the slider is used to set $N=0$, there is no prior information and the resulting probabilities of each player winning the next game are simply the maximum likelihood estimates derived from the data. This is essentially the reasoning Alice probably used to conclude that she would likely win the wager. From the data alone, Alice has a $53\%$ chance of winning the next game. Under these conditions, she is making a reasonable bet. However, the 95% credible interval is $0.29 < \hat{\theta}_{Alice} < 0.77$, indicating a large degree of uncertainty around this estimate. I was skeptical that the observed data alone would be a good estimate of the true underlying probability that Alice would win the next game. In a small sample of only 15 games, the observed probabilities could deviate wildly from the true odds that each player would win the next game. The wide credible intervals confirm this notion. I was willing to place more certainty on my mom's ability to generate reasonable estimates; however, the observed data was valuable information. Thus, I wanted a model that balanced the influence of both the data and the prior together in a way that reflected this logic. I weighted my mom's estimate at 4x the weight of the data, $N=60$. Setting the prior to this value suggested that Alice would win the next game $26.6\%$ of the time, with a credible interval of $[17\%, 37\%]$. Interestingly, this model also suggests that the probability of each player winning a game is close to random chance: $\hat{\theta}_{Alice}=0.27$, $\hat{\theta}_{Betty}=0.33$, $\hat{\theta}_{Mom}=0.27$, and $\hat{\theta}_{Clara}=0.13$. In the actual game, the other players took Alice's bet and subsequently won the wager. My mom won the next game and Alice had to pay each player \$20. With $\approx 3:1$ odds against Alice winning, I would have also taken this bet if I were one of the players in the game. Based on my model, I would conclude that Alice likely committed a base rate fallacy: she placed too much credence in her short-term winning streak and too little weight on her past performance.

[1] The full derivation of the conditional distribution is presented on the corresponding Wikipedia page. ↩
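The headline numbers can be checked directly from the weighted-average form of the posterior mean given earlier (again a sketch of mine, not the post's code):

```python
# Prior weight 60 at mom's estimate of 0.20 gives Alice 12 pseudo-wins.
post_mean_alice = (0.20 * 60 + 8) / (60 + 15)            # = 20/75 ≈ 0.267
odds_against = (1 - post_mean_alice) / post_mean_alice   # ≈ 2.75, roughly 3:1
```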
Showing 1 to 10 of 302 matching Articles.

Back Matter - Statistical Design and Analysis for Intercropping Experiments. Statistical Design and Analysis for Intercropping Experiments (1993-01-01), January 01, 1993. By Federer, Walter T.

Modeling Univariate Distributions. Statistics and Data Analysis for Financial Engineering (2011-01-01): 79-130, January 01, 2011. By Ruppert, David. As seen in Chapter 4, usually the marginal distributions of financial time series are not well fit by normal distributions. Fortunately, there are a number of suitable alternative models, such as t-distributions, generalized error distributions, and skewed versions of t- and generalized error distributions. All of these will be introduced in this chapter.

A note on generalized Wald's method. Metrika (1990-12-01) 37: 309-315, December 01, 1990. By Hadi, A. S.; Wells, M. T. Let $\{v_n(\theta)\}$ be a sequence of statistics such that when $\theta = \theta_0$, $v_n(\theta_0) \xrightarrow{D} N_p(0,\Sigma)$, where $\Sigma$ is of rank $p$ and $\theta \in R^d$. Suppose that under $\theta = \theta_0$, $\{\Sigma_n\}$ is a sequence of consistent estimators of $\Sigma$. Wald (1943) shows that $v_n^T(\theta_0)\Sigma_n^{-1}v_n(\theta_0) \xrightarrow{D} \chi^2(p)$. It often happens that $v_n(\theta_0) \xrightarrow{D} N_p(0,\Sigma)$ holds but $\Sigma$ is singular. Moore (1977) states that under certain assumptions $v_n^T(\theta_0)\Sigma_n^{-}v_n(\theta_0) \xrightarrow{D} \chi^2(k)$, where $k = \operatorname{rank}(\Sigma)$ and $\Sigma_n^{-}$ is a generalized inverse of $\Sigma_n$. However, Moore's result as stated is incorrect: it needs the additional assumption that $\operatorname{rank}(\Sigma_n) = k$ for $n$ sufficiently large. In this article, we show that Moore's result (as corrected) holds under somewhat different, but easier to verify, assumptions.

Detecting a conditional extreme value model. Extremes (2011-03-01) 14: 29-61, March 01, 2011. By Das, Bikramjit; Resnick, Sidney I. In classical extreme value theory, probabilities of extreme events are estimated assuming all the components of a random vector to be in a domain of attraction of an extreme value distribution. In contrast, the conditional extreme value model assumes a domain of attraction condition on a sub-collection of the components of a multivariate random vector. This model has been studied in Heffernan and Tawn (JRSS B 66(3):497–546, 2004), Heffernan and Resnick (Ann Appl Probab 17(2):537–571, 2007), and Das and Resnick (2009). In this paper we propose three statistics which act as tools to detect this model in a bivariate set-up.
In addition, the proposed statistics also help to distinguish between two forms of the limit measure that is obtained in the model.

The use of a single pseudo-sample in approximate Bayesian computation. Statistics and Computing (2017-05-01) 27: 583-590, May 01, 2017. By Bornn, Luke; Pillai, Natesh S.; Smith, Aaron; Woodard, Dawn. We analyze the computational efficiency of approximate Bayesian computation (ABC), which approximates a likelihood function by drawing pseudo-samples from the associated model. For the rejection sampling version of ABC, it is known that multiple pseudo-samples cannot substantially increase (and can substantially decrease) the efficiency of the algorithm as compared to employing a high-variance estimate based on a single pseudo-sample. We show that this conclusion also holds for a Markov chain Monte Carlo version of ABC, implying that it is unnecessary to tune the number of pseudo-samples used in ABC-MCMC. This conclusion is in contrast to particle MCMC methods, for which increasing the number of particles can provide large gains in computational efficiency.

The Statistical Analysis of Discrete Data. The Statistical Analysis of Discrete Data (1989-01-01), January 01, 1989. By Santner, Thomas J.; Duffy, Diane E.

Optimum extrapolation and interpolation designs, I. Annals of the Institute of Statistical Mathematics (1964-12-01) 16: 79-108, December 01, 1964. By Kiefer, J.; Wolfowitz, J. For regression problems where observations may be taken at points in a set X which does not coincide with the set Y on which the regression function is of interest, we consider the problem of finding a design (allocation of observations) which minimizes the maximum over Y of the variance function (of estimated regression). Specific examples are calculated for one-dimensional polynomial regression when Y is much smaller than or much larger than X. A related problem of optimum estimation of two regression coefficients is studied. This paper contains proofs of results first announced at the 1962 Minneapolis Meeting of the Institute of Mathematical Statistics. No prior knowledge of design theory is needed to read this paper.

Limit theorems for monotone Markov processes. Sankhya A (2010-02-01) 72: 170-190, February 01, 2010. By Bhattacharya, Rabi; Majumdar, Mukul; Hashimzade, Nigar. This article considers the convergence to steady states of Markov processes generated by the action of successive i.i.d. monotone maps on a subset S of a Euclidean space. Without requiring irreducibility or Harris recurrence, a "splitting" condition guarantees the existence of a unique invariant probability as well as an exponential rate of convergence to it in an appropriate metric. For a special class of Harris recurrent processes on [0,∞) of interest in economics, environmental studies and queuing theory, criteria are derived for polynomial and exponential rates of convergence to equilibrium in total variation distance. Central limit theorems follow as consequences.

Front Matter - Statistics and Data Analysis for Financial Engineering. Statistics and Data Analysis for Financial Engineering (2011-01-01), January 01, 2011.

Regression: Advanced Topics. Statistics and Data Analysis for Financial Engineering (2011-01-01): 369-411, January 01, 2011. When residual analysis shows that the residuals are correlated, then one of the key assumptions of the linear model does not hold, and tests and confidence intervals based on this assumption are invalid and cannot be trusted.
Fortunately, there is a solution to this problem: replace the assumption of independent noise by the weaker assumption that the noise process is stationary but possibly correlated. One could, for example, assume that the noise is an ARMA process. This is the strategy we will discuss in this section.
What Kind of Graph is This?

I am currently developing TSP heuristics that aim at symmetrically reducing the original, complete and undirected graph. The overarching rationale is that the reduction is done via a sequence of regular graphs. In view of Tutte's counterexample to Tait's conjecture, it is clear that generating a sequence of maximally vertex-connected graphs need not produce a Hamiltonian tour ($=$ Hamiltonian cycle) in the end. The rationale that I am trying to follow is that the intermediate graphs should also be vertex-symmetric in other respects than just vertex-connectivity (in Tutte's counterexample graph the vertices are not equal with respect to the size of adjacent "facets"). When trying to guarantee that the intermediate regular graphs are also symmetric in some other specific sense, it was found that the following directed graph could help accomplish that: Let $G\left( V,E\right)$ be the given graph with vertices $v\in V$ and edges $e\in E$; construct from it the directed graph $H\left( N=V\cup E,A\subset N\times N\right)$ of nodes $n\in N$ and arcs $a\in A$, in which w.l.o.g. the set of arcs is directed from images of vertices to images of incident edges and from images of edges to images of non-incident vertices; i.e. $A=\lbrace\left(v_j,e_{ij}\right)\rbrace\cup\lbrace\left(v_j,e_{jk}\right)\rbrace\cup\lbrace\left(e_{ij},v_k\right)\rbrace$. Has that kind of "derived" graph $H\left(N,A\right)$ already appeared in mathematical publications, and what is it called (it has some aspects of an incidence graph, but also of an adjacency graph)?

graph-theory terminology directed-graphs traveling-salesman-problem

asked by Manfred Weis

– Dear @Manfred Weis, the only thing that I do not think I conceptually understand about the preliminary explanations is "should also be vertex-symmetric in other respects than just vertex-connectivity". This (to me) reads like " 'vertex-connectivity' is a sub-concept of 'vertex-symmetric' ", which (to me) is incomprehensible in two respects simultaneously: (0) "vertex-symmetric" is not a usual technical term in graph theory (as opposed to e.g. "vertex-transitive", or variants thereof, which would be usual), and (1) connectivity is not naturally related to symmetry. Would you please clarify? – Peter Heinig Aug 9 '17 at 18:40

– Unrelated to your question, yet related to what I interpret your preliminary explanations are getting at: you might like to have a look at lower bounds proved in Journal of Combinatorial Theory, Series B, Volume 41, Issue 1, August 1986, Pages 17-26, and upper bounds proved in: Grünbaum, B. and Motzkin, T. S. (1962), Longest Simple Paths in Polyhedral Graphs. Journal of the London Mathematical Society, s1-37: 152–160. – Peter Heinig Aug 9 '17 at 18:53

– Dear @Manfred Weis, re the question itself: while the meaning is clear, it would be clearer and more usual to write $A = \{ (v,e)\in V\times E \colon v\in e \} \cup \{ (e,v) \colon \neg (v\in e) \}$. The use of the subscripted $v$ and $e$ is unusual and confusing. In particular, mentioning the vertex-to-incident-edge arcs twice can be confusing to readers: the edges of the original graph are not oriented, so doubly mentioning them is redundant. – Peter Heinig Aug 9 '17 at 19:14

– Dear @Manfred Weis: also, in the question-statement, "w.l.o.g.
the set of arcs is directed from images of vertices to images of adjacent edges" reads unusual in two respects: (0) why 'without loss of generality' here? No assumption is made, just a definition; no need for w.l.o.g. (1) to say that 'the set of arcs is directed' sounds wrong: the set itself does not get directed; rather, each arc is directed. – Peter Heinig Aug 9 '17 at 19:30

– @PeterHeinig: thanks for the feedback and illustration in your answer. Regarding my lack of proficiency in terminology - I am not a professional mathematician, but my expression "vertex-symmetric" was chosen consciously, because that also covers the outcome of algorithms that operate on weighted graphs and yield a set of edges; if every such algorithm yields the same number and multiplicities of edges adjacent to a vertex for every vertex, then I call the weighted graph vertex-symmetric. But I see that I have to come up with a precise definition. – Manfred Weis Aug 10 '17 at 4:52
Of course every mathematician has a right to his own language - at the risk of not being understood; and the use sometimes made, of this right by our contemporaries almost suggests that the same fate is being prepared for mathematics as once befell, at Babel, another of man's great achievements. A choice between equivalent definitions is of small moment, and two theories which consist of the same theorems are to be regarded as identical, whatever their starting points. But in such a subject as algebraic geometry, where earlier authors left many terms incompletely defined, and were wont to make (sometimes implicitly) assumptions from which we wish to be free, all terms have to be defined anew, and to attach precise meanings to them is a task not unworthy of our most solicitous attention. Our chief object here must be to conserve and complete the edifice bequeathed to us by our predecessors. which reads eerily relevant to some parts of mathematics today. (Note also: somewhat ironically, despite the author's well-meaning attempt, his language for algebraic geometry has largely been replaced by the one from the EGA and SGA.) Detailed answer. (0) Your question does not have an answer if very strictly construed to mean that you are asking for a usual term and a notable treatment of the use of directed complete bipartite graphs as incidence-graph-like auxiliary graphs. In other words, your question does not have an answer if you insist that the literature reference (0.0) is noteworthy, (0.1) match your specifications up to and including the data-types you chose. Yet: there is an equivalent definition and you should use that (I think) unless you have reasons to insist that the result of your operation be a digraph. (1) Beware that in the literature the following distinction is often glossed over: (1.0) "abstract incidence graph of a given undirected simple graph" in the strictest sense (i.e., said incidence graph is again a graph, no further decorations,) and (2.1) "incidence graph in which the vertices are still the parts of the given original, so that no information is lost". Authors sometimes only give the definition of the incidence graph $I(G)$, in terms of the (names of the) vertices and edges of the given graph $G$, but do not say whether they assume that the vertices are labelled by the data by which $G$ was given. More clearly, formally, I think one can meaningfully distinguish theh following. Let $G=(V,E)$ be a given undirected simple and, as mathematician working with material set theory are wont to say, 'labelled' graph. That is, $G$ is a graph as e.g. in Diestel, Graph Theory, 4th ed. the graph $W(G)$ from (LG) above is again a such an undirected simple labelled graph $W(G)$ is not a 2-colored graph. That is another type: a 2-colored graph is a pair consisting of a graph $G$ and a specified set-map $c\colon V(G)\rightarrow S$ where $S$ is a 2-element set. $W(G)$ always is a 2-colorable graph. That is evident. $W(G)$ admits a 'canonical' (which, luckily, is a not a formally defined term, leaving some leeway of expression) 2-coloring $c$, namely the unique set-map $c\colon V(W(G))\rightarrow\{V,E\}$ with $c(v)=V$ and $c(e)=E$, i.e., the two-element color-set used here is $\{V,E\}$, a set which exists on account of the Axiom of Pairing of the Zermelo-Fraenkel axioms. 
This is a subtle variant of the labelled-vs-unlabelled graph distinction: while both these terms are reasonably standardly defined, there is no standard definition for "labelled graph with labels that we care about", so to speak. Many mathematicians would wince at spelling out the above, and in particular at 'coloring' vertices by a set containing them, though probably there is nothing wrong with that logically. Alternatives to the above are:

- either to make two arbitrary choices, namely (0) which two-element set to use for the vertex-coloring, (1) which of the two elements of the arbitrarily chosen two-set to use for the vertex-coloring;
- or to use category theory, which offers an alternative way to formalize such ideas; in particular, you might find it useful (your mileage may vary) to use category theory to formalize your TSP heuristic and in particular your operation $H$.

(3) The graph $H(N,A)$, to use your notation (which I warn you against), contains (with an obvious encoding) the same information as what is usually called the bipartite incidence graph (shorter, usual[2], less pleonastic[3], yet also less descriptive term: Levi graph) of the graph $G$, with $G$ being viewed as an incidence structure $(V,E,I)$ having the (elements of the) set $V$ for its points, the (elements of the) set $E$ for its lines, and $I\subseteq V\times E$ given by $(v,e)\in I$ $\Leftrightarrow$ $v\in e$.

[0] Please be careful though: often, even more often than not I would say, in the literature a Levi graph is considered to be a 2-colored graph (not to be confused with a 2-colorable graph). In that sense, strictly speaking, a Levi graph in the literature, unlike in the suggestion above, is often not a graph in any of the usual senses of 'graph', which never allow attaching custom data to vertices; a Levi graph is often, unlike in my suggestion above, considered to be a 2-colored graph. An example from the peer-reviewed literature is Dragan Marušič, Tomaž Pisanski, Steve Wilson: European Journal of Combinatorics, Volume 26, Issues 3–4, April–May 2005, Pages 377-385, wherein on pp. 378-379 one finds the relevant definition and an illustration (which also illustrates the Coxeter citation given in this answer, by the way; the screenshots are not preserved in this copy). So, if the precise formalization matters to you, it will be necessary to watch out for whether Levi graph is used to mean 'bicolored bipartite incidence graph' or 'bipartite incidence graph' ($=$ incidence graph) $\subset$ (class of graphs).

[1] In a rather informal sense of 'is', similar to 'is equivalent to'. To give a precise sense to 'is equivalent to', one could take the following route: first of all, agree that the following approach is 'the' right one (an agreement which is already hard enough to reach), then agree which 'category $\mathsf{C}$ of undirected simple graphs' to use (and there are several such), then agree upon a category $\mathsf{C}_0$ having as objects bipartite digraphs, then agree upon a category $\mathsf{C}_1$ having as objects 2-colored bipartite undirected simple graphs, define a functor $F_0\colon\mathsf{C}\rightarrow\mathsf{C}_0$ whose object-class-function equals the class-function adumbrated in the OP (in short: extend your $H$-operation to a functor), then define a functor $F_1\colon\mathsf{C}\rightarrow\mathsf{C}_1$ whose object-class-function takes any undirected simple graph to its vertex-edge Levi graph, and finally prove that $F_0$ and $F_1$ are isomorphic functors. While such an approach is very useful, there does not exist an agreement how to carry it out.
No such agreement exists, and probably cannot exist in view of insufficient criteria for deciding what is essential for you. It does not seem worthwhile to get into an argument about it.

[2] To give an early example of Levi graphs being used, let me cite H. S. M. Coxeter, Self-dual configurations and regular graphs, Bull. Amer. Math. Soc., Volume 56, Number 5 (1950), 413-455, wherein on p. 414 Coxeter introduces the Levi graph (the accompanying illustration, retouched to hide distracting information, is not preserved in this copy).

[3] The (very usual) technical compound term "bipartite incidence graph" is the same type of pleonasm as "bicyclic mountain bike". An "incidence graph" (to me at least, and to many others) is always bipartite. Therefore, "bipartite incidence graph" is pleonastic. However, neither the compound "incidence graph" nor "bipartite graph" is pleonastic; only upon taking the sort-of-union $\{$ incidence graph $\}$ $\cup$ $\{$ bipartite graph $\}$ $\approx$ $\{$ bipartite incidence graph $\}$ does one end up with a pleonasm. Similarly, 'mountain bike' lexically exists. The compound 'mountain bike' is not pleonastic at all; by far not every bike is a mountain bike, in any reasonable sense. The compound 'bicyclic mountain bike' is pleonastic, in a reasonably clear sense: no usual mountain bike has a number of wheels other than 2. There exist bikes which are non-bicyclic, in that there exist unicycles. So this is an example of what I would call an 'a n N'-pleonasm. Definition. An 'a n N' pleonasm is a compound wherein 'a' is an adjective, 'N' is a noun, 'n N' must be a lexically existent noun compound, 'n N' must be non-pleonastic, and 'a N' must be reasonably meaningful and non-pleonastic too, while 'a n N' itself must be reasonably meaningful yet pleonastic in that every 'n N' is 'a'. Again, it is only pleonastic to add the 'a' to the 'n N'; it is not pleonastic to add 'a' to 'N', nor to add 'n' to 'N'. (Note that 'bicyclic bike', while unusual, is not pleonastic, and appropriate in certain contexts, in view of unicycles or training wheels.)

[4] To give an example from the literature that the notation $v\in e$ I recommended elsewhere is usual, consider e.g. Diestel, Graph Theory, 4th edition, p. 2, which gives a way of putting it which is usual nowadays (the excerpt is not preserved in this copy). Incidentally, this also gives one more reason why the notation $G(V,E)$ in the original OP is not to be recommended: it is similar to $E(X,Y)$, hence will read to some as if you had named an edge set by '$G$', a vertex by '$V$' and another vertex set by '$E$', which is confusing (to me).

– Peter Heinig

Not an answer, yet too large for the comment box and thought to be useful for others to quickly parse the slightly unusual formalism in the OP: the OP asks for literature references and usual technical terms for the class function with domain the class of all simple undirected graphs and codomain the class of all oriented complete bipartite graphs which, in particular, does what is represented by the following (the accompanying figure is not preserved in this copy).
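To make the correspondence concrete, here is a small sketch (mine, not from the thread) using the networkx library: `levi_graph` builds $W(G)$ from (LG), and `op_digraph` builds the asker's $H(N,A)$; both carry the same information about $G$.

```python
import networkx as nx

def levi_graph(G):
    """Bipartite incidence (Levi) graph W(G) of an undirected simple graph G:
    one node per vertex, one node per edge, and an undirected edge {v, e}
    exactly when v is incident to e."""
    W = nx.Graph()
    W.add_nodes_from(G.nodes, kind="vertex")
    W.add_nodes_from(G.edges, kind="edge")
    for e in G.edges:
        for v in e:
            W.add_edge(v, e)
    return W

def op_digraph(G):
    """The asker's digraph H(N, A): arcs from each vertex to its incident
    edges and from each edge to all non-incident vertices."""
    H = nx.DiGraph()
    H.add_nodes_from(G.nodes)
    H.add_nodes_from(G.edges)
    for e in G.edges:
        for v in G.nodes:
            if v in e:
                H.add_edge(v, e)   # vertex -> incident edge
            else:
                H.add_edge(e, v)   # edge -> non-incident vertex
    return H

G = nx.complete_graph(4)
W = levi_graph(G)    # 4 vertex-nodes + 6 edge-nodes, bipartite
H = op_digraph(G)    # same node set, oriented complete bipartite
```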
KL divergence of models with both continuous and discrete variables

When $x$ is discrete, the KL divergence is $D_{KL}(P||Q)=\sum\limits_{x}P(x)\log \frac{P(x)}{Q(x)}$; when $x$ is continuous, $D_{KL}(P||Q)=\int\limits_{x}p(x)\log \frac{p(x)}{q(x)}dx$. However, when the random variable $x$ is defined on a mixed continuous and discrete space, what is the KL divergence? For example, $x=(r,a)$, where $r$ is a continuous variable that follows a Gaussian distribution and $a$ is a discrete variable that follows a Bernoulli distribution, with $r$ and $a$ independent of each other. Under $P(x)$, $r\sim \mathcal{N}(\mu_1,\sigma^2)$ and $a\sim \text{Bernoulli}(\beta)$, i.e.,
$$P(r,a)=\begin{cases} \mathcal{N}(r;\mu_1,\sigma^2)\cdot\beta, & a = 1,\ \forall r\in \mathbb{R}\\ \mathcal{N}(r;\mu_1,\sigma^2)\cdot(1-\beta), & a = 0,\ \forall r\in \mathbb{R} \end{cases}$$
Under $Q(x)$, $r\sim \mathcal{N}(\mu_2,\sigma^2)$ and $a\sim \text{Bernoulli}(1-\beta)$, i.e.,
$$Q(r,a)=\begin{cases} \mathcal{N}(r;\mu_2,\sigma^2)\cdot(1-\beta), & a = 1,\ \forall r\in \mathbb{R}\\ \mathcal{N}(r;\mu_2,\sigma^2)\cdot\beta, & a = 0,\ \forall r\in \mathbb{R} \end{cases}$$
What is the KL divergence of $P$ and $Q$? Thank you very much for the help!

mathematical-statistics kullback-leibler — asked by Jerry Geng; edited Feb 5 at 21:54 by kjetil b halvorsen

In all cases, the KL divergence $D_{KL}(p \parallel q)$ is defined as the expected value of $\log \frac{p(x)}{q(x)}$, where the expectation is taken with respect to $p$: $$D_{KL}(p \parallel q) = E_{p(x)} \left[ \log \frac{p(x)}{q(x)} \right]$$ In the discrete case, this involves summation: $$D_{KL}(p \parallel q) = \sum_x p(x) \log \frac{p(x)}{q(x)}$$ And in the continuous case it involves integration: $$D_{KL}(p \parallel q) = \int p(x) \log \frac{p(x)}{q(x)}\, dx$$ You can see that the formulas for discrete and continuous distributions simply follow from the definition of expected value in each of these cases. The mixed discrete-and-continuous case is no different--both summation and integration are involved, as this is how expected value is defined. For example, consider joint distributions $p(x,y)$ and $q(x,y)$ where $X$ takes values in a discrete set $\mathcal{X}$ and $Y$ takes values in a continuous set $\mathcal{Y} \subseteq \mathbb{R}$. Then the KL divergence is: $$D_{KL}(p \parallel q) \ = \ \sum_{x \in \mathcal{X}} \int_\mathcal{Y} p(x,y) \log \frac{p(x,y)}{q(x,y)} dy$$
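Applying this to the example in the question: since $r$ and $a$ are independent under both $P$ and $Q$, the joint divergence splits into a Gaussian term plus a Bernoulli term, $D_{KL}(P\|Q) = \frac{(\mu_1-\mu_2)^2}{2\sigma^2} + (2\beta-1)\log\frac{\beta}{1-\beta}$. A minimal numerical sketch of this closed form (mine, not from the thread):

```python
import numpy as np

def kl_mixed(mu1, mu2, sigma, beta):
    """KL(P || Q) for the question's example: because r and a are
    independent under both P and Q, the two divergences simply add."""
    kl_gaussian = (mu1 - mu2) ** 2 / (2.0 * sigma ** 2)  # equal variances
    kl_bernoulli = (beta * np.log(beta / (1 - beta))
                    + (1 - beta) * np.log((1 - beta) / beta))
    return kl_gaussian + kl_bernoulli

print(kl_mixed(mu1=0.0, mu2=1.0, sigma=1.0, beta=0.7))   # example values
```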
Anmol Sharma sciscore: 1.952 My Summaries 20

doi.ieeecomputersociety.org Multi-Scale Deep Reinforcement Learning for Real-Time 3D-Landmark Detection in CT Scans Ghesu, Florin C. and Georgescu, Bogdan and Zheng, Yefeng and Grbic, Sasa and Maier, Andreas K. and Hornegger, Joachim and Comaniciu, Dorin IEEE Transactions on Pattern Analysis and Machine Intelligence - 2019 via Local Bibsonomy [link] Summary by Anmol Sharma 2 years ago

Robust and fast detection of anatomical structures is a prerequisite for both diagnostic and interventional medical image analysis. Current solutions for anatomy detection are typically based on machine learning techniques that exploit large annotated image databases in order to learn the appearance of the captured anatomy. These solutions are subject to several limitations, including the use of suboptimal feature engineering techniques and, most importantly, the use of computationally suboptimal search schemes for anatomy detection. To address these issues, Ghesu et al. reformulate the problem of landmark detection as a behavior learning task for an artificial agent: a reinforcement learning agent is trained to navigate the 3D image space efficiently in order to get as spatially close to the landmark as possible. This paper marks the first time that RL has been used for this kind of problem. The authors use the Deep Q-Learning (DQN) algorithm as their framework for building the agent; the DQN algorithm uses a Convolutional Neural Network to parameterize the Q* function in a non-linear way. Also, in order to ensure that the agent always finds an object of interest regardless of its size, the authors propose a multi-scale search strategy, in which a different CNN operates on the image at each scale. The authors also use some other interesting techniques, such as the $\epsilon$-greedy approach, which uses a randomized strategy to choose the actions in every iteration, and experience replay, where an experience buffer is maintained that records the trajectory of the agent and is then used to fine-tune the Q network (a minimal sketch of these two components is given below). The authors test their method on an internal data set of CT scans with a few different metrics. The most important metric was the failure percentage rate, a detection being counted as a failure if the agent does not come within a fixed distance (in mm) of the landmark. The method achieved a 0\% failure percentage rate in detecting landmarks in CT scans.
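The paper's code is not reproduced in the summary; the following is a generic minimal sketch (mine, not the authors' implementation) of the two components named above — an experience-replay buffer and $\epsilon$-greedy action selection over the six moves along $\pm x, \pm y, \pm z$:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity buffer of (state, action, reward, next_state, done)."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Random minibatch used to fine-tune the Q network.
        return random.sample(self.buffer, batch_size)

def epsilon_greedy(q_values, epsilon):
    """Pick a random move with probability epsilon, else the greedy one.
    q_values: Q(s, a) for the six moves along +/-x, +/-y, +/-z."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```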
Unsupervised Deep Feature Learning for Deformable Registration of MR Brain Images Wu, Guorong and Kim, Minjeong and Wang, Qian and Gao, Yaozong and Liao, Shu and Shen, Dinggang Medical Image Computing and Computer Assisted Interventions Conference - 2013 via Local Bibsonomy

Accurate anatomical landmark correspondence is highly critical for medical image registration. Traditionally, many previous works proposed hand-crafted feature sets that can be used to perform correspondence. However, these features tend to be highly specialized in terms of application area and cannot be generalized well to other applications without significant modifications. There have been other works that perform automatic feature extraction, but their reliance on labelled data hinders their ability to perform in cases where there is none. To this end, Wu et al. propose an unsupervised feature learning method which does not require labelled data. Their approach aims to directly learn the basis filters that can effectively represent all observed image patches; the learnt basis filters are later regarded as general image features representing the morphological structure of the patch. In order to learn the basis filters, the authors propose a two-layer network implementing the Independent Subspace Analysis (ISA) algorithm. As an extension of ICA, the responses in ISA are not all required to be mutually independent. Instead, the responses can be divided into several groups, each of which is called an independent subspace; the responses are then dependent inside each group, but dependencies among different groups are not allowed. Thereby, similar features can be grouped into the same subspace to achieve invariance (a small sketch of the pooled subspace response is given below). To ensure accurate correspondence detection, it is necessary to use multi-scale image features; however, this raises the problem of high dimensionality when learning features from large-scale image patches. This is addressed by constructing a two-layer network for scaling ISA up to large-scale image patches. Specifically, ISA is first trained in the first layer on image patches of a smaller scale. After that, a sliding window (at the same scale as the first layer) convolves with each large-scale patch to produce a sequence of overlapping small-scale patches. The combined responses of these overlapping patches through the first-layer ISA are whitened by PCA and then used as the input to a second layer that is further trained by another ISA. In this way, a high-level understanding of a large-scale image patch can be perceived from the low-level image features detected by the basis filters in the first layer. The authors compare their work with two other methods, and apply their method on the IXI and ADNI datasets.
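A minimal numpy sketch of the first-layer ISA response described above (square-root pooling of squared linear filter responses within each subspace; the names and shapes are mine, not the authors'):

```python
import numpy as np

def isa_responses(X, W, group_size):
    """First-layer ISA responses.
    X: (n_patches, n_dims) whitened image patches.
    W: (n_filters, n_dims) learned basis filters.
    Filters are pooled in fixed consecutive groups (subspaces)."""
    linear = X @ W.T                                   # linear filter responses
    sq = linear ** 2
    n_groups = W.shape[0] // group_size
    grouped = sq.reshape(X.shape[0], n_groups, group_size)
    return np.sqrt(grouped.sum(axis=2))                # pooled subspace responses
```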
www.wikidata.org Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images Pereira, Sérgio and Pinto, Adriano and Alves, Victor and Silva, Carlos A. IEEE Trans. Med. Imaging - 2016 via Local Bibsonomy

Tumor segmentation from brain MRI sequences is usually done manually by the radiologist. Being a highly tedious and error-prone task, mainly due to factors such as human fatigue, the overabundance of MRI slices per patient, and the increasing number of patients, manual delineation is often inaccurate. Moreover, the use of qualitative measures of evaluation by radiologists results in high inter- and intra-observer error rates. There is an evident need for automated systems to perform this task. To this end, Pereira et al. propose to use a deep learning method, a Convolutional Neural Network, to predict a segmentation mask for patient MRI scans. They approach segmentation as a pixel-wise classification problem, where each pixel in an input 2D MRI slice is classified into one of five categories: background, necrosis, edema, non-enhancing and enhancing region. The authors propose two networks, one for High Grade Gliomas (HGG) and one for Low Grade Gliomas (LGG). The HGG network has more convolution layers than the LGG network, which is kept shallower due to the smaller amount of available LGG data. The proposed network architectures are a combination of convolution, ReLU and max-pooling layers, followed by some fully connected layers at the end, and the networks are trained using the categorical cross-entropy loss function (a sketch of a network in this style follows below). The networks were trained on 2D 33x33 patches extracted from the 2D MRI slices of the brain from the BRATS 2012 dataset. The patches were randomly sampled from the images, and the task was to predict the class of the pixel in the middle of the patch. To address the problem of class imbalance, they sample the patches so that approximately 40\% of them are normal (non-tumor) patches. They also use data augmentation to increase the number of effective training patches. In order to test the performance of their proposed method, as well as to understand the impact of various hyperparameters, the authors perform a multitude of tests, including turning data augmentation on/off, using leaky ReLU instead of ReLU, changing the patch extraction plane, using deeper networks, and using larger kernels.
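A minimal PyTorch-style sketch of a patch-wise classifier in this spirit (the depths, widths and the assumption of four input MRI channels are illustrative, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """33x33 multi-channel patch -> class of the center pixel (5 classes)."""
    def __init__(self, in_channels=4, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 33x33 -> 16x16
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Training uses the categorical cross-entropy, as in the summary:
model = PatchCNN()
loss_fn = nn.CrossEntropyLoss()
logits = model(torch.randn(8, 4, 33, 33))            # dummy batch of patches
loss = loss_fn(logits, torch.randint(0, 5, (8,)))    # dummy center-pixel labels
```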
Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis Suk, Heung-Il and Lee, Seong-Whan and Shen, Dinggang NeuroImage - 2014 via Local Bibsonomy

Alzheimer's Disease (AD) is characterized by impairment of cognitive and memory function, mostly leading to dementia in elderly subjects. Over the last decade it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also that fusion of different modalities can provide complementary information to enhance diagnostic accuracy. Multimodal information, such as that from MRI and PET, can be used to aid the diagnosis of AD in early stages. However, most previous works in this domain either concentrate on only one modality (MRI or PET) or use hand-crafted features which are then concatenated together to form a single vector. There is increasing evidence that biomarkers from different modalities can provide complementary information for AD/MCI diagnosis. In this paper, Suk et al. propose a Deep Boltzmann Machine (DBM) based method that learns a high-level latent and shared feature representation from two neuroimaging modalities (MRI and PET). Specifically, they use a DBM as a building block to find a latent hierarchical feature representation of a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. The method first selects class-discriminative patches from a pair of MRI and PET images, using a statistical significance test between classes. A MultiModal DBM (MM-DBM) is then built that finds a shared feature representation from the paired patches. However, the MM-DBM is not trained directly on patches; instead, it is trained on binary vectors obtained by running the patches through a Restricted Boltzmann Machine (RBM), which transforms the real-valued observations into binary vectors (a sketch of this binarization step is given below). The MM-DBM network's top hidden layer has multiple entries from the lower hidden layers and the label layer, so as to extract a shared feature representation that fuses the neuroimaging information of MRI and PET. Using this multimodal model, a single fused feature representation is obtained, on top of which a Support Vector Machine (SVM) based classification step is added. Instead of considering all patch-level classifiers' outputs simultaneously, the outputs of the patch-level SVMs are agglomerated by constructing spatially distributed 'mega-patches', under the consideration that disease-related brain areas are distributed over several distant brain regions with arbitrary shape and size. Following this step, the training data is divided into multiple subsets, each used to train an image-level classifier individually. The method was tested on the ADNI dataset with MR and PET images.
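The RBM "binarization" mentioned above amounts to sampling the binary hidden units given a real-valued input; a minimal sketch assuming pre-trained parameters `W` and `b` (hypothetical names, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbm_binarize(v, W, b):
    """One bottom-up pass of a trained RBM: a real-valued patch vector v
    is mapped to a sample of binary hidden units.
    W: (n_hidden, n_visible) weights; b: (n_hidden,) hidden biases."""
    p_h = 1.0 / (1.0 + np.exp(-(W @ v + b)))        # sigmoid activations
    return (rng.random(p_h.shape) < p_h).astype(np.uint8)
```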
V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation Milletari, Fausto and Navab, Nassir and Ahmadi, Seyed-Ahmad

Medical image segmentation has been a classic problem in medical image analysis, with a large body of research behind it. Many approaches worked by designing hand-crafted features, while others used global or local intensity cues. These approaches were sometimes extended to 3D, but most algorithms work with 2D images (or 2D slices of a 3D image). It is hypothesized that using the full 3D volume of a scan may improve segmentation performance due to the amount of context that the algorithm can be exposed to, but such approaches have been computationally very expensive. Deep learning approaches like ConvNets have been applied to segmentation problems and are computationally very efficient at inference time due to highly optimized linear algebra routines. Although these approaches form the state of the art, they still utilize 2D views of a scan and fail to work well on full 3D volumes. To this end, Milletari et al. propose a new CNN architecture consisting of volumetric convolutions with 3D kernels, applied to full 3D prostate MRI scans and trained on the task of segmenting the prostate from the images. The network architecture primarily consists of 3D convolutions with volumetric kernels of size 5x5x5 voxels. As the data proceeds through different stages along the compression path, its resolution is reduced; this is performed through convolutions with 2x2x2-voxel kernels applied with stride 2, so there are no pooling layers in the architecture. The architecture resembles an encoder-decoder network: the compression (downsampling) path reduces the size of the signal presented as input and increases the receptive field of the features computed in subsequent network layers, and each of the stages of this left part of the network computes twice as many features as the previous one. The right portion of the network extracts features and expands the spatial support of the lower-resolution feature maps in order to gather and assemble the information necessary to output a two-channel volumetric segmentation. The two feature maps computed by the very last convolutional layer, having 1x1x1 kernel size and producing outputs of the same size as the input volume, are converted to probabilistic segmentations of the foreground and background regions by applying a soft-max voxelwise. In order to train the network, the authors propose to use the Dice loss function (a sketch follows below). The CNN is trained end-to-end on a dataset of 50 prostate MRI scans. The network achieved a Dice coefficient of 0.869 $\pm$ 0.033 and beat the other state-of-the-art models.
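A minimal PyTorch sketch of a soft Dice loss in the spirit of the paper (the squared-terms denominator follows the paper's formulation; the function itself is mine, not the authors' code):

```python
import torch

def soft_dice_loss(probs, target, eps=1e-6):
    """Differentiable Dice loss: probs = predicted foreground probabilities,
    target = binary ground truth, both of the same shape."""
    p = probs.reshape(-1)
    g = target.reshape(-1)
    dice = (2.0 * (p * g).sum() + eps) / ((p * p).sum() + (g * g).sum() + eps)
    return 1.0 - dice
```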
For the registration criterion, Huang et al.'s method uses a Mutual Information based measure in its global registration phase and the sum of squared differences (SSD) in its local phase. The method starts by defining an implicit, non-parametric shape representation which is translation, rotation and scale invariant. This constitutes the first step of the registration pipeline, which transforms the input images into a domain where the shape is implicitly defined. The image domain $[\Omega]$ is first partitioned into three regions: $[R_S]$ (points inside the shape), $[\Omega - R_S]$ (points outside the shape) and $[S]$ (points lying on the shape boundary). Using this partition, a Lipschitz function $\phi : \Omega \to \mathbb{R}$ is defined as: \begin{equation} \phi_S(x,y) = \begin{cases} 0 & (x,y) \in S \\ +D((x,y), S) > 0 & (x,y) \in [R_S] \\ -D((x,y), S) < 0 & (x,y) \in [\Omega - R_S] \end{cases} \end{equation} where $D((x,y),S)$ is the distance function giving the minimum Euclidean distance between the point $(x,y)$ and the shape $S$ (a brief computational sketch of such a signed distance map appears a little further below). Given the implicit representation, global shape alignment is performed using the Mutual Information (MI) objective function defined between the probability density functions of the pixels in the source and target images sampled from the domain $\Omega$: \begin{equation} MI(f_{\Omega}, g_{\Omega}^{A}) = \underbrace{\mathcal{H}[p^{f_{\Omega}}(l_1)]}_{\substack{\text{entropy of the}\\ \text{distribution representing $f_{\Omega}$}}} + \underbrace{\mathcal{H}[p^{g_{\Omega}^{A}}(l_2)]}_{\substack{\text{entropy of the distribution}\\ \text{representing $g_{\Omega}^{A}$, the transformed}\\ \text{source ISR under $A(\theta)$}}} - \underbrace{\mathcal{H}[p^{f_{\Omega}, g_{\Omega}^{A}}(l_1, l_2)]}_{\substack{\text{entropy of the joint distribution}\\ \text{representing $f_{\Omega}, g_{\Omega}^{A}$}}} \end{equation} Following global registration, local registration is performed by embedding a control-point grid using the Incremental Free Form Deformation (IFFD) method, with the sum of squared differences (SSD) as the objective function to minimize. The local registration is also aided by a multi-resolution framework, which performs deformations on control points of varying resolution in order to account for small local deformations in the shape. In cases where prior information on feature-point correspondences between the two shapes is available, this prior knowledge can be added as a plug-in term in the overall local registration objective. The method was applied to the statistical modeling of anatomical structures, 3D face scans and mesh registration.

Robust Point Set Registration Using Gaussian Mixture Models Jian, Bing and Vemuri, Baba C.

The point pattern matching problem has been an active research topic in the computational geometry and pattern recognition communities. Point sets arise in a variety of applications, and their registration is encountered in stereo matching, feature-based image registration and so on. Mathematically, the problem of registering two point sets translates to the following: let $\{\mathcal{M}, \mathcal{S}\}$ be two finite point sets which need to be registered, where $\mathcal{M}$ is the "moving" point set and $\mathcal{S}$ is the "fixed" or scene set. A transformation $\mathcal{T}$ is then calculated which transforms points from $\mathcal{M}$ to $\mathcal{S}$.
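Before describing their solution, here is the promised sketch of the signed distance map from the Huang et al. summary above. A map of the kind defined by $\phi_S$ can be computed with standard tools; the binary-mask convention (True inside the shape) is an assumption of this illustration.

import numpy as np
from scipy import ndimage

def signed_distance(mask):
    # mask: boolean array, True for points inside the shape region R_S.
    inside = ndimage.distance_transform_edt(mask)    # D((x,y), S) for interior points
    outside = ndimage.distance_transform_edt(~mask)  # D((x,y), S) for exterior points
    return inside - outside  # positive inside, roughly zero on the boundary, negative outside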
To this end, Jian and Vemuri propose a Gaussian Mixture Model based representation of point sets, which are registered by minimizing a cost function using a slightly modified version of the Iterative Closest Point (ICP) approach. In this setting, the problem of point set registration becomes that of aligning two Gaussian mixture models by minimizing the discrepancy between them. The cost function for this optimization is chosen as a closed-form version of the $L_2$ distance between Gaussian mixtures, which keeps the algorithm computationally efficient (a small sketch of this distance follows below). The main reason for choosing Gaussian mixture models to represent discrete point sets is that it directly corresponds to interpreting point sets as data randomly sampled from a distribution of random point locations, which models the uncertainty of point sets during feature extraction well. The second reason is that hard discrete optimization problems encountered in the point matching literature become tractable continuous optimization problems. The probability density function of a general Gaussian mixture is defined as: \begin{equation} p(x) = \sum_{i=1}^{k} w_i \, \phi(x \mid \mu_i, \Sigma_i), \qquad \phi(x \mid \mu_i, \Sigma_i) = \frac{\exp\left[-\frac{1}{2}(x - \mu_i)^T \Sigma_{i}^{-1}(x-\mu_i)\right]}{\sqrt{(2\pi)^d \,|\det(\Sigma_i)|}} \end{equation} The GMM for a given point set is constructed as follows: i) the number of GMM components equals the number of points in the point set, and every component is weighted equally; ii) each component's mean vector is the spatial location of the corresponding point; iii) all components share the same spherical covariance matrix. An intuitive reformulation of the point set registration problem is then to solve an optimization problem such that a dissimilarity measure between the Gaussian mixtures constructed from the transformed model set and the fixed scene set is minimized. Here, the $L_2$ distance is chosen as the dissimilarity measure between the two Gaussian mixtures. The objective can then be minimized either by a closed-form numerical method (in case the function is convex) or by an iterative gradient-based method. The method was applied and tested on both rigid and non-rigid point set registration problems.

Nonrigid registration using free-form deformations: application to breast MR images D. Rueckert and L.I. Sonoda and C. Hayes and D.L.G. Hill and M.O. Leach and D.J. Hawkes IEEE Transactions on Medical Imaging - 1999 via Local CrossRef

Despite being an ill-posed problem, non-rigid image registration has been the subject of numerous works, which apply the framework to applications where rigid and affine transformations cannot completely model the variations between image sets. One such application of non-rigid registration is registering pre- and post-contrast breast MR images to estimate contrast uptake, which in turn is an indicator of tumor malignancy. Because of large variations between the pre- and post-contrast images caused by patient movement and by breast motion that is both global and local, registering these images is challenging, and classic methods cannot capture the exact semantics of the movements the images exhibit. To this end, the authors propose a non-rigid registration method which combines the advantages of voxel-based similarity measures like Mutual Information (MI) with non-rigid transformation models of the breast.
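Before turning to the details of that method, here is the promised sketch of the closed-form $L_2$ distance from the Jian and Vemuri summary above, for the special case of spherical, equally weighted mixtures. It relies on the standard identity $\int \phi(x \mid \mu_i, \Sigma_i)\,\phi(x \mid \mu_j, \Sigma_j)\,dx = \phi(0 \mid \mu_i - \mu_j, \Sigma_i + \Sigma_j)$; this is an illustrative sketch under those assumptions, not the authors' code.

import numpy as np

def gauss_at_zero(diff, var, d):
    # Density of a d-dimensional spherical Gaussian N(0, var*I) evaluated at 'diff'.
    return np.exp(-0.5 * np.sum(diff**2) / var) / (2 * np.pi * var) ** (d / 2)

def cross_term(A, B, var):
    # (1 / (|A||B|)) * sum_{i,j} N(a_i - b_j; 0, 2*var*I) for equally weighted components.
    d = A.shape[1]
    total = sum(gauss_at_zero(a - b, 2 * var, d) for a in A for b in B)
    return total / (len(A) * len(B))

def l2_distance(M, S, var):
    # M, S: (n, d) arrays of points; each point is the mean of one GMM component.
    # Expands the squared L2 norm: int (p - q)^2 = int p^2 - 2 int pq + int q^2.
    return cross_term(M, M, var) - 2 * cross_term(M, S, var) + cross_term(S, S, var)

A registration loop would apply a candidate transformation to $\mathcal{M}$ and minimize l2_distance over the transformation parameters.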
Returning to Rueckert et al.: the method is built around a hierarchical transformation model which captures both the global and the local motion of the breast across the pre- and post-contrast scans. The proposed method thus consists of two contributions that model the motion of the breast with a global and a local model. The global motion model consists of a 3D affine transformation parameterized by 12 degrees of freedom. The local model is based on the FFD model, built on B-splines, which are a powerful tool for modeling 3D deformable objects. The main idea behind this approach is that FFDs can deform an object by manipulating an underlying mesh of control points, with the deformation between control points interpolated by B-splines (a minimal sketch of such a B-spline deformation follows below). The formulation exposes a trade-off between computational running time and accurate modelling of the object (the breast). To achieve the best compromise, a hierarchical multi-resolution approach is implemented in which the resolution of the control mesh is increased along with the image resolution in a coarse-to-fine fashion. In addition to modeling the movement of the breast, a regularization term is added to the final objective, forcing the B-spline based FFD transformation to be smooth. The term is zero for affine transformations and only penalizes non-affine ones. The function that the method optimizes is: $\mathcal{C}(\theta, \phi) = - \mathcal{C}_{\text{similarity}}\left( I(t_0), T(I(t))\right) + \lambda \mathcal{C}_{\text{smooth}}(T)$ The optimization of this objective is performed in multiple stages using a gradient descent algorithm that steps along the gradient vector with a step size $\mu$; a local optimum is assumed once $\|\nabla \mathcal{C}\| \leq \epsilon$. To assess the quality of the proposed method, it is tested on both clinical and volunteer 3D MR data. The results show that rigid- and affine-only registration performs significantly worse than the proposed method, and that results improve with finer control-point resolution. However, the use of SSD as a quantitative metric is debatable, since the contrast-enhanced and pre-contrast images have different intensity distributions.

Non-rigid Image Registration Using Graph-cuts Tang, Tommy W. H. and Chung, Albert C. S.

Image registration is a well-studied problem in the medical image analysis community, with rigid registration taking much of the spotlight. Beyond rigid registration, non-rigid registration is of great interest due to its applications in inter-patient registration, where deformations of organs are highly pronounced. However, non-rigid registration is an ill-posed problem with numerous degrees of freedom, which makes finding the best transformation from the source to the final image very difficult. To counter this, some methods constrain the non-rigid transformation $T$ to be within certain bounds, which is not always ideal. To this end, Tang et al. propose a novel framework which casts non-rigid image registration as a graph-cut problem, which guarantees a global optimum under certain conditions. The formulation requires that each pixel in the source image has a displacement label (which is a vector) indicating its corresponding position in the floating image, according to an objective function.
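Before continuing with the graph-cut formulation, here is the promised sketch of a cubic B-spline FFD from the Rueckert et al. summary above. A 1-D version keeps it short; the grid spacing and variable names are illustrative, and in the paper the same construction is taken as a tensor product over three dimensions.

import numpy as np

def bspline_basis(u):
    # The four uniform cubic B-spline basis functions evaluated at u in [0, 1).
    return np.array([(1 - u)**3,
                     3*u**3 - 6*u**2 + 4,
                     -3*u**3 + 3*u**2 + 3*u + 1,
                     u**3]) / 6.0

def ffd_displacement(x, phi, spacing):
    # phi: 1-D array of control-point displacements on a uniform grid with given spacing.
    # Assumes 1 <= floor(x / spacing) <= len(phi) - 3 (interior of the grid).
    i = int(np.floor(x / spacing))   # index of the cell containing x
    u = x / spacing - i              # relative position inside that cell
    B = bspline_basis(u)
    return sum(B[l] * phi[i + l - 1] for l in range(4))  # weighted sum of 4 neighbours

Because each basis function has local support, moving one control point only deforms a small neighbourhood, which is what makes the multi-resolution, coarse-to-fine scheme described above practical.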
A smoothness constraint is also added to ensure that the values of the transformation function $T$ are meaningful and stay within natural limits (no large displacement should occur between neighbouring pixels). The authors propose the following objective function, which they then solve using graph-cut methods: $D^* = \arg\min_{D} \sum_{x\in X}\|I(x) - J(x + D(x))\| + \lambda \sum_{(x,y)\in \mathcal{N}}\|D(x) - D(y)\|$ This formulation is not yet fully discrete, in the sense that $D$ is still unbounded and can vary over $(-\infty, \infty)$. To allow optimization using graph-cuts, the transformation function $D$ is mapped to a finite set $\mathcal{W} = \{0, \pm s, \pm 2s, \ldots, \pm ws\}^d$. With this discretization, the objective can be minimized via a sequence of $\alpha$-expansions (a small numpy sketch of the discrete objective follows below). An $\alpha$-expansion is a two-label problem in which the cost of assigning the label $\alpha$ is calculated on the basis of the pixel's previous label; different costs are assigned to the scenarios where a pixel keeps its original label or changes to the new label $\alpha$. This matters because the cost conditions must satisfy the inequality given by Kolmogorov \& Zabih, which then guarantees the optimality properties of the expansion moves. The method was tested on MR data from the BrainWeb dataset, affinely pre-registered and intensity-normalized to the range 0–255. The method demonstrated good qualitative results when compared to two state-of-the-art methods, DEMONS and FFD: the average intensity differences for the proposed method were much lower than the competition, while the tissue overlap was higher.

Medical image registration using mutual information Maes, Frederik and Vandermeulen, Dirk and Suetens, Paul Proceedings of the IEEE - 2003 via Local Bibsonomy

Modern medical imaging modalities like Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) allow minimally-invasive imaging of internal organs. Rapid advancement of these technologies has led to an influx of data which, along with rising clinical need, has created a demand for quantitative image interpretation in routine practice. Applications include volumetric measurements of regions of the brain and surgery or radiotherapy planning using CT/MRI images. This influx of data has opened up avenues for using multi-modality images in decision making. However, this is not always straightforward, as imaging parameters, scanner types, patient movement or anatomical changes leave the images misaligned with one another, making a direct comparison between, say, a CT and an MRI image tricky. This is formally known as the problem of image registration, and numerous computational methods have been proposed for it, which this paper surveys. Among the methods proposed for both inter- and intra-modality registration, the strategy of maximizing a Mutual Information based objective has been extremely successful at computing the registration between 3D multi-modal medical images of various organs. Mutual Information (MI) stems from the field of information theory pioneered by Shannon and, applied in the context of medical image registration, postulates that the MI between two images (say CT and MRI) is maximal when the images are aligned.
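Before detailing the MI criterion, here is the promised sketch of the discrete objective from the Tang and Chung summary above. It simply evaluates the data and smoothness terms for a candidate integer displacement field; nearest-neighbour sampling, an L1 matching cost and a 4-connected neighbourhood are simplifying assumptions of this illustration, not choices taken from the paper.

import numpy as np

def registration_energy(I, J, D, lam):
    # I, J: 2-D source and floating images; D: (H, W, 2) field of integer displacement labels.
    H, W = I.shape
    ys, xs = np.mgrid[0:H, 0:W]
    yy = np.clip(ys + D[..., 0], 0, H - 1)   # displaced row coordinates
    xx = np.clip(xs + D[..., 1], 0, W - 1)   # displaced column coordinates
    data = np.abs(I - J[yy, xx]).sum()        # matching term ||I(x) - J(x + D(x))||
    smooth = np.abs(np.diff(D, axis=0)).sum() + np.abs(np.diff(D, axis=1)).sum()
    return data + lam * smooth

In the actual method this energy is minimized over the finite label set $\mathcal{W}$ by a sequence of $\alpha$-expansion graph cuts rather than by exhaustive evaluation.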
The basic formulation of an MI based registration algorithm is as follows: let $\mathcal{A}$ and $\mathcal{B}$ be two images that are geometrically related according to a transformation $T_\alpha$, such that voxels $p$ in $\mathcal{A}$ with intensity $a$ physically correspond to voxels $T_\alpha(p)$ in $\mathcal{B}$ with intensity $b$. The joint distribution $p_{AB}(a,b)$ of $a$ and $b$, and hence their MI, depends on $T_\alpha$. The MI criterion postulates that MI is maximal for images that are geometrically aligned: \begin{equation} \alpha^* = \arg\max_{\alpha} I(A,B), \qquad I(A,B) = \sum_{a,b} p_{AB}(a,b) \log \frac{p_{AB}(a,b)}{p_A(a) \cdot p_B(b)} \end{equation} where $A$ and $B$ are two discrete random variables (a short sketch of estimating this quantity from a joint histogram follows below). An optimization algorithm is used to find the parameter set $\alpha^*$ that maximizes the MI between $A$ and $B$. Classically, Powell's multidimensional direction set method is used to optimize the objective function, but other methods exist as well. Although the formulation of the MI criterion suggests that the spatial dependence of image intensities is not taken into account, this dependence is in fact essential for the criterion to be well behaved around the registration solution. MI does not rely on pure intensity values to measure correspondence between images, but rather on their joint distribution and the relationship of co-occurrence. It also does not impose any modality-specific constraints, which makes it general enough to be applied to any problem formulation (inter- or intra-modality). MI based registration may fail when there is insufficient information in the images, for example due to low resolution, a low number of images, images that are not spatially invariant, or shading artifacts; in some of these cases the MI criterion has multiple local optima.

A minimum description length approach to statistical shape modeling Davies, Rhodri H. and Twining, Carole J. and Cootes, Timothy F. and Waterton, John C. and Taylor, Christopher J.

Active Shape Models brought with them the ability to deform intelligently to intra-shape variations according to a training set of labelled landmark points. However, the dependence of such methods on a low-noise, manually marked training set poses challenges, because of inter-observer differences that become even more pronounced in higher dimensions (3D). To this end, the authors propose a method that addresses this problem by introducing automatic shape modelling. The method is based on the minimum description length (MDL) principle, a formalization of Occam's razor in which the best hypothesis (a model and its parameters) for a given set of data is the one that leads to the best compression of the data. This essentially means that the MDL criterion can be used to learn, from a set of data points, the best hypothesis that fully describes the training set, but in compressed form. The authors use a simple two-part coding formulation of MDL which, although it does not guarantee a minimum coding length, does provide a computationally simple functional form that is suitable as an objective function for numerical optimization. The proposed objective function is: $F = \sum_{p=1}^{n_g} D^{(1)}\left(\hat{Y}^p, R, \delta \right) + \sum_{q=n_g + 1}^{n_g + n_{min}} D^{(2)}\left(\hat{Y}^q, R, \delta \right)$ The algorithm proceeds by first parameterizing a single shape using a recursive algorithm.
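Before returning to the MDL algorithm, a quick aside on the Maes et al. summary above: the MI criterion is typically estimated from a joint intensity histogram of the overlapping voxels, as in the numpy sketch below. The bin count is an arbitrary illustrative choice.

import numpy as np

def mutual_information(a, b, bins=32):
    # a, b: intensity samples of the two images at corresponding (transformed) voxel positions.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                 # joint distribution p_AB(a, b)
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal p_A(a)
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal p_B(b)
    nz = p_ab > 0                              # avoid log(0)
    return np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz]))

An optimizer (classically Powell's method, as noted above) would call this with the samples produced by each candidate transformation $T_\alpha$ and keep the $\alpha$ with the highest score.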
Returning to the MDL approach: once the recursive parameterization is complete, optimization of the objective function presented above proceeds. The algorithm first generates a parameterization for each shape recursively, to the same level. The shapes are then sampled according to the correspondence defined by the parameterization, a model is built automatically from the sampled shapes, and this model is used to evaluate the objective function. The parameterization is changed so as to converge to an optimal value of the objective function. While changing the parameterization, a "reference" shape is kept in order to avoid the points converging to a degenerate local minimum (all points collapsing to a single part of the boundary). Due to the non-convex nature of the objective function, optimization is performed using a genetic algorithm. The method was tested both qualitatively and quantitatively on several sets of outlines of 2-D biomedical objects. Multiple anatomical sites in the human body were chosen in order to show how the method performs in a variety of shape settings. Quantitatively, the resulting models were shown to be highly compact in terms of the MDL. Qualitatively, the models were able to generate shapes that respected the overall shape of the training set while still allowing a good amount of deformation without degenerating.

Multiscale Vessel Enhancement Filtering Frangi, Alejandro F. and Niessen, Wiro J. and Vincken, Koen L. and Viergever, Max A.

Delineation of vessel structures in the human vasculature is the precursor to a number of clinical applications. Typically, the delineation is performed using both 2D (DSA) and 3D techniques (CT, MR, X-ray angiography). However, decisions are still made using a maximum intensity projection (MIP) of the data. This is problematic since the MIP is also affected by other tissues of high intensity, and low-intensity vasculature may never be fully visible in the MIP next to other tissues. This calls for a type of vessel enhancement that can be applied prior to the MIP to ensure that low-intensity vessels are adequately represented for detection; it can also facilitate volumetric views of the vasculature and enable quantitative measurements. To this end, Frangi et al. propose a vessel enhancement method which defines a "vesselness measure" using the eigenvalues of the Hessian matrix as indicators. The eigenvalue analysis of the Hessian provides the direction of smallest curvature (along the tubular vessel structure). The eigenvalue decomposition of the Hessian over a spherical neighbourhood around a point $x_0$ maps out an ellipsoid with axes along the eigenvectors and magnitudes given by the corresponding eigenvalues. The method provides a framework with three eigenvalues $|\lambda_1| \leq |\lambda_2| \leq |\lambda_3|$ and heuristic rules about their absolute magnitudes in the presence of a vessel. In particular, to derive a well-formed "vesselness measure" as a function of these eigenvalues, it is assumed that for a vessel structure $\lambda_1$ will be very small (or zero). The authors also add prior information in the sense that vessels appear as bright tubes on a dark background in most images.
Hence they indicate that a vessel structure of this sort must have the following configuration of $\lambda$ values: $|\lambda_1| \approx 0$, $|\lambda_1| \ll |\lambda_2|$ and $|\lambda_2| \approx |\lambda_3|$. Using a combination of these $\lambda$ values, the authors propose the following Hessian-based vessel measure: \begin{equation} \mathcal{V}_0(s) = \begin{cases} 0 & \text{if } \lambda_2 > 0 \text{ or } \lambda_3 > 0 \\ \left(1 - \exp\left(-\dfrac{\mathcal{R}_A^2}{2\alpha^2}\right)\right)\exp\left(-\dfrac{\mathcal{R}_B^2}{2\beta^2}\right)\left(1 - \exp\left(-\dfrac{S^2}{2c^2}\right)\right) & \text{otherwise} \end{cases} \end{equation} The three quantities that make up the measure are $\mathcal{R}_A$, $\mathcal{R}_B$ and $S$. The first, $\mathcal{R}_A$, refers to the largest-area cross-section of the ellipsoid represented by the eigenvalue decomposition and distinguishes between plate-like and line-like structures. The second, $\mathcal{R}_B$, accounts for the deviation from a blob-like structure, but cannot distinguish between a line-like and a plate-like pattern. The third, $S$, is simply the Frobenius norm of the Hessian matrix, which accounts for the lack of structure in the background and is high where the contrast with the background is high. The vesselness measure is then analyzed at different scales to ensure that vessels of all sizes are detected (a small numpy sketch of this measure follows below). The method was applied to 2D DSA images, which are obtained from X-ray projections before and after a contrast agent is injected, as well as to 3D MRA images. The results showed promising background suppression when vessel enhancement filtering was applied before computing the MIP.

Active Shape Models - Their Training and Application Cootes, Timothy F. and Taylor, Christopher J. and Cooper, David H. and Graham, Jim Computer Vision and Image Understanding - 1995 via Local Bibsonomy

Object detection in 2D scenes has mostly been performed using model-based approaches, which model the appearance of certain objects of interest. Although such approaches tend to work well in cluttered, noisy and occluded settings, these models fail to adapt to the intra-object variability that is apparent in many domains like medical imaging, where organ shapes vary considerably, and this has led to the need for a more robust approach. To this end, Cootes et al. propose a training based method which adapts and deforms according to the per-object variations present in the training data, but still maintains rigidity across different objects. The proposed method relies on a hand-labelled training set featuring a set of points called "landmark points" that describe specific positions on an object; for a face, for example, the points may be "nose end, left eyebrow start, left eyebrow mid, left eyebrow end" and so on. Next, the landmark points across the whole training set are aligned using affine transformations by minimizing a weighted sum of squared differences (SSD) between corresponding landmark points of the training examples. The weights in this optimization reflect the apparent variance of each landmark point: the higher the variance across training samples, the lower the weight. In order to "summarize" the shape in the high-dimensional space of landmark-point vectors, the proposed method uses Principal Component Analysis (PCA). PCA provides the eigenvectors that point in the directions of largest variation of the points in the $2n$-dimensional space, while the corresponding eigenvalues indicate the significance of each eigenvector.
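Before continuing with the shape model, here is the promised sketch of the vesselness measure from the Frangi et al. summary above, evaluated from precomputed, magnitude-sorted Hessian eigenvalues. The parameter values are commonly used illustrative defaults, not prescriptions; in the paper, $c$ is chosen per image.

import numpy as np

def vesselness(l1, l2, l3, alpha=0.5, beta=0.5, c=15.0):
    # l1, l2, l3: Hessian eigenvalue arrays with |l1| <= |l2| <= |l3|
    # (bright vessels on a dark background assumed, as above).
    Ra = np.abs(l2) / np.maximum(np.abs(l3), 1e-12)                  # plate vs. line
    Rb = np.abs(l1) / np.maximum(np.sqrt(np.abs(l2 * l3)), 1e-12)    # blob deviation
    S = np.sqrt(l1**2 + l2**2 + l3**2)                               # Frobenius norm
    v = (1 - np.exp(-Ra**2 / (2 * alpha**2))) \
        * np.exp(-Rb**2 / (2 * beta**2)) \
        * (1 - np.exp(-S**2 / (2 * c**2)))
    return np.where((l2 > 0) | (l3 > 0), 0.0, v)  # suppress non-vessel responses

In a multiscale implementation this is evaluated on Gaussian-smoothed Hessians at several scales and the maximum response per voxel is kept.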
Returning to the shape model: the best $t$ eigenvectors are chosen such that they describe a certain percentage of the variance of the data. Once this is done, the model can produce any shape as a deviation from the mean shape of the object, using the equation $x = \bar{x} + Pb$, where $\bar{x}$ is the mean shape, $P$ is the matrix of the $t$ eigenvectors and $b$ is a vector of free weights that can be tuned to generate new shapes. The values of $b$ are constrained to stay within boundaries determined from the training set, which is essentially what restricts the model to deform only as the training set allows (a compact sketch of this shape model follows at the end of this summary). The method was tested on a variety of shapes, namely resistor models in electric circuits, a heart model, a worm model and a hand model. The models thus generated were robust and could successfully generate new examples by varying the values of $b$ along a straight line. For the worm model, however, it was found that varying the values of $b$ only along a line may not always be suitable, especially where the different dimensions of $b$ have some existing non-linear relationship. Once a shape model is generated, it is used to detect objects/shapes in new images. This is done by first initializing the model points on the image; the model points are then adjusted towards the shape using information from the image such as edges. The adjustment is performed iteratively, applying constraints on the calculated values of $dX$ and $dB$ so that they respect the training set, until the model points converge to the actual shape of interest in the image. One drawback of the proposed method is its high sensitivity to noise in the training data annotations. Also, the relationship between the variables in $b$ is not entirely clear and may negatively affect models when a non-linear relationship exists. Finally, convergence is somewhat dependent on the initialization of the model points and on local edge features for guidance, which may fail in some instances.

Random Walks for Image Segmentation Grady, Leo

Image segmentation has been a research topic in the computer vision domain for decades. A multitude of methods have been proposed for segmentation, but most depend on high-level user input to guide a contour or boundary towards the real boundary. To come closer to a fully or partially automated solution, a novel method is proposed for performing multilabel, interactive image segmentation using the random walk algorithm as the fundamental driver of the segmentation. The problem is formulated as follows: given a small number of pixels with user-defined (or pre-defined) labels, compute, for each unlabeled pixel, the probability that a random walker starting at that pixel first reaches one of the pre-labeled pixels. Each pixel is then assigned the label for which this probability is maximal. This leads to high-quality segmentations of an image into $K$ components. The algorithm is based on image graphs, where pixels are represented as nodes connected by edges to their 8-connected neighbours. In this paper, a novel approach to the $K$-class image segmentation problem is proposed which utilizes user-defined seeds representing example regions of the image belonging to the $K$ objects. Each seed specifies a location with a user-defined label.
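Before describing the algorithm, here is the promised compact sketch of the point-distribution model from the Active Shape Models summary above: PCA on aligned landmark vectors, with the shape parameters $b$ clamped to limits derived from the training set. The $\pm 3$ standard-deviation bound is a common convention, assumed here rather than taken from the paper.

import numpy as np

def build_shape_model(X, t):
    # X: (n_samples, 2n) matrix of aligned landmark vectors.
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigval)[::-1][:t]       # keep the t largest modes of variation
    return mean, eigvec[:, order], eigval[order]

def generate_shape(mean, P, eigval, b):
    # Clamp b so generated shapes stay plausible with respect to the training set.
    b = np.clip(b, -3 * np.sqrt(eigval), 3 * np.sqrt(eigval))
    return mean + P @ b                         # x = x_bar + P b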
Returning to the random walker: the algorithm labels an unseeded pixel by resolving the question: given a random walker starting at this location, what is the probability that it first reaches each of the $K$ seed points? This calculation may be performed exactly, without simulating a random walk. Through it, the algorithm assigns to each pixel a $K$-tuple vector specifying the probability that a random walker starting from that unseeded pixel first reaches each of the $K$ seed points. A final segmentation is derived from these $K$-tuples by selecting, for each pixel, the most probable seed destination of the random walker. The graph weights are a function of the pixel intensities, specifically $w_{ij} = \exp(-\beta(g_i - g_j)^2)$. The algorithm thus biases the random walker to avoid crossing sharp intensity gradients, which leads to quality segmentations that respect object boundaries (including weak boundaries). The algorithm exposes only one free parameter $\beta$ and can be combined with other approaches involving pre- and post-filtering techniques. Additionally, the algorithm provides on-the-fly correction of previously detected boundaries in a computationally efficient way (a toy sketch of the underlying linear system follows below).

Graph Cuts and Efficient N-D Image Segmentation Boykov, Yuri and Funka-Lea, Gareth International Journal of Computer Vision - 2006 via Local Bibsonomy

Over the last decade and a half, a plethora of image segmentation algorithms have been proposed, which can be grouped into roughly four categories defined by a combination of two labels: explicit or implicit boundary representation, and variational or combinatorial methods. While classic methods like Snakes [1] and Level-Sets [2] belong to the explicit/variational and implicit/variational categories, other algorithms fall under the combinatorial domain: DP or path-based algorithms, which are explicit/combinatorial, and finally Graph-Cuts, which are implicit/combinatorial. The main difference between the categories is the space of solutions in which the search is performed: for variational methods, the search space is $\mathcal{R}^\infty$, while for combinatorial methods it is confined to $\mathcal{Z}^n$. An obvious advantage of combinatorial methods seems to be better computational performance, but they also provide a globally optimal solution, global to the whole image. This makes the algorithm's performance independent of numerical-stability design decisions and dependent only on the quality of the global descriptors. Hence these algorithms provide a highly general, globally optimal framework which can be applied to a variety of problems, including image segmentation. Graph-Cut methods are based on the $s$-$t$ cut of a given image graph (where pixels are nodes and edges connect 8-connected neighbours). An $s$-$t$ cut of a graph $\mathcal{G} = \{V,E\}$ is a subset of edges $C \subset E$ whose removal completely separates the graph into individual components containing the terminal nodes $s$ and $t$, which represent the foreground and background of the image. A cut in an image graph is optimal if its cost (defined as $|C| = \sum_{e\in C}w_e$) is minimal; this corresponds to segmenting the image with a desirable balance of boundary and regional properties.
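Before continuing with graph-cuts, here is the promised toy sketch of the random walker linear system from the Grady summary above. The exact first-arrival probabilities solve a Dirichlet problem on the graph Laplacian; a dense solver is used below for clarity, whereas a real implementation would use sparse matrices.

import numpy as np

def random_walker_probs(L, seeds, labels, K):
    # L: (N, N) graph Laplacian built from the weights w_ij above;
    # seeds: indices of seeded pixels; labels: their classes in {0, ..., K-1}.
    N = L.shape[0]
    unseeded = np.setdiff1d(np.arange(N), seeds)
    M = np.zeros((len(seeds), K))
    M[np.arange(len(seeds)), labels] = 1.0        # one-hot seed assignments
    Lu = L[np.ix_(unseeded, unseeded)]            # Laplacian block over unseeded pixels
    B = L[np.ix_(seeds, unseeded)]                # coupling block between seeds and the rest
    X = np.linalg.solve(Lu, -B.T @ M)             # K-tuple of first-arrival probabilities
    return unseeded, X                            # each row of X sums to 1

The final labeling is then np.argmax over the columns of X, exactly the "most probable seed destination" rule described above.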
Returning to graph-cuts: the framework exposes a general segmentation energy function (which constitutes the "cost") as a combination of a regional term and a boundary term, given as $E(A) = \lambda \cdot R(A) + B(A)$. The regional term can model the a priori distribution of the pixel classes (the probability of a pixel belonging to background or foreground). The boundary term can be represented by any boundary feature, such as local intensity gradients, zero-crossings, gradient direction or geometric costs. Although the region and boundary terms force the algorithm to find a boundary that strikes a good balance between the two, a lack of information in either or both terms may lead to an incorrect segmentation. To offset this, hard constraints can be added to the energy function. The constraints can be of various kinds: for instance, a term stating that pixels of particular intensities must belong to either foreground or background; the constraints can also be shape-based, where shapes like circles or ellipses are enforced on the final segmentation. The proposed algorithm was applied to a variety of 2D and 3D images; note that graph-cut generalizes to N-D segmentation problems as well. A number of qualitative observations are reported. However, there is a lack of quantitative grounding of the algorithm's performance against other state-of-the-art algorithms, including ones not from the same category as graph-cut.

Interactive live-wire boundary extraction Barrett, William A. and Mortensen, Eric N. Medical Image Analysis - 1997 via Local Bibsonomy

Edge, contour and boundary detection in 2D images has been an area of active research with a variety of algorithms. However, due to the wide variety of image types and content, developing automatic segmentation algorithms has proven challenging, while manual segmentation is tedious and time consuming. Earlier algorithms approaching this task have tried to incorporate higher-level constraints, energy functionals (snakes) and global properties (graph-based methods). Still, these approaches do not entirely fulfill the fully-automated criterion, for a variety of reasons. To this end, Barrett et al. propose a graph-based boundary extraction algorithm called interactive Live-Wire, an extension of the original live-wire algorithm presented in Mortensen et al. [1]. The algorithm is built on a reformulation of segmentation in terms of graphs: an image $I$ is converted to an undirected graph with edges from each pixel $p$ to all of its 8-connected neighbours, and each pixel (node) is assigned a local cost according to a function described below. The segmentation task then becomes the problem of finding the path from a pixel $p$ (seed) to another pixel $q$ (free goal point) whose cumulative cost is minimal. The local cost function is defined as $l(p,q) = w_G \, f_G(q) + w_Z \, f_Z(q) + w_D \, f_D(p,q)$, where the $w_i$, $i \in \{G, Z, D\}$, are weight coefficients controlling the relative importance of the terms. The three terms making up the local cost function are the gradient magnitude ($f_G(q)$), the Laplacian zero-crossing feature ($f_Z(q)$) and the gradient direction feature ($f_D(p,q)$). The first term $f_G(q)$ is a strong measure of edge strength and is heavily weighted. The term $f_Z(q)$ provides a second-degree measure of edge strength in the form of zero-crossing information. The third term $f_D(p,q)$ adds a smoothness constraint to the live-wire boundary by assigning a high cost to rapidly changing gradient directions.
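At its core, finding the minimum cumulative-cost path just described is Dijkstra's shortest-path algorithm on the pixel graph. The sketch below is a generic version with a caller-supplied local cost, not the authors' implementation; it assumes the goal is reachable from the seed.

import heapq

def live_wire(seed, goal, neighbors, local_cost):
    # neighbors(p) yields the 8-connected neighbours of pixel p;
    # local_cost(p, q) is the weighted cost l(p, q) described above.
    dist, prev = {seed: 0.0}, {}
    heap = [(0.0, seed)]
    while heap:
        d, p = heapq.heappop(heap)
        if p == goal:
            break
        if d > dist.get(p, float("inf")):
            continue  # stale queue entry
        for q in neighbors(p):
            nd = d + local_cost(p, q)
            if nd < dist.get(q, float("inf")):
                dist[q], prev[q] = nd, p
                heapq.heappush(heap, (nd, q))
    path = [goal]
    while path[-1] != seed:
        path.append(prev[path[-1]])
    return path[::-1]  # minimum cumulative-cost boundary from seed to goal

In interactive use, the path from the seed to every pixel can be precomputed once, so the boundary snaps to the cursor as the free point moves.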
The algorithm also offers some further features, namely boundary freezing, on-the-fly learning and data-driven path cooling. Boundary freezing proves useful when the live-wire segmentation digresses from the desired object boundary during interactive use: the boundary can be "frozen" right before the digression point by specifying another seed point, up to which the boundary is fixed and not allowed to change. On-the-fly learning adds robustness by learning the underlying cost distribution of a known "good" boundary section and using it to guide the live-wire to follow a similar distribution. Data-driven path cooling lets the live-wire generate new seed points automatically as a function of image data and path properties: pixels on "stable" paths cool down and eventually freeze, producing new seed points. The results report that the average time taken to segment a region with live-wire was roughly 4.6 times shorter than manual human tracing, with the same accuracy as manual tracing and high reproducibility. However, the method does not provide a way to "snap out" of an \textit{automatically} frozen live-wire segment; on-the-fly training can fail where the edge characteristics of the object change too quickly; and not much implementation-related information is provided, especially for the freezing and training parts.

STACS: new active contour scheme for cardiac MR image segmentation Pluempitiwiriyawej, Charnchai and Moura, José M. F. and Wu, Yi-Jen Lin and Ho, Chien

Automated segmentation of anatomical structures of interest from medical images is a well-grounded field of research in medical imaging. One such problem is segmenting the whole heart region from a sequence of magnetic resonance images (MRI), which is currently done manually and is time consuming and tedious. Although many automated techniques exist, the task remains challenging due to the complex nature of the problem, partly because of the low contrast between the heart and nearby tissue; moreover, many methods are unable to incorporate prior information into the process. To this end, Pluempitiwiriyawej et al. propose a version of the active contour, energy-minimization approach to segment the whole heart region, including the epicardium and the left and right ventricular endocardia. The proposed method follows the framework laid out by Chan and Vese \cite{Chan2001}, but with a modified energy function consisting of four terms. The energy function is given below, where $C$ is the contour represented as a level set function $\phi(x,y)$: $J(C) = \lambda_1 J_1(C) + \lambda_2 J_2(C) + \lambda_3 J_3(C) + \lambda_4 J_4(C)$ The coefficients $\lambda_{1..4}$ determine the weights of the terms $J_{1..4}$. The first term $J_1(C)$ introduces stochastic models $\mathcal{M}_1, \mathcal{M}_2$ for the regions inside and outside the active contour $C$; the models dictate the probability distributions from which the image intensities making up the inside and outside regions of the contour are sampled. The negative log of this term is minimized, which essentially maximizes the probability $p(u \mid C, \mathcal{M}_1, \mathcal{M}_2)$ given the active contour $C$ and the models $\mathcal{M}_1, \mathcal{M}_2$. The second term $J_2(C)$ is designed in the spirit of the classical Snakes \cite{Kass1988}, in the sense that it uses edges to guide the contour towards the structure of interest.
For this term, a simple edge map is used after convolution with a Gaussian filter that smooths out noise. The term $J_3(C)$ encodes a shape prior which constrains the contour to follow an elliptical shape, guiding it in conjunction with the region and edge information. The final term $J_4(C)$ encodes the total Euclidean arc length of the contour, forcing the contour to be "smooth", without rough edges. Minimizing the energy function follows a three-task approach. The first task estimates the stochastic model parameters $\mu_k, \sigma^2_k$ by fixing the position of the initial contour $C$, taking derivatives of $J$ with respect to the model parameters and setting them to zero. The second task estimates the parameters of the ellipse using the least squares method. The third and final task evolves the contour using the parameters estimated in tasks one and two, such that it minimizes the function $J$. The method also performs stochastic relaxation by dynamically changing the values of the parameters $\lambda_1, \lambda_2, \lambda_3, \lambda_4$ as the optimization proceeds: the intuition is that when optimization starts, the edge and region terms should guide the contour, and as the process approaches its end, the shape prior and contour length terms should carry more weight in order to regularize the effective shape of the contour. The study used 48 MRI studies acquired by imaging rat hearts and compared the proposed method with two earlier methods, namely Xu and Prince's GVF \cite{ChenyangXu1998} and Chan and Vese \cite{Chan2001}. The authors also design a new quantitative metric, a modification of the Chamfer matching technique \cite{Barrow}. The reported results are in excellent agreement with the gold-standard hand-traced contours; however, the similarity values of the other methods against the human gold standard were not reported.

Active contours without edges T.F. Chan and L.A. Vese IEEE Transactions on Image Processing - 2001 via Local CrossRef

Typically, energy-minimization or snakes based object detection frameworks evolve a parametrized curve guided by some form of image gradient information. Due to this heavy reliance on gradients, the approaches tend to fail in scenarios where this information is misleading or unavailable; this cripples the snake and renders it unusable as it gets stuck in a local minimum away from the actual object. Moreover, the parametrized snake lacks the ability to model multiple evolving curves in a single run. To address these issues, Chan and Vese introduced a new framework which uses region based information to guide the curve and aims to solve the minimal partition problem formulated by Mumford and Shah. The framework is built upon the following energy, where $C$ is a level-set formulation of the curve: \begin{equation} F(c_1, c_2, C) = \mu \cdot \text{Length}(C) + \nu \cdot \text{Area}(\text{inside}(C)) + \lambda_1 \int_{\text{inside}(C)}|u_0(x,y) - c_1|^2 \,dx\,dy + \lambda_2 \int_{\text{outside}(C)}|u_0(x,y) - c_2|^2 \,dx\,dy \end{equation} The framework essentially divides the image into two regions (per curve), referred to as the inside and outside of the curve. The first two terms of the equation control the physical aspects of the curve, namely its length and the area inside it, with their contributions controlled by the two parameters $\mu$ and $\nu$. The image forces in this equation correspond to the third and fourth terms, which are identical in form but act in the respective regions identified by the curve.
The terms use $c_1$ and $c_2$, the mean intensity values inside and outside the curve respectively, to drive the curve towards a minimum where both regions are homogeneous with respect to their mean intensity values (a short sketch of the resulting updates follows at the end of this summary). The proposed framework also uses an improved representation of the curve in the form of a level set function $\phi(x, y)$, which has many numerical advantages and naturally supports multiple curves evolving during a single run, in contrast to the traditional snakes model where only one curve can be evolved at a time. The unknown function $\phi$ is computed using the Euler-Lagrange equations, formulated using a regularized Heaviside function $H$ and Dirac measure $\delta$. The proposed framework was applied to numerous challenging 2D images of varying degrees of difficulty. It was capable of segmenting point clouds decomposed into 2D images, objects with blurred boundaries and contours without gradients, all without requiring image denoising. Due to the formulation of the $\phi$ approximation routine, the framework tends to find the actual global minimum independently of the initial position of the curve. However, the choice of the parameters $\lambda_{1,2}, \mu, \nu$ is made heuristically and seems to be problem dependent. Also, the framework's implicit dependency on absolute image intensities in the regions inside and outside the curve fails in specific cases where the two averages coincide, although the authors propose using curvature and orientation information from the initial image $u_0$ in such cases.

Snakes: Active contour models Michael Kass and Andrew Witkin and Demetri Terzopoulos International Journal of Computer Vision - 1988 via Local CrossRef

Low-level tasks such as edge, contour and line detection are an essential precursor to most downstream image analysis processes. However, most approaches targeting these problems work as isolated and autonomous entities, without using any high-level image information such as context, global shapes or user-level input. This leads to errors that propagate through the pipeline without any opportunity for later correction. To address this problem, Kass et al. investigate an energy-minimization framework for edge, line and contour detection in 2D images. Although energy minimization had been used for similar tasks before, Kass et al.'s framework exposes a novel external force factor, which allows external forces or stimuli to guide the "snake" towards a "correct answer". Moreover, the framework exhibits active behaviour, since it is designed to always minimize its energy functional. A "snake" is a controlled-continuity spline under the influence of the forces in the energy functional. The energy functional is made up of three terms, where $v(s)$ is the parametrized snake: $ E_{snake}^{*} = \int_{0}^{1} \left[ E_{internal}(v(s)) + E_{image}(v(s)) + E_{constraint}(v(s)) \right] ds $ The internal energy term depends entirely on the shape of the curve and constrains the snake to be continuous and well behaved. It encapsulates two terms which control the degree of stretchiness of the curve (based on the first derivative of the curve) and ensure that the curve does not bend too much (using the second derivative of the curve). The image energy term controls what kind of salient features the snake tracks.
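Before finishing with the snake's remaining terms, here is the promised sketch of the Chan–Vese region updates from the summary above. It shows one step of the alternation between updating $c_1, c_2$ and evaluating the pointwise data force; for brevity, the regularized Heaviside of the paper is approximated by a hard threshold on $\phi$.

import numpy as np

def update_region_means(u0, phi):
    # u0: image; phi: level-set function, positive inside the curve C.
    inside, outside = phi > 0, phi <= 0
    c1 = u0[inside].mean() if inside.any() else 0.0    # mean intensity inside C
    c2 = u0[outside].mean() if outside.any() else 0.0  # mean intensity outside C
    return c1, c2

def region_force(u0, c1, c2, lam1=1.0, lam2=1.0):
    # Pointwise data term driving the level-set evolution towards homogeneous regions.
    return -lam1 * (u0 - c1)**2 + lam2 * (u0 - c2)**2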
Returning to the snake's image energy: this force encapsulates three terms, corresponding to the presence of lines, edges and terminations. The line term uses the raw image intensity as its energy, while the edge term uses the negative squared image gradient, making the snake attracted to contours with large image gradients. The termination term allows the snake to find the ends of line segments and corners in the image; combining the line and termination terms makes the snake attracted to edges and terminations. The constraint term models external stimuli, which can come from high-level processes or from user intervention; the framework allows attaching a spring to the curve to constrain the snake or pull it in a desired direction. To test the framework, a user interface called "Snake Pit" was created, allowing user control in the form of spring attachments. The overall approach was tested on a number of different images, including ones with contour illusions. The framework was also extended to stereo matching and motion tracking: for stereo, an extra energy term constrains the snakes in the two disparate images to stay close in coordinate space, modeling the fact that disparities between the images do not change too rapidly. The framework suffers, however, when subjected to a high rate of change in both stereo matching and motion tracking problems. The proposed framework performed acceptably in many challenging scenarios, but its underlying reliance on image gradients to follow edges and contours may fail in cases where little gradient information is present in the images.
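Finally, the snake's internal energy described above is usually discretized with finite differences along the contour; a minimal sketch follows, treating the snake as a closed contour, with $\alpha$ and $\beta$ as illustrative stretching and bending weights.

import numpy as np

def internal_energy(v, alpha=0.1, beta=0.1):
    # v: (n, 2) array of snake points, treated as a closed contour.
    d1 = np.roll(v, -1, axis=0) - v                               # first differences: stretching
    d2 = np.roll(v, -1, axis=0) - 2 * v + np.roll(v, 1, axis=0)   # second differences: bending
    return alpha * np.sum(d1**2) + beta * np.sum(d2**2)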
Intergenerational transmission of education: the relative importance of transmission channels Florian Wendelspiess Chávez Juárez Latin American Economic Review (2015) 24:1. Received: 24 September 2013; Accepted: 12 November 2014

Abstract: This paper aims at quantifying the relative importance of the different transmission channels generating the high levels of intergenerational correlation in education, especially in Latin America. A simultaneous equations model is applied to rich survey data from Mexico. The results show that the economic situation of the family has the highest impact, even more than the heritability of cognitive abilities. The long-run economic situation seems to matter more than the current consumption level. Parental education affects the schooling outcome directly but also indirectly through the economic situation, which is particularly true for the father. Keywords: Intergenerational transmission of education; IQ transmission

Education is a main ingredient for a successful life in modern societies. However, in many countries the opportunities to become well educated strongly depend on the family background, particularly in Latin America. As a result, we observe very low intergenerational mobility. Understanding the mechanisms generating this intergenerational persistence in education is essential to target policy measures adequately. In this paper, I try to shed some light on these mechanisms by applying a simultaneous equations model to data from Mexico. The main goal of the study is to estimate the relative importance of different transmission channels and their interactions. The three main mechanisms put forward in recent theoretical and empirical contributions can be broadly described as the biological, the economic and the direct education-to-education channel. The biological channel refers to the genetic transmission of ability, often measured by the IQ, which explains a part of the relationship (Anger and Heineck 2010; van Leeuwen et al. 2008; Björklund et al. 2010; Black et al. 2009). Poor families facing credit constraints are an example of the economic channel: because they cannot borrow against the expected future earnings of their offspring, a link is generated between the socioeconomic situation of the parents and the schooling of their children (see for instance Black and Devereux 2010; Attanasio and Kaufmann 2009; Stinebrickner and Stinebrickner 2007; Carneiro and Heckman 2002; Alfonso 2009). Finally, a higher return to education for children with highly educated parents is an argument for the direct education-to-education channel (Black and Devereux 2010); it might also include preferences for education, non-cognitive skills, aspirations and many other factors. The empirical literature focuses on estimating the causal effect of parental education on children's education. Holmlund et al. (2011) review this literature and propose a comparison of different methods applied to the same dataset from Sweden. They conclude that the estimates differ substantially across identification strategies and that no method is perfect. They find relatively modest causal effects of parental education and point to the importance of analyzing in more detail the other mechanisms explaining the intergenerational transmission of education.
They hypothesize about the possibility of an indirect effect of education through the better socioeconomic environment that can be offered to children. This conclusion on the need to better understand the mechanisms is shared by recent literature surveys such as Black and Devereux (2010), Björklund and Jäntti (2009) and Piketty (2000), who coherently argue that more empirical research must be undertaken to understand the mechanisms behind educational mobility and social mobility in general. Black and Devereux (2010, p. 69) conclude that "[...] there is still much work to do to pin down which family background factors are most important" and Björklund and Jäntti (2009, p. 516) argue that "a major challenge for future research is to find out what in the family other than income is important for the future of children". This study tries to contribute in the proposed direction by moving from the estimation of one single causal effect to the estimation of a larger system of effects, incorporating the three channels outlined above simultaneously. This focus on several mechanisms at the same time aims at a better understanding of the larger picture: the whole process of intergenerational transmission of education. In this respect, the approach should be seen as a complement to single causal effect estimation and not as an alternative. The choice of Mexican data is primarily motivated by the importance of high intergenerational correlations in education in Latin America. Hertz et al. (2007) compare the educational mobility of 42 countries, including 7 Latin American ones; these seven countries take the first seven places when ranked according to their intergenerational education correlation. The country with the highest correlation is Peru (0.66), followed by Ecuador, Panama and Chile. Within Latin America, Mexico displays relatively high intergenerational persistence in education. Dahan and Gaviria (2001) compare 16 Latin American countries using data from the late 1990s and find that Mexico has the second lowest intergenerational mobility level, behind El Salvador. de Hoyos et al. (2010) use recent data on social mobility in Mexico. They report correlations between the years of education of parents and children, finding the highest correlation of about 0.6 for the children's cohorts born between 1942 and 1951, followed by a reduction of the correlation to about 0.5 for the 1962–1971 cohort and finally a new increase of the correlation to 0.55 for the youngest cohort, composed of children born between 1972 and 1981. According to the same authors, this recent increase is even larger when different data sources are used. The same pattern of increasing educational mobility prior to the economic crisis in the 1980s and a subsequent decrease was found by Binder and Woodruff (2002), who use different cohorts to estimate the intergenerational link. Hence, the Mexican case is not only interesting on its own but may be representative of other Latin American countries. Moreover, another advantage of Mexico is the high quality of the available data; the suitability of the survey for this study is underlined by the availability of cognitive ability scores. Nevertheless, the analysis faces a series of empirical challenges in the form of trade-offs. This study tries to find a balance between a more complete model with very high data requirements and a simpler model in which less data are lost due to unavailable information.
The main result is that even when controlling for parental education and ability, the economic situation of the family has the largest direct effect on the schooling outcome of children. This important source of inequality of opportunity could be reduced by policy interventions targeting the link between economic requirements and schooling. Moreover, the estimation shows that there are important interactions between the channels, suggesting that the exclusion of some channels might seriously bias estimates. In Sect. 2 I review the literature on the mechanisms of educational mobility, motivating the empirical model I present in Sect. 3. In Sect. 4 I describe the data used, especially the needed transformations, in detail and present some descriptive evidence. In Sect. 5 I present the main results, which are complemented by some figures in "Appendix B". Finally, Sect. 6 concludes the paper.

2 Theory and empirical evidence on educational mobility

Intergenerational mobility in education is a complex phenomenon that does not rely on a single mechanism. The literature has identified three main channels of transmission (Chevalier 2004). The first channel is the biological transmission of ability, the second refers to the dependence of schooling outcomes on the economic situation of the parents and the third deals with direct education-to-education effects. In this section, I present some theoretical and empirical contributions to the understanding of these channels.

2.1 Ability transmission through genes: the biological channel

The direct transmission of abilities, which is not limited to simple IQ transmission, represents a biological explanation of the phenomenon. For instance, Becker and Tomes (1979) use the term endowments acquired from parents to describe this direct transmission. They provide a theory of intergenerational transmission based on rational choices through a human capital theory approach, where the ability level of the offspring is a key determinant of the decision. Their model was subsequently extended by Loury (1981) and Solon (2004) and serves as a benchmark in many analyses. Empirically, much work has been done to determine the importance of this channel. In a meta-analysis of 212 IQ studies, Devlin et al. (1997) quantify the genetic transmission and find the broad-sense heritability of IQ to be 48 %. Social scientists put more emphasis on quantifying the overall IQ correlations between parents and children, which might also include environmental effects in addition to the pure heritability measured by Devlin et al. (1997). For instance, Anger and Heineck (2010) use German panel data with two ultra-short IQ tests to estimate the parent-offspring relation. They find that a 1-point increase in the parents' score results in a 0.45-point increase in coding speed (inherent ability) and a 0.50-point increase in word fluency scores; the estimated coefficients remain stable when control variables are included or excluded. Björklund et al. (2010) use Swedish data from military IQ tests and official registers to estimate intergenerational and sibling IQ correlations. The estimated values are all highly significant and attain values of 0.346 for father-son, 0.510 for siblings and 0.65 for twins; according to the authors, these estimates rather represent a lower bound of the true values. Black et al. (2009) find a similar father-son IQ correlation (0.38) in a comparable study with Norwegian data. van Leeuwen et al.
van Leeuwen et al. (2008) go even further by decomposing the IQ transmission. They find no evidence for cultural transmission of IQ and no indication that intelligent parents provide their children with intelligence-promoting circumstances; individual differences in intelligence are largely accounted for by genetic differences. Moreover, they find a spousal IQ correlation of about 0.33, suggesting a relatively high degree of assortative mating.

2.2 Credit constraints and the economic situation: the economic channel

According to the logic of the economic channel, the intergenerational correlation in education is the fruit of underinvestment in education by poor families. The idea that poorer families may face credit constraints that make the optimal investment in the human capital of their offspring impossible can be found, for example, in Banerjee and Newman (1994) and Loury (1981).

Empirical research has not reached a conclusion on the exact importance of credit constraints and the economic environment in general. While the impact of credit constraints seems to be relatively modest in richer countries, there is some evidence that the effect is larger in developing countries. Stinebrickner and Stinebrickner (2007) analyze the situation at a US college using panel data on students. They find that a group of students is credit constrained in consumption during their stay at the college, but that many of them are not willing to borrow. Carneiro and Heckman (2002) critically review the literature on credit constraints. Using modern US data, they compute that only about 8 % of students actually face short-term credit constraints, and they argue that long-term factors, such as the family environment during the whole schooling period of children, play a much bigger role. Winter (2007) develops a computable general equilibrium model to evaluate the role of credit constraints in the decision whether to go to college. The model is calibrated for the US economy and predicts observed patterns quite well. The findings contrast with the small shares of credit-constrained students found by other studies, and Winter argues that econometric estimates such as those in Carneiro and Heckman (2002) are downward biased. In line with the results of other studies is the observation that the share of financially constrained people has increased dramatically over the past decades. Alfonso (2009) presents a study of four Latin American countries (Mexico, Chile, Colombia and Peru). She shows that the effect of credit constraints disappears in regression analysis when controlling for long-run family variables (parental education, family assets, etc.). However, the relatively small effect of credit constraints increases from the oldest to the newest dataset used in the study. Attanasio and Kaufmann (2009) use Mexican data to analyze the relationship between post-secondary schooling decisions and subjective expectations. Among other findings on the role of expectations, they show that credit constraints are an important issue for poor Mexicans, in contrast to some of the literature on more developed countries, where these effects do not seem to be as present.

To sum up, the literature finds evidence for the existence of the economic channel in producing high intergenerational correlations in education. It remains somewhat unclear whether short-run credit constraints or the long-run economic situation is the more important determinant.
It seems, however, that in developing countries both contribute to the low educational mobility.

2.3 Education-to-education transmission

Finally, the third channel is the direct effect of parental education on the schooling attainment of children. This channel is generally known as the nurture effect, capturing the direct causal effect of parental education (Holmlund et al. 2011; Chevalier 2004). Dickson et al. (2013) show that this direct causal effect starts at very early ages and remains visible years later when comparing students' performances. The literature offers several explanations of why parental education should have a direct causal effect. One is that highly educated parents encourage their children more strongly to achieve high levels of education (Merton 1953; Boudon 1973, 1974; Sewel and Shah 1968); for instance, Steinberg et al. (1992) show that parental encouragement and parental school involvement have important effects on the school performance of children. Besides this active encouragement and involvement of parents, it can also be argued that the child's aspirations increase when parents have more education (Sewel and Shah 1968; Ermisch et al. 2006). Ermisch et al. (2006) further argue that parental education can alter the productivity of parents' time investments in children. Finally, expected returns to education may depend on parental education: Jensen (2010) shows that students with more highly educated parents tend to perceive higher returns to education. Hence, this third channel is motivated by a series of arguments and is most likely composed of different sub-channels. Distinguishing these sub-channels is beyond the scope of this paper; I consider a compound channel linking high parental education to high offspring schooling.

3.1 Conceptual model of educational mobility

Following the literature outlined in the previous section, we can illustrate the three transmission channels. Figure 1 displays the system of transmission in education suggested by the literature.

Fig. 1 Simplified conceptual system. The figure displays the conceptual framework of the analysis presented in this study. Arrows refer to direct causal effects. Only channels that are included in the empirical analysis appear in the scheme

First, there is a direct link between the abilities of parents and children, represented by the dotted line: the ability of the parents influences the ability of the children, which in turn increases their propensity for education. The economic channel is represented by gray arrows using the compound term economic situation, which includes short-term credit constraints and long-run effects of assets. Finally, the third channel, illustrated with solid black arrows, represents the direct education-to-education transmission, which rests on the various hypotheses explained above.

A channel that I do not consider in this study is the health channel. The health status of the child is likely to be influenced by the family background (Bradley and Corwyn 2002; Rosa Dias 2009; Delajara and Wendelspiess Chávez Juárez 2013), while it may also have important effects on education (Case et al. 2005; Doyle et al. 2009). There are two reasons for not including this channel, both related to the data.
First, there is a problem of timing in the health data. While the literature emphasizes the importance of the prenatal period and early childhood in the health dimension (Heckman 2006; Doyle et al. 2009; Delajara and Wendelspiess Chávez Juárez 2013), in the data I can at most observe the current health status of the child; unfortunately, no retrospective information on the child's health conditions at birth is available. Second, as I highlight in Sect. 4, the inclusion of additional variables would seriously reduce the sample and increase the risk of sample selection problems. In "Appendix E" I present some regressions including variables for health and personality traits to illustrate these problems.

Different strategies are possible to analyze such a framework empirically. One is to focus on one particular link; for instance, Holmlund et al. (2011) present different methods to estimate the causal relationship between the education of parents and children. In this paper I use two strategies. First, I use a single-equation approach, estimating all determinants of the child's educational outcome in one regression. In a second step, I move to a simultaneous equations model, estimating several equations to describe the whole system outlined in Fig. 1. I now explain the two approaches along with their intuition, advantages and challenges.

3.2 The single regression approach

The main goal of the empirical application is to estimate the intergenerational links determining the educational outcome of the child. A first empirical approach therefore consists in regressing the educational outcome on the possible intergenerational determinants while controlling for some contemporaneous effects. The intergenerational determinants are parental education and the economic situation of the family. Moreover, I control for parental age, whether the parents have an indigenous background and some variables capturing the family structure.1 The most important contemporaneous effect I control for is the cognitive ability of the child. Additionally, I control for contemporaneous effects such as child labor, government program benefits, state fixed effects and indicators for girls and rural areas.

Besides its appealing simplicity, the single regression approach has the advantage that we do not have to impose much structure on the model; only the standard assumptions of ordinary least squares must be fulfilled. One concern is that some variables may not satisfy these conditions and could be correlated with the error term, most likely because of an omitted variable bias. A first critical variable is the ability level of the child, which might also capture, for instance, motivation or the ability to perform well in an examination situation; both of these potentially omitted factors are also likely to influence the schooling outcome. To a large extent, these concerns are reduced by the type of cognitive ability measure used in this study. The measure is based on a short version of Raven's progressive matrices test, one of the most culture-free and education-independent IQ tests (Désert et al. 2009); I discuss this in more detail when describing the data in Sect. 4.
A second variable that might suffer from omitted variable bias is parental education, as parental education might itself be influenced by preferences and taste for education. If these preferences and tastes are also transmitted to the next generation, we are likely to have an endogeneity problem as well. To account for these potential endogeneity issues, I also perform instrumental variable regressions for the single regression approach. The father's and mother's cognitive abilities are used to instrument both parental education and the child's ability level. Parental cognitive ability should be a strong predictor of both parental education2 and the cognitive ability of the child, while having no direct impact on the schooling outcome of the child. For parental education, I additionally use information on the place of living of the parents when they were 12 years old: a dummy captures whether they lived in a town or not.3 The idea is that parents living in cities had substantially more access to education than parents living in rural areas, while the place of living of the parents at age 12 should not directly affect the educational outcome of their children today. In the results section I present the different tests for the instruments in detail; they clearly indicate that the instruments are valid and strong.
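To make this single-equation IV strategy concrete, the following is a minimal sketch of how such a regression could be run. It is an illustration under assumptions only: the linearmodels package stands in for whatever software was actually used, the file name and all column names are hypothetical placeholders for the MXFLS variables described in Sect. 4, and the control set is abbreviated.

```python
# Minimal sketch of the single-equation IV approach (hypothetical names).
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("mxfls_working_sample.csv")  # hypothetical file

# The child's ability and both parents' education (in brackets) are treated
# as endogenous; parental ability scores and the town-at-age-12 dummies
# serve as excluded instruments.
formula = (
    "educ_index ~ 1 + wealth + log_cons + girl + rural"
    " + [ability + educ_f + educ_m ~ ability_f + ability_m + city_f + city_m]"
)
res = IV2SLS.from_formula(formula, data=df).fit(cov_type="robust")

print(res.summary)      # coefficients with heteroskedasticity-robust SEs
print(res.first_stage)  # first-stage diagnostics (instrument strength)
print(res.sargan)       # overidentification test (homoskedastic analogue
                        # of the Hansen J statistic reported in the paper)
```

With four excluded instruments for three endogenous regressors the sketched system is overidentified by one degree of freedom, which is what makes an overidentification test available at all.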
Let me now introduce the second estimation approach, where I simultaneously estimate the whole system presented in Fig. 1.

3.3 The simultaneous regression approach

In addition to the single regression approach presented above, I use a simultaneous equations model to estimate not only the intergenerational links but also the different transmission channels in more detail. There are two main advantages of focusing on the whole system. First, it allows us to estimate the relative importance of the three channels put forward by the literature. This matters because analyzing a specific channel and finding significant effects does not tell us much about its relative importance with respect to other possible channels: a channel might show very significant effects and at the same time be relatively irrelevant for the system as a whole. Second, estimating a system allows us to consider interactions between channels and, as a consequence, direct and indirect effects. For instance, parents' ability is likely to have both. The direct effect of parental ability refers to the biological channel introduced earlier; the indirect effect goes through parental education and the economic situation, since more able parents are likely to have more education and a better socioeconomic status. Behrman and Rosenzweig (2002) show that intergenerational correlations between mother's and children's education might be biased when such system aspects are not considered; for instance, they mention that such estimates are upward biased when the ability channel or assortative mating is not controlled for. On the other hand, estimating the model as a whole may also have some disadvantages: we might overestimate the effect of the three analyzed channels due to relevant but unobserved channels, such as the health channel discussed earlier or soft skills like personality traits and non-cognitive abilities. I discuss this issue in more detail in the description of the econometric model and in the empirical analysis.

The econometric model can be written as follows:

$$\begin{aligned} \text{ability} &= \psi_1 \text{ability}_f + \psi_2 \text{ability}_m + \mathbf{Z}\Lambda + \varepsilon_1 &(1)\\ \text{educ}_f &= \delta_1 \text{ability}_f + \delta_2 \text{age}_f + \delta_3 \text{indi}_f + \delta_4 \text{city}_f + \varepsilon_2 &(2)\\ \text{educ}_m &= \delta_5 \text{ability}_m + \delta_6 \text{age}_m + \delta_7 \text{indi}_m + \delta_8 \text{city}_m + \varepsilon_3 &(3)\\ \text{wealth} &= \gamma_1 \text{educ}_f + \gamma_2 \text{educ}_m + \gamma_3 \text{ability}_f + \gamma_4 \text{ability}_m + \gamma_5 \text{indi}_f + \gamma_6 \text{indi}_m + \gamma_7 \text{age}_f + \gamma_8 \text{age}_m + \varepsilon_4 &(4)\\ \text{cons} &= \gamma_9 \text{educ}_f + \gamma_{10} \text{educ}_m + \gamma_{11} \text{ability}_f + \gamma_{12} \text{ability}_m + \gamma_{13} \text{indi}_f + \gamma_{14} \text{indi}_m + \gamma_{15} \text{age}_f + \gamma_{16} \text{age}_m + \varepsilon_5 &(5)\\ \text{schooling} &= \beta_1 \text{ability} + \beta_2 \text{educ}_f + \beta_3 \text{educ}_m + \beta_4 \text{wealth} + \beta_5 \text{cons} + \mathbf{Z}\Omega + \varepsilon_6 &(6) \end{aligned}$$

This set of equations represents the simultaneous equations model corresponding to the conceptual model of educational mobility above. All variables are taken as deviations from their means to avoid constant terms and simplify the notation. Subscript \(f\) refers to the father and subscript \(m\) to the mother; variables without a subscript describe either the family or the child. An alternative way of presenting the model is the path diagram in Fig. 2.

Fig. 2 Path diagram of the simultaneous equations model. The path diagram directly corresponds to the system of equations (1)–(6). White boxes refer to exogenous variables, gray boxes represent endogenous variables and arrows describe direct effects

Note that, for the sake of readability, the diagram presents both parents and both economic indicators together. Even though the graphical representation is more illustrative, I discuss the model mainly on the basis of the equations. Equation (1) describes the genetic transmission of cognitive ability, where the main explanatory factors of the child's ability are the parental cognitive ability scores.4 In addition to the parental ability scores, I add control variables such as the gender of the child, a dummy for first-born children and two dummies for children with a small (\(<\)20 years) or large (\(>\)40 years) age difference to their parents. Parental education is excluded from this equation because we can assume that it has no direct effect on the child's cognitive ability score; this assumption is based on the nature of the cognitive ability test used, which is education- and culture-independent (Désert et al. 2009). For the same reason, no feedback effect from education to ability is included. The economic situation is not included as an explanatory variable either, because the transmission through genes took place at birth, while the economic situation indicators are contemporaneous values and therefore have no direct effect.
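Before turning to the remaining equations, a sketch may help fix ideas about how a system like (1)–(6) can be fitted jointly by maximum likelihood. The paper's estimation follows Muthén (2004); the semopy package used below is only an assumed stand-in for that estimator, and the file name, the variable names and the abbreviated control sets are all hypothetical.

```python
# Hypothetical sketch: joint ML estimation of a system like Eqs. (1)-(6).
import pandas as pd
import semopy

# lavaan-style description mirroring the system; the control vectors Z are
# abbreviated to a few components for readability. The last line allows the
# error terms of the two contemporaneous economic equations to covary,
# reflecting the quasi-recursive structure discussed below; further error
# covariances could be added analogously.
desc = """
ability   ~ ability_f + ability_m + girl + firstborn
educ_f    ~ ability_f + age_f + indi_f + city_f
educ_m    ~ ability_m + age_m + indi_m + city_m
wealth    ~ educ_f + educ_m + ability_f + ability_m + indi_f + indi_m + age_f + age_m
cons      ~ educ_f + educ_m + ability_f + ability_m + indi_f + indi_m + age_f + age_m
schooling ~ ability + educ_f + educ_m + wealth + cons + girl + rural
wealth ~~ cons
"""

df = pd.read_csv("mxfls_working_sample.csv")  # hypothetical file
model = semopy.Model(desc)
model.fit(df)            # maximum likelihood under normality
print(model.inspect())   # parameter estimates, standard errors, p-values
```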
Equations (2) and (3) are simplified education production functions for the father and the mother, respectively. The idea is to link parental ability to parental schooling and to control for cohort- and ethnicity-based differences in the parents' educational levels. I also include the instrument from the single regression approach capturing the place of living of the parents when they were 12 years old; parents who grew up in towns had substantially better access to schooling than those who grew up in the countryside. Both indicators of the economic situation are excluded from these equations because of their timing: the economic situation today does not explain the educational achievement of the parents when they were of schooling age.

Equations (4) and (5) estimate the effect of parental education and ability on the two indicators of the economic situation, consumption and a wealth index. The wealth index is the first component of a principal component analysis of several indicators of durable goods holdings and housing conditions.5 Taking this index instead of the full set of indicator variables reduces the dimensionality and allows me to use the wealth index as an indicator of the long-run economic situation. The use of both the wealth index and the current consumption level is motivated by the finding in the literature that the long-run economic environment matters more than current consumption. In addition to the exogenous variables, the economic situation is also influenced by parental education.

Finally, Eq. (6) is the main equation, corresponding to the single-equation approach outlined before. The explanatory variables of interest are parental education, the economic situation indicators and the ability score of the child. I also control for some contemporaneous effects by including variables capturing the family structure, the place of living, government program benefits, the working status of the child and the gender of the child.

To sum up, the coefficients of main interest are the \(\beta\)'s and, to a lesser extent, the \(\psi\)'s. The \(\beta\)'s estimate the direct impact of the family background variables and the child's ability on educational attainment. The \(\psi\)'s permit us to estimate the relationship between parental and child ability, i.e. the biological transmission, and through the \(\psi\)'s and \(\beta_1\) the total effect of the biological transmission on the educational outcome can be estimated. This setting allows us to estimate the relative importance of the different channels in the educational transmission.

The model is estimated by maximum likelihood under normality assumptions (Muthén 2004). In contrast to the instrumental variables techniques used in the single-equation approach, the identification of the simultaneous equations model is somewhat more complicated to show. The model presented in this paper is easily identified thanks to its quasi-recursive structure; I use the term quasi-recursive because I do not assume independence of the error terms of contemporaneous equations. As a consequence, I use more restrictive identifying conditions to show that all parameters in all equations are identified. A detailed discussion of the identification conditions, along with proofs, can be found in "Appendix D".

4.1 Data description
The analysis of this paper requires very complete micro-level data on both children and parents. The Mexican Family Life Survey (MXFLS)6 is a very rich, award-winning panel data project from Mexico and fits these requirements well. I use information from the first two waves (2002 and 2005), focusing on the latter. The panel structure is mainly used to reduce measurement error, for instance by identifying and correcting impossible values of time-invariant variables.7 To the best of my knowledge, this is the best data source from a Latin American country for this kind of analysis, particularly because it includes short cognitive ability tests. Nevertheless, the data are not perfect, and before starting the analysis I discuss some of the trade-offs faced and the resulting decisions.

4.1.1 Choosing the age range of the sample and the schooling outcome variable

A first challenge is to choose the age range of the primary units of analysis correctly. To estimate the correlation of years of education properly, one would have to limit the analysis to people who have finished their education, i.e. mostly people over 25, which implies two major problems. First, older individuals are probably no longer living with their parents. As I do not have administrative data, I can only establish the link between children and parents when they live in the same household, and those still living with their parents years after completing school are most likely not representative of the whole population. Second, the schooling period of older people lies potentially far in the past; the economic situation at that time would be hard to proxy, and the mechanisms analyzed would be those prevailing some years ago, which is not necessarily policy relevant. At the lower bound of the age range, we cannot include very young children, as they are only about to start their educational path and their years of schooling are much less related to their final schooling outcome than for slightly older children.

For these reasons, I focus on children and young adults from 12 to 25 years and use a constructed education index instead of years of schooling. The idea behind this index is very simple: instead of measuring the final outcome, I consider the delay in schooling that people have with respect to their peers. The index is computed by dividing an individual's years of schooling by the average years of schooling of her age cohort. A value of 1 corresponds to a child that is exactly on time compared to its peers; a value below 1 indicates a delay. Figure 3 displays some key statistics by age on the left side and the cumulative distribution function of the education index on the right side; the cumulative distribution function is depicted for age groups corresponding to students of secondary and high school age (12–17 years) and of tertiary education age (18–25 years), respectively.

Fig. 3 Distribution of the education index. The figure displays the distribution of the main dependent variable (education index) as a function of age. Both graphics are based on the working sample used in the main regressions

In the left graph we can see that the dispersion of the index is relatively stable over the ages corresponding to secondary and high school. For the ages corresponding to tertiary education the dispersion increases; in particular, the 95th percentile rises.
This is due to the fact that a substantial proportion of individuals do not continue education beyond high school: the reference level therefore remains low, and those actually attending tertiary education achieve higher values of the education index. In the right-hand graph we can see that there is a considerable amount of variation in the index, starting at values close to zero for those with no or very little education and going up to almost 2. For the younger age group a stronger concentration around the value of 1 can be observed, because less variation in years of education is observed at those ages.8 The underlying assumption of this indicator is that a delay in schooling translates later on into fewer years of schooling. In "Appendix A" I present some empirical evidence for this relationship and provide additional information on the Mexican education system, along with some basic statistics such as enrollment rates.

4.1.2 Variable selection and construction

A second data challenge is to include as much relevant information as possible while minimizing the loss of observations due to missing values. My strategy in the variable selection process was to give absolute priority to the three main channels discussed earlier, while avoiding unnecessary losses of observations due to less relevant variables. To face this trade-off, I started by defining a set of absolutely needed variables that cannot be excluded without seriously changing the model; for these variables, dropping observations with missing values is unavoidable. A second series of interesting but not indispensable variables was then selected, avoiding variables that would cause a large loss of observations. In this respect, some variables potentially able to capture soft skills and personality traits were excluded, because too many observations would have been lost.9

One of the main reasons to use the MXFLS data is the availability of cognitive ability measures based on Raven's progressive matrices (RPM). According to Désert et al. (2009), RPM is a frequently used intelligence test with proven reliability and validity in measuring cognitive aptitude and reasoning. Désert et al. (2009) further highlight that this IQ test is less education-dependent than others, reducing the risk of feedback effects from education.10 Different versions of the test were applied to children (5–12 years) and adults (13–65 years). In order to make scores comparable across age groups, the values were normalized to a distribution with mean 100 and standard deviation 15 at each age. The choice to normalize to the mean and standard deviation of the IQ scale is essentially for illustrative purposes and does not imply that the cognitive ability scores can be seen as a complete measure of IQ. Moreover, the normalization is not relevant for the results, because I report only standardized coefficients, which are by definition independent of prior normalizations. Given the panel structure of the survey, two test scores per person are available, allowing me to average them to reduce measurement error. Observations whose two scores differed by more than 2 standard deviations were dropped from the analysis. For people with only one valid test score, that score was taken, to avoid losing too many observations.
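The two constructions just described, the education index and the age-wise normalization of the Raven scores, are simple enough to sketch in a few lines. The snippet below is illustrative only, with a hypothetical file and hypothetical column names.

```python
# Hypothetical sketch of the education index and the Raven score handling.
import pandas as pd

df = pd.read_csv("mxfls_working_sample.csv")  # hypothetical file

# Education index: own years of schooling over the age-cohort average.
df["educ_index"] = (
    df["years_school"] / df.groupby("age")["years_school"].transform("mean")
)

# Raven scores: standardize within each age, rescale to mean 100, SD 15.
def to_iq_scale(s: pd.Series) -> pd.Series:
    return 100 + 15 * (s - s.mean()) / s.std()

for wave in ("raven_2002", "raven_2005"):
    df[wave + "_norm"] = df.groupby("age")[wave].transform(to_iq_scale)

# Average the two waves to reduce measurement error; a single valid score
# is kept, and observations whose two scores differ by more than two
# standard deviations (30 points on this scale) are dropped.
waves = df[["raven_2002_norm", "raven_2005_norm"]]
df["ability"] = waves.mean(axis=1)
df.loc[(waves.max(axis=1) - waves.min(axis=1)) > 30, "ability"] = float("nan")
```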
Parental education in years was obtained by computing the average time spent in school to achieve the reported education level. Repeated years are therefore not counted as schooling years, as one can argue that they do not provide additional human capital. Note that the question on the achieved education level is asked twice in the survey, once in the roster questionnaire and once in the individual questionnaires; I primarily took the information from the individual data and completed it with the roster data when the individual data were missing.

The family log-consumption per capita was obtained from a series of consumption items and normalized to consumption per equivalent adult following the methodology proposed by Rojas (2007), who provides estimates for Mexico based on the subjective well-being approach.11 The wealth index was obtained by taking the first component of a principal component analysis performed on several household assets and indicators of housing conditions. A list of the included indicators, their relative importance for the wealth index and its possible relation to parental age are reported in "Appendix C". The remaining variables were constructed in a straightforward way according to the standards of the literature and are reported with a short description in Table 1.
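As an illustration of the wealth index construction just described, the sketch below extracts the first principal component from a handful of made-up asset and housing indicators. The actual indicator list is given in "Appendix C"; scikit-learn is an assumed tool here, not necessarily the one used for the paper.

```python
# Hypothetical sketch of the wealth index: first principal component of
# standardized asset and housing-condition indicators (placeholder names).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("mxfls_working_sample.csv")  # hypothetical file
asset_cols = ["has_car", "has_fridge", "has_tv", "dirt_floor", "piped_water"]

X = StandardScaler().fit_transform(df[asset_cols])
pca = PCA(n_components=1)
df["wealth_index"] = pca.fit_transform(X)[:, 0]

# The component loadings indicate each indicator's relative importance,
# analogous to the weights reported in "Appendix C".
print(dict(zip(asset_cols, pca.components_[0])))
```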
4.1.3 Sample size and sample selection

Initially, 11,273 children and young adults aged between 12 and 25 years were present in the database. Of these, only 8,155 individuals lived with both parents in the same household. This is a necessary condition for this study, since otherwise no cognitive ability scores of the parents are available; proxies for other variables, such as the education of absent family members, would be available, but the cognitive scores are not. Missing values in parents' and children's characteristics introduced a further loss of observations, reducing the sample to 4,266. This large loss is not surprising given the data requirements of the study and the fact that these are survey data from an emerging country; they are obviously not as good as the administrative data from European countries used in some other studies on the topic. It could be argued that the loss of observations introduces sample selection biases. Being fully aware of this, I show below that the sample used in this study produces results that are very comparable to findings in the literature and that the analysis is quite robust to changes in the sample. I also estimate the benchmark model with larger samples, relaxing some data requirements: excluding the father's channel allows me to take into account the numerous single-mother households, and merging the effects of the father and the mother allows me to include every single-parent household. These larger-sample regressions are reported in "Appendix B".

4.2 Descriptive statistics

Let us now have a closer look at the data. Table 1 presents some univariate descriptive statistics for the sample used in the econometric models. The variables are divided into blocks corresponding to their role in the econometric model. The main dependent variable is the education index previously introduced. The index was constructed on the largest possible sample and not only on the observations used in the econometric estimation; therefore, its average value is slightly higher than 1. The same logic applies to the ability measures, which were estimated using all available information.

Table 1 Variables used in the study

Dependent outcome variable
  Educ. index: years of education divided by the average years of education of the age group

Endogenous regressors
  Log consumption: average log consumption per equivalent adult
  Educ. father: father's years of education
  Educ. mother: mother's years of education
  Ability child: child's ability measure (mean 101.016*)

Exogenous regressors and control variables
  Age father, Age mother: age of the father and of the mother in years
  Ability father, Ability mother: ability measures for the father and the mother (mother's mean 97.795*)
  Indig. father, Indig. mother: dummy variables for indigenous father and mother
  Father city, Mother city: father or mother grew up in an urban area
  Girl: dummy for girls (=1)
  Age: age in years
  Rural: dummy variable for rural areas
  Work02: dummy for working activities in 2002
  Number children: number of children below 12 years
  Number teenagers: number of teenagers (12–18 years)
  Program dummies: Alianza para el campo, Coinversión social, Crédito a la palabra, FONAES, Fondo para la Micro, Pequeña y Mediana Empresa, Programa de empleo temporal, PROCAMPO, VIVAH, Oportunidades and any other assistance program

Descriptive statistics based on the sample of 4,266 observations. * Normalization was made with the full sample to use a maximum of information; the mean can deviate slightly from the normalized value due to missing values in other variables

The average and the standard deviation of the fathers' years of education are slightly higher than those of the mothers. The proportion of indigenous parents is around 15 %, which corresponds to the national average. The age of the parents is measured in years, and fathers are slightly older than mothers. About one quarter of the parents grew up in a city. The sample is well balanced between girls and boys, and also between families living in rural and urban areas. The indicator for rural areas is based on the official definition of rural zones in Mexico and the place of living at the time of the survey; as people might have lived in a different place during their education, I use additional information on migration to correct the variable accordingly.12 Two variables (Work02 and Work05) capture whether the children were working in 2002 and 2005, respectively; the proportion grows from around 17 to 25 %, reflecting the aging of the cohort. The indicators of the number of children and teenagers allow us to control for the composition of the household: on average, one child below 12 years and about 1.6 teenagers are present. The set of dichotomous program variables captures the beneficiary status of families in different government programs; the proportions of beneficiaries are generally very low, with the exception of Oportunidades and PROCAMPO, where they are above 10 %.

More interesting than the averages of the variables are the relationships among them. I now present some simple linear correlations between important variables. They should provide a good impression of the data, outline some potentially interesting phenomena and give us an impression of the comparability of the data with those used in other studies.
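The correlations discussed next are straightforward to reproduce in principle; the following sketch uses hypothetical column names and a hypothetical file, and is not the paper's actual code.

```python
# Hypothetical sketch of the intergenerational and spousal correlations.
import pandas as pd

df = pd.read_csv("mxfls_working_sample.csv")  # hypothetical file

print(df["ability"].corr(df["ability_f"]))    # child-father IQ correlation
print(df["ability"].corr(df["ability_m"]))    # child-mother IQ correlation
print(df["ability_f"].corr(df["ability_m"]))  # spousal IQ correlation
print(df["educ_f"].corr(df["educ_m"]))        # spousal education correlation

# Sibling IQ correlation using only the two oldest children per household.
two_oldest = (
    df.sort_values("age", ascending=False)
      .groupby("household_id")
      .head(2)
)
wide = (
    two_oldest.assign(rank=two_oldest.groupby("household_id").cumcount())
              .pivot(index="household_id", columns="rank", values="ability")
)
print(wide[0].corr(wide[1]))  # households with one child drop out pairwise
```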
I hope to reduce some concerns regarding sample selection and the definition of the main variables by showing that the descriptive statistics are remarkably comparable to other studies in the literature. A first issue one might raise regarding the data is the use of Raven's progressive matrices test as a measure of cognitive ability, or even of IQ. In the sample, the correlation between the child's ability measure and the father's is 0.363. This value is very close to the 0.347 and the 0.38 estimated by Björklund et al. (2010) and Black et al. (2009), respectively, both using more detailed IQ measures. The corresponding correlation with the mother is 0.387, slightly higher than the father-child correlation. Considering only the two oldest siblings in each family gives a sibling IQ correlation of 0.506, which is again relatively close to the values of 0.473 to 0.510 reported by Björklund et al. (2010). Interestingly, and giving a first piece of evidence for assortative mating, the spousal IQ correlation is 0.400. The spousal education correlation based on years of schooling is even higher, at 0.646, which supports the idea of an important role of non-random spousal selection.

Regarding the simple educational attainment correlation between parents and children, a very interesting pattern emerges when splitting the sample into age groups. Table 2 presents the correlation between the education index used in this study to proxy the educational attainment of children and their parents' years of education.

Table 2 Intergenerational correlation of education, by age group (secondary and high school vs. tertiary). The reported correlations refer to the correlation between the years of education of the father and the mother, respectively, and the constructed education index. All correlations are computed for the working sample used in this study

The correlations are substantially higher for the older age group than for the younger one.13 Several explanations are possible. First, the intergenerational transmission is likely to be a cumulative process: the older the children become, the stronger the relationship between parental education and their educational outcome. Second, it could also be due to the precision of the educational attainment indicator used in this study: the older the children, the more values the indicator can take, so the correlations may be estimated with more precision.14 The correlations for the older age group are slightly below the correlation of 0.55 estimated by de Hoyos et al. (2010) for children born between 1972 and 1981 in Mexico. The likely reason for the difference is that de Hoyos et al. (2010) use older individuals with completed education; given the increase from the younger to the older age group, we would very likely end up with similar values if older individuals could be included.

Finally, Table 3 compares the used sample with the full sample for the main variables of interest. The differences are not statistically significant for father's education and both parental ability measures, and for consumption the difference is significant only at the 10 % level. For other variables we observe statistically different means, which is not very surprising with this many observations.
However, looking at the column 'Diff/SD', we can see that the difference never exceeds 0.2 standard deviations of the variable; the differences are thus probably less problematic than the statistical tests might suggest. Overall, the data are certainly not perfect and do not attain the standard of the high-quality administrative data used in some European studies. Nevertheless, the working sample does seem to represent the full sample relatively well and permits us to carry out the analysis.

Table 3 Comparison of sample with excluded observations. Columns: full sample, used sample, Diff/SD. Variables: log consumption, father's education, mother's education, father's ability, mother's ability, ability of the child, mother's age, father's age, number of teenagers, indigenous father, indigenous mother, rural area, first born. The 'full sample' includes all individuals in the age range of the study with non-missing values. The column 'Diff/SD' refers to the difference between the two samples divided by the standard deviation of the full sample

In Sect. 3 I introduced the two approaches to estimating the intergenerational transmission of education. I now present the results following the same structure: first the single regression approach, with simple OLS and IV models, and then the simultaneous equations model.

5.1 Single regression approach

Table 4 presents the main regression results of the single-equation approach. The first column is a simple OLS estimation, followed by several IV estimates, all using the robust estimator to account for heteroskedasticity. In model IV-1 I instrument both parental education and the ability level of the child; in models IV-2 and IV-3 I instrument the child's ability and parental education separately. For presentational reasons, I do not report some control variables, such as parental age, the parents' indigenous background and the indicator for rural areas; none of these is significant. I also omit the coefficients of the government program benefits and the state fixed effects to reduce the size of the table.

Table 4 Single equation results (OLS and IV estimates). Columns: OLS, IV-1, IV-2, IV-3, IV-3b, IV-3c. Rows: ability of the child\(^+\) (0.209***), father's education\(^+\) (0.218**), mother's education\(^+\), wealth index\(^+\), log consumption\(^+\), worked in 2002, indicators for which variables are instrumented and which controls are included, Hansen J-statistic and p value, weak instrument test statistic and p value, endogeneity test (\(\chi^2\) statistic) and p value. All regressions also include additional control variables such as mother's and father's age, indigenous background and an indicator for rural areas (none of these is significant); state fixed effects and control variables for government aid programs are included but not reported. Robust standard errors are reported and the robust endogeneity test following Baum et al. (2007) is used. \(^+\) denotes standardized coefficients. Significance level: * 10 %, ** 5 %, *** 1 %

Let us first discuss the main coefficients of the OLS regression, which are all presented as standardized coefficients. The ability of the child has a strong and highly significant effect on the schooling outcome. The direct effect of parental education is also highly significant and positive, and the effect of the mother is larger than that of the father, in accordance with the literature.
Both indicators of the economic situation of the family display positive and significant effects. Note that this estimation does not directly allow us to draw conclusions about the biological channel, as we do not estimate the link between parental ability and the child's ability. To see whether the OLS estimates are reliable, I now turn to the IV estimates.

First, models IV-1 and IV-2 show that the coefficient of the child's ability does not change much compared to the OLS estimate. The endogeneity test based on Baum et al. (2007) indicates that the variable is not endogenous, so instrumenting it is not required. However, this test is only valid under the assumption of valid instruments; to test their validity I use the Hansen J statistic, which indicates that the instruments are valid.15 Hence, for the ability measure of the child we do not seem to have an endogeneity problem.16 As mentioned earlier, this is probably due in large part to the nature of the cognitive ability test, which is much less related to education and cultural aspects than other ability measures.

Let us now turn to parental education, which is instrumented in models IV-1 and IV-3. Contrary to the previous results, we find strong differences between the OLS and IV coefficients: the coefficient for the father increases sharply, while the coefficient for the mother becomes much smaller and insignificant.17 This is surprising and contrary to the findings in the literature, where maternal education seems to matter more, so it is important to understand where this result comes from. According to the Hansen J statistic the instruments are valid, and the weak instrument test does not point to a problem of weak instruments. To better understand the results, let us have a closer look at the first-stage regressions presented in Table 5.

Table 5 First stage regressions. Dependent variables: father's and mother's education. Regressors: father's ability measure\(^+\), mother's ability measure\(^+\), father grew up in a town, mother grew up in a town; reported coefficients include 0.129*** (0.021), 0.028 (0.019), -0.099** (0.043) and -0.078* (0.044). Angrist-Pischke F-statistic and \(p\) value, \(N\) and adjusted \(R^2\) are also reported. State fixed effects, government program benefits, the rural dummy and parental ethnicity and age are not reported. \(^+\) denotes standardized coefficients. Significance level: * 10 %, ** 5 %, *** 1 %

The Angrist-Pischke F-statistic is very large and suggests that the instruments are strong (Angrist and Pischke 2009). In terms of the standard tests for IV regressions, these estimates thus seem valid. However, there is another problem, stemming from the underlying nature of the analysis, that goes beyond the standard challenges of IV. Both father's and mother's education are correlated with the cognitive ability measures and the places of living of both parents; this is a direct result of assortative mating, and these correlations are clearly not causal. The consequence is that the instrumented variables of the two parents correlate very strongly: computing the predicted education of the father and the mother from the first-stage regressions yields a correlation of nearly 0.95, while the actual parental education correlation is close to 0.65.
Hence, in the main regression we have a strong problem of multicollinearity, which can explain why the increase in the coefficient of the father is compensated by the coefficient of the mother. This problem can also be highlighted with the additional regressions IV-3b and IV-3c reported in Table 4. In these two regressions I exclude one of the two parents and use only the instruments related to the parent included in the regression; in both regressions the parameter of the included parent is highly significant.

Overall, the single-equation approach provided very coherent and expected results. The education of the mother seems to matter slightly more than the education of the father, the wealth index seems to matter more than short-run consumption, and the cognitive ability of the child is an important predictor of the schooling outcome. Finally, the endogeneity tests performed on the IV estimates did not indicate a serious endogeneity problem. I now move to the simultaneous equations model, which allows us to learn more about the different channels and their relative importance.

5.2 Simultaneous regression approach

Let me now turn to the results of the simultaneous equations model introduced in Sect. 3.3. The possibility of estimating several channels simultaneously permits us not only to avoid some biases due to omitted variables, but also to quantify these biases by running regressions with some variables excluded on the same data. This idea influenced the estimation strategy and made it straightforward to estimate some simplified models alongside the complete model. The first set of estimation results is reported in Table 6. All models are estimated on exactly the same sample, so that differences in the coefficients cannot be confounded by changes in the sample. Standardized coefficients are reported; they should be interpreted as the change, in standard deviations of the dependent variable, associated with a one standard deviation change in a continuous regressor or a unit change in a dichotomous regressor.

Table 6 Estimation results of Eq. (6), models 1–6, with \(R^2\). Standardized coefficients; standard deviations in parentheses. Dependent variable: education index of the child. The full system of equations was estimated simultaneously, but only the coefficients of Eq. (6) are reported in this table. Significance level: *** 1 %, ** 5 %, * 10 %

Model 1 is the complete model including ability, father's and mother's education and the economic situation proxied by two variables. These main regressors are accompanied by control variables, such as gender, a rural area dummy, state fixed effects, social program dummies and child labor indicators, which are not reported in Table 6. The full estimation results of model 1, including the remaining equations of the system, can be found in Table 11 in "Appendix B". Considering model 1, the estimation is quite precise and all coefficients are significant at the 1 % level. The coefficient of the child's ability measure attains the highest value, 0.218. Both father's and mother's education have highly significant and positive effects, with a substantially larger coefficient for the mother. With respect to the economic variables, we also observe a significant difference between the two: the effect of the wealth index is substantially higher than that of consumption.
This finding is coherent with Carneiro and Heckman (2002), who argue that the long-run economic environment matters more than short-term credit constraints. When both economic effects are considered together, the economic situation has the largest direct intergenerational effect on the schooling outcome of the child. In general, these results are relatively close to those of the OLS regression in Table 4.

Model 1 is estimated on a sample of 4,266 individuals and could potentially suffer from a sample selection bias. As discussed in the data section, I therefore also estimate the full model relaxing some data requirements. In a first step, I include single-mother households by dropping the father's channel, which increases the sample size to 6,547 individuals. In a second step, I include all households where data are available for either parent, taking the maximum value when both are available; this adds 143 individuals who were dropped from model 1 merely because of one missing parental characteristic. These larger-sample estimates are reported in Table 11 in "Appendix B". The general pattern is very encouraging, as almost no changes in the main regression are observed: most coefficients increase slightly but remain at very similar levels, and the relative importance of the effects is unchanged. Overall, these additional regressions support the validity of model 1, since the results hold even when the sample is changed considerably. In what follows, I take model 1 as the benchmark model, as it is the only one that controls for all the different channels.

5.2.1 Direct versus indirect effects

Let us now return to the discussion of model 1 from Table 6. An interesting feature of simultaneous equations models is that they enable us to compute direct and indirect effects. For example, parental education affects the schooling outcome not only through the direct effect discussed before, but also through the economic situation of the family; in terms of the model, the total effect of mother's education is \(\beta_3 + \gamma_2\beta_4 + \gamma_{10}\beta_5\), i.e. the direct effect plus the paths through the wealth index and consumption. Figure 4 shows the direct and indirect effects based on the results of model 1, fully reported in Table 11.

Fig. 4 Direct and indirect effects on child's schooling attainment. The figure displays the direct and indirect effects of the main variables of interest. The values are based on model 1 as reported in Table 6

As in the discussion before, the ability measure of the child has the largest direct effect (black bars). The wealth index has the second largest direct effect, followed by the mother's education. However, the total effect of mother's education is larger than the total effect of the wealth index, because besides the direct effect there is an indirect effect of maternal education through both economic indicators. The same holds for the father, for whom the relative importance of the indirect effect is even bigger; nevertheless, the total effect of father's education remains smaller than that of the mother. Finally, parental ability has no direct effect but only indirect effects, through the genetic transmission and the other two channels. The total effects attain values of about 0.13 for the mother and 0.10 for the father.

5.2.2 Biases when neglecting channels

Models 2–5 in Table 6 each include only one of the four possible channels, assuming the others to have no impact. The last model includes the commonly available data on parental education and the economic situation, but not the ability measures.
We can see that the one-covariate models always give strongly upward-biased estimates of the coefficients when compared to the benchmark model in the first column. Not surprisingly, the bias in relative terms is lower for the important channels, namely ability and the wealth index, where the coefficient is roughly 1.5 times higher than in model 1. The upward bias of parental education is much more pronounced, with coefficients 2–3 times higher for mother's and father's education, respectively. However, the biases become much smaller when everything but the ability measure is included. Since ability measures are missing in most surveys, this setting corresponds to the best we can normally do. The coefficients are about 20 % higher than in the benchmark model, which is considerably less than in models 2–5; more importantly, the relative importance of the coefficients in model 6 is very similar to that in model 1.

5.2.3 Regression by age groups

Motivated by the descriptive finding in Table 2 that the intergenerational education correlations increase with age, a second set of estimation results is presented in Table 7. Model 1 is estimated for different age groups and, additionally, for girls and boys separately. The age groups are chosen to correspond to the ages at which people are normally in secondary (including high school) and tertiary education.

Table 7 Estimation results of Eq. (6) for different samples. Standardized coefficients; standard deviations in parentheses. Dependent variable: education index of the child. The full system of equations was estimated simultaneously, but only the coefficients of Eq. (6) are reported in this table. Significance level: *** 1 %, ** 5 %, * 10 %

As for the simple correlations, I find differences between the two age groups. In general, the coefficients are slightly higher for the older group, with a sharp increase in the effect of mother's education; the model fit also improves substantially from the younger to the older age group. As for the simple correlations presented before, there are several possible explanations. First, it could be argued that the dependent variable is measured more precisely for the older age group. Second, inequalities in education may follow a cumulative process, so that the relative importance of the channels evolves with the age of the child. Most likely, both phenomena are present in these results. The fact that all indicators become more important supports the idea that the measurement is more precise for the older group, but the fact that not all explanatory variables gain in importance in the same way points to something beyond this argument. In particular, the coefficient of mother's education increases substantially more than the others. Hence, we have some reason to believe that the impact of the mother becomes more important with age, possibly through the mother's role in pushing the child to continue at school. Of course, additional research is required to confirm this conclusion, because the results could also be driven by the greater precision of the dependent variable.

5.2.4 Regression by gender

Given that mother's and father's education have different effects, it is interesting to see whether the effects also differ between boys and girls. The last two columns of Table 7 present model 1 for girls and boys, respectively.
We can see that the two economic indicators have slightly higher coefficients for boys, while the child's ability seems to matter a bit more for girls. The education of the father is somewhat more important for boys. A large difference can be observed for the role of mother's education, whose effect is almost twice as large for girls as for boys. The exact reasons for this difference are beyond the scope of this analysis, but it could be a very interesting question for future research.

6 Conclusions

The present study tries to contribute to the understanding of the mechanisms generating the high intergenerational education correlations observed all over the world, and especially in Latin America. A particularly important issue is to distinguish the different channels of transmission outlined by the literature over the past years. Using very rich data from Mexico, a simultaneous equations model of the educational transmission can be estimated, allowing me to distinguish between the different channels: the biological transmission of ability, transmission through the economic situation and the education-to-education channel. Additional channels, such as health or non-cognitive abilities, are not considered in this study; the data, and especially the unavailability of retrospective health information, did not allow me to include them. These omitted channels might matter, as their omission may bias the importance of the included channels upward, particularly that of the education-to-education channel. This caveat must be kept in mind when discussing the results.

The results suggest that the economic situation of the family is the most important direct intergenerational channel; when the effects of both economic indicators are considered together, it has an even larger effect than the ability of the child. In the analysis I distinguish between consumption, as a proxy of the current economic situation, and a wealth index capturing the long-term economic situation, and I find a larger effect of the wealth index, in accordance with the literature. Parental education matters for children's schooling, but not as strongly as the intergenerational correlation might lead us to expect. The mother's education directly and significantly influences the schooling outcome of children; the father's education also affects the schooling outcome directly and has, in addition, a strong indirect effect through the economic situation of the family.

The finding that the economic situation plays an important role suggests that the current situation is likely to be inefficient: poorer children receive suboptimal investment in their education and therefore cannot exploit their potential. On the other hand, the finding is encouraging in the sense that low educational mobility does not seem to be inevitable. The strong influence of the parents' economic situation can be targeted by public policies. In this respect, cash transfer programs (conditional or not) may help to increase social mobility, as they allow poorer families to invest more in education. Many such programs have been implemented in recent years, and it is therefore possible that they are already generating beneficial effects in terms of educational mobility.

In addition to the main results, I performed the same analysis on sub-samples, which provided interesting insights. First, the intergenerational links are stronger for older children.
All coefficients increase with the age group, and the education of the mother in particular becomes much more important with age. This result might suggest that the intergenerational links follow a cumulative process, implying that policy interventions can be useful even at higher ages. However, the differences found between the age groups could also be due to the more precise measurement of educational attainment for the older age group. Additional research is required to distinguish between the two possible explanations identified in this study. Second, I find differences in the relative importance of the transmission channels between girls and boys. The economic situation of the family matters slightly more for boys, while the ability of the child is somewhat more important for girls. The biggest gender difference is found for maternal education, whose effect is almost twice as high for girls as for boys. The analysis demonstrates that estimates ignoring important alternative channels of transmission tend to overestimate the effects of the analyzed variables. Remaining unobserved channels such as personality traits could bias my results upward to a certain extent. Nevertheless, the data used do not allow me to consider additional channels, as doing so would imply a large drop in the sample size and increase the problem of a non-random sample. Finally, the analysis should be seen as one piece among others in the recent literature aiming at understanding the mechanisms of educational mobility. For future research I see mainly three interesting directions. First, it would be useful to conduct similar analyses for other countries with low educational mobility to see whether the findings also hold outside the Mexican context. Second, the results suggest that cash transfer programs could potentially help increase educational mobility. Future research could look at this effect and try to find out more about the most effective design features of such programs. Third, while most effects were relatively stable across sub-samples, the effect of maternal education changes substantially with the age and gender of the child. It would, therefore, be interesting to further investigate the role of the mother in the intergenerational transmission of education.

Footnotes

These variables include the number of children (up to 12 years old) and teenagers (12–18 years old) in the household and a dummy for the first-born child. The use of parental cognitive ability to instrument parental education could be problematic if the ability measure were influenced by education. However, as mentioned in the data section, the RPM test used is less education dependent than other measures of cognitive ability. Therefore, I argue that the assumption of no reverse causality is reasonable. Additionally, I discuss the empirical tests related to the validity of the instruments. The original variable included more categories to describe the situation outside towns. However, they were relatively unclear and did not provide additional explanatory power. For this reason, I regrouped all non-town answers. Note that ability refers to cognitive ability and does not include non-cognitive ability. For this reason, I do not include indicator variables for non-cognitive abilities when estimating the latent factor. This choice allows me to focus on the nature rather than the nurture effect. In "Appendix C" I describe the indicators and the estimation of the wealth index in detail.
The original Spanish name is Encuesta Nacional sobre Niveles de Vida de los Hogares (ENNVIH). For instance, if in one wave the father was younger than the child and in the other wave the difference was plausible, then only the plausible value was taken. However, if there was no plausible value, the observation was dropped. Note that when plotting the cumulative distribution function by age, we find a similar range of values even for the youngest children. Experimental regressions were performed including such variables in order to see whether their exclusion alters the results. I discuss these briefly in "Appendix E". More details on Raven's progressive matrices and its implementation in the MxFLS can be found in Raven et al. (1986, 1983), Raven (2000) and Rubalcava and Teruel (2006). Using the official Mexican equivalence scales based on CONEVAL (2008) gives essentially the same results; only the third digit after the decimal point changes, by at most two units. I prefer to follow Rojas (2007), as his definition is concave in the number of people, while the official equivalence scales are not. Unfortunately, it is not possible to use exclusively the information on the place of living when people were at the age of education, because the variable is measured differently. I correct the variable rural only in cases where people reported living in a city during education and living in a rural area at the time of the survey. The large majority of individuals (around 90 %) never changed their place of living; therefore, the information on the place of living at the time of the survey is accurate for the education period as well. Note that when computing the same correlation for younger children (say 7–11 years), the values are even lower. This argument holds particularly when considering even younger children at the age of primary school. A previous version of this study included them. The decision to take them out of the study is mainly due to this argument, namely that the precision of the educational attainment indicator is not sufficient for the youngest individuals. The null hypothesis of the test is that the instruments are valid. In regression IV-2 we could reject the null hypothesis at the 10 % level. I therefore performed the endogeneity test on child's ability only in model IV-1, where we have clearly valid instruments. The test also shows that child's ability is not endogenous. However, the two coefficients are not significantly different from each other (\(p\) values of 0.344 and 0.320 for IV-1 and IV-3, respectively). Hence, they do not contradict the results found in the OLS regression. Except for the state fixed effects and dummies for beneficiaries of government programs other than Oportunidades. The covariance matrix is symmetric; for presentational purposes, I only present the upper triangular version. I only assume uncorrelated error terms for the ability transmission Eq. (1) and the child's education Eq. (6). This last assumption is confirmed by the IV estimates presented in Sect. 5.1. Note, however, as I show below, these restrictions are not required for identification. I excluded the parameters of some control variables to save space. They are not required for identification.
I am grateful for very helpful comments on earlier versions of this paper by Tobias Müller, Jaya Krishnakumar, Marcelo Olarreaga, Dirk Van de gaer, Duncan Thomas, the participants at conferences and seminars in Geneva, Buenos Aires, Bordeaux, Washington DC, and Mexico City, and two anonymous referees. Special thanks to Isidro Soloaga for enlightening discussions on the topic and to Ian MacKenzie for correcting the writing of an earlier version. This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Appendix A: The education index: distribution and relevance

The goal of this appendix is to show that the education index used in the study is a good proxy for the final number of years of education and to provide some additional information on the Mexican education system. I start by discussing how an educational delay is related to the final level of education.

Relevance of the education index

As outlined in Sect. 4.1.1, it is assumed that the education index is related to the years of education once the individual leaves school. That is, a delay in school at an early age should translate into less education. School delay at early ages can arise from late entry into the educational system or from repeating grades. In Table 8 I present a simple OLS regression of the years of education on the number of grade repetitions in primary school and the years of delay in starting school. The data come from the same survey as the main analysis of the paper, but here I use only people no longer attending school.

Effect of late entry and grade repetition on education outcome. (Table 8 reports coefficients and standard errors for each category of school-entry delay, from entry on time at 6 years old and 1 year later up to 4 or more years later, and for each category of grade repetition in primary school, from no repetition and 1 or 2 repeated grades up to 4 or more repeated grades, together with the adjusted \(R^2\).) Source: author's calculation using data from the MxFLS. Standard errors in parentheses. Dependent variable: years of education. Significance levels: * 10 %, ** 5 %, *** 1 %

We can see that a late entry of even 1 year is related to a decrease in total schooling of about 1 year, while students entering the system 2–4 years later end up on average with 3 to 4 fewer years of education. Grade repetition also has a negative effect on total schooling, where one repetition is broadly associated with one year less of schooling. It is important to note that I do not claim that this regression identifies causal effects; this is not needed to show the usefulness of the chosen education index.

The Mexican education system

The Mexican education system is characterized by 6 years of primary education, followed by 6 years of secondary education. Secondary education is divided into 3 years of lower secondary education (secundaria) and 3 years of upper secondary education (preparatoria). Table 9 provides additional background information on the Mexican education system for the years 2002 and 2005.
Key statistics of the Mexican education system. (Table 9 reports: primary completion rate, total (% of relevant age group); private schools (% of total primary enrollment); repeaters, primary (% of total enrollment); lower secondary completion rate (% of relevant age group); school enrollment, secondary (% net); progression to secondary school (%); repeaters (% of total enrollment); school enrollment, tertiary (% gross).) Source: World Development Indicators (http://data.worldbank.org/country/mexico)

Finally, Table 10 presents statistics on late entry into the education system based on the Mexican Family Life Survey (MxFLS).

Distribution of entry age to the education system. (Table 10 reports, for each entry age, the proportion (%) and the cumulative proportion (%).) Source: author's calculations based on the MxFLS, using 5,791 individuals between 15 and 25 years old

Appendix B: Full estimation results and larger sample regressions

Table 11 displays in the first column the full estimation results of model 1 already reported in Table 6, including all control variables.\(^{18}\)

Full estimation results of model 1 for different samples: benchmark results, including mothers, and highest parental values. (Table 11 reports coefficients and standard errors for each equation of the system: Eq. 1 (ability measure), with parents' ability (highest) and indicators for old and young parents; Eq. 2 (father's education), with whether the father grew up in a city; Eq. 3 (mother's education), with whether the mother grew up in a city; Eq. 3' (for highest parental education), with parents' age (highest), indigenous parents (at least 1) and whether the parents grew up in a city; Eq. 4 (wealth index), with parents' education (highest); Eq. 5 (consumption); and Eq. 6 (schooling outcome), with government program dummies and state fixed effects.) The dependent variable of each structural equation is reported in parentheses in the title of each panel. The coefficients are reported in the first column and the standard errors in the second for each model. All coefficients are standardized, meaning that for continuous regressors the coefficients measure by how many standard deviations the dependent variable changes when the regressor changes by one standard deviation, and for dichotomous variables when the variable turns from zero to one. Significance levels: *** 1 %, ** 5 % and * 10 %

Let me highlight some interesting results I did not discuss in the main body of the paper. The education production function estimates are very similar for the mother and the father. Cognitive ability and the place of living at the age of 12 have large positive effects, while age and indigenous background have negative effects. For both the long- and the short-run economic situation, being indigenous has a negative impact, even when controlling for education. The second and third regressions are based on enlarged samples. The first enlarged sample also includes single-mother households and consequently excludes the channel of the father. The second enlarged sample always uses the highest value of either the mother or the father. As already mentioned in the main text, the results do not change much despite the substantial change in sample size. The role of the mother's ability increases in the ability equation when not controlling for the ability level of the father. This is due to the correlation between the IQs of the two parents. The effect of consumption is slightly higher in the large-sample regressions, but always remains considerably smaller than the effect of the wealth index.
When merging the education of the two parents, the combined effect is somewhat larger than the effect of the mother in the main regression, but does not reach the sum of the two parental coefficients. The relative stability of the results lends additional credibility to the results on the main sample used in the study.

Appendix C: Discussion of the wealth index

In the analysis, I use a wealth index to approximate the long-run economic situation of the household. In this appendix, I first present the way it was constructed and then discuss the concern that such a wealth index could actually capture an age effect of the parents.

Construction of the wealth index

The wealth index is constructed as the first component of a principal component analysis performed on various indicators. This allows me to reduce the dimensions and to obtain a single indicator. In this appendix, I present the descriptive statistics of the indicator variables and the composition of the wealth indicator used. Table 12 displays the mean of each variable and its relative contribution (%) to the wealth index: a dummy for electricity, a dummy for a clean floor, a dummy for a good-quality roof, whether the household has a phone, whether the household has a kitchen, clean drainage of feces, clean garbage evacuation, and clean cooking energy (gas or electricity). The principal component analysis was performed using the polychoric correlation matrix to account for the non-continuity of the indicator variables.

From these figures, we see that the contribution varies substantially across indicators, and therefore a simple average of the indicator variables would probably not describe wealth well. Among the most important contributors, we find clean cooking energy, clean garbage evacuation, the availability of a phone and the indicator for a clean floor. Of rather minor importance are the indicators for having a kitchen in the household and whether the household has access to electricity. The reason for these low contribution levels is the almost full coverage among the Mexican population and the resulting small variance in these variables.

The relationship with the age of the parents

The main results of the study show that the long-run economic situation, proxied by the wealth index, is a main channel of transmission from one generation to the next. A potential concern with this variable stems from the fact that household assets could be directly linked to the age of the parents. If older parents systematically own more of these goods and therefore have a higher wealth index, then we might actually capture an age effect rather than an effect of the economic channel. In the regressions, I control for parental age to deal with this concern. Nevertheless, a closer look at the relationship between the wealth index and parental age can reduce the concern even more. Figure 5 displays the nonparametric regression of degree 1 of the standardized wealth index (left axis) as a function of average parental age (left graph) and of the child–parents age differential (right graph). The dashed line in each graph is a density estimate of the average parental age and of the child–parents age differential, respectively.

Relationship between average parental age and the wealth index. The figure displays nonparametric estimates of the relationship between the wealth index and the age of parents, together with the distribution of parental age.
The estimates are based on the working sample used in the main regressions.

We can observe that the average of the wealth index is slightly below 0 for the youngest parents. However, the density of such young parents is rather small. For the remaining part of the parental age distribution the average is very close to zero. This is also true in the right graph, where the variable on the \(x\)-axis is the age differential between parents and child. The estimate of the mean is very close to zero for all values of the age differential. In general, we cannot observe a strong relationship between the wealth index and the age of the parents. Thus, it is unlikely that the wealth index actually captures an age effect of the parents.

Appendix D: Identification of the simultaneous equations model

In this appendix, I discuss the identification of the model presented in Eqs. (1)–(6) in the main body of the article. To simplify the notation, the endogenous left-hand-side variables of Eqs. (1)–(6) can be combined into the matrix \(\mathbf{Y}\) and all exogenous variables into the matrix \(\mathbf{X}\), which includes all exogenous variables in white boxes in Fig. 2. This allows us to rewrite the model in the standard simultaneous equations model (SEM) notation:

$$\begin{aligned} \mathbf{Y} = \mathbf{B}\mathbf{Y} + \mathbf{A}\mathbf{X} + \xi \qquad (7) \end{aligned}$$

where \(\xi\) is a vector containing the error terms of the equations, \(\mathbf{B}\) is a zero-diagonal coefficient matrix for the endogenous variables and \(\mathbf{A}\) is the coefficient matrix for the exogenous variables. Let me further define \(\varPhi\) to be the covariance matrix of \(\mathbf{X}\) and \(\varPsi\) to be the covariance matrix of the disturbance terms \(\xi\). We impose the condition that the exogenous variables \(\mathbf{X}\) are uncorrelated with the error terms in \(\xi\). The model described above has the following matrix \(\mathbf{B}\), and I assume the following covariance matrix\(^{19}\) \(\varPsi\):

$$\begin{aligned} \mathbf{B} = \left[\begin{array}{cccccc} 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&\gamma_1&\gamma_2&0&0&0\\ 0&\gamma_9&\gamma_{10}&0&0&0\\ \beta_1&\beta_2&\beta_3&\beta_4&\beta_5&0\end{array}\right]\qquad \varPsi = \left[\begin{array}{cccccc} \varPsi_{11}&0&0&0&0&0\\ &\varPsi_{22}&\varPsi_{23}&\varPsi_{24}&\varPsi_{25}&0\\ &&\varPsi_{33}&\varPsi_{34}&\varPsi_{35}&0\\ &&&\varPsi_{44}&\varPsi_{45}&0\\ &&&&\varPsi_{55}&0\\ &&&&&\varPsi_{66} \end{array}\right] \end{aligned}$$

This model has a lower triangular matrix \(\mathbf{B}\), which greatly simplifies its identification. However, it is not a recursive model because I do not assume \(\varPsi\) to be diagonal.\(^{20}\) While a recursive model would be automatically identified, the conditions for this model are somewhat more complicated. To discuss the identification of the model I follow Heuchenne (1997) and Paxton et al. (2011). Heuchenne (1997) proposes a sufficient rule for identification based on \(\mathbf{B}\) and \(\varPsi\) exclusively.
He proposes combining the lower triangle of \(\mathbf{B}\) with the upper triangle of \(\varPsi\), which gives us:

$$\begin{aligned} \mathbf{B}\backslash\varPsi = \left[\begin{array}{cccccc}\,&0&0&0&0&0\\ 0&\mathbf{2}&\varPsi_{23}&\varPsi_{24}&\varPsi_{25}&0\\ 0&0&\mathbf{3}&\varPsi_{34}&\varPsi_{35}&0\\ 0&\gamma_1&\gamma_2&\mathbf{2}&\varPsi_{45}&0\\ 0&\gamma_9&\gamma_{10}&0&\mathbf{3}&0\\ \beta_1&\beta_2&\beta_3&\beta_4&\beta_5&\mathbf{5} \end{array}\right]\end{aligned}$$

The numbers (in bold) on the diagonal indicate how many excluded parameters lie above in the same column and to the left in the same row. For instance, the value of 3 in the third row is obtained by counting the two zero values in the third row to the left of the diagonal and the zero value in the third column above the diagonal. Heuchenne (1997) shows that equation \(k\) is identified whenever the corresponding value on the diagonal is greater than or equal to \(k-1\). We can see that all but Eqs. (4) and (5) satisfy this sufficient condition. Hence, we can conclude that all but the economic situation equations are identified based on the lower triangular form of \(\mathbf{B}\) and the structure of \(\varPsi\). To verify whether Eqs. (4) and (5) are also identified, we have to use conditions that are based not only on \(\mathbf{B}\) but also on \(\mathbf{A}\), the coefficient matrix of the exogenous variables. All equations in the model pass the order condition, which says that an equation is identified if the number of excluded exogenous variables is greater than or equal to the number of endogenous variables in that equation minus one (Paxton et al. 2011). The order condition is, however, only a necessary condition. A stronger, sufficient check is the equivalent structures approach, which is an algebraic identification technique (Paxton et al. 2011). For this, let us rewrite Eq. (7) by regrouping all parameters related to the vector \(\mathbf{Y}\) on the left-hand side:

$$\begin{aligned} B\mathbf{Y} = A\mathbf{X} + \xi \end{aligned}$$

By doing so, matrix \(\mathbf{B}\) acquires a unit diagonal and all off-diagonal elements change sign. This change of notation does not change the system but is more convenient for the computation of the equivalent structures approach. Let us now define a general matrix \(\mathbf{M}\) and the following set of equations:

$$\begin{aligned} MA = A\qquad MB = B\qquad M\Sigma M' = \Sigma \end{aligned}$$

The model is fully identified if the only solution to this system, obtained by using the restrictions on \(\mathbf{A}\), \(\mathbf{B}\) and \(\varPsi\), is the identity matrix (\(\mathbf{M = I}\)).
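Heuchenne's counting rule is mechanical enough to check programmatically. The following is a minimal sketch (mine, not from the original paper) that encodes the zero restrictions on \(\mathbf{B}\) and \(\varPsi\) as boolean masks and reproduces the diagonal counts of the \(\mathbf{B}\backslash\varPsi\) matrix above; the variable names and output format are my own.

```python
import numpy as np

# Free (estimated) entries of B; everything else is restricted to zero.
B_free = np.zeros((6, 6), dtype=bool)
B_free[3, 1] = B_free[3, 2] = True   # gamma_1, gamma_2 in Eq. (4)
B_free[4, 1] = B_free[4, 2] = True   # gamma_9, gamma_10 in Eq. (5)
B_free[5, :5] = True                 # beta_1 ... beta_5 in Eq. (6)

# Free entries of Psi above the diagonal:
# Psi_23, Psi_24, Psi_25, Psi_34, Psi_35, Psi_45 (0-indexed positions).
Psi_free = np.zeros((6, 6), dtype=bool)
for i, j in [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]:
    Psi_free[i, j] = True

for k in range(6):
    # Zeros to the left in row k of B plus zeros above in column k of Psi.
    count = int(np.sum(~B_free[k, :k]) + np.sum(~Psi_free[:k, k]))
    verdict = "identified by the rule" if count >= k else "rule inconclusive"
    print(f"Eq. ({k + 1}): count = {count} (needs >= {k}) -> {verdict}")
```

Running this flags Eqs. (4) and (5) as the only equations not covered by the sufficient rule, matching the discussion above.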
Let us start with the expression of \(\mathbf {MB=B}\) $$\begin{aligned} MB&= \left[\begin{array}{ccccccccccc}{m}_{11}-\beta _{1}\,{m}_{16}&-\beta _{2}\,{m}_{16}-\gamma _{9}\,{m}_{15}-\gamma _{1}\,{m}_{14}+{m}_{12}&-\beta _{3}\,{m}_{16}-\gamma _{10}\,{m}_{15}-\gamma _{2}\,{m}_{14}+{m}_{13}&{m}_{14}-\beta _{4}\,{m}_{16}&{m}_{15}-\beta _{5}\,{m}_{16}&{m}_{16}\\ {m}_{21}-\beta _{1}\,{m}_{26}&-\beta _{2}\,{m}_{26}-\gamma _{9}\,{m}_{25}-\gamma _{1}\,{m}_{24}+{m}_{22}&-\beta _{3}\,{m}_{26}-\gamma _{10}\,{m}_{25}-\gamma _{2}\,{m}_{24}+{m}_{23}&{m}_{24}-\beta _{4}\,{m}_{26}&{m}_{25}-\beta _{5}\,{m}_{26}&{m}_{26}\\ {m}_{31}-\beta _{1}\,{m}_{36}&-\beta _{2}\,{m}_{36}-\gamma _{9}\,{m}_{35}-\gamma _{1}\,{m}_{34}+{m}_{32}&-\beta _{3}\,{m}_{36}-\gamma _{10}\,{m}_{35}-\gamma _{2}\,{m}_{34}+{m}_{33}&{m}_{34}-\beta _{4}\,{m}_{36}&{m}_{35}-\beta _{5}\,{m}_{36}&{m}_{36}\\ {m}_{41}-\beta _{1}\,{m}_{46}&-\beta _{2}\,{m}_{46}-\gamma _{9}\,{m}_{45}-\gamma _{1}\,{m}_{44}+{m}_{42}&-\beta _{3}\,{m}_{46}-\gamma _{10}\,{m}_{45}-\gamma _{2}\,{m}_{44}+{m}_{43}&{m}_{44}-\beta _{4}\,{m}_{46}&{m}_{45}-\beta _{5}\,{m}_{46}&{m}_{46}\\ {m}_{51}-\beta _{1}\,{m}_{56}&-\beta _{2}\,{m}_{56}-\gamma _{9}\,{m}_{55}-\gamma _{1}\,{m}_{54}+{m}_{52}&-\beta _{3}\,{m}_{56}-\gamma _{10}\,{m}_{55}-\gamma _{2}\,{m}_{54}+{m}_{53}&{m}_{54}-\beta _{4}\,{m}_{56}&{m}_{55}-\beta _{5}\,{m}_{56}&{m}_{56}\\ {m}_{61}-\beta _{1}\,{m}_{66}&-\beta _{2}\,{m}_{66}-\gamma _{9}\,{m}_{65}-\gamma _{1}\,{m}_{64}+{m}_{62}&-\beta _{3}\,{m}_{66}-\gamma _{10}\,{m}_{65}-\gamma _{2}\,{m}_{64}+{m}_{63}&{m}_{64}-\beta _{4}\,{m}_{66}&{m}_{65}-\beta _{5}\,{m}_{66}&{m}_{66}\end{array}\right]\nonumber \\&= \left[\begin{array}{cccccc} 1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&-\gamma _1&-\gamma _2&1&0&0\\ 0&-\gamma _9&-\gamma _{10}&0&1&0\\ -\beta _1&-\beta _2&-\beta _3&-\beta _4&-\beta _5&1\end{array}\right]= B \end{aligned}$$ From the restrictions in the last column of \(\mathbf {B}\) we directly determine \(m_{16}\) to \(m_{66}\), which greatly simplifies \(MB\) to: $$\begin{aligned} MB&= \left[\begin{array}{cccccc} {m}_{11}&-\gamma _{9}\,{m}_{15}-\gamma _{1}\,{m}_{14}+{m}_{12}&-\gamma _{10}\,{m}_{15}-\gamma _{2}\,{m}_{14}+{m}_{13}&{m}_{14}&{m}_{15}&0\\ {m}_{21}&-\gamma _{9}\,{m}_{25}-\gamma _{1}\,{m}_{24}+{m}_{22}&-\gamma _{10}\,{m}_{25}-\gamma _{2}\,{m}_{24}+{m}_{23}&{m}_{24}&{m}_{25}&0\\ {m}_{31}&-\gamma _{9}\,{m}_{35}-\gamma _{1}\,{m}_{34}+{m}_{32}&-\gamma _{10}\,{m}_{35}-\gamma _{2}\,{m}_{34}+{m}_{33}&{m}_{34}&{m}_{35}&0\\ {m}_{41}&-\gamma _{9}\,{m}_{45}-\gamma _{1}\,{m}_{44}+{m}_{42}&-\gamma _{10}\,{m}_{45}-\gamma _{2}\,{m}_{44}+{m}_{43}&{m}_{44}&{m}_{45}&0\\ {m}_{51}&-\gamma _{9}\,{m}_{55}-\gamma _{1}\,{m}_{54}+{m}_{52}&-\gamma _{10}\,{m}_{55}-\gamma _{2}\,{m}_{54}+{m}_{53}&{m}_{54}&{m}_{55}&0\\ {m}_{61}-\beta _{1}&-\beta _{2}-\gamma _{9}\,{m}_{65}-\gamma _{1}\,{m}_{64}+{m}_{62}&-\beta _{3}-\gamma _{10}\,{m}_{65}-\gamma _{2}\,{m}_{64}+{m}_{63}&{m}_{64}-\beta _{4}&{m}_{65}-\beta _{5}&1\end{array}\right] \end{aligned}$$ Using the first 5 rows of columns 1, 4 and 5 we can further simplify to: $$\begin{aligned} MB&= \left[\begin{array}{cccccc} 1&{m}_{12}&{m}_{13}&0&0&0\\ 0&{m}_{22}&{m}_{23}&0&0&0\\ 0&{m}_{32}&{m}_{33}&0&0&0\\ 0&-\gamma _{1}+{m}_{42}&-\gamma _{2}+{m}_{43}&1&0&0\\ 0&-\gamma _{9}+{m}_{52}&-\gamma _{10}+{m}_{53}&0&1&0\\ {m}_{61}-\beta _{1}&-\beta _{2}-\gamma _{9}\,{m}_{65}-\gamma _{1}\,{m}_{64}+{m}_{62}&-\beta _{3}-\gamma _{10}\,{m}_{65}-\gamma _{2}\,{m}_{64}+{m}_{63}&{m}_{64}-\beta _{4}&{m}_{65}-\beta _{5}&1\end{array}\right] \end{aligned}$$ Finally, 
using the first three rows of columns 2 and 3 in \(\mathbf{B}\) allows us to simplify to:

$$\begin{aligned} MB = \left[\begin{array}{cccccc} 1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&-\gamma_1+m_{42}&-\gamma_2+m_{43}&1&0&0\\ 0&-\gamma_9+m_{52}&-\gamma_{10}+m_{53}&0&1&0\\ m_{61}-\beta_1&-\beta_2-\gamma_9\,m_{65}-\gamma_1\,m_{64}+m_{62}&-\beta_3-\gamma_{10}\,m_{65}-\gamma_2\,m_{64}+m_{63}&m_{64}-\beta_4&m_{65}-\beta_5&1\end{array}\right] \end{aligned}$$

Hence, using the restrictions of \(\mathbf{B}\) we are able to uniquely identify most of the elements in matrix \(\mathbf{M}\):

$$\begin{aligned} M = \left[\begin{array}{cccccc} 1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&m_{42}&m_{43}&1&0&0\\ 0&m_{52}&m_{53}&0&1&0\\ m_{61}&m_{62}&m_{63}&m_{64}&m_{65}&1\end{array}\right] \end{aligned}$$

To identify the remaining elements of \(\mathbf{M}\), I use the condition \(\mathbf{MA = A}\). Unfortunately, it is impossible to display the full matrix due to its dimensions. I display only the last three rows, transposed for presentational purposes\(^{21}\):

$$\begin{aligned}\left[\begin{array}{ccc} \gamma_3+m_{42}\,\delta_1 & \gamma_{11}+m_{52}\,\delta_1 & m_{65}\,\gamma_{11}+m_{64}\,\gamma_3+m_{62}\,\delta_1+m_{61}\,\varphi_1\\ \gamma_7+m_{42}\,\delta_2 & \gamma_{15}+m_{52}\,\delta_2 & m_{65}\,\gamma_{15}+m_{64}\,\gamma_7+m_{62}\,\delta_2\\ \gamma_5+m_{42}\,\delta_3 & \gamma_{13}+m_{52}\,\delta_3 & m_{65}\,\gamma_{13}+m_{64}\,\gamma_5+m_{62}\,\delta_3\\ m_{42}\,\delta_4 & m_{52}\,\delta_4 & m_{62}\,\delta_4\\ \gamma_4+m_{43}\,\delta_5 & \gamma_{12}+m_{53}\,\delta_5 & m_{65}\,\gamma_{12}+m_{64}\,\gamma_4+m_{63}\,\delta_5+m_{61}\,\varphi_2\\ \gamma_8+m_{43}\,\delta_6 & \gamma_{16}+m_{53}\,\delta_6 & m_{65}\,\gamma_{16}+m_{64}\,\gamma_8+m_{63}\,\delta_6\\ \gamma_6+m_{43}\,\delta_7 & \gamma_{14}+m_{53}\,\delta_7 & m_{65}\,\gamma_{14}+m_{64}\,\gamma_6+m_{63}\,\delta_7\\ m_{43}\,\delta_8 & m_{53}\,\delta_8 & m_{63}\,\delta_8\\ 0 & 0 & \beta_6+m_{61}\,\varphi_3\\ 0 & 0 & m_{61}\,\varphi_4\\ 0 & 0 & m_{61}\,\varphi_5\\ \vdots & \vdots & \vdots \end{array}\right]' =\left[\begin{array}{ccc}\gamma_3 & \gamma_{11} & 0\\ \gamma_7 & \gamma_{15} & 0\\ \gamma_5 & \gamma_{13} & 0\\ 0 & 0 & 0\\ \gamma_4 & \gamma_{12} & 0\\ \gamma_8 & \gamma_{16} & 0\\ \gamma_6 & \gamma_{14} & 0\\ 0 & 0 & 0\\ 0 & 0 & \beta_6\\ 0 & 0 & 0\\ 0 & 0 & 0\\ \vdots & \vdots & \vdots \end{array}\right]' \end{aligned}$$

where \(m_{42}\) to \(m_{62}\), \(m_{43}\) to \(m_{63}\) and \(m_{61}\) can be directly identified and are all equal to zero.
This simplifies the remaining elements considerably:

$$\begin{aligned} \left[\begin{array}{ccc} \gamma_3 & \gamma_{11} & m_{65}\,\gamma_{11}+m_{64}\,\gamma_3\\ \gamma_7 & \gamma_{15} & m_{65}\,\gamma_{15}+m_{64}\,\gamma_7\\ \gamma_5 & \gamma_{13} & m_{65}\,\gamma_{13}+m_{64}\,\gamma_5\\ 0&0&0\\ \gamma_4 & \gamma_{12} & m_{65}\,\gamma_{12}+m_{64}\,\gamma_4\\ \gamma_8 & \gamma_{16} & m_{65}\,\gamma_{16}+m_{64}\,\gamma_8\\ \gamma_6 & \gamma_{14} & m_{65}\,\gamma_{14}+m_{64}\,\gamma_6\\ 0&0&0\\ 0 & 0 & \beta_6\\ 0 & 0 & 0\\ 0 & 0 & 0\\ \vdots & \vdots & \vdots \end{array}\right]' = \left[\begin{array}{ccc}\gamma_3 & \gamma_{11} & 0\\ \gamma_7 & \gamma_{15} & 0\\ \gamma_5 & \gamma_{13} & 0\\ 0 & 0 & 0\\ \gamma_4 & \gamma_{12} & 0\\ \gamma_8 & \gamma_{16} & 0\\ \gamma_6 & \gamma_{14} & 0\\ 0 & 0 & 0\\ 0 & 0 & \beta_6\\ 0 & 0 & 0\\ 0 & 0 & 0\\ \vdots & \vdots & \vdots \end{array}\right]' \end{aligned}$$

Finally, we have six equations with two unknowns, which makes it very easy to determine the two remaining elements of \(\mathbf{M}\). For instance, using the first equation we can write \(m_{65} = -\frac{\gamma_3}{\gamma_{11}}m_{64}\) and plug this into the second row to find that \(m_{64} = m_{65} = 0\). Note that using the restrictions on \(\mathbf{A}\) and \(\mathbf{B}\) allows us to solve the whole matrix \(\mathbf{M}\), and we find \(\mathbf{M = I}\). The full model is therefore identified.

Appendix E: Experimental regressions including health and non-cognitive abilities

The results in the main body of the article include only three channels. However, there might be other important channels affecting the intergenerational transmission of education. For instance, health, non-cognitive abilities or personality traits could be transmitted from one generation to the next and affect education. Unfortunately, the data used in this study do not allow me to include such channels. There are two main reasons why I cannot include these channels in the analysis. First, including indicators on personality traits would substantially reduce the sample size and therefore increase the risk of sample selection bias. Second, for the health dimension, no retrospective information about the health status of parents is available. Hence, we can observe the health status of parents at the time of the survey, but not at the relevant time when the children were attending school. In this appendix, I present experimental regressions to show what would happen if, despite these problems, we tried to include these channels. Table 13 displays the OLS regression reported in Table 4 in the main body of the text and an augmented version in which I include some parental health and behavioral variables.

Single equation results including some health and non-cognitive abilities (Models B\('\) and B\(''\)). (The table reports the standardized coefficients\(^a\) of the child's ability measure, father's education, mother's education, the wealth index and log consumption, together with the added variables: mental health issues of the mother (\(-\)0.002) and of the father, mother's and father's height (in m), self-confidence of the father and of the mother, whether respecting rules is important (father), and some financial planning (\(-\)1.092*), plus additional control variables and the adjusted \(R^2\).) Model A is the same model as reported in Table 4 as OLS. Model B includes additional explanatory variables and Model C uses the variables of Model A on the sample of Model B.
All regressions also include additional control variables such as mother's and father's age, indigenous background, and an indicator for rural areas (none of these is significant). Additionally, state fixed effects and control variables for government aid programs are included but not reported. Robust standard errors are reported. \(^a\)Standardized coefficients. Significance levels: * 10 %, ** 5 %, *** 1 %

The mental health indicator is based on 18 questions about the emotional situation of individuals, combined through a factor analysis. A higher value indicates more emotional problems. Parental height can have an influence as it directly affects the birth weight of children (Delajara and Wendelspiess Chávez Juárez 2013), which in turn affects the schooling outcome (Behrman and Rosenzweig 2004; Black et al. 2007). With respect to personality traits and behaviors, I include dichotomous variables for self-reported self-confidence, for the importance of respecting rules, and for whether at least one of the parents plans the family's financial situation more than just a couple of days ahead. The results in the table underline the difficulties discussed above. First, we can observe a sharp drop in the sample size, which substantially increases the problem of a non-random sample. Second, the coefficients of the newly added variables are mostly not significant. This can be due to the quality of the indicators themselves, but also to the fact that we do not observe the values for the relevant period. For instance, mental health today is much less relevant than mental health when the child was at school. Finally, the parameters of main interest for the study are affected very little by the inclusion of these variables. When comparing with the baseline model estimated on the same sample as the augmented model, we see only very little variation. Of course, this is also due to the poor explanatory power of the included variables.

Institute of Economics and Econometrics, University of Geneva, UNI-MAIL, 40 Bd. du Pont d'Arve, 1211 Geneva 4, Switzerland

References

Alfonso M (2009) Credit constraints and the demand for higher education in Latin America. Inter-American Development Bank, Education Division, SCL, working paper #3
Anger S, Heineck G (2010) Do smart parents raise smart children? The intergenerational transmission of cognitive abilities. J Popul Econ 23:1255–1282
Angrist JD, Pischke JS (2009) Mostly harmless econometrics: an empiricist's companion. Princeton University Press, Princeton
Attanasio O, Kaufmann K (2009) Educational choices, subjective expectations, and credit constraints. NBER working paper 15087
Banerjee AV, Newman AF (1994) Poverty, incentives, and development. Am Econ Rev 84(2):211–215
Baum CF, Schaffer ME, Stillman S (2007) Enhanced routines for instrumental variables/generalized method of moments estimation and testing. Stata J 7(4):465–506
Becker GS, Tomes N (1979) An equilibrium theory of the distribution of income and intergenerational mobility. J Polit Econ 87(6)
Behrman J, Rosenzweig M (2004) Returns to birthweight. Rev Econ Stat 86(2):586–601
Behrman JR, Rosenzweig MR (2002) Does increasing women's schooling raise the schooling of the next generation? Am Econ Rev 92(1):323–334
Binder M, Woodruff C (2002) Inequality and intergenerational mobility in schooling: the case of Mexico. Econ Dev Cult Change 50(2):249–267
Björklund A, Jäntti M (2009) Intergenerational income mobility and the role of family background. In: Salverda W, Nolan B, Smeeding TM (eds) The Oxford handbook of economic inequality, chap 20. Oxford University Press, Oxford
Björklund A, Hederos Eriksson K, Jäntti M (2010) IQ and family background: are associations strong or weak? BE J Econ Anal Policy 10(1)
Black SE, Devereux PJ (2010) Recent developments in intergenerational mobility. Prepared for the Handbook of Labor Economics, IZA discussion paper no 4866
Black SE, Devereux PJ, Salvanes KG (2007) From the cradle to the labor market? The effect of birth weight on adult outcomes. Q J Econ 122(1):409–439
Black SE, Devereux PJ, Salvanes KG (2009) Like father, like son? A note on the intergenerational transmission of IQ scores. Econ Lett 105(1):138–140
Boudon R (1973) L'inégalité des chances. Armand Colin, Paris
Boudon R (1974) Education, opportunity and social inequality. Wiley, New York
Bradley RH, Corwyn RF (2002) Socioeconomic status and child development. Annu Rev Psychol 53:371–399
Carneiro P, Heckman JJ (2002) The evidence on credit constraints in post-secondary schooling. Econ J 112(482):705–734
Case A, Fertig A, Paxson C (2005) The lasting impact of childhood health and circumstance. J Health Econ 24:365–389
Chevalier A (2004) Parental education and child's education: a natural experiment. IZA discussion paper no 1153
CONEVAL (2008) Metodología para la medición multidimensional de la pobreza en México. Consejo Nacional de Evaluación de la Política de Desarrollo Social. http://www.coneval.gob.mx/cmsconeval/rw/pages/medicion/multidimencional/index.es.do
Dahan M, Gaviria A (2001) Sibling correlations and intergenerational mobility in Latin America. Econ Dev Cult Change 49(3):537–554
Delajara M, Wendelspiess Chávez Juárez F (2013) Birthweight outcomes in Bolivia: the role of maternal height, ethnicity, and behavior. Econ Hum Biol 11(1):56–68
Désert M, Préaux M, Jund R (2009) So young and already victims of stereotype threat: socioeconomic status and performance of 6 to 9 years old children on Raven's progressive matrices. Eur J Psychol Educ 24(2):207–218
Devlin B, Daniels M, Roeder K (1997) The heritability of IQ. Nature 388:468–471
Dickson M, Gregg P, Robinson H (2013) Early, late or never? When does parental education impact child outcomes? IZA discussion paper no 7123
Doyle O, Harmon CP, Heckman JJ, Tremblay RE (2009) Investing in early human development: timing and economic efficiency. Econ Hum Biol 7(1):1–6
Ermisch J, Francesconi M, Siedler T (2006) Intergenerational mobility and marital sorting. Econ J 116(513):659–679
Heckman JJ (2006) Skill formation and the economics of investing in disadvantaged children. Science 312:1900–1902
Hertz T, Jayasundera T, Piraino P, Selcuk S, Smith N, Verashchagina A (2007) The inheritance of educational inequality: international comparisons and fifty-year trends. BE J Econ Anal Policy 7(2):1–46
Heuchenne C (1997) A sufficient rule for identification in structural equation modeling including the null B and recursive rules as extreme cases. Struct Equ Model 4(3)
Holmlund H, Lindahl M, Plug E (2011) The causal effect of parents' schooling on children's schooling: a comparison of estimation methods. J Econ Lit 49(3):615–651
de Hoyos R, Martínez de la Calle JM, Székely M (2010) Educación y movilidad social en México. In: Serrano J, Torche F (eds) Movilidad social en México. Población, desarrollo y crecimiento. Centro de Estudios Espinosa Yglesias, Mexico City
Jensen R (2010) The (perceived) returns to education and the demand for schooling. Q J Econ 125(2):515–548
van Leeuwen M, van den Berg S, Boomsma D (2008) A twin-family study of general IQ. Learn Individ Differ 18:76–88
Loury GC (1981) Intergenerational transfers and the distribution of earnings. Econometrica 49(4):843–867
Merton R (1953) Reference group theory and social mobility. In: Bendix R, Lipset S (eds) Class, status and power. The Free Press, New York
Muthén BO (2004) Mplus technical appendices. Muthén & Muthén, Los Angeles. http://www.statmodel.com
Paxton P, Hipp JR, Marquart-Pyatt S (2011) Nonrecursive models: endogeneity, reciprocal relationships, and feedback loops. Quantitative Applications in the Social Sciences, series/number 08-168. Sage Publications, Los Angeles
Piketty T (2000) Theories of persistent inequality and intergenerational mobility. In: Atkinson A, Bourguignon F (eds) Handbook of income distribution, vol 1. North-Holland
Raven J (2000) The Raven's progressive matrices: change and stability over culture and time. Cognit Psychol 41:1–48
Raven J, Court J, Raven J (1983) Manual for Raven's progressive matrices and vocabulary scales (section 3), coloured progressive matrices, 1983rd edn. Lewis, London
Raven J, Court J, Raven J (1986) Manual for Raven's progressive matrices and vocabulary scales (section 2), coloured progressive matrices (1986 edition with US norms). Lewis, London
Rojas M (2007) A subjective well-being equivalence scale for Mexico: estimation and poverty and income-distribution implications. Oxf Dev Stud 35(3):273–293
Rosa Dias P (2009) Inequality of opportunity in health: evidence from a UK cohort study. Health Econ 18(9):1057–1074
Rubalcava L, Teruel G (2006) Guía del usuario para la primera Encuesta Nacional sobre Niveles de Vida de los Hogares. http://www.ennvih-mxfls.org
Sewell WH, Shah VP (1968) Social class, parental encouragement, and educational aspirations. Am J Sociol 73(5):559–572
Solon G (2004) A model of intergenerational mobility variation over time and place. In: Corak M (ed) Generational income mobility in North America and Europe. Cambridge University Press, Cambridge
Steinberg L, Lamborn SD, Dornbusch SM, Darling N (1992) Impact of parenting practices on adolescent achievement: authoritative parenting, school involvement, and encouragement to succeed. Child Dev 63(5):1266–1281
Stinebrickner TR, Stinebrickner R (2007) The effect of credit constraints on the college drop-out decision: a direct approach using a new panel study. NBER working paper 13340
Winter C (2007) Accounting for the changing role of family income in determining college entry. European University Institute, working paper ECO 2007/49
Cartesian Skepticism

Lizi Chen

I am a Machine Learning Engineer at Meta. I studied Machine Learning and Computer Science at NYU Courant Institute.

Team Selection at Meta

Background: Unlike most tech companies that recruit candidates for specific teams, new employees at Meta join a bootcamp program. While finishing many online trainings and random technical coding tasks, and setting up firmwide logistics, new employees are expected to look out for job openings in the company's internal Jobs Tool, similar to LinkedIn's jobs panel. Ideally, a new employee finds a team to join within 6 to 8 weeks and graduates from bootcamp.

4 outcomes of cash-secured puts:

Option expires OTM: keep the premium. The collateral is freed up for other use.

Option gets assigned ITM: keep the premium. The collateral is used to purchase the underlying stock shares at the strike price. This is acceptable if you are okay with, or intended, owning the underlying stock, hoping that it will rebound above the strike price.

Rollover: buy-to-close an option that is already ITM, and sell an option with a later expiration that is OTM.

Option Greeks

Delta: \(\Delta\) measures the sensitivity of an option's price to changes in the underlying stock price:

$$ \Delta = \frac{\partial V}{\partial S} $$

where \(V\) is the option price and \(S\) is the underlying stock price. Call Delta range: \([0, 1]\). Put Delta range: \([-1, 0]\). The closer Delta is to +1 or -1, the more strongly the option's premium responds to changes in the stock price. (A small sketch of computing Delta appears at the end of this post.)

MLE Interview Prep

The Role of MLE

The definition of the Machine Learning Engineer role varies across companies. Generally speaking, MLEs at small to medium-sized companies tend to work on MLOps, sometimes on a production model. They usually collaborate with applied or data scientists at the beginning of a new ML service, understanding the data and business needs to create a model. Once the model is formulated, the rest of the work is mostly carried out by MLEs.

Probability of Staying of the ZOML People

Joining the Zillow Offers venture is one of my most memorable life experiences. The team started in late 2018 with a handful of strong engineers and applied scientists. Zillow was also a medium-sized company of fewer than 4,000 people. As the ZO business grew, the ZOML (Zillow Offers Machine Learning) team and Zillow both scaled extremely fast. ZOML hired many people. Luckily, there were no layoffs before the ZO wind-down, which happened on Nov. 2, 2021. The company, however, did have a massive layoff (2,000+ people) in early 2022.
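As promised in the Greeks section above, here is a minimal sketch of computing Delta under the Black-Scholes model (my own illustration, not from the original post; the spot, strike, rate, volatility and expiry inputs are made up):

```python
from math import log, sqrt
from statistics import NormalDist

def bs_delta(S, K, r, sigma, T, kind="call"):
    """Black-Scholes delta of a European option (no dividends).

    S: spot price, K: strike, r: risk-free rate,
    sigma: annualized volatility, T: time to expiry in years.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    N = NormalDist().cdf
    if kind == "call":
        return N(d1)        # call delta in [0, 1]
    return N(d1) - 1.0      # put delta in [-1, 0]

# Example: an at-the-money put, 30 days out, 30% vol.
print(round(bs_delta(100, 100, 0.04, 0.30, 30 / 365, kind="put"), 3))
```

For a cash-secured put, a delta near -0.5 roughly corresponds to an at-the-money strike, consistent with the ranges quoted above.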
Safety of adhesively bonded joints under detrimental service conditions

Katja Groß ORCID: orcid.org/0000-0002-5411-70901 & Paul Ludwig Geiß1

Durability and safety of adhesively bonded joints are of major importance in structural applications. The probability of failure of a bonded assembly after a certain period of time may be influenced by various aging effects including, e.g., temperature and humidity. The correlation of results obtained from accelerated laboratory aging tests with long-term aging under service conditions often remains an unsolved challenge. In the present work, computer-based tools for non-linear regression analysis, estimation of reliability and lifetime prediction have been applied to experimental results obtained by accelerated aging of adhesively bonded shear specimens. Results obtained with an epoxy-based adhesive and a hot-dipped galvanized steel as adherend are discussed. The modeling of the aging behavior is performed with combined functions referring to the EYRING as well as the PECK model, both of which appear appropriate for describing the experimental data. The safety prediction, based on the probability of failure as well as the safety factor β, is performed using the EYRING model, which fits the experimental data in a more conservative manner.

In recent decades the use of adhesively bonded joints in industrial applications has become increasingly important and is replacing conventional joining techniques in many areas. In order to meet sector-dependent safety requirements, the aging behavior of adhesively bonded joints needs to be considered. Here, aging is defined as a reduction in strength or safety as a function of time caused by external factors including, e.g., mechanical stress, temperature and humidity. However, the high number of potentially damaging factors acting on adhesively bonded joints in industrial applications makes it difficult to perform a reliable lifetime prediction while considering all relevant aging factors. Several models for describing degradation have been discussed in the literature [1,2,3,4,5,6,7,8] which allow lifetime predictions as a function of one or two influencing factors, but these models do not consider the statistical variance of experimental data. The aim of this study is to develop a damage function for describing the aging of adhesively bonded joints and to predict the safety level as a function of the influencing parameters temperature, humidity and load, considering the statistical distribution of the experimental data.

In the present work, the single-component hot-curing epoxy adhesive Betamate® 1496F (Dow Automotive AG, Freienbach, Switzerland) was used to join adherends of hot-dipped galvanized dual-phase steel (DP800Z) (Voestalpine Steel GmbH, Linz, Austria). The epoxy adhesive Betamate® 1496F is used in the automotive industry in order to increase operational durability, crash performance and body stiffness [9]. Hot-dipped galvanized dual-phase steels are commonly used in automotive engineering in complex structural components due to their excellent crash performance and corrosion resistance [10]. In order to represent typical adhesively bonded joints in industrial applications, the epoxy adhesive Betamate® 1496F was combined with hot-dipped galvanized steel as an example. The properties of the adhesive are presented in Table 1 and the properties of the adherends in Table 2.
Table 1 Properties of the epoxy adhesive Betamate® 1496F

Table 2 Properties of the hot-dipped galvanized steel (DP800Z) [10], pursuant to EN 10346 [13] and EN 10338 [14]

Specimen manufacturing

Before adhesive application, the steel surface was degreased by manual wiping with acetone, followed by ultrasonic dip-cleaning in a 1:1 mixture of isopropanol and purified water. After chemical surface cleaning, the steel adherends were bonded with an overlap length of 10 mm and a sample width of 25 mm using the epoxy adhesive. Since hot-dipped galvanized steel sheets were only available with a limited thickness of 2 mm, laminated shear joints [15] were prepared using spacers adjacent to the test panels while curing in the lamination press. The joint thickness was set to 0.7 mm and curing was performed at 180 °C for 1 h.

Accelerated laboratory aging tests

In this study, accelerated destructive degradation tests were performed in order to generate an experimental data base for a reliable safety prediction for adhesively bonded joints. Such tests have been used by engineers in the manufacturing industry for many decades in order to acquire reliability information in up-front testing more quickly than in traditional life tests [2, 16]. The accelerated laboratory aging tests of the adhesively bonded shear specimens in this study were performed at 60 °C/95% relative humidity (RH) and 80 °C/95% RH. As reference condition, 23 °C/50% RH was used. For all conditions, mechanical tensile shear tests were performed before the start of aging and after 4, 8 and 12 weeks at the respective aging condition.

Mechanical tensile shear tests

The mechanical tensile shear tests were performed using a Midi 20-1074x10 (Messphysik Materials Testing GmbH, Fürstenfeld, Austria) universal testing machine with a traverse speed of 0.5 mm/min. Laminated shear joints were used for testing in order to avoid eccentricity when placed in the jigs of the testing machine and to minimize bending during the tensile shear tests [15]. For all aging conditions, five equivalent samples were tested after the respective aging times. The maximum tensile shear strength as a function of aging condition and aging time is displayed in Fig. 1.

Maximum tensile shear strength τmax as a function of aging condition and aging time

Figure 1 illustrates that the maximum shear stress decreases significantly under the conditions with high humidity (60 °C/95% RH and 80 °C/95% RH). The slight increase in shear strength observed for samples aged at 80 °C/95% RH between 8 and 12 weeks is attributed to the relaxation of thermal stress caused by curing at 180 °C and to physical aging taking place at temperatures approaching the glass transition temperature. For the reference condition (23 °C/50% RH), the shear stress remains constant over time.

Development of a time-, temperature- and humidity-dependent model function

In order to develop a model for predicting the time-, temperature- and humidity-dependent progression of aging under climatic conditions, the commercial software JMP® (SAS Institute Inc., North Carolina, USA) was used. Since for accelerated destructive degradation only modeling as a function of one damaging factor is possible, the damaging factors were initially modeled separately from each other. Thus, the temperature dependence was modeled while keeping the relative humidity constant, and then the absolute humidity was modeled. Assuming a WEIBULL distribution, Eq.
1, which contains an ARRHENIUS term [1], was determined to describe the change in maximum tensile shear stress τmax as a function of temperature T and time:

$$\log\left(\tau_{\text{max}}(T)\right) = \exp\left(-0.16 + 0.53 \cdot \exp\left(-0.02 \cdot \exp\left(0.51 \cdot \left(\frac{11605}{296.15} - \frac{11605}{273.15 + T}\right)\right) \cdot \sqrt{time}\right)\right) \cdot 0.97$$

For the function describing the change in maximum tensile shear stress τmax as a function of absolute humidity AH and time, Eq. 2 was chosen:

$$\log\left(\tau_{\text{max}}(AH)\right) = \exp\left(0.37 + \left(-0.14 + 0.14 \cdot \exp\left(-\frac{AH}{108}\right)\right) \cdot \sqrt{time}\right) \cdot 0.98$$

In the literature [2, 8] two different procedures are proposed for combining the influences of temperature and humidity in accelerated degradation tests. According to the EYRING model [2], the influences of temperature and relative humidity are linked within the exponential ARRHENIUS term. In contrast, the PECK model [8] uses a multiplicative combination of the term describing the humidity influence and the ARRHENIUS term describing the temperature dependence. In this study both approaches were examined. Referring to EYRING's model, Eq. 3 was obtained to describe the maximum tensile shear stress τmax as a function of temperature, absolute humidity AH and time:

$$\log\left(\tau_{\text{max}}(T, AH)\right)_{EY} = \exp\left(-0.16 + 0.53 \cdot \exp\left(-0.02 \cdot \exp\left(0.51 \cdot \left(\frac{11605}{296.15} - \frac{11605}{273.15 + T}\right)\right) \cdot \left(-0.14 + 0.14 \cdot \exp\left(-\frac{AH}{108}\right)\right) \cdot 0.98 \cdot \sqrt{time}\right)\right) \cdot 0.97$$

Likewise, referring to PECK's model, the functions describing the temperature and humidity dependence are combined as follows:

$$\begin{aligned} \log\left(\tau_{\text{max}}(T, AH)\right)_{P} &= \left(\exp\left(0.37 + \left(-0.14 + 0.14 \cdot \exp\left(-\frac{AH}{108}\right)\right) \cdot \sqrt{time}\right) \cdot 0.98\right)^{m} \cdot \\ &\quad \exp\left(-0.16 + 0.53 \cdot \exp\left(-0.02 \cdot \exp\left(0.51 \cdot \left(\frac{11605}{296.15} - \frac{11605}{273.15 + T}\right)\right) \cdot \sqrt{time}\right)\right) \cdot 0.97 \end{aligned}$$

The empirical constant m was calculated to be 0.08 by non-linear regression analysis with the JMP® software (SAS Institute Inc., North Carolina, USA). The combined functions based on EYRING and PECK and their ability to describe the measured maximum tensile shear strength values are displayed in Fig. 2.

Maximum tensile shear strength τmax (dots) and combined functions describing the temperature and humidity dependence of the maximum tensile shear strength referring to EYRING (solid line) and PECK (broken line)

Figure 2 illustrates that both the PECK model and the EYRING model describe the measured values quite well. The two models are very similar in describing the change in maximum tensile shear strength for 60 °C/95% RH and 80 °C/95% RH. A significant difference is observed in describing the experimental data for 23 °C/50% RH. The EYRING model is more conservative than the PECK model, which is why the following safety prediction was based on the EYRING model.
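To make Eqs. 3 and 4 easier to reuse, the following minimal sketch (not part of the original study) transcribes them into Python. The fitted constants and their signs are taken verbatim from the printed equations; the absolute humidity value is a placeholder (the conversion from relative to absolute humidity is omitted), and the Weibull helper merely anticipates the failure-probability reading described in the next section, with placeholder shape and scale parameters.

```python
import numpy as np

K = 11605.0  # Arrhenius constant from Eqs. 1 and 3 (approx. 1 eV / k_B, in kelvin)

def arrhenius(T_celsius):
    """Temperature acceleration term exp(0.51 * (K/296.15 - K/(273.15 + T)))."""
    return np.exp(0.51 * (K / 296.15 - K / (273.15 + T_celsius)))

def log_tau_eyring(T, AH, t_weeks):
    """Eq. 3: log(tau_max) with Eyring-type coupling of temperature and humidity."""
    hum = (-0.14 + 0.14 * np.exp(-AH / 108.0)) * 0.98
    return np.exp(-0.16 + 0.53 * np.exp(-0.02 * arrhenius(T) * hum
                                        * np.sqrt(t_weeks))) * 0.97

def log_tau_peck(T, AH, t_weeks, m=0.08):
    """Eq. 4: multiplicative (Peck-type) combination; m fitted as 0.08."""
    hum = np.exp(0.37 + (-0.14 + 0.14 * np.exp(-AH / 108.0))
                 * np.sqrt(t_weeks)) * 0.98
    temp = np.exp(-0.16 + 0.53 * np.exp(-0.02 * arrhenius(T)
                                        * np.sqrt(t_weeks))) * 0.97
    return hum**m * temp

def weibull_failure_prob(limit, scale, shape):
    """P(strength < limit) for a two-parameter Weibull strength distribution."""
    return 1.0 - np.exp(-((limit / scale) ** shape))

t = np.array([0.0, 4.0, 8.0, 12.0])   # aging time in weeks
AH = 50.0                             # placeholder absolute humidity value
print(log_tau_eyring(80.0, AH, t))    # predicted log(tau_max), Eq. 3
print(log_tau_peck(80.0, AH, t))      # predicted log(tau_max), Eq. 4
print(weibull_failure_prob(22.0, scale=25.0, shape=15.0))  # placeholder Weibull fit
```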
Safety prediction for the reference condition

In order to predict the failure probability for the reference condition, all experimental data for this condition were mathematically shifted at constant distance along the corresponding model line to a freely selectable future point in time; in this study, 52 weeks was chosen as an example (Fig. 3). Shifting the data at constant distance along the model line is based on the assumption that specimens which show a comparatively high tensile shear strength in the observation period will also show a comparatively high tensile shear strength at any future point in time; the corresponding assumption applies to specimens which show a comparatively low tensile shear strength. Based on these assumptions, it is possible to include all experimental data in a safety prediction for a certain condition while accounting for the observed variance.

Measured maximum tensile shear strength τmax (circles), model function based on EYRING (line) and predicted values at 52 weeks (triangles) for 23 °C, 50% RH

The predicted values for the maximum tensile shear strength after 52 weeks are plotted in a WEIBULL probability plot (Fig. 4).

WEIBULL probability plot of 23 °C, 50% RH with predicted values for the maximum tensile shear strength after 52 weeks

Figure 4 illustrates that the predicted values are WEIBULL-distributed within the 95% confidence interval. For a given failure limit, which depends on the specific application, the probability of failure can be read from the WEIBULL probability plot. If the failure limit is defined as 22 MPa, for example, the failure probability after 52 weeks at 23 °C, 50% RH would be approximately 10%. Furthermore, it is possible to calculate the factor of safety β [17] for a given load from the WEIBULL parameters obtained from the WEIBULL probability plot.

In this study, accelerated destructive degradation tests on tensile shear specimens were performed at three different aging conditions. As reference condition, 23 °C/50% RH was used. The maximum tensile shear strength was used to characterize the aging and was modeled with combined functions of temperature and humidity referring to the EYRING model and the PECK model, respectively. It became apparent that the EYRING model fits the experimental data in a more conservative manner. This is why the model function based on EYRING was used to predict safety for the reference condition.

Laidler KJ. The development of the Arrhenius equation. J Chem Educ. 1984;61(6):494–8.
Escobar LA, Meeker WQ. A review of accelerated test models. Stat Sci. 2006;21(4):552–77. https://doi.org/10.1214/088342306000000321.
Glasstone S, Laidler KJ, Eyring H. The theory of rate processes. 1st ed. New York: McGraw-Hill Book Co.; 1941.
Eyring H, Lin SH, Lin SM. Basic chemical kinetics. 1st ed. New York: John Wiley & Sons Inc; 1980.
Hartzell AL, da Silva MG, Shea HR. MEMS reliability. 1st ed. New York: Springer; 2011.
Matting A. Metallkleben: Grundlagen, Technologie, Prüfung, Verhalten, Berechnung, Anwendungen. Berlin: Springer; 1969.
Kececioglu D. Reliability & life testing handbook. 2nd ed. Lancaster: DEStech Publications Inc.; 2002.
Peck DS. Comprehensive model for humidity testing correlation. In: Proceedings of the 24th international reliability physics symposium, Anaheim, USA, Apr 1–3, 1986. New York: IEEE. https://doi.org/10.1109/irps.1986.362110.
Dow Automotive. Technical datasheet—Betamate 1496V; 2007.
Voestalpine Steel Division. Dual-phase steels data sheet; 2017.
CEN European Committee for Standardization.
DIN EN ISO 527-1:2012-06. Plastics – determination of tensile properties – Part 1: General principles (ISO 527-1:2012). German version EN ISO 527-1:2012. Brussels; 2012. CEN European Committee for Standardization. DIN EN ISO 6721-1:2011-08. Plastics – determination of dynamic mechanical properties – Part 1: General principles (ISO 6721-1:2011). German version EN ISO 6721-1:2011. Brussels; 2012. CEN European Committee for Standardization. DIN EN 10346:2015-10. Continuously hot-dip coated steel flat products for cold forming – Technical delivery conditions. German version EN 10346:2015. Brussels; 2015. CEN European Committee for Standardization. DIN EN 10338:2015-10. Hot rolled and cold rolled non-coated products of multiphase steels for cold forming – Technical delivery conditions. German version EN 10338:2015. Brussels; 2015. da Silva LFM, Carbas RJC, Banea MD. Failure strength tests. In: da Silva L, Öchsner A, Adams R, editors. Handbook of adhesion technology. 2nd ed. Cham: Springer; 2017. Escobar LA, Meeker WQ, Kugler DL, Kramer LL. Accelerated destructive degradation tests: data, models and analysis. In: Lindqvist BH, Doksum KA, editors. Mathematical and statistical methods in reliability. River Edge: World Scientific Publishing; 2003. CEN European Committee for Standardization. DIN EN 1990:2010-12. Eurocode – Basis of structural design. German version EN 1990:2002 + A1:2005 + A1:2005/AC:2010. Brussels; 2010. KG analyzed and interpreted the data, did the computer-based extrapolation, and was a major contributor in writing the manuscript. PLG made substantial contributions to the analysis and interpretation of the data and has been involved in revising the manuscript critically. Both authors read and approved the final manuscript. The authors kindly acknowledge funding by the Research Association for Steel Application FOSTA with grants from the Federal Ministry for Economic Affairs and Energy. The authors would also like to kindly acknowledge support provided by the Advanced Materials Engineering (AME) and High Performance Composite Constructions (HiPerCon) priority research activity of Rhineland-Palatinate. The datasets used and analyzed during the current study are available from the corresponding author on reasonable request. The research was funded by the Research Association for Steel Application FOSTA with grants from the Federal Ministry for Economic Affairs and Energy. Workgroup Materials and Surface Technologies (AWOK), Faculty of Mechanical and Process Engineering, University of Kaiserslautern, Erwin-Schrödinger-Straße 58, 67663, Kaiserslautern, Germany Katja Groß & Paul Ludwig Geiß Katja Groß Paul Ludwig Geiß Correspondence to Katja Groß. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Groß, K., Geiß, P.L. Safety of adhesively bonded joints under detrimental service conditions. Appl Adhes Sci 6, 4 (2018). https://doi.org/10.1186/s40563-018-0105-4 Adhesively bonded joints Accelerated aging
Journal of Agricultural and Applied Economics DECADAL CLIMATE VARIABILITY IMPACTS ON CLIMATE AND CROP YIELDS Journal of Agricultural and Applied Economics, Volume 51, Issue 1, February 2019, pp. 104–125 THEEPAKORN JITHITIKULCHAI (a1), BRUCE A. MCCARL (a2) and XIMING WU (a2) (a1) World Bank Group, Bangkok, Thailand (a2) Department of Agricultural Economics, Texas A&M University, College Station, Texas Copyright: © The Author(s) 2018. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited. DOI: https://doi.org/10.1017/aae.2018.25 Published online by Cambridge University Press: 13 September 2018 Tables: Table 1. Variables in Regression Analysis; Table 2. Signs of Decadal Climate Variability (DCV) Effects on Climate Attributes for U.S. Study Regions; Table 3. Regional Decadal Climate Variability (DCV) Effects on U.S. Crop Yield Distribution Moments; Table 4. Total Decadal Climate Variability (DCV) Impacts on Average Crop Yields by Region This article examines the effects of ocean-related decadal climate variability (DCV) phenomena on climate and the effects of both climate shifts and independent DCV events on crop yields. We address three DCV phenomena: the Pacific Decadal Oscillation (PDO), the Tropical Atlantic Sea-Surface Temperature Gradient (TAG), and the Western Pacific Warm Pool (WPWP). We estimate the joint effect of these DCV phenomena on the mean, variance, and skewness of crop yield distributions. We found regionally differentiated impacts of DCV phenomena on growing degree days, precipitation, and extreme weather events, which in turn alter distributions of U.S. regional crop yields. This research was funded by the U.S. Department of Agriculture (USDA), National Institute of Food and Agriculture under Grant 2011-67003-30213 in the NSF-USDA-DOE Earth System Modelling Program, and by the National Oceanic and Atmospheric Administration, Climate Programs Office, Sectoral Applications Research Program under grant NA12OAR4310097. The views expressed in this article are only those of the authors and do not necessarily represent the official views of the grantors or any affiliated organizations. Ocean-related phenomena have been found to influence climatic conditions over land, and in turn crop yields. El Niño–Southern Oscillation (ENSO) is the most frequently examined case (e.g., see Adams et al., 1999; Chen and McCarl, 2000; Hennessy, 2009a, 2009b; Mendez, 2013; Tack and Ubilava, 2013, among many others). Although many ocean-related studies have been carried out, there are far less studied, longer-term ocean-related phenomena that influence crop yields and agricultural economics. These phenomena are collectively called decadal climate variability (DCV) and generally influence climate and crop yields on at least interdecadal time scales (Mehta, 1998; Mehta, Wang, and Mendoza, 2013; Murphy et al., 2010; Wang and Mehta, 2008).
Three prominent forms of DCV will be examined here: the Pacific Decadal Oscillation (PDO) (Mantua, 1999; Mantua and Hare, 2002; Mantua et al., 1997; Smith et al., 1999; Ting and Wang, 1997), the Tropical Atlantic Sea-Surface Temperature Gradient (TAG) (Hurrell, Kushnir, and Visbeck, 2001; Mehta, 1998), and the Western Pacific Warm Pool (WPWP) (Wang and Enfield, 2001; Wang and Mehta, 2008; Wang et al., 2006). The literature indicates that these long-term ocean-related phenomena influence weather patterns and crop yields in the United States. Fundamentally, these phenomena are defined by general, persistent heat levels in regions of the ocean, which, in turn, affect climate over land. These phenomena also alter the air currents and weather system movement across the country, in turn influencing temperature and rainfall patterns. For example, Murphy et al. (2010) indicate that DCV phases have been associated with multiyear to multidecade droughts and changes in precipitation patterns. To the best of our knowledge, there are no nationally scoped published U.S. studies on how the DCV phenomena alter crop yields. This study fills that gap by providing econometric estimates of the impacts of these climate phenomena on yields of corn, cotton, soybeans, and nonirrigated wheat based on U.S. county data. Our hypothesis is that the DCV phenomena directly influence climate, and indirectly crop yields. The null hypothesis addressed in this study is that DCV phenomena do not affect U.S. regional climate and crop yields. The alternative hypothesis is that DCV phenomena do alter climate and crop yields differentially across regions. Because studies have shown that DCV phenomena influence climate, part of the effects of the DCV phenomena would already be present in the shifted climate in a crop yield regression that includes independent variables on climate attributes. Therefore, we concluded we would not accurately pick up the full DCV effect on crop yields if we estimated an equation with climate attributes and DCV phase indicators as independent variables. We thus first looked at how climate is displaced by DCV phenomena and then how yields are affected by climate, subsequently integrating the results into a total impact measure. Hence, the estimation is done in two phases. First, we estimate DCV effects on climate attributes, where we model shifts in growing degree days, precipitation, drought incidence, hot days, and wet days. Then we proceed to estimate effects on crop yields, combining both direct effects and the effects arising from DCV weather displacements. This study uses U.S. county-level crop yield and climate data over the period 1950–2015. The findings suggest that the DCV phenomena influence climate, and that this subsequently influences crop yields across the United States. As stated previously, there are three major DCV phenomena (PDO, TAG, and WPWP). The PDO is a Pacific Ocean phenomenon that has two phases: warm and cold. These are identified based on sea-surface temperatures in the North Pacific Ocean (Mantua et al., 1997; Zhang, Wallace, and Battisti, 1997). These PDO phases have persisted continuously for 20 to 30 years during the 20th century (Mantua and Hare, 2002). The PDO has been said to influence weather through heat transfer between the atmosphere and the ocean, and, in turn, this influences winds in the lower troposphere (Murphy et al., 2010).
In terms of weather, alternative PDO phases have been found to be associated with periods of prolonged dryness and wetness in the western United States and the Missouri River basin (MRB; Murphy et al., 2010). PDO impacts have been found in Australia and South America (Mantua and Hare, 2002). The TAG is a long-lived Atlantic Ocean phenomenon that persists for 12 to 13 years. The TAG has positive and negative phases that are identified through Atlantic sea-surface temperatures (Mehta, 1998). The TAG has been found to be associated with variability in many ocean, atmospheric, and weather items, such as heat transferred between the overlying atmosphere and the Atlantic Ocean; winds in the lower troposphere; and rainfall in the southern, central, and midwestern United States (Murphy et al., 2010). The WPWP is a western Pacific phenomenon, which changes on a 10- to 15-year period. It again has positive and negative phases that are identified through West Pacific and Indian Ocean surface temperatures (Mehta, 1998). The WPWP has been found to influence weather over the Great Plains and western Corn Belt (Wang and Mehta, 2008). Each of these DCV phenomena has two phases. Jointly, the simultaneous combinations of the DCV phenomena phases have been found to affect drought and extreme weather events as reviewed in Latif and Barnett (1994) and Mehta, Wang, and Mendoza (2013). Mehta, Rosenberg, and Mendoza (2011) found that DCV phenomena explain 60% to 70% of the total variance in MRB annual precipitation and water supply. They also found that DCV has a large influence on maximum and minimum temperatures. Simulation and statistical estimations have been carried out on DCV effects in selected regions to examine implications for crop yields, but national-scale statistical investigations have not been done (Ding and McCarl, 2014; Huang, 2014; Mehta, Rosenberg, and Mendoza, 2011, 2012). Mehta, Rosenberg, and Mendoza (2012), in a simulation analysis, found major DCV impacts on dryland corn and wheat yields in the MRB, explaining as much as 40%–50% of the variation in average corn and wheat yields in some subregions, and they also found effects on basin-wide average crop yields. Ding and McCarl (2014) estimated the yield and groundwater recharge impacts of DCV phase combinations in the Edwards Aquifer region of Texas. They found that DCV phases influence both regional crop yields and recharge, and that adaptive actions based on DCV information have substantial potential economic value. Many studies have addressed climate impacts on crop yields using econometric or simulation approaches. These have mainly addressed the impacts of climate change or ENSO phases, examining effects on means and variances (see Attavanich, 2011; Attavanich and McCarl, 2014 and the reviews therein; Chen, McCarl, and Schimmelpfennig, 2004). Additionally, a few studies have examined effects on higher-order moments like skewness (see Du, Hennessy, and Yu, 2012; Hennessy, 2009a, 2009b; Tack, Harri, and Coble, 2012). Consequently, to estimate DCV effects on crop yields, we will use a skew-normal regression approach that yields estimates of effects on mean, variance, and skewness (Azzalini and Capitanio, 1999; Azzalini and Dalla Valle, 1996; Gupta and Chen, 2001; Henze, 1986).
The skew-normal distribution, denoted by SN(ξ, ω², α), has the following density: (1) $$\begin{equation} {f_{SN}}\left( {y;{\rm{\ }}\xi ,{\omega ^2},\alpha } \right) = 2{\rm{\ }}{\omega ^{ - 1}}\varphi \left( z \right)\Phi \left( {\alpha z} \right), \end{equation}$$ for y ∈ (−∞, ∞), where z = ω⁻¹(y − ξ), ξ ∈ (−∞, ∞) is a location parameter, ω > 0 is a scale parameter, φ(.) is the normal probability density function, and Φ(.) is the cumulative normal distribution function. The estimated distribution will be skewed to the right when α > 0 and skewed to the left when α < 0. The distribution reduces to a normal distribution when α = 0. We estimate a linear regression assuming skew-normal error terms, (2) $$\begin{equation} y = {\beta _0} + {\beta _1}{x_1} + \ldots + {\beta _p}{x_p} + \varepsilon , \end{equation}$$ where x_1, …, x_p are independent variables, β_0, …, β_p are regression coefficients to be estimated, and ε is a skew-normal error term ε ~ SN(0, ω², α). It follows that the yield distribution is also skew-normal. The estimation will employ a panel regression model. We follow previous studies in the choice of independent variables (Attavanich, 2011; Attavanich and McCarl, 2014; Chen, McCarl, and Schimmelpfennig, 2004; McCarl, Villavicencio, and Wu, 2008; Schlenker and Roberts, 2009; Schlenker, Hanemann, and Fisher, 2007). In particular, we include weather variables on growing degree days, precipitation, drought incidence, counts of hot days, and counts of wet days, plus a polynomial time trend as a proxy for technological progress. We also include dummy variables for ENSO state as in Attavanich and McCarl (2014) and Tack and Ubilava (2013). Finally, we include a series of dummy variables for the joint DCV phase combinations. Because the literature clearly identifies that climate alters crop yields and that DCV phases alter climate attributes, we wanted first to look at climate effects and then in turn at how they influence crop yields. This led us to estimate DCV effects on climate attributes—growing degree days, precipitation amounts, drought incidence, counts of hot days, and counts of high precipitation days—which in turn influence crop yields. Then we estimated climate and DCV phase effects directly on regional yields. Consequently, to get total effects, we used a three-stage procedure where we first estimated regression models for the impacts of DCV phase combinations on the climate descriptors. Second, we used regression models for the effects of the DCV phase combinations and realized climate directly on yields. Finally, we calculated the total DCV effects on yields by combining estimates from both the DCV-to-climate regression models and the climate and DCV phase to crop yield regression models. We addressed DCV phase effects on higher moments of crop yield distributions by applying standard generalized least squares panel regression as in Tack, Harri, and Coble (2012). The skewness in crop yields is illustrated in Figure A1 in the online supplementary appendix. We estimated models of U.S. county-level crop yields for corn, cotton, soybeans, and nonirrigated (dryland) wheat using data from the years 1950–2015. These data came from the U.S. Department of Agriculture's National Agricultural Statistics Service (USDA-NASS, 2017) Quick Stats database. The data are summarized in Table A1 in the online supplementary appendix. Note the number of observations varies across crops because of crop incidence and data availability.
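For readers who want to reproduce the distributional setup, equation (1) matches the standard skew-normal parameterization implemented in SciPy. A minimal sketch (our own illustration, not the authors' estimation code) that evaluates the density by hand and checks it against the library, including the α = 0 normal special case:

import numpy as np
from scipy.stats import skewnorm, norm

xi, omega, alpha = 0.0, 1.0, 3.0   # location, scale, shape (illustrative values)
y = np.linspace(-4, 4, 9)

# Eq. (1): f(y) = 2/omega * phi(z) * Phi(alpha * z), with z = (y - xi)/omega
z = (y - xi) / omega
f_manual = 2.0 / omega * norm.pdf(z) * norm.cdf(alpha * z)
f_scipy = skewnorm.pdf(y, a=alpha, loc=xi, scale=omega)
assert np.allclose(f_manual, f_scipy)

# alpha = 0 reduces the skew-normal to the normal density
assert np.allclose(skewnorm.pdf(y, a=0.0), norm.pdf(y))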
Most of the county-level climate data came from the National Oceanic and Atmospheric Administration (NOAA). This included the following: • Station-level temperature and precipitation data were drawn from the NOAA Global Historical Climatology Network Daily (GHCND) database (NOAA, National Climatic Data Center [NCDC], 2017). Historical daily station-level cumulative growing degree days (with temperature in degrees Celsius) and precipitation (in tenths of millimeters) were added up for use in the model. The threshold temperature for a growing degree day for corn and soybeans is 10°C, and for cotton and nonirrigated wheat it is 5.5°C. Any temperature below the threshold temperature is set to the threshold temperature before calculating the average. Likewise, the maximum temperature is truncated at 30°C. The growing season weather variables are calculated based on the 6-month period from March through August for corn and soybeans, and the 7-month period from April through October for cotton and nonirrigated wheat, following the USDA's usual planting and harvesting dates for major crops (USDA, Economics, Statistics and Market Information System, 2017). We also constructed variables that capture the incidence of extremes: (1) the count of hot days during the growing season (i.e., those with maximum temperature greater than 32.22°C [equivalently, 90°F]) and (2) the count of wet days (i.e., the number of days with more than an inch (25 mm) of precipitation). In turn, county-level data were constructed as a weighted average across all weather stations in that county, with the weights being the inverse of the distance to the county centroid. • State-level monthly Palmer Drought Severity Index (PDSI) data for the growing season were drawn from the NOAA GHCND database. Values range from −6.0 (extreme drought) to +6.0 (extreme wet conditions) and were averaged across the months in the growing season. • The ENSO phase information was drawn from NOAA identifications of ENSO phases determined by the NOAA Oceanic Niño Index (ONI), and we identified the ENSO phases (El Niño, Neutral, and La Niña) present during the sample years. In particular, the NOAA National Centers for Environmental Prediction (2017) classifies a year as an El Niño event when the ONI index is at or above +0.5 for five consecutive months, and La Niña when the index is at or below −0.5 for five consecutive months, with the other years designated as Neutral. We use dummy variables for the El Niño and La Niña phases with the Neutral phase as the base case. • Allocation of years to the PDO phases was done following Mantua et al. (1997) using data from NOAA-NCDC. The TAG and WPWP indices were assigned to years using data from NOAA's Extended Reconstructed Sea Surface Temperature (NOAA, Earth System Research Laboratory, 2017) following Reynolds et al. (2002). • Joint DCV phase combination dummy variables were formed based on the simultaneous combinations of the three individual DCV phenomena phases (as listed in Table 1) following Mehta, Rosenberg, and Mendoza (2011, 2012). This resulted in each year being assigned to one of eight phase combinations (as listed in Table A2 in the online supplementary appendix). These phase combinations were designated as the PDO phase followed by the TAG and WPWP phases, resulting in combinations like (PDO+,TAG−,WPWP−), which refers to the year having a positive phase of PDO and negative phases of TAG and WPWP. Dummy variables were defined for each such combination, excepting (PDO−,TAG−,WPWP−), which became the base case.
The least common phase combination occurred 5 years out of 66, and the most common 14 years. • We incorporated time trends and region-specific dummy variables following Pinheiro and Bates (2000), McCarl, Villavicencio, and Wu (2008), Attavanich (2011), Attavanich and McCarl (2014), and Ding and McCarl (2014). Our panel estimation included both time trend and fixed effects for U.S. counties. Table 1 summarizes the main variables used in regression. The generalized least squares approach was used for estimating the DCV effects on the continuous climate variables (temperature, precipitation, and the PDSI) under the assumption of a normal error term: (3a) $$\begin{equation} w = g({X^w},{\alpha ^w}) + u, \end{equation}$$ where w is the dependent climate variable; g(.) is the function to estimate; X^w is a vector of explanatory variables, which are time and its square, dummy variables for ENSO phase interacted with dummy variables for agricultural regions, interactions of dummy variables for the DCV phase combinations and dummy variables for agricultural regions, and U.S. state dummy variables; α^w represents the estimated parameters; and u is a normally distributed error term, which is assumed to have a mean of zero. We then obtain estimates of how much the climate variables are altered under each DCV phase combination, Δg/ΔDCV, which we will use in deriving total effects. For the climate variables that are count data (i.e., the number of hot days and wet days), we estimate: (3b) $$\begin{equation} \log({w^*}) = {g^*}({X^{({w^*})}},{\alpha ^{({w^*})}}) + v, \end{equation}$$ where w* is the dependent climate item; g*(.) is a generalized linear model specification following Nelder and Wedderburn (1972); X^{w*} is a vector of explanatory variables, which are the same as those used in the previous model; α^{w*} is the vector of estimated parameters; and v is assumed to be an asymptotically normally distributed disturbance term with mean of zero. In turn, the estimation yields a measure of the influence of the DCV phase combinations on the count data climate variables (Δg*/ΔDCV) on a regional basis. Again, we use the estimated influence from this later in estimating the total effects on crop yields. For the crop yields, we estimate a skew-normal regression that relates crop yields to all explanatory variables, including time trend and its square, temperature and its square, precipitation and its square, PDSI and its square, interactions of the ENSO dummy variables and the dummy variables for agricultural regions, and interactions between the dummy variables for DCV phase combinations and the dummy variables for region: (4) $$\begin{equation} y = f\left( {X,\beta } \right) + \varepsilon , \end{equation}$$ where y is the crop yield; f(.) is the function to estimate; X is a vector of all explanatory variables, which are listed previously and in Table 1; β is the vector of estimated parameters; and ε is the skew-normally distributed disturbance term with zero mean, ε ~ i.i.d. SN(0, ω², α). Statistical inference for estimated coefficients is based on their cluster-robust standard errors allowing for intragroup correlation at the state level, so the approach is robust to heteroskedasticity and spatially correlated errors.
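As a concrete illustration of the first-stage models in equations (3a) and (3b), the sketch below estimates a continuous climate regression by least squares with state-clustered standard errors and a count regression (hot days) as a Poisson generalized linear model with a log link. The data file and all column names are hypothetical; this is our own sketch, not the authors' code.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("county_panel.csv")  # hypothetical panel: one row per county-year

# Eq. (3a): continuous climate variable (e.g., growing-season precipitation)
m_precip = smf.ols(
    "precip ~ t + I(t**2) + C(enso):C(region) + C(dcv_combo):C(region) + C(state)",
    data=df).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

# Eq. (3b): count variable (e.g., hot days), log link via a Poisson GLM
m_hot = smf.glm(
    "hot_days ~ t + I(t**2) + C(enso):C(region) + C(dcv_combo):C(region) + C(state)",
    data=df, family=sm.families.Poisson()).fit()

# The DCV-by-region coefficients give the climate displacements (Δg/ΔDCV)
print(m_precip.params.filter(like="dcv_combo"))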
After this estimation, we have Δf/ΔDCV as the regional "direct" effects of DCV on crop yields, which will be combined with the regional DCV effects ("indirect" effects) on climate variables (Δg/ΔDCV) and (Δg*/ΔDCV) in the next section. In the final step, we use the estimated climate and yield effects to determine the total marginal effect of DCV phase combinations, considering both the direct yield effects and the indirect effects of DCV phase combinations on climate factored in with the effects of climate on yields. Given equations (3a), (3b), and (4), we develop a total marginal effects measure combining the estimated parameters as follows: (5) $$\begin{equation} \frac{{\Delta y}}{{\Delta DCV}} = \frac{{\Delta f\left( {\hat{y}} \right)}}{{\Delta DCV}} + \mathop \sum \limits_{\forall w} \left( {\frac{{\Delta f\left( {\hat{y}} \right)}}{{\Delta w}}} \right)\left( {\frac{{\Delta g\left( {\hat{w}} \right)}}{{\Delta DCV}}} \right) + \mathop \sum \limits_{\forall {w^*}} \left( {\frac{{\Delta f\left( {\hat{y}} \right)}}{{\Delta {w^*}}}} \right)\left( {\frac{{\Delta {g^*}\left( {{{\hat{w}}^*}} \right)}}{{\Delta DCV}}} \right). \end{equation}$$ Therefore, we have $\Delta f (\hat{y})/\Delta DCV$ as the direct DCV effects, whereas the rest of the right-hand side of equation (5) includes the indirect DCV effects arising through effects on the continuous (w) and discrete (w*) climate variables, respectively. The impacts are evaluated at the regional level by associating them with interactions of regional dummy variables and DCV phase combination variables. We then use a block bootstrap (Tack, Harri, and Coble, 2012) where whole years are sampled with replacement and the full model—including marginal effects—is estimated within each bootstrapped sample to take into account the cross-equation dependence. The DCV phase combinations are found to significantly affect the climate variables on a regional basis, as summarized in Table 2. For example, for the Central U.S. region, which is the largest corn producing region, the (PDO+,TAG−,WPWP−) phase combination increases growing degree days, precipitation, the count of hot days (days with maximum temperature above 90°F), and the count of wet days (days with more than 1 inch of precipitation). However, the (PDO−,TAG+,WPWP−) combination increases growing degree days and hot days while decreasing precipitation, drought, and wet days, especially for the MRB and Corn Belt regions. The (PDO−,TAG−,WPWP+) combination is also found to increase temperature, precipitation, hot days, and wet days in the Central region. We find that in several cases the DCV phase combinations tend to have differing magnitudes of regional effects, including changes in sign. Notes: Evaluated by comparing with the PDO−,TAG−,WPWP− phase as the base case using regression coefficients (P < 0.05) from Table A3 in the online supplementary appendix. CT, Central; MT, Mountains; NE, Northeast; NP, Northern Plains; P, Pacific; SE, Southeast; SP, Southern Plains. To check the validity of our results, we compare our estimates of DCV effects with those from Mehta, Rosenberg, and Mendoza (2012) for the MRB region. They found strong DCV phenomena associations with regional temperature and precipitation. They found that during PDO+, precipitation was above average almost everywhere in the MRB and temperature was lower than average. In the TAG+ phase, they found precipitation was below average almost everywhere and temperature was increased almost everywhere. In terms of WPWP, they found the MRB effects varied geographically and generally had less impact than PDO and TAG.
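Mechanically, equation (5) just chains the fitted coefficients from the two stages, and the block bootstrap re-estimates everything on year-resampled data. A schematic sketch, continuing the hypothetical variable names from the earlier sketch (the re-estimation step is indicated in comments rather than fully spelled out):

import numpy as np
import pandas as pd

def total_dcv_effect(direct, yield_betas, climate_deltas):
    # Eq. (5): direct yield effect plus indirect effects routed through each
    # climate variable w: dy/dDCV = df/dDCV + sum_w (df/dw) * (dg_w/dDCV)
    return direct + sum(yield_betas[w] * climate_deltas[w] for w in climate_deltas)

# Block bootstrap over whole years (Tack, Harri, and Coble, 2012)
df = pd.read_csv("county_panel.csv")        # hypothetical panel, as before
years = df["year"].unique()
rng = np.random.default_rng(0)
boot_effects = []
for _ in range(1000):
    resampled = rng.choice(years, size=len(years), replace=True)
    boot = pd.concat([df[df["year"] == y] for y in resampled])
    # ... re-fit the climate-stage and yield-stage models on `boot`, then
    # recompute the total effect and store it, e.g.:
    # boot_effects.append(total_dcv_effect(direct, yield_betas, climate_deltas))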
To compare the results, we examine the regions that overlap with the MRB, which are R1: Central (IA, IL, IN, MI, MN, MO, OH, and WI); R2: Mountains (AZ, CO, ID, MT, NM, NV, UT, and WY); and R4: Northern Plains (KS, ND, NE, and SD). We have essentially the same results as in Mehta, Rosenberg, and Mendoza (2012), with almost all of our statistically significant terms having the same sign of effects. Furthermore, we also found our results are not sensitive to the use of county- or state-level data. The reported results are quantitatively similar to our earlier results obtained using state-level data, as reported in Jithitikulchai (2014). The full estimation results using county-level data are reported in Table A3 in the online supplementary appendix. When we examine effects on mean yields, we get climate and the direct DCV effect results. The results are provided in Tables A4–A5 in the online supplementary appendix. The results on time trend (which is a proxy for technological progress) in Table A4 (in the online supplementary appendix) are consistent with the finding of upward but diminishing trends in U.S. crop yields as found in McCarl, Villavicencio, and Wu (2008) and McCarl et al. (2013), among others. Notes: Evaluated by comparing with the PDO−,TAG−,WPWP− phase as the base case using total DCV effect coefficients from Table 4 and Tables A7 and A10 in the online supplementary appendix. CT, Central; MT, Mountains; NE, Northeast; NP, Northern Plains; P, Pacific; SE, Southeast; SP, Southern Plains. Notes: Yields of all crops are in bushels/harvested acre, except for cotton yield, which is in pounds/harvested acre. Coefficients estimated by the Delta method with the cluster-robust P values in parentheses (*P < 0.05; **P < 0.01; ***P < 0.001). Blanks in some regions imply no significant impacts at 95% statistical confidence. We find that higher growing degree days generally have significant, positive linear effects on corn and soybean yields. We find a negative term for growing degree days squared, implying a plateau and then a decrease in yields as the growing degree days rise yet further, as also found for temperature in Schlenker and Roberts (2009), Attavanich (2011), and Attavanich and McCarl (2014). The PDSI (which is positive when conditions are wetter) has a significantly positive coefficient for nonirrigated wheat, which implies yield decreases when droughts occur (as the index becomes negative). Regarding extreme high temperatures, a greater frequency of hot days decreases corn and soybean yields. The count of wet days has no effect on crop yields except a small positive influence on nonirrigated wheat. In conclusion, for direct effects on crop yield, we find negative impacts of extreme drought events and the growing degree day quadratic term. The results confirm previous findings from McCarl, Villavicencio, and Wu (2008), Attavanich (2011), and Attavanich and McCarl (2014). Now we combine the direct and climate displacement indirect effects of DCV phase combinations to get total effects. The results (Tables 3–4) indicate that DCV phase combinations have regionally differentiated effects on mean crop yields, with both increases and decreases found.
Compared with the PDO−,TAG−,WPWP− base case, yield reductions are found for the following: corn in the Central, Northern Plains, and Southern Plains regions for most DCV phase combinations; cotton in the Mountains and Southeast regions especially for (PDO+,TAG−,WPWP−), (PDO+,TAG−,WPWP+), and (PDO+,TAG+,WPWP−); soybeans in the Central, Northern Plains, and Southern Plains regions for almost all DCV phase combinations; and nonirrigated wheat in the Northern Plains and Pacific regions especially for (PDO−,TAG−,WPWP+), (PDO+,TAG+,WPWP−), and (PDO+,TAG+,WPWP+). On the other hand, there are also major positive impacts such as for corn in the Central region under (PDO−,TAG−,WPWP+) and in the Southern Plains under (PDO+,TAG−,WPWP−), (PDO+,TAG−,WPWP+), and (PDO+,TAG+,WPWP−). We also find the direct DCV effects are mostly larger than the indirect DCV effects, as presented in Tables A5, A8, and A11 in the online supplementary appendix. Figures A3–A6 (in the online supplementary appendix) illustrate the regional impacts on crop yields. Finally, where our analysis overlaps the Mehta, Rosenberg, and Mendoza (2012) MRB studies, we examine the consistency between our and their results and find they are similar. For example, we both find the (PDO+,TAG−,WPWP−) and (PDO−,TAG+,WPWP−) phase combinations are associated with decreased corn yields in the Central and Northern Plains regions, while the (PDO−,TAG−,WPWP+) phase has positive impacts in the Central region and negative impacts in the Northern Plains region. For nonirrigated wheat, we both find the (PDO+,TAG−,WPWP−) phase increases yields in the Mountains region and the (PDO−,TAG−,WPWP+) phase increases yields in the Northern Plains region. We also examine DCV phase combination effects on variance and skewness and find that DCV phase combinations alter these in a number of important cases (Table 3). The full results are reported in Tables A6−A11 in the online supplementary appendix. Namely, there are effects on corn yields in the Central, Northern Plains, and Southern Plains regions; cotton in the Mountains, Southeast, and Southern Plains regions; soybeans in the Central, Northern Plains, and Southern Plains regions; and nonirrigated wheat in the Mountains, Northern Plains, and Pacific regions. More specifically for corn, we find that the (PDO+,TAG−,WPWP−), (PDO−,TAG+,WPWP−), and (PDO+,TAG−,WPWP+) phase combinations increase yield variability in the Central region. However, the (PDO−,TAG−,WPWP+) and (PDO+,TAG+,WPWP+) phases decrease yield variance in the Central, Northern Plains, and Southern Plains regions. For cotton, the (PDO+,TAG−,WPWP−) phase increases variability in the Southeast region, while the (PDO+,TAG−,WPWP+) and (PDO+,TAG+,WPWP+) phases increase it in the Southern Plains region. Additionally, the (PDO+,TAG−,WPWP−), (PDO−,TAG−,WPWP+), (PDO+,TAG−,WPWP+), and (PDO+,TAG+,WPWP−) phase combinations decrease variability in the Mountains region. For soybeans, all the DCV combinations increase yield variability in the Northern Plains or Southern Plains region. For nonirrigated wheat, the (PDO−,TAG+,WPWP+) and (PDO+,TAG+,WPWP+) phases increase yield variability in the Northern Plains region but decrease variability in the Mountains and Pacific regions. In terms of skewness, a positive effect means a longer right tail and more of the distribution's mass concentrated on the left side; thus there are relatively more low (below the mean) yield outcomes.
For corn, we find that the (PDO−,TAG−,WPWP+) and (PDO+,TAG+,WPWP+) phases both increase skewness in the Central region, the principal corn growing region. All DCV phases but (PDO−,TAG+,WPWP−) and (PDO+,TAG−,WPWP+) also increase skewness in the Northern Plains region, another major corn growing region. For cotton, the (PDO+,TAG−,WPWP−), (PDO−,TAG+,WPWP−), (PDO+,TAG+,WPWP−), and (PDO−,TAG+,WPWP+) phases increase skewness in the Southern Plains region. The (PDO+,TAG−,WPWP+), (PDO+,TAG+,WPWP−), (PDO−,TAG+,WPWP+), and (PDO+,TAG+,WPWP+) phases also increase skewness in the Southeast region. For soybeans, the (PDO−,TAG−,WPWP+) and (PDO+,TAG+,WPWP+) combinations are found to increase skewness in the Central region, the major soybean producing region. On the other hand, all DCV combinations excepting the (PDO+,TAG+,WPWP+) phase combination decrease soybean yield skewness in the Northern Plains region, another major soybean producing region. For nonirrigated wheat, we find that (PDO−,TAG+,WPWP−) increases skewness in the Mountains region. This study investigates how combinations of DCV phenomena affect climate and crop yields across the United States. We find that DCV phase combinations exert regionally differentiated influences on both climate and crop yields. In terms of yields, large effects are found on corn, cotton, soybeans, and nonirrigated wheat in major producing areas such as the Central, Northern Plains, and Southern Plains regions for corn; the Mountains, Southeast, and Southern Plains regions for cotton; the Central and Northern Plains regions for soybeans; and the Central, Mountains, Northern Plains, and Pacific regions for nonirrigated wheat. Thus, we recommend developing and disseminating estimates of DCV effects on climate and yield, along with preplanting announcements of phase combinations; these would be of value to farmers and policy makers. Such information could stimulate management and enterprise mix alterations that increase crop productivity, given that DCV phenomena are found to have regionally and crop-differentiated effects on climate and crop yield distributions. This study is subject to some limitations. A limited set of years was used, and better estimates might arise under a longer period of study. Future research can also include the effects of many other climate-related items such as anthropogenically forced climate change, variation in solar radiation, and dynamics of jet streams. In addition, the analysis does not distinguish yield effects on irrigated and nonirrigated crops, except for nonirrigated wheat. Finally, although extensions are possible, we feel our findings of significant, regionally differentiated DCV effects merit further work on better forecasting yields and on providing forecast information to support potential producer adjustments in planting along with changes in insurance provisions and policy. To view supplementary material for this article, please visit https://doi.org/10.1017/aae.2018.25 Adams, R.M., Chen, C.C., McCarl, B.A., and Weiher, R.F.. "The Economic Consequences of ENSO Events for Agriculture." Climate Research 13,3(1999):165–72. Attavanich, W. "Essays on the Effect of Climate Change on Agriculture and Agricultural Transportation." Ph.D. dissertation, Texas A&M University, College Station, 2011. Attavanich, W., and McCarl, B.A.. "How Is CO2 Affecting Yields and Technological Progress? A Statistical Analysis." Climatic Change 124,4(2014):747–62. Azzalini, A., and Capitanio, A..
"Statistical Applications of the Multivariate Skew Normal Distribution." Journal of the Royal Statistical Society 61,3(1999):579–602. Azzalini, A., and Dalla Valle, A.. "The Multivariate Skew-Normal Distribution." Biometrika 83,4(1996):715–26. Chen, C.-C., and McCarl, B.A.. "The Value of ENSO Information to Agriculture: Consideration of Event Strength and Trade." Journal of Agricultural and Resource Economics 25,2(2000):368–85. Chen, C.‐C., McCarl, B.A., and Schimmelpfennig, D.E.. "Yield Variability as Influenced by Climate: A Statistical Investigation." Climatic Change 66,2(2004):239–61. Ding, J., and McCarl, B.A.. "Inter-Decadal Climate Variability in the Edwards Aquifer: Regional Impacts of DCV on Crop Yields and Water Use." Paper presented at the AAEA's Annual Meeting, Minneapolis, MN, July 27–29, 2014. Du, X., Hennessy, D.A., and Yu, L.. "Testing Day's Conjecture That More Nitrogen Decreases Crop Yield Skewness." American Journal of Agricultural Economics 94,1(2012):225–37. Gupta, A.K., and Chen, T.. "Goodness-of-Fit Tests for the Skew-Normal Distribution." Communications in Statistics-Simulation and Computation 30,4(2001):907–30. Hennessy, D.A. "Crop Yield Skewness and the Normal Distribution." Journal of Agricultural and Resource Economics 34,1(2009a):34–52. Hennessy, D.A. "Crop Yield Skewness under Law of the Minimum Technology." American Journal of Agricultural Economics 91,1(2009b):197–208. Henze, N. "A Probabilistic Representation of the Skew-Normal Distribution." Scandinavian Journal of Statistics 13,4(1986):271–75. Huang, P. "Three Essays on Economic and Societal Implications of Decadal Climate Variability and Fishery Management." Ph.D. dissertation, Texas A&M University, College Station, 2014. Hurrell, J.W., Kushnir, Y., and Visbeck, M.. "The North Atlantic Oscillation." Science 291,5504(2001):603–5. Jithitikulchai, T. "Essays on Applied Economics and Econometrics: Decadal Climate Variability Impacts on Cropping and Sugar-Sweetened Beverage Demand of Low-Income Families." Ph.D. dissertation, Texas A&M University, College Station, 2014. Latif, M., and Barnett, T.P.. "Causes of Decadal Climate Variability over the North Pacific and North America." Science 266,5185(1994):634–37. Mantua, N.J. "The Pacific Decadal Oscillation and Climate Forecasting for North America." Climate Risk Solutions 1,1(1999):10–13. Mantua, N.J., and Hare, S.R.. "The Pacific Decadal Oscillation." Journal of Oceanography 58,1(2002):35–44. Mantua, N.J., Hare, S.R., Zhang, Y., Wallace, J.M., and Francis, R.C.. "A Pacific Interdecadal Climate Oscillation with Impacts on Salmon Production." Bulletin of the American Meteorological Society 78,6(1997):1069–79. McCarl, B.A., Villavicencio, X., and Wu, X.. "Climate Change and Future Analysis: Is Stationarity Dying?" American Journal of Agricultural Economics 90,5(2008):1241–47. McCarl, B.A., Villavicencio, X., Wu, X., and Huffman, W.E.. "Climate Change Influences on Agricultural Research Productivity." Climatic Change 119,3–4(2013):815–24. Mehta, V.M. "Variability of the Tropical Ocean Surface Temperatures at Decadal-Multidecadal Timescales. Part I: The Atlantic Ocean." Journal of Climate 11,9(1998):2351–75. Mehta, V.M., Rosenberg, N. J., and Mendoza, K.. "Simulated Impacts of Three Decadal Climate Variability Phenomena on Water Yields in the Missouri River Basin." Journal of the American Water Resources Association 47,1(2011):126–35. Mehta, V.M., Rosenberg, N. J., and Mendoza, K.. 
"Simulated Impacts of Three Decadal Climate Variability Phenomena on Dryland Corn and Wheat Yields in the Missouri River Basin." Agricultural and Forest Meteorology 152(January 2012):109–24. Mehta, V.M., Wang, H., and Mendoza, K.. "Decadal Predictability of Tropical Basin Average and Global Average Sea Surface Temperatures in CMIP5 Experiments with the HadCM3, GFDL‐CM2.1, NCAR‐CCSM4, and MIROC5 Global Earth System Models." Geophysical Research Letters 40,11(2013):2807–12. Mendez, R.F. "Three Essays on Prequential Analysis, Climate Change, and Mexican Agriculture." Ph.D. dissertation, Texas A&M University, College Station, 2013. Murphy, J., Kattsov, V., Keenlyside, N., Kimoto, M., Meehl, G., Mehta, V.M., Pohlmann, H., Scaife, A., and Smith, D.. "Towards Prediction of Decadal Climate Variability and Change." Procedia Environmental Sciences 1(2010):287–304. National Oceanic and Atmospheric Administration, Earth System Research Laboratory. "NOAA Extended Reconstructed Sea Surface Temperature (SST) V4." Internet site: http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html (Accessed 2017). National Oceanic and Atmospheric Administration, National Centers for Environmental Prediction. "NOAA Center for Weather and Climate Prediction. Changes to the Oceanic Niño Index (ONI)." Internet site: http://origin.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ONI_v5.php (Accessed 2017). National Oceanic and Atmospheric Administration, National Climatic Data Center (NOAA-NCDC). "Monthly Summaries Global Historical Climatology Network (GHCND)." Internet site: https://www.ncdc.noaa.gov/ghcnd-data-access (Accessed 2017). Nelder, J.A., and Wedderburn, R.W.M.. "Generalized Linear Models." Journal of the Royal Statistical Society 135,3(1972):370–84. Pinheiro, J.C., and Bates, D.M.. Linear Mixed-Effects Models: Basic Concepts and Examples. New York: Springer, 2000. Reynolds, R.W., Rayner, N.A., Smith, T.M., Stokes, D.C., and Wang, W.. "An Improved in Situ and Satellite SST Analysis for Climate." Journal of Climate 15,13(2002):1609–25. Schlenker, W., Hanemann, W.M., and Fisher, A.C.. "Water Availability, Degree Days, and the Potential Impact of Climate Change on Irrigated Agriculture in California." Climatic Change 81,1(2007):19–38. Schlenker, W., and Roberts, M. J.. "Nonlinear Temperature Effects Indicate Severe Damages to US Crop Yields under Climate Change." Proceedings of the National Academy of Sciences of the United States of America 106,37(2009):15594–98. Smith, S.R., Legler, D.M., Remigio, M.J., and O'Brien, J.J.. "Comparison of 1997–98 U.S. Temperature and Precipitation Anomalies to Historical ENSO Warm Phases." Journal of Climate 12,12(1999):3507–15. Tack, J.B., Harri, A., and Coble, K.. "More than Mean Effects: Modeling the Effect of Climate on the Higher Order Moments of Crop Yields." American Journal of Agricultural Economics 94,5(2012):1037–54. Tack, J.B., and Ubilava, D.. "The Effect of El Niño Southern Oscillation on U.S. Corn Production and Downside Risk." Climatic Change 121,4(2013):689–700. Ting, M., and Wang, H.. "Summertime US Precipitation Variability and Its Relation to Pacific Sea Surface Temperature." Journal of Climate 10,8(1997):1853–73. U.S. Department of Agriculture, Economics, Statistics and Market Information System. "Usual Planting and Harvesting Dates for U.S. Field Crops." Internet site: http://usda.mannlib.cornell.edu/MannUsda/viewDocumentInfo.do?documentID=1251 (Accessed 2017). U.S. Department of Agriculture, National Agricultural Statistics Service (USDA-NASS). 
"Data and Statistics: Quick Stats." Internet site: http://www.nass.usda.gov/Quick_Stats/index.php (Accessed 2017). Wang, C., and Enfield, D.B. "The Tropical Western Hemisphere Warm Pool." Geophysical Research Letters 28,8(2001):1635–38. Wang, C., Enfield, D.B., Lee, S., and Landsea, C.W.. "Influences of the Atlantic Warm Pool on Western Hemisphere Summer Rainfall and Atlantic Hurricanes." Journal of Climate 19,12(2006):3011–28. Wang, H., and Mehta, V.M.. "Decadal Variability of the Indo-Pacific Warm Pool and Its Association with Atmospheric and Oceanic Variability in the NCEP-NCAR and SODA Reanalyses." Journal of Climate 21,21(2008):5545–65. Zhang, Y., Wallace, J.M., and Battisti, D.S.. "ENSO-Like Interdecadal Variability: 1900–93." Journal of Climate 10,5(1997):1004–20. Loading article...
Fixed Point and Steffensen's Acceleration zekeriya Math336 Project MathUniversityPresentationBeamer \documentclass{beamer} \mode<presentation> \usetheme{Madrid} % or try Darmstadt, Madrid, Warsaw, ... \usecolortheme{rose} % or try albatross, beaver, crane, ... \usefonttheme{structurebold} % or try serif, structurebold, ... \setbeamertemplate{navigation symbols}{} \setbeamertemplate{caption}[numbered] \usepackage[utf8x]{inputenc} \title[Fixed Point and Steffensen's Acceleration]{MATH 336 Presentation \\Fixed Point and Steffensen's Acceleration Method} \author{Zekeriya Ünal} \institute{Boğaziçi University} \date{27/05/2015} \titlepage \tableofcontents % Uncomment these lines for an automatically generated outline. %\begin{frame}{Outline} % \tableofcontents %\end{frame} \section{Fixed Point Iteration Method} \begin{frame}{Fixed Point Iteration Method} \begin{block}{Definition}<1-> A point p is a \textbf{fixed point} for a given function g if $g(p)=p$. \begin{block}{Remark}<2-> Fixed point problems and root finding problems are in fact equivalent. \item<1->if p is a fixed point of the function g, then p is a root of the function $$f(x)=[g(x)-x]h(x)$$ [as long as $h(x)\in \mathbb{R}$] \item<2->if p is a root of the function f, then p is a fixed point of the function $$g(x)=x-h(x)f(x)$$ [as long as $h(x)\in \mathbb{R}$] Let U be a subset of a metric space X. \\A function g:U $\rightarrow$X is called \textbf{Lipschitz continuous} provided there exists a constant $\lambda\ge$ 0 (called the Lipschitz constant) \\such that $\forall$ x,y$\in$U, d(g(x),g(y))$\le\lambda\,$d(x,y) \\if $\lambda\in[0,1)$, then g is called a \textbf{contraction} (with contraction constant $\lambda$). \begin{block}{Theorem (A Fixed Point Theorem)}<2-> Suppose $g:[a,b]\rightarrow[a,b]$ is continuous. Then g has a fixed point. \begin{block}{Lemma}<1-> A contraction has at most one fixed point. \begin{block}{Corollary}<2-> Suppose $g:[a,b]\rightarrow[a,b]$ is continuous and $\lambda:=\sup\mid g'(x)\mid<1$ for $x\in (a,b)$ \\Then g is a contraction with contraction constant $\lambda.$ \vskip 1cm \begin{frame}{Graphical determination of the existence of a fixed point for the function $g(x)= \frac{x^2-3}{2}$} \includegraphics[width=355px, height= 200px]{Figure2.jpg} \subsection{Banach Fixed Point Theorem} \begin{frame}{Banach Fixed Point Theorem} \begin{theorem}[Banach Fixed Point Theorem] \item[]<1-> Let U be a complete subset of a metric space X, and let g:U$\rightarrow$U be a contraction with contraction constant $\lambda$. \\Then \begin{itemize} \item<2-> g has a unique fixed point, say p. \item<2->Any sequence $\left\{x_{n}\right\}$ defined by $x_n=g(x_{n-1})$, $n=1,2,\ldots$ \\converges to this unique fixed point p. \item[]<3->Moreover, we have the \textbf{a priori} error estimate $$ \mid x_n-p\mid\le\frac{\lambda^n}{1-\lambda}\mid x_1-x_0\mid$$ \item[]<4->and the \textbf{a posteriori} error estimate $$\mid x_n-p\mid\le\frac{\lambda}{1-\lambda}\mid x_n-x_{n-1}\mid$$ \end{theorem} \begin{block}{Proof} For $n > m$ , we have \begin{align*} \mid x_n-x_m\mid&=\mid x_n-x_{n-1}+x_{n-1}-x_{n-2}+\ldots+x_{m+1}-x_{m}\mid\\ &\le\mid x_n-x_{n-1}\mid + \mid x_{n-1}-x_{n-2}\mid+\ldots+\mid x_{m+1}-x_{m}\mid\\ &\text{(by *)}\\ &\le(\lambda^{n-1}+\lambda^{n-2}+\ldots+\lambda^{m})\mid x_1-x_0\mid\\ &=\lambda^{m}(\lambda^{n-m-1}+\lambda^{n-m-2}+\ldots+1)\mid x_1-x_0\mid\\ &=\lambda^{m}\frac{1-\lambda^{n-m}}{1-\lambda}\mid x_1-x_0\mid\le \frac{\lambda^m}{1-\lambda}\mid x_1-x_0\mid \end{align*} so that $\{x_n\}$ is a Cauchy sequence in U.
\\Since U is complete, $\{x_n\}$ converges to a point p $\in$ U $$*\color{red}\mid x_k-x_{k-1}\mid=\mid g(x_{k-1})-g(x_{k-2})\mid\le\lambda\mid x_{k-1}-x_{k-2}\mid\le\ldots\le\lambda^{k-1}\mid x_1-x_0\mid\color{black}$$ \begin{proof}[Continue] \item[]<1->Now, since g, being a contraction, is continuous, we have $$p=\lim_{n\to\infty}x_{n}=\lim_{n\to\infty}g(x_{n-1})=g(\lim_{n\to\infty}x_{n-1})=g(p)$$ \\so that p is a fixed point of g. \\By the lemma, p is the unique fixed point of g. \item[]<2->Since $$\mid x_n-x_m\mid\le \frac{\lambda^m}{1-\lambda}\mid x_1-x_0\mid,$$ \\letting n$\rightarrow\infty$ \item[]<3->we get $$\mid p-x_m\mid\le \frac{\lambda^m}{1-\lambda}\mid x_1-x_0\mid$$ \\for $y_0=x_{n-1},$ $y_1= x_n$ $$\mid y_1- p\mid \le \frac{\lambda}{1-\lambda}\mid y_1-y_0\mid$$ \subsection{The Fixed Point Algorithm} \begin{frame}{The Fixed Point Algorithm} \begin{block}{The Fixed Point Algorithm}<1-> If g has a fixed point p, then the fixed point algorithm generates a sequence $\left\{x_n\right\} $ defined as \\$x_0$: arbitrary but fixed, \\$x_n=g(x_{n-1})$, $n=1,2,3,\ldots$ to approximate p. \subsection{Fixed Point: The Case Where Multiple Derivatives Are Zero at the Fixed Point} \begin{frame}{Fixed Point: The Case Where Multiple Derivatives Are Zero at the Fixed Point} \begin{block}{Theorem} \item[]<1->Let g be a continuous function on the closed interval $[a,b]$ with $\alpha>1$ continuous derivatives on the interval $(a,b)$. \\Further, let p $\in $(a,b) be a fixed point of g. \item[]<2->if $$g'(p)=g''(p)=\ldots=g^{(\alpha-1)}(p)=0$$ but $g^{(\alpha)}(p)\neq 0$, \item[]<3->then there exists a $\delta>0$ such that for any $p_0\in[p-\delta,p+\delta]$, the sequence $p_n=g(p_{n-1})$ converges to the fixed point p of order $\alpha$ with asymptotic error constant $$\lim_{n\to\infty}\frac{\mid e_{n+1}\mid}{\mid e_n\mid^\alpha}=\frac{\mid g^{(\alpha)}(p)\mid}{\alpha!}$$. \item[]<1->Let's start by establishing the existence of $\delta>0$ such that for any $p_0\in [p-\delta,p+\delta]$, the sequence $p_n=g(p_{n-1})$ converges to the fixed point p. \item[]<2->Let $\lambda<1$. Since $g'(p)=0$ and $g'$ is continuous, it follows that there exists a $\delta>0$ such that $\mid g'(x)\mid\le \lambda<1$ for all $x\in I \equiv[p-\delta,p+\delta]$. From this, it follows that g:I$\rightarrow$I; for if x$\in$ I then, \item[]<3-> \begin{align*} \mid g(x)-p\mid&=\mid g(x)-g(p)\mid\\ &=\mid g'(\xi)\mid\mid x-p\mid\\ &\le \lambda\mid x-p\mid<\mid x-p\mid\\ &\le\delta \end{align*} \item[]<4->Therefore by the fixed point theorem established earlier, the sequence $p_n=g(p_{n-1})$ converges to the fixed point p for any $p_0\in [p-\delta,p+\delta]$. \begin{block}{Continue} \item[]<1->To establish the order of convergence, let x$\in$ I and expand the iteration function g into a Taylor series about x=p: $$g(x)=g(p)+g'(p)(x-p)+\ldots+\frac{g^{(\alpha-1)}(p)}{(\alpha-1)!}(x-p)^{\alpha-1}+\frac{g^{(\alpha)}(\xi)}{\alpha!}(x-p)^{\alpha}$$ \\where $\xi$ is between x and p. \item[]<2->Using the hypotheses regarding the value of $g^{(k)}(p)$ for $1\le k\le \alpha-1$ and letting $x=p_n$, the Taylor series expansion simplifies to $$p_{n+1}-p=\frac{g^{(\alpha)}(\xi)}{\alpha!}(p_n-p)^\alpha$$ \\where $\xi$ is now between $p_n$ and p. \item[]<1->The definitions of the fixed point iteration scheme and of a fixed point have been used to replace $g(p_n)$ with $p_{n+1}$ and g(p) with p. \item[]<2->Finally, let $n\rightarrow\infty$. Then $p_n\rightarrow p$, forcing $\xi\rightarrow$ p also.
Hence $$\lim_{n\to\infty}\frac{\mid e_{n+1}\mid}{\mid e_n\mid^\alpha}=\frac{\mid g^{(\alpha)}(p)\mid}{\alpha!}$$ \\or $p_n\rightarrow$ p of order $\alpha$. \section{Steffensen's Acceleration Method} \subsection{Aitken's $\Delta^2$ Method} \begin{theorem}[Aitken's $\Delta^2$ method] \item[] <1-> Suppose that \item $\left\{x_{n}\right\}$ is a sequence with $x_{n}\neq p$ for all n $\in \mathbb{N}$ \item there is a constant c $\in \mathbb{R}\setminus \left\{ \pm 1 \right\}$ and a sequence $\left\{ \delta_{n} \right\}$ such that \item $\lim_{n\rightarrow\infty}\delta_{n}=0$ \item $x_{n+1}-p=(c+\delta_{n})(x_{n}-p)$ for all n $\in \mathbb{N}$ \item[] <2->Then \item $\left\{x_{n}\right\}$ converges to p iff $\left|c\right|<1$ \item if $\left|c\right|<1$ , then \[ y_{n}=\frac{x_{n+2}x_{n}-x_{n+1}^2}{x_{n+2} - 2x_{n+1}+x_{n}}= x_n-\frac{(x_{n+1}-x_n)^2}{x_{n+2}-2x_{n+1}+x_n} \] \ is well-defined for all sufficiently large n. \item[]<3-> Moreover $\left\{y_{n}\right\}$ converges to p faster than $\left\{x_{n}\right\}$ in the sense that $$\lim_{n\rightarrow\infty} \frac{y_{n}-p}{x_{n}-p}=0$$ \begin{proof} \item[]<1-> Let $e_{n} = x_{n}-p$ \item[]<2-> $y_{n}-p=\frac{x_{n+2}x_{n}-x_{n+1}^2}{x_{n+2} - 2x_{n+1}+x_{n}}-p$ \item[]<3-> $y_{n}-p$=$\frac{(e_{n+2}+p)(e_{n}+p)-(e_{n+1}+p)^2}{(e_{n+2}+p)-2(e_{n+1}+p)+(e_{n}+p)}$ \item[]<4-> $y_{n}-p$=$\frac{e_{n+2}e_{n}-e_{n+1}^2}{e_{n+2}-2e_{n+1}+e_{n}}$ \item[]<5-> since we have \item[]<5-> $x_{n+1}-p= (c+\delta_{n})(x_{n}-p)$ \ and $e_{n+1}=(c+\delta_{n})e_{n}$ \item[]<6-> $y_{n}-p$=$\frac{(c+\delta_{n+1})(c+\delta_{n})e_{n}e_{n}-(c+\delta_{n})^2e_{n}^2}{(c+\delta_{n+1})(c+\delta_{n})e_{n}-2(c+\delta_{n})e_{n}+e_{n}}$ \item[]<7-> $y_{n}-p$=$\frac{(c+\delta_{n+1})(c+\delta_{n})-(c+\delta_{n})^2}{(c+\delta_{n+1})(c+\delta_{n})-2(c+\delta_{n})+1}$ $(x_{n}-p)$ \item[]<8-> $y_{n}-p$=$\frac{(c+\delta_{n})(\delta_{n+1}-\delta_{n})}{(c-1)^2+c(\delta_{n+1}+\delta_{n})+\delta_{n}(\delta_{n+1}-2)}$ $(x_{n}-p)$ \item[]<9-> Therefore $\lim_{n\rightarrow\infty} \frac{y_{n}-p}{x_{n}-p}=0$ \subsection{Steffensen's Acceleration Method} \begin{block}{Steffensen's Acceleration Method} \item[]<1->Steffensen's Method is a combination of fixed-point iteration and Aitken's $\Delta^2$ method: \item[]<2->Suppose we have a fixed point iteration: $$x_0,x_1=g(x_0),x_2=g(x_1),...$$ \item[]<3->Once we have $x_0, x_1,$ and $x_2,$ we can compute $$ y_0=x_0-\frac{(x_1-x_0)^2}{x_2-2x_1+x_0}$$ At this point we "restart" the fixed point iteration with $x_0=y_0$ \item[]<4->e.g.\color{brown} $$x_3=y_0, x_4=g(x_3), x_5=g(x_4),$$\color{black} and compute \color{brown} $$y_3=x_3-\frac{(x_4-x_3)^2}{x_5-2x_4+x_3}$$ \color{black} \section{Comparison of Fixed Point Iteration and Steffensen's Acceleration Method} \begin{frame}{Comparison of Fixed Point Iteration and Steffensen's Acceleration Method} \begin{block}{EXAMPLE} \item[]<1-> Use the fixed point iteration method to find a root of $f(x) = x^2-2x-3$ using $x_0=0$, tolerance $=10^{-1}$, and compare the approximations with those given by Steffensen's acceleration method with $x_0=0$, tolerance $= 10^{-2}$. \item<2-> From my MATLAB code we can see that while the fixed point iteration method reaches the root in 788 iterations, Steffensen's acceleration method reaches the root in only 3 iterations.
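% A minimal Python sketch (our own illustration; not the MATLAB code cited
% above) comparing plain fixed point iteration with Steffensen's acceleration.
% It assumes the iteration function g(x) = (x^2-3)/2 from the earlier slide,
% whose fixed points -1 and 3 are the roots of f(x) = x^2-2x-3.
\begin{verbatim}
def fixed_point(g, x0, tol, max_iter=100000):
    # x_{n+1} = g(x_n) until successive values agree to within tol
    x, n = x0, 0
    while abs(g(x) - x) > tol and n < max_iter:
        x, n = g(x), n + 1
    return x, n

def steffensen(g, x0, tol, max_iter=100):
    # Aitken's Delta^2 update applied to (x, g(x), g(g(x))), then restart
    x, n = x0, 0
    while n < max_iter:
        x1, x2 = g(x), g(g(x))
        denom = x2 - 2 * x1 + x
        if denom == 0 or abs(x1 - x) <= tol:
            break
        x, n = x - (x1 - x) ** 2 / denom, n + 1
    return x, n

g = lambda x: (x ** 2 - 3) / 2
print(fixed_point(g, 0.0, 1e-1))  # slow convergence toward the fixed point -1
print(steffensen(g, 0.0, 1e-2))   # accelerated: a handful of iterations
\end{verbatim}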
Study on ultra-high sensitivity piezoelectric effect of GaN micro/nano columns Jianbo Fu1, Hua Zong2, Xiaodong Hu2 & Haixia Zhang ORCID: orcid.org/0000-0002-0739-00941 High-quality GaN micro/nano columns were prepared with a self-organized catalytic-free method. The Young's modulus of the GaN nanocolumns was measured under both compressive and tensile stress. It was found that the Young's modulus decreases with increasing nanocolumn diameter due to the increase of face defect density. Furthermore, we measured the piezoelectric properties and found a 1000-fold current increase under a strain of 1% at a fixed bias voltage of 10 mV. To analyze the experimental results, we modified the Schottky barrier diode model to include the effects of polarization charge, image charge, and interface states; the analysis reveals that the strong piezopolarization effect plays an important role in this phenomenon. Therefore, GaN nanocolumns have great prospects for application in high-efficiency nanogenerators and high-sensitivity nanosensors. With the development of wearables and electronic skin in recent years, there is an urgent need to improve the performance of sensors and power supply devices while reducing their sizes. Thus, high-efficiency energy transduction methods, which can convert human body activities into electric signals, have been explored widely. Among these methods, the piezoelectric effect has been studied extensively as one of the major branches of nanogenerators and nanosensors for its excellent performance. For example, ZnO nanowires have been proven to be very effective in collecting mechanical energy at the micro/nano scale, and they are also well tolerated, highly efficient, and biocompatible [1]. Earlier research on the piezoelectric properties of ZnO nanowires was led by Wang's group [2]. It is believed that, due to the strong piezoelectric effect of ZnO, the contact potential between the nanowire and the silver gel changes when stress is applied; therefore, the I–V characteristics change accordingly. They verified the theoretical expectations by atomic force microscopy (AFM) experiments. These works gained widespread attention and made ZnO nanowires a hotspot for nanogenerators [3, 4]. Subsequently, similar experiments have been done on CdS [5], GaN [6] and InN [7]. These basic studies have laid the foundation for piezoelectric nanogenerators [8] and nanosensors [9]. Benefiting from its unique crystal structure, low-dimensional GaN material has strong spontaneous polarization and piezoelectric polarization effects. Compared with ZnO materials, the advantage of a higher piezoelectric coefficient [10] makes GaN more suitable for nanosensors and nanogenerators [11,12,13]. On the other hand, the preparation process of p-type GaN materials is more mature than that of ZnO [14,15,16,17], which gives GaN greater potential for optoelectronic integration in the future. However, due to the difficulty of preparing high-quality GaN nanocolumns, there are few works on the mechanical and piezoelectric properties of micro/nano columns. In this paper, high-quality GaN micro/nano columns were grown, and their mechanical and piezoelectric properties were studied. It is found that there is an ultra-high sensitivity piezoelectric effect in these GaN micro/nano columns, which gives them great application prospects in piezoelectric nanosensors and nanogenerators [18, 19].
Self-organized catalytic-free growth of GaN nanocolumns Among the various GaN nanocolumn growth methods, self-organized growth has the advantage of simplicity and convenience over the selective growth method, since there is no need to prepare complex substrates [20, 21]; this makes it a very important method for both research and industry. The typical morphologies of GaN nanocolumns grown by the selective growth method and the self-organized method can be seen in Fig. 1a, b, respectively. On the other hand, catalytic-free growth of GaN nanocolumns is desirable because the catalyst can contaminate the material [22,23,24,25]. Typical scanning electron microscope (SEM) images of GaN nanocolumns grown by a selective growth method and b self-organized method, c SEM image of the self-organized catalytic-free grown GaN nanocolumn, d AFM image of the top of the GaN nanocolumn, e transmission electron microscope (TEM) image and electron diffraction pattern, f high-resolution TEM atomic image We obtained high-quality GaN micro/nano columns by the self-organized catalytic-free growth method. As can be seen from Fig. 1c, the top of the nanocolumn presents a regular hexagonal shape with sharp edges. The AFM image in Fig. 1d shows monoatomic steps on the top surface, which indicates that the micro/nano column grew well. The TEM images in Fig. 1e, f show that the crystal orientation along the column is (0001) and that the crystal is of high quality. Mechanical properties of GaN micro/nano columns Young's modulus is a measure of the stiffness of an elastic body, defined as the ratio between stress and strain in the range where Hooke's law applies; in a crystal it is, in general, direction dependent, since the elastic stiffness is a tensor. The stress is expressed as the force F divided by the area S, and the strain as the length change ΔL divided by the length L. So the Young's modulus E is expressed as: $$ E = \left( {F\,\cdot\,L} \right)/\left( {S\cdot\Delta L} \right) $$ where F, S, ΔL, L represent the applied force, the cross-sectional area, the length variation under the applied force and the total length, respectively. In this paper, we studied only the elastic Young's modulus of GaN nanocolumns along the (0001) direction, to simplify the analysis. The mechanical tests in this paper were performed on Hysitron's PicoIndenter micro/nano mechanics test system, which has a three-plate capacitive sensor to measure the loading force and displacement accurately. The sample was fixed on the specimen stage, which converts the displacement signal into an electrical signal through the capacitive sensor and obtains the mechanical parameters accurately via a feedback system. We marked the first tested GaN nanocolumn as sample 1; it has a diameter of 1080 nm and a length of 8.0 μm. The system measured the feedback force as the quadruple indenter pressed against the nanocolumn. As the pressure gradually increased, the nanocolumn bent, as shown in Fig. 2a; as the pressure increased further, the nanocolumn broke. From Fig. 2b it can be seen that the middle section of the force curve is linear, which represents the elastic deformation of the nanocolumn. The initial nonlinear segment is due to the contact error of the indenter, which can be removed by linear fitting; the final nonlinear segment corresponds to cracking inside the nanocolumn. Figure 2c, d record the stress versus strain for another nanocolumn, marked as sample 2, which has a diameter of 823 nm and a length of 6.6 μm. As can be seen from Fig. 2d, the trend of the curve is basically the same as in Fig. 2b.
The cross-sectional area of a hexagonal column can be calculated as: $$ {\text{S}} = \frac{3\surd 3}{8}D^{2} $$ where D is the long diagonal of the hexagonal cross section, which we take as the diameter. In order to remove the contact error, we linearly fitted the elastic (linear) region of the force curves in Fig. 2b, d and used the slope in place of F/∆L to calculate the Young's modulus. The slopes in Fig. 2b, d are 23,800 N/m and 20,400 N/m, respectively. According to Eqs. (1) and (2): $$ {\text{E}} = \frac{8\surd 3}{9}\frac{F}{{D^{2} }}\frac{L}{\Delta L} = \frac{8\surd 3}{9}\frac{L}{{D^{2} }} \cdot Slope\;rate $$ Compressive stress test by the Hysitron sample stage. a, b are the test process and result for sample 1; c, d are the test process and result for sample 2 With the data above, the compression moduli calculated from Eq. (3) are about 250 GPa for sample 1 and 306 GPa for sample 2; the modulus of the thicker column is clearly below that of bulk GaN material (300 GPa). Based on these experimental results, we found that the larger the diameter of the nanocolumn, the smaller the compression modulus. This may be because a column of larger diameter contains more defects. However, these compression moduli carry large errors, because the load was no longer along the C-axis once the nanocolumns bent under force. Therefore, to obtain more accurate experimental results and to verify this inference, tensile stress tests were performed. In the tensile stress test, nanocolumns of appropriate length and diameter were selected with a Helios 600 FIB. The nanocolumns were attached, stress free, to a Push-to-Pull (PTP) sample stage, and their free ends were soldered to the sample stage with deposited Pt. The test sample morphology is shown in Fig. 3a–d. A special probe was used to push the left half of the sample stage, increasing the gap between the left and right parts of the stage. Since the nanocolumn was soldered in place, it was subjected to uniaxial tensile stress along the C-axis. The mechanical results are shown in Fig. 3e. It can be seen that the Young's modulus decreases as the diameter increases, in agreement with the compressive stress tests above. This result deviates from the prediction of Espinosa that the Young's modulus of GaN nanocolumns reaches the bulk value when the diameter exceeds 300 nm [26]. In our experiments, the Young's modulus has already dropped to half of the bulk value (300 GPa) when the diameter exceeds 500 nm. The deterioration of the mechanical properties of the nanocolumns might result from the increased defect density. Among the many kinds of defects, we think the surface defects contribute the most to the decrease of the Young's modulus, as can be clearly seen in Fig. 3f. a–d Tensile-test nanocolumn samples successfully prepared with the Helios 600 FIB. e The Young's moduli obtained for the four samples, and f the TEM image of sample 3, which shows the surface defects To our knowledge, it is very difficult to eliminate all defects in a GaN nanocolumn. To estimate the effect of defects, we make some approximations. First, we fix the GaN nanocolumn diameter at 1 μm, and we consider only face defects perpendicular to the column axis, to simplify the calculation. According to Ref.
[27], the following relationship holds: $$ \frac{1}{E} = \frac{{V_{crystal} }}{{E_{crystal} }} + \frac{{V_{defect} }}{{E_{defect} }} $$ where \( E_{crystal} \) and \( E_{defect} \) are the Young's moduli of the single-crystal part and the defect part, and \( V_{crystal} \) and \( V_{defect} \) are their volume fractions, respectively. Following Ref. [28], we roughly approximate \( E_{defect} = 10 \) GPa. It is found that the Young's modulus decreases from the bulk value of 300 GPa to 190 GPa and 139 GPa for surface-defect volume fractions of 2% and 4%, respectively. Therefore, based on these experimental and simulation results, we can conclude that the Young's modulus of GaN micro/nano columns is reduced by defects.
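A quick numerical check of Eqs. (1)–(4) (a Python sketch; the slopes, sample dimensions and defect parameters are the values quoted above, so the printed numbers should reproduce the ~250/306 GPa moduli and the 190/139 GPa defect-model estimates):

```python
import math

SQRT3 = math.sqrt(3.0)

def youngs_modulus_from_slope(slope_N_per_m, length_m, diameter_m):
    """Eq. (3): E = (8*sqrt(3)/9) * (L / D^2) * slope, for a hexagonal
    cross section S = (3*sqrt(3)/8) * D^2 with D the long diagonal."""
    return (8.0 * SQRT3 / 9.0) * length_m / diameter_m**2 * slope_N_per_m

def effective_modulus(E_crystal, E_defect, v_defect):
    """Eq. (4): series mixing of crystal and defect layers,
    1/E = V_crystal/E_crystal + V_defect/E_defect."""
    v_crystal = 1.0 - v_defect
    return 1.0 / (v_crystal / E_crystal + v_defect / E_defect)

# Sample 1: slope 23,800 N/m, L = 8.0 um, D = 1080 nm  -> ~251 GPa
print(youngs_modulus_from_slope(23800, 8.0e-6, 1080e-9) / 1e9)
# Sample 2: slope 20,400 N/m, L = 6.6 um, D = 823 nm   -> ~306 GPa
print(youngs_modulus_from_slope(20400, 6.6e-6, 823e-9) / 1e9)

# Defect model with E_crystal = 300 GPa, E_defect = 10 GPa:
print(effective_modulus(300.0, 10.0, 0.02))  # ~190 GPa at 2% defect fraction
print(effective_modulus(300.0, 10.0, 0.04))  # ~139 GPa at 4% defect fraction
```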
Piezoelectric properties of GaN micro/nano columns Unlike the mechanical properties, the electrical properties of nanocolumns are sensitive to many experimental conditions. We therefore had to control the soldering current, chamber vacuum, electron beam current, and voltage scanning step size and rate very carefully to obtain valid electrical results; the details are not described here. We now analyze the electrical model based on the electrical data. Generally, a Schottky barrier forms at the contact between Pt and GaN due to the difference in work function [29]; such a contact is usually called a Schottky barrier diode (SBD). As is well known, a forward bias turns an SBD on, and a reverse bias turns it off. In this experiment, Schottky barriers were formed at both ends of the nanocolumn; the schematic energy band diagram is shown in Fig. 4c. Therefore, whatever the current direction, the electrons pass through one forward-biased SBD and one reverse-biased SBD, so the electrical properties are dominated by the reverse-biased SBD, which takes most of the voltage drop. The basic model of a single SBD is described by Eqs. (5) and (6) [30]. a Sample 7 and b sample 8, prepared for the piezoelectric test. c Schematic of the energy band diagram of the two electrodes of a GaN nanocolumn, d repeated electrical curves for sample 8 $$ I = I_{s} \left[ {\exp\left( {\frac{qV}{nkT}} \right) - 1} \right] $$ $$ I_{s} = SA^{*} T^{2} \;\exp\left( { - \frac{{q\phi_{s} }}{kT}} \right) $$ where S is the electrode contact area, A* is the effective Richardson constant, T is the temperature, k is the Boltzmann constant, q is the elementary charge, n is the ideality factor, and $\phi_{s}$ is the Schottky barrier height. When a reverse voltage V is applied, the exponential term tends to zero and is negligible, so the current I is approximately $-I_{s}$. Therefore, for the model shown in Fig. 4c, a flat curve at $-I_{s}$ should be obtained under both forward and reverse bias. However, in many repeated electrical measurements, as shown in Fig. 4d, the electrical curve is a double-J-shaped curve rather than a double cut-off shape. This indicates that a simple Schottky barrier model is not suitable for these GaN nanocolumn samples, and corrections to the original model are needed. The double-J-shaped curve in Fig. 5a is taken as an example to discuss the modification of the original model. Because of the complex interface conditions between the deposited Pt electrode and the GaN nanocolumn, the two Schottky barriers have different heights, as sketched in Fig. 4c; therefore, the double-J-shaped curve is asymmetric [31]. Since the GaN nanocolumn has a strong spontaneous polarization effect, the image-force lowering effect should be considered first in modifying the model. At the metal–semiconductor contact interface, some charges accumulate on the semiconductor side because of polarization or the Fermi-level difference. As a result, opposite charges are induced on the metal side by the Coulomb force of the accumulated charges on the semiconductor side. These induced charges lower the Schottky barrier at the interface, which is known as the image-force lowering effect. The effect of the image force on the height of the Schottky barrier is shown schematically in Fig. 5c. Based on this analysis, we derive a new mathematical model, corrected for the image force [32]. a Electrical characteristics of sample 7; b, d fits of the positive and negative parts of the electrical curve of sample 7 with Eq. (10); c energy band diagram considering the image force $$ I_{s} = SA^{*} T^{2} \;\exp\left( { - \frac{{q\phi_{s} }}{kT}} \right)\;\exp\left( {\frac{{\sqrt[4]{{q^{7} N_{D} \left( { - V + \varphi_{bi} - kT/q} \right)/\left( {8\pi^{2} \xi_{s}^{3} } \right)}}}}{kT}} \right) $$ $$ \ln I_{s} = \ln(SA^{*} T^{2} ) - \frac{{q\phi_{s} }}{kT} + \frac{{\sqrt[4]{{q^{7} N_{D} \left( { - V + \varphi_{bi} - kT/q} \right)/\left( {8\pi^{2} \xi_{s}^{3} } \right)}}}}{kT} $$ $$ = \ln(SA^{*} T^{2} ) - \frac{{q\phi_{s} }}{kT} + \sqrt[4]{{\frac{{q^{7} N_{D} }}{{8\pi^{2} \xi_{s}^{3} k^{4} T^{4} }}}}\sqrt[4]{{\left( { - V + \varphi_{bi} - kT/q} \right)}} $$ $$ \ln I_{s} = A\sqrt[4]{{\left( {x + B} \right)}} + C $$ where \( {\text{A}} = \sqrt[4]{{\frac{{q^{7} N_{D} }}{{8\pi^{2} \xi_{s}^{3} k^{4} T^{4} }}}} \), \( {\text{B}} = \varphi_{bi} - kT/q \), \( {\text{C}} = \ln(SA^{*} T^{2} ) - \frac{{q\phi_{s} }}{kT} \), and \( x = - V \). Here \( N_{D} \) is the donor doping concentration, \( \varphi_{bi} \) is the built-in potential of the junction area, and \( \xi_{s} \) is the dielectric constant of GaN. Relative to the original model, an exponential term is added to describe the effect of the image force; combining Eqs. (7) and (9) gives the abstract form of Eq. (10). Qualitatively, since the image-force term depends on the applied voltage V, it explains the appearance of the double-J-shaped curve. To verify this new model, we fitted the positive and negative parts of the electrical curve in Fig. 5a with the functional form of Eq. (10). The fitting results are shown in Fig. 5b, d: the new model fits the electrical characteristics of sample 7 quite well.
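A sketch of the fitting step (Python/SciPy; the synthetic data stand in for the measured reverse-branch currents, since the actual data of Fig. 5 are not reproduced here, and the parameter values 2.0, 0.5 and -20.0 are made up):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_Is(x, A, B, C):
    """Eq. (10): ln I_s = A * (x + B)^(1/4) + C, with x = -V.
    The clip only guards the optimizer against negative radicands."""
    return A * np.power(np.clip(x + B, 1e-12, None), 0.25) + C

rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 50)               # x = -V, reverse bias in volts
lnI = log_Is(x, 2.0, 0.5, -20.0) + 0.01 * rng.standard_normal(x.size)

popt, _ = curve_fit(log_Is, x, lnI, p0=[1.0, 1.0, -10.0])
print(popt)   # recovered (A, B, C); B estimates phi_bi - kT/q per Eq. (10)
```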
Next, we discuss the electrical properties under stress. When the nanocolumn is subjected to tensile stress, the piezoelectric effect of GaN causes charges to appear spontaneously at the two ends of the nanocolumn. These charges affect the Schottky barriers and, in turn, the electrical properties. Using sample 7, the effect of stress on the electrical properties was studied: during the test, we recorded the current under a constant bias voltage of 10 mV while the nanocolumn was uniaxially stretched. The experimental result is shown in Fig. 6a; the same experiment on sample 8 gave a similar result. It can be seen from Fig. 6a that the current increases from the initial 10 nA to a peak of 10 μA. The current changed by nearly a factor of 1000 at a strain of only 1%, which indicates that GaN nanocolumns have the potential to serve as ultra-high-sensitivity sensors. a Current vs. strain under a constant bias of 10 mV on sample 7, b schematic energy band model including the interface states To analyze the electrical properties under stress, the important consequences of the stress must be considered. We believe four factors are important here: polarization charges, image charges, energy-band uplift, and interface states. We performed simulations and calculations to explain the experimental results. We analyze the effect of the polarization charges first. If the intrinsic doping density of the GaN nanocolumn is assumed to be $1 \times 10^{17}\ \mathrm{cm^{-3}}$, the surface polarization charge density at the two ends of the nanocolumn is about $4.56 \times 10^{16}\ e/\mathrm{m^{2}}$ at a strain of 1%, according to the piezoelectric equations. We simulated the energy band with the commercial package Crosslight, using the calculated polarization charge density as an input parameter. The Schottky barrier height is found to be reduced by about 30 meV, which increases the current by a factor of about 3.5. We then consider the effect of image charges. Careful analysis shows that the effect of the image charges induced by the polarization charges is significant: when the strain reaches 1%, the image charges lower the Schottky barrier by about 70 meV, which increases the current by a factor of about 12. Next, we discuss the energy-band uplift and the interface states [33] together. According to our calculations, the energy band is uplifted by about 30 meV at a strain of 1%; this effect alone increases the current by a factor of about 3.5. As is well known, interface defects form interface states, usually distributed between the valence band and the conduction band. Under tensile stress, these interface states may be uplifted into shallow interface states, which are easily ionized owing to the strong piezoelectric polarization effect. The ionized shallow interface states can act as a springboard for electrons tunneling through the Schottky barrier [34,35,36]; Fig. 6b illustrates this stepped tunneling effect schematically. As the stress increases, more and more interface states are ionized because the band-uplift and polarization effects strengthen. This stepped tunneling is equivalent to a lowering of the Schottky barrier, and as a consequence the current increases dramatically with increasing stress. Assuming an interface-state density of $1 \times 10^{14}/(\mathrm{eV\,cm^{2}})$, the current increase produced by stepped tunneling at a strain of 1% along the C-axis is equivalent to a Schottky barrier lowering of about 45 meV, which increases the current by a factor of about 6.5. Based on the discussion above, the current can increase by more than a factor of 1000 when the strain reaches 1% under these four effects. Moreover, the piezoelectric effect plays the key role in our experiment, because all four effects are caused by it.
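The quoted current-increase factors can be sanity-checked by treating each barrier lowering as multiplying the thermionic current by $\exp(q\,\Delta\phi/kT)$ (a rough sketch at room temperature; the factors in the text (3.5, 12, 3.5 and 6.5) differ slightly from these, presumably owing to temperature or ideality-factor details not spelled out in the paper):

```python
import math

K_B_EV = 8.617e-5      # Boltzmann constant in eV/K
T = 300.0              # assumed room temperature, K
kT = K_B_EV * T        # ~0.02585 eV

def current_factor(delta_phi_eV):
    """Multiplicative current increase from a Schottky barrier
    lowering of delta_phi_eV, I ~ exp(delta_phi / kT)."""
    return math.exp(delta_phi_eV / kT)

lowerings_meV = {
    "polarization charges": 30.0,
    "image charges": 70.0,
    "band uplift": 30.0,
    "stepped tunneling (interface states)": 45.0,
}

total = 1.0
for name, meV in lowerings_meV.items():
    f = current_factor(meV / 1000.0)
    total *= f
    print(f"{name}: ~{f:.1f}x")
print(f"combined: ~{total:.0f}x")   # ~10^3, consistent with the 1000-fold increase
```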
Conclusions In-situ compression modulus tests were performed along the C-axis of GaN nanocolumns in the SEM using Hysitron's mechanical sample stage, and uniaxial tensile modulus measurements were made with Hysitron TI950 MEMS devices. It was found that the Young's modulus decreases with increasing nanocolumn diameter, which deviates from the prediction of Espinosa's group. This may be due to the increased density of surface defects, as supported by the simulation results. According to the measured electrical data, a 1000-fold current increase at 1% strain was observed. A model based on the Schottky barrier, further accounting for polarization charges, image charges and interface states, was introduced to explain the experimental data. This research shows that the strong piezoelectric effect plays an important role in the current-increase phenomenon. One may picture a GaN nanogenerator device in which the strong piezoelectric effect plays the role of an engine while the Schottky barrier at the contact interface plays the role of an amplifier that strongly enhances the current in the circuit. Such a device has prospective applications in high-efficiency energy harvesting and high-sensitivity detection. MESFET: metal–semiconductor field-effect transistor; HFET: heterojunction field-effect transistor; UV: ultraviolet; AFM: atomic force microscopy; SEM: scanning electron microscopy; TEM: transmission electron microscopy; SBD: Schottky barrier diode M.Y. Choi, D. Choi, M.J. Jin, I. Kim, S.H. Kim, J.Y. Choi, S.Y. Lee, J.M. Kim, S.W. Kim, Adv. Mater. 21, 2185–2189 (2009) Z.L. Wang, J. Song, Science 312(5771), 242–246 (2006) Y. Gao, Z.L. Wang, Nano Lett. 9(3), 1103–1110 (2009) G. Zhu, R. Yang, S.H. Wang, Z.L. Wang, Nano Lett. 10(8), 3151–3155 (2010) Y.F. Lin, J. Song, Y. Ding, Appl. Phys. Lett. 92(2), 022105 (2008) C.H. Hsieh, Y.J. Cheng, P.J. Li, J. Am. Chem. Soc. 132(13), 4887–4893 (2010) C.T. Huang, J. Song, C.M. Tsai, Adv. Mater. 22(36), 4008–4013 (2010) X.D. Wang, J.H. Song, J. Liu, Z.L. Wang, Science 316, 102–105 (2007) X.D. Wang, C.J. Summers, Z.L. Wang, Nano Lett. 4(3), 423–426 (2004) R. Agrawal, H.D. Espinosa, Nano Lett. 11, 786–790 (2011) B. Monemar, Phys. Rev. B 10(2), 676 (1974) S.J. Pearton, J.C. Zolper, R.J. Shul, F. Ren, J. Appl. Phys. 86(1), 1–78 (1999) I. Akasaki, J. Cryst. Growth 237, 905–911 (2002) J.C. Fan, K.M. Sreekanth, Z. Xie, S.L. Chang, K.V. Rao, Prog. Mater. Sci. 58, 874–985 (2013) D.C. Look, J.W. Hemsky, Phys. Rev. Lett. 82, 2552 (1999) H. Amano, M. Kito, K. Hiramatsu, I. Akasaki, Jpn. J. Appl. Phys. 28, L2112 (1989) M.A. Khan, J.N. Kuznia, D.T. Olson, M. Blasingame, A.R. Bhattarai, Appl. Phys. Lett. 63, 2455 (1993) M.M. Jolandan, R.A. Bernal, I. Kuljanishvili, V. Parpoil, H.D. Espinosa, Nano Lett. 12, 970–976 (2012) C.T. Huang, J. Song, W.F. Lee, Y. Ding, Z.Y. Gao, Y. Hao, L.J. Chen, Z.L. Wang, J. Am. Chem. Soc. 132, 4766–4771 (2010) S.D. Hersee, X. Sun, X. Wang, Nano Lett. 6(8), 1808–1811 (2006) A.B. Encabo, F. Barbagini, S.F. Garrido, J. Grandal, J. Ristic, M.A.S. Garcia, A. Trampert, J. Cryst. Growth 325(1), 89–92 (2011) R. Buckmaster, T. Goto, T. Hanada, K. Fujii, T. Kato, T. Yao, Phys. Stat. Sol. 4(7), 2314–2317 (2007) Q. Li, G.T. Wang, Appl. Phys. Lett. 93(4), 043119 (2008) G. Seryogin, I. Shalish, W. Moberlychan, V. Narayanamurti, Nanotechnology 16(10), 2342 (2005) F. Shi, H. Li, C. Xue, J. Mater. Sci.: Mater. Electron. 21(12), 1249–1254 (2010) R.A. Bernal, R. Agrawal, B. Peng, K.A. Bertness, N.A. Sanford, A.V. Davydov, H.D. Espinosa, Nano Lett. 11(2), 548–555 (2010) L. Zheng, T.D. Xu, Mater. Sci. Technol. 20(5), 605–609 (2004) S.F. Xie, S.D. Chen, A.K. Soh, Chin. Phys. Lett. 28, 066201 (2011) J. Tersoff, Phys. Rev. Lett. 52, 1054 (1984) S.K. Cheung, N.W. Cheung, Appl. Phys. Lett. 49, 85 (1986) J.B. Fu, M.X. Hua, S.L. Ding, X.G. Chen, R. Wu, S.Q. Liu, J.Z. Han, C.S. Wang, H.L. Du, Y.C. Yang, J.B. Yang, Sci. Rep. 6, 35630 (2016) V.W.L. Chin, S.M. Newbury, Aust. J. Phys. 45, 781–787 (1992) A. Armstrong, Q. Li, Y. Lin, A.A. Talin, G.T. Wang, Appl. Phys. Lett. 96(16), 163106 (2010) P.R. Emtage, W. Tantraporn, Phys.
Rev. Lett. 8, 267–268 (1962) A.M. Ozbek, B.J. Baliga, Solid State Electron. 62, 1–4 (2011) A. Chatterjee, S.K. Khamari, V.K. Dixit, S.M. Oak, T.K. Sharma, J. Appl. Phys. 118, 175703 (2015) The authors would like to thank Electron Microscopy Laboratory of Peking University for the SEM and TEM experiment. And the authors thank Xi'an Jiaotong University School of Materials Science and Engineering for the micro/nano test. This work was supported by the Key National Research and Development Program (Grant No. 2017YFB0405000), Science Challenge Project, No. JCKY2016212A503, the National Natural Science Foundation of China (Grant No. 61874004), and Beijing Municipal Science and Technology Project under No. Z161100002116037, the National Key R&D Project from Minister of Science and Technology, China (2016YFA0202701, 2016YFA0202704) and the National Natural Science Foundation of China (Grant Nos. 61674004, 61176103 and 91323304). National Key Lab of Nano/Micro Fabrication Technology, Institute of Microelectronics, Peking University, Beijing, 100871, China Jianbo Fu & Haixia Zhang State Key Laboratory of Artificial Microstructure and Mesoscopic Physics, School of Physics, Peking University, Beijing, 100871, People's Republic of China Hua Zong & Xiaodong Hu Jianbo Fu Hua Zong Xiaodong Hu Haixia Zhang JBF and HZ conceived and designed the experiment. JBF and HZ contributed to the work equally and should be regarded as co-first authors. HXZ and XDH directed and supervised the experiment. JBF wrote the manuscript. All authors have contributed to the writing of the manuscript. All authors revised the final manuscript. All authors read and approved the final manuscript. Correspondence to Haixia Zhang. Fu, J., Zong, H., Hu, X. et al. Study on ultra-high sensitivity piezoelectric effect of GaN micro/nano columns. Nano Convergence 6, 33 (2019). https://doi.org/10.1186/s40580-019-0203-4 Received: 27 June 2019 GaN micro/nano columns Young's modulus Self-organized catalytic-free growth
9.3.2 The Galluzzo bowing machine To understand the experimental results shown in sections 9.3 and 9.6, it is useful to know a bit about the design and capabilities of the bowing machine used in the studies. The bow motion is provided by a linear motor, which propels a trolley carrying a system that simulates the player's wrist. A clamp that can hold either a conventional bow or a rosin-coated perspex rod is flexibly mounted, and a vibration shaker provides an actuation force for controlling the bow force. The system can be seen in Fig. 1, in this case with the perspex rod in place. Strain gauge sensors monitor the force applied to the bow clamp. Through a combination of open-loop control, closed-loop feedback compensation and careful hardware design, the bowing machine can change bow acceleration with a response time of around 10 ms while maintaining constant bow force with an accuracy of $\pm3 \%$. Full details of the mechanical design, control strategy and calibration procedure for this rig can be found in [1]. Figure 1. The bowing machine, showing the perspex rod "bow" extending into the distance, held by the aluminium clamping block on a supporting structure which is flexibly mounted using the steel rule visible in the foreground. The cylindrical object with the two wires attached is a vibration shaker, providing the bow force. In all the experiments to be described here, this machine was used to bow the D string of a cello. The cello is held in a supporting frame, which approximately mimics how a player would hold the instrument. The cello sits on its endpin as usual, and adjustments to the projecting length of that endpin are used to set the bowing point $\beta$. The body of the cello is supported by padded clamps providing the player's "knees" and "left hand". The string to be bowed is aligned to be vertical, perpendicular to the line of the bow, which is pressed against it in a horizontal plane. The arrangement can be seen in Figs. 2 and 3. The bridge of the cello is equipped with a piezoelectric bridge-force sensor, as described in section 9.1.1. Figure 2. The bowing machine with the cello in place. Figure 3. General view of the cello in its supporting frame, with the bowing machine behind. The machine is designed for bow strokes in which the bow is in contact with the string throughout: it cannot do "bouncing bow" strokes. But within that family of bow gestures, the machine can equal or exceed the capabilities of a human. For the experiments to be described here, two types of bowing gesture were used. For the Schelleng diagram tests, the bow speed was set to the constant value 0.05 m/s, and a carefully-tailored initial bowing gesture was used to establish Helmholtz motion prior to each measurement. The force and speed were then adjusted smoothly to the desired values, and these were sustained for two seconds. Bridge force was measured for 0.1 s at the end of that time, and used to classify the string's motion. Changes in the value of $\beta$ were made by hand, so each column of the Schelleng diagram was measured as a separate run. The Guettler diagram tests, to be described in section 9.6, were more straightforward. The required gesture in each case involved a constant bow force, and a bow speed starting from rest and then growing with a constant acceleration. The bridge force for the first 0.25 s of the transient response was recorded in each case.
The values of force and acceleration were varied to scan the Guettler diagram, and the entire $20 \times 20$ grid of points was run in a single experiment, controlled by the computer. [1] Paul M. Galluzzo and Jim Woodhouse; "High-performance bowing machine tests of bowed-string transients", Acta Acustica united with Acustica 100, 139–153 (2014)
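The Guettler scan protocol lends itself to a short sketch (Python; the force and acceleration ranges and the sample rate are invented placeholders, since the text specifies only the $20 \times 20$ grid, the 0.25 s record length and the constant-acceleration speed profile):

```python
import numpy as np

def guettler_gestures(f_min=0.1, f_max=2.0, a_min=0.1, a_max=5.0,
                      n=20, duration=0.25, sample_rate=10_000):
    """Build a 20 x 20 Guettler-scan schedule: each gesture holds a
    constant bow force while bow speed rises from rest at constant
    acceleration, v(t) = a * t, recorded for `duration` seconds.
    The force/acceleration ranges here are illustrative only."""
    t = np.arange(0.0, duration, 1.0 / sample_rate)
    forces = np.linspace(f_min, f_max, n)          # N
    accels = np.linspace(a_min, a_max, n)          # m/s^2
    for F in forces:
        for a in accels:
            yield F, a, a * t                      # bow-speed profile

for F, a, v in guettler_gestures():
    pass  # send (F, v) to the rig controller; record the bridge force
```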
Markov chains and state transition diagrams

A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. The chain has a discrete set of states $q_1, q_2, \ldots, q_n$, and the transitions between states are nondeterministic: there is a probability $P(S_t = q_j \mid S_{t-1} = q_i)$ of transiting from state $q_i$ to state $q_j$. The quantities $p_{ij} = P\{X_{t+1}=j \mid X_t=i\}$ are called the (one-step) transition probabilities; if $P\{X_{t+1}=j \mid X_t=i\} = P\{X_1=j \mid X_0=i\}$ for all $t = 1, 2, \ldots$, the transition probabilities are stationary. A countably infinite sequence in which the chain moves state at discrete time steps gives a discrete-time Markov chain (DTMC); a process in which transitions can occur at any time is a continuous-time Markov chain. The order of a Markov chain is how far back in the history the transition probability distribution is allowed to depend; for a first-order Markov chain, the probability distribution of the next state can depend only on the current state.

The transition probabilities are collected in the transition matrix. If the Markov chain has $N$ possible states, the matrix is an $N \times N$ matrix such that entry $(i, j)$ is the probability of transitioning from state $i$ to state $j$, so each row sums to one. The same information is shown by a state transition diagram: a directed graph whose nodes are the states, with an edge from $i$ to $j$ whenever $p_{ij} > 0$, each edge simply marked with its transition probability. From a state transition diagram the transition probability matrix can be formed, and conversely. The number of cells grows quadratically as states are added: with two states A and B there are 4 possible transitions (not 2, because a state can transition back into itself), and with four states the transition matrix would be $4 \times 4$. Powers of the matrix give multi-step probabilities: if a Markov chain has $r$ states, then $p^{(2)}_{ij} = \sum_{k=1}^{r} p_{ik} p_{kj}$, and in general the $ij$th entry $p^{(n)}_{ij}$ of the matrix $P^n$ gives the probability that the Markov chain, starting in state $s_i$, is in state $s_j$ after $n$ steps.

Example. Consider a Markov chain with three possible states $1$, $2$ and $3$, with $p_{12}=\frac{1}{2}$ and $p_{23}=\frac{2}{3}$, and suppose $P(X_0=1)=\frac{1}{3}$. By definition, $P(X_4=3 \mid X_3=2) = p_{23} = \frac{2}{3}$. Using the Markov property, $$P(X_0=1, X_1=2) = P(X_0=1)\,P(X_1=2 \mid X_0=1) = \frac{1}{3}\cdot\frac{1}{2} = \frac{1}{6},$$ and $$P(X_0=1, X_1=2, X_2=3) = P(X_0=1)\,P(X_1=2 \mid X_0=1)\,P(X_2=3 \mid X_1=2) = \frac{1}{3}\cdot p_{12}\cdot p_{23} = \frac{1}{3}\cdot\frac{1}{2}\cdot\frac{2}{3} = \frac{1}{9}.$$

Classification of states. A communicating class is a set of states that are all reachable from each other; a Markov chain (or its transition matrix) is called irreducible if its state space forms a single communicating class. A class is closed if the chain cannot leave it, and a state $i$ is absorbing if $\{i\}$ is a closed class. For instance, in the chain with transition matrix $$P = \begin{pmatrix} 0.5 & 0.2 & 0.3\\ 0.0 & 0.1 & 0.9\\ 0.0 & 0.0 & 1.0 \end{pmatrix}$$ the third state is absorbing. As another classification exercise, take $$P = \begin{pmatrix} 0 & 0 & 0 & 0.8 & 0.2\\ 0 & 0 & 0.5 & 0.4 & 0.1\\ 0 & 0 & 0.3 & 0.7 & 0\\ 0.5 & 0.5 & 0 & 0 & 0\\ 0.4 & 0.6 & 0 & 0 & 0 \end{pmatrix}$$ and ask which states are accessible from state 0. One can also try to find an example of a transition matrix with no closed communicating classes.

Continuous time. For a continuous-time Markov chain $X = (X_t)_{t \geq 0}$ on a state space such as $S = \{A, B, C\}$ with given transition rates, the dynamics are described by the Q-matrix: for $i \neq j$, the elements $q_{ij}$ are non-negative and describe the rate of transitions from state $i$ to state $j$. A typical exercise, given the transition-rate diagram: (a) write down the Q-matrix for $X$; (b) find the equilibrium distribution of $X$; (c) using resolvents, find $P_C(X(t) = A)$ for $t > 0$.

Steady state and applications. The state of the system at equilibrium or steady state can be used to obtain performance parameters such as throughput, delay and loss probability; this is the basis of queueing models such as the M/M/1 queue, for which one can also write a time-dependent probability mass function giving the probability that the queue is in a particular state at a given time. Markov chains are widely employed in economics, game theory, communication theory, genetics, finance, statistical mechanics and speech recognition. One use of Markov chains is to include real-world phenomena in computer simulations; for example, we might want to check how frequently a new dam will overflow, which depends on the number of rainy days in a row. The algorithm Google uses to determine the order of search results, called PageRank, is a type of Markov chain. Birth–death chains on the finite space $0, 1, \ldots, N$, where each state represents a population size, are another standard example: consider a population that cannot comprise more than $N = 100$ individuals, define the birth and death rates, and set the initial state to $x_0 = 25$ (that is, 25 individuals in the population initially). Finally, it is best to think about Hidden Markov Models (HMMs) as processes with two 'levels': there is a Markov chain (the first level), and each state generates random 'emissions'.

Weather example. Suppose we model the weather with two states, "S" (sunny) and "R" (rainy). One way to simulate this weather would be to just say "Half of the days are rainy. Therefore, every day in our simulation will have a fifty percent chance of rain." But real weather data have a "stickyness" that such a simulation lacks: the simulated sequence jumps around, while the real one lingers in each state. We can mimic this stickyness with a two-state Markov chain in which, say, the "R" state has a 0.9 probability of staying put and a 0.1 chance of transitioning to the "S" state.
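A short sketch tying the pieces above together (Python; the 0.9/0.1 "sticky" weather probabilities are the illustrative values from the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state weather chain: index 0 = "S" (sunny), 1 = "R" (rainy).
# Rows sum to 1; entry (i, j) = P(next = j | current = i).
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])

def simulate(P, state, steps):
    """Sample a path of the chain from the given start state."""
    path = [state]
    for _ in range(steps):
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path

print("".join("SR"[s] for s in simulate(P, 0, 30)))  # sticky runs of S and R

# n-step transition probabilities: the (i, j) entry of P^n.
print(np.linalg.matrix_power(P, 2))    # p^(2)_ij = sum_k p_ik p_kj

# Steady state: left eigenvector of P for eigenvalue 1, normalized.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
print(pi / pi.sum())                   # [0.5, 0.5] for this symmetric chain
```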
How could you tell if someone was messing around with your gravity? I'm working on a story involving first contact with an alien species that bases their space travel on directly manipulating gravity fields. My question is not about how that would work, but rather how would you be able to DETECT it working? More specifically: my intrepid human explorers are operating at a technology level sufficiently advanced to allow interstellar travel, but not advanced enough to involve Faster-Than-Light technology of any kind. They are newly arrived in a previously unexplored system, and during the encounter with the previously mentioned aliens, the aliens start moving the humans' ship. So: if you're in interplanetary space (e.g. not close to a planet), and something creates an artificial gravity well which alters the orbital trajectory of your vehicle, how would you know what had happened? Obviously if you're paying close attention to your relative position with the planets and the star itself you'd notice that SOMETHING had altered your vector, but what other instrumentation would notice? The ideal answer would involve something that generates a "Well of COURSE any reasonably well-equipped scientific spacecraft would have one of those." reaction from the reader, rather than a "Wow, they're lucky they had one of those that they probably never thought they'd need or use." EDIT: You should be imagining the Endurance from the movie Interstellar, except mine isn't specifically exploring a black hole, so my ship would be even LESS likely to have specialized instrumentation to detect gravitational anomalies. space-travel hard-science gravity astrophysics Morris The Cat This question asks for hard science. All answers to this question should be backed up by equations, empirical evidence, scientific papers, other citations, etc. Answers that do not satisfy this requirement might be removed. See the tag description for more information. Is this actually a "scientific spacecraft"? – RonJohn Sep 1 '18 at 1:10 @RonJohn 100%. We're not talking Star Trek "heavily armed cruiser with a science lab" here. Imagine something much closer to the spaceship from Interstellar. – Morris The Cat Sep 1 '18 at 1:54 I didn't see that movie. – RonJohn Sep 1 '18 at 1:59 @RonJohn Buckaroo Banzai's RV? – Morris The Cat Sep 1 '18 at 2:10 Saw that, but a long while ago and don't remember much except that it was... different. (Good, but different.) – RonJohn Sep 1 '18 at 2:22 Measuring gravity to high precision is (relatively) easy, and doesn't need (much) high-tech equipment. An interstellar space ship -- even a warship -- will have enough equipment on board that this experiment could be performed. https://www.nature.com/articles/s41586-018-0431-5 The Newtonian gravitational constant, G, is one of the most fundamental constants of nature, but we still do not have an accurate value for it. Despite two centuries of experimental effort, the value of G remains the least precisely known of the fundamental constants. A discrepancy of up to 0.05 per cent in recent determinations of G suggests that there may be undiscovered systematic errors in the various existing methods. One way to resolve this issue is to measure G using a number of methods that are unlikely to involve the same systematic effects.
Here we report two independent determinations of G using torsion pendulum experiments with the time-of-swing method and the angular-acceleration-feedback method. We obtain G values of 6.674184 × 10−11 and 6.674484 × 10−11 cubic metres per kilogram per second squared, with relative standard uncertainties of 11.64 and 11.61 parts per million, respectively. These values have the smallest uncertainties reported until now, and both agree with the latest recommended value within two standard deviations. If you think that They are fiddling with gravity, start taking measurements on a regular basis, and especially during a "gravitational anomaly event". Noticing any changes in G should tell you whether They -- or Something -- are actually fiddling with gravity, or whether you need to look somewhere else. – RonJohn Would anybody be likely to have an instrument like this operating if they didn't ALREADY think someone might be messing with gravity, though? I'm specifically thinking of the scene that leads to the "Hey... they might be messing with gravity!" inspiration. – Morris The Cat Sep 1 '18 at 1:30 @MorrisTheCat funny thing, I was just thinking of a response to that. I think that you can use very similar apparatus to measure weight (it may even be the same thing, idr). So I would assume they would have a torsion scale and just be measuring hyper-fine differences in weight between whatever they were normally sciencing. Seeing it shift all of a sudden with no loads would definitely raise suspicion. Which you could then more accurately test, I presume. – Black Sep 1 '18 at 1:38 Also (once again for today XD), any local gravity worth noticing the effect of is going to have a higher gravitational gradient the smaller the source is. In other words, a scale is going to pick up the variation just fine, I would assume (unless the whole point of a torsion scale is to negate that effect?.... research time). – Black Sep 1 '18 at 1:41 Can't remember the name of the other apparatus... and a torsion balance/pendulum is very specific (although why not add it to your kit? Aids in determining G of foreign planets, which is relevant to just about all branches of science... even Chemistry). That said, I found this; specifically the small size leading to an array of sensors for redundancy, which would (as part of noise filtering) basically take a picture of the local gravity's shape; and the last paragraph, which gives you a reason to have them. – Black Sep 1 '18 at 1:53 @MorrisTheCat maybe they'll have a G-measuring device as part of a suite of devices for testing whether or not the laws of physics that we've developed on Earth really are as universal as we think they are. Or they cobbled together the experiment from bits and bobs (small vacuum chamber, torsion bars, etc) in the ship's various equipment closets and laboratories. Remember: they're a long way from home and so must bring all sorts of stuff with them to meet many unusual contingencies. – RonJohn Sep 1 '18 at 1:58
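A sketch of the monitoring routine this answer suggests (Python; the baseline, uncertainty and readings are invented for illustration, and the 5-sigma rule is an arbitrary editorial choice rather than anything from the answer):

```python
import statistics

G_BASELINE = 6.674e-11        # m^3 kg^-1 s^-2, adopted onboard reference
REL_UNCERTAINTY = 12e-6       # ~12 ppm, as in the experiments quoted above

def check_G(measurements):
    """Flag a gravitational anomaly if the mean of repeated G
    measurements drifts from the baseline by more than 5 sigma."""
    mean = statistics.fmean(measurements)
    sigma = G_BASELINE * REL_UNCERTAINTY / len(measurements) ** 0.5
    z = (mean - G_BASELINE) / sigma
    return z, abs(z) > 5.0

# Hypothetical readings taken during the "anomaly event":
readings = [6.6747e-11, 6.6748e-11, 6.6746e-11, 6.6747e-11]
z, anomalous = check_G(readings)
print(f"z = {z:.1f}, anomaly: {anomalous}")
```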
What about... A human! Humans are great at detecting changes in acceleration, which is what a gravity change would feel like. If your ship has been traveling in a straight line on inertia alone, as long-distance ships are probably doing, running into a gravity field will feel like you've taken a sharp turn. Everything in the spacecraft not tied down will likely crash into a nearby wall. If you've got a human on board, they'll probably notice. Even if the human is tied down or very distracted, they'll likely experience a sense of vertigo or confusion as the otoliths in their inner ear move about unexpectedly. – Dubukay This is useful, although it doesn't directly solve my problem. I'm envisioning a progression of "What the heck, we're moving?? Why are we moving?" leading to "Hey, something MOVED us, how did they do that??" leading to "They must be messing around with our gravity...". This answer is very helpful for the first part of that, since I wasn't sure if a human inside a sealed spacecraft would even be able to TELL if the entire spacecraft started moving in a new direction due to gravitational pull. – Morris The Cat Sep 1 '18 at 1:33 @MorrisTheCat Everything would stay stuck to the wall after they felt the acceleration stop, including non-metallic objects. Since the objects will continue to move towards the wall even after being moved away, gravity would be the first assumption. – Clay Deitas Sep 1 '18 at 2:32 @ClayDeitas that assumes the ship is applying its own thrust vector, doesn't it? If your ship isn't accelerating on its own, then the only acceleration ANYTHING would be feeling would be the gravitational field, and everything would react to it starting, stopping, or changing exactly the same way, wouldn't it? – Morris The Cat Sep 1 '18 at 4:05 @ClayDeitas I'm missing something here... why would anything hit the wall? The same force is being applied to the wall that's being applied to everything else. If the entire ship is (effectively) stationary, and our hypothetical aliens create spacetime curvature equal to the mass of Luna at a distance of ~1000 km, the entire ship, humans, objects, etc. would start accelerating towards that point. If you turn it off, everything stops accelerating simultaneously. The only way the wall does anything different from the pen (or whatever) is if the wall is attached to something creating thrust. – Morris The Cat Sep 1 '18 at 4:21 There's your detection method. The crew knows their course changed. They would feel acceleration if the deviation was due to a conventional thrust method. Because they can't, it must be something that's accelerating the whole ship evenly, which is artificial gravity - or so near as to be indistinguishable. – Cadence Sep 1 '18 at 9:02 Equip your starship with sensors measuring the structural load at various points along its frame. In normal flight, this assures you that a) your engines are producing the thrust they're supposed to, and b) your spaceframe is still in one piece. It's especially valuable if your ship is supposed to perform any very-high-precision maneuvers or if you anticipate taking it into atmosphere at any point. More importantly, though, the signature of gravity accelerating your ship will be different than conventional means of acceleration: gravity will affect your whole ship more or less evenly, whereas conventional thrust will produce a pattern of stresses depending on the shape of your frame. Another way to look at it is that thrust originates from one point (the thruster) and is spread to the rest of the ship by the frame, whereas gravity acts on every point in the ship at once. – Cadence
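A sketch of the discrimination logic in this answer (Python; the sensor readings and the spread threshold are invented, and the rule simply follows the answer's claim that thrust loads the frame unevenly while gravity loads it nearly uniformly):

```python
import statistics

def classify_acceleration(sensor_loads, spread_threshold=0.05):
    """Given structural-load readings (arbitrary units) from gauges
    spread along the frame, decide whether a measured acceleration
    looks like thrust (uneven stresses) or gravity (near-uniform)."""
    mean = statistics.fmean(sensor_loads)
    if mean == 0:
        return "no load"
    spread = statistics.pstdev(sensor_loads) / abs(mean)
    return "thrust-like" if spread > spread_threshold else "gravity-like"

print(classify_acceleration([1.0, 0.7, 0.4, 0.2]))      # engine-burn pattern
print(classify_acceleration([0.31, 0.30, 0.30, 0.31]))  # near-uniform pull
```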
Ahh, so basically this would measure the tidal forces from different parts of the ship being closer to or farther from the source of the gravitational gradient... yeah... this might work. I'm not sure that's something a scientific probe would have, though, unless they specifically had a reason to be looking for gravitational anomalies, which I'm not sure they would. – Morris The Cat Sep 1 '18 at 1:37

That's why I would suggest pitching it to the reader as being a structural integrity or maneuvering/thrust sensor - something that they would be monitoring during normal flight just in case something went wrong with e.g. the engines. – Cadence Sep 1 '18 at 1:49

Well of course they use radar... If there is no FTL technology, then good old-fashioned radar is still the best way to do range finding. Radar range finding off of multiple stars/planets should give you positional accuracy of less than one meter, easily, assuming enough computational power to handle the intricacies of the Doppler effect and the light-travel time to the target (minutes or more, in many cases). Any ship at sea will use radar to make sure it doesn't hit something. Any ship in space would want to use a navigational radar, both to be on the lookout for various small objects that you might run into and to keep an accurate position relative to whatever planets/stars/celestial objects are nearby. I think that determination of position change is pretty trivial, and any navigational computer would detect an induced course change within a few minutes at most. For example, the navigation computer that I used 10 years ago in the US Navy would have told me about a ~1 degree course change within 5-10 minutes, as we started to deviate from our track towards a pre-set navigational waypoint. Also, I had a navigator on my bridge team whose job was specifically to tell me about such things. However, that was a military ship; a merchant ship would not have a full-time navigation specialist on watch. An exception could be if the ship is doing something that causes it to transfer momentum; then unexpected distance changes might be harder to notice. Examples might be launching a shuttle, or transferring cargo to a nearby ship or something.

...unless you are in battle. The only good reason to turn off your radar is if you are in some sort of wartime condition. Warships on Earth do this as well. There is some debate as to whether trying to hide is viable in space; I'm in the 'there is some stealth in space' camp, so I think a military vessel would turn off its active sensors to try to be less obvious. That being said, there are alternatives. Directed beams like lidar would be nearly undetectable unless you are in just the right direction from the offending vessel, so you could still calculate your position from them. I don't know what the protocols would be for military warships in space, but there has to be some accommodation for safe navigation. – kingledion

This is useful, but a much more combative take than I was envisioning. The scenario is more like "A bunch of scientists in a mobile laboratory suddenly realize that someone is MOVING their lab and they don't know how." – Morris The Cat Sep 1 '18 at 1:34

@MorrisTheCat Radar and a nav computer are the way that they would first discover that someone was moving their lab. 99% sure on that; as long as you are pre-FTL, radar is the best thing for rangefinding.
– kingledion Sep 1 '18 at 1:40

How many years does one have to wait to complete a single radar measurement? – L.Dutch♦ Sep 1 '18 at 3:40

@L.Dutch The OP says they are in a system, in some sort of orbit. So not that long. – kingledion Sep 1 '18 at 6:10

But your answer says "radar range finding off of multiple stars / planets". Which, of course, takes decades, requires the power of a star, and still won't work because radio waves don't bounce off seething balls of fusion. I'm also dubious as to whether they'd bounce off of a gas giant at any detectable signal strength beyond its own moons. – RonJohn Sep 1 '18 at 7:17

The scientists are in a large space station using lasers to measure gravitational waves more precisely. One of the major functions of science is to reconcile all the forces which dictate the workings of the universe. Magnets and electricity were reconciled into electromagnetism. Your scientists will be in space helping to reconcile gravity and the other forces. Except out of nowhere, large gravitational waves appear, which are either the result of a major cosmic event or a nearby source of gravity. Your aliens, who have already reconciled gravity with some of the other forces, are able to use it in their technology. Everything gets wrapped up in a nice little package. – Clay Deitas

It's all in the creaking. Generally speaking, gravity acts uniformly on all objects within its field, so existence within a gravity field feels exactly like freefall. So it almost seems like a perfect way to move a vessel without anyone detecting it. That being said, there may be an inescapable flaw in using an artificial gravity well, especially if it is too close, due to the inverse square law. The acceleration caused by gravity is inversely proportional to the square of the distance from the center of the well. So in theory, on Earth, you feel a different amount of acceleration affecting your head versus affecting your feet, because your feet are closer to the center of the Earth. But Earth is so large that this difference is very, very small. With an artificial gravity well, however, which presumably is smaller than the Earth, the difference could easily be detectable. The acceleration due to gravity is computed as $$a = \frac{GM}{r^2},$$ where $$G$$ is the gravitational constant, $$M$$ is the mass of the body, and $$r$$ is the distance from its center. So, plugging in some basic numbers: if you were to feel an acceleration of 1 gravity at 200 meters, you would only feel about 0.98 g at 202 meters. So the height of a man yields a difference in acceleration of about 0.2 m/s^2, possibly enough for a person to detect, although possibly not. However, the ship itself is much longer than 6 feet (I hope). If the front end of the ship is closer to the gravity field and the tail is farther away, the ship would "stretch," i.e., the tip would be pulled harder than the tail. This may not cause any damage, but it may cause a certain amount of creaking or shuddering, and passengers may even be able to see the hull distort slightly, the same way you can detect the fuselage of an airplane changing shape if you pay careful attention. – John Wu

You can see the fuselage do what on an airplane? – Clay Deitas Sep 1 '18 at 6:17

Have you never sat in the back seat and seen parts of the passenger compartment seem to sag? It is definitely detectable by the naked eye. – John Wu Sep 1 '18 at 6:27

No, I haven't, and I hope I never do.
That's some kind of damn nightmare fuel that's going to make it impossible to see planes as anything but flying deathtraps. – Clay Deitas Sep 1 '18 at 7:05

How flexible is an aircraft fuselage? Sleep well. – John Wu Sep 1 '18 at 9:33

Please use MathJax to format formulas. – L.Dutch♦ Sep 1 '18 at 9:53
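John Wu's numbers are easy to check: with $$a = GM/r^2$$, the ratio of pulls at two distances is just $$(r_1/r_2)^2$$, so neither $$G$$ nor $$M$$ is needed. A short sketch (the "1 g at 200 m" figure is the answer's hypothetical, not a real apparatus):

```python
# Check of John Wu's tidal estimate: inverse-square falloff only,
# no need for G or M. "1 g at 200 m" is the answer's hypothetical.

g = 9.81                   # m/s^2, pull felt at the near point
r1, r2 = 200.0, 202.0      # meters from the artificial gravity well

ratio = (r1 / r2) ** 2     # falloff under the inverse-square law
print(f"pull at {r2:.0f} m: {ratio:.3f} g")                  # ~0.980 g
print(f"difference over 2 m: {g * (1 - ratio):.2f} m/s^2")   # ~0.19 m/s^2
# ...which matches the answer's "about 0.98 g" and "~0.2 m/s^2".
```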
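And a back-of-the-envelope sketch of kingledion's nav-computer point (the target distance and the stray pull are invented for illustration): radar range comes straight from echo timing, and even a small unexplained acceleration outruns meter-level position fixes within minutes.

```python
# Radar ranging from round-trip light time, and how fast an unexplained
# pull shows up against meter-level fixes. All scenario numbers invented.

C = 299_792_458.0                  # m/s, speed of light
AU = 1.495978707e11                # m, astronomical unit

def range_from_echo(round_trip_s):
    """Distance to a radar target, from the echo's round-trip time."""
    return C * round_trip_s / 2.0

round_trip = 2 * 0.5 * AU / C      # ping a planet 0.5 AU away: ~499 s
print(f"echo round trip: {round_trip:.0f} s")
print(f"recovered range: {range_from_echo(round_trip) / AU:.2f} AU")

# Meanwhile, a stray 0.05 m/s^2 pull acting for 10 minutes moves the ship:
a, t = 0.05, 600.0
print(f"unexplained offset after 10 min: {0.5 * a * t * t / 1000:.1f} km")
# ~9 km -- enormous next to meter-level radar fixes, so the computer notices.
```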
A14381: Lucky Stars and Origami Charms Full! Teachers: Grace Cai, Laura Cui Come learn to make lucky stars and turn your own paper creations into custom jewelry charms! No origami experience required. Access to a printer and colored pens/pencils optional but helpful. (Closed) Section 1: Full! (max 15)

A14245: The Future of Design, Designing, and the Designer: Creating Design Processes that Foster Design Innovation Closed! Have you ever wondered how artists and designers develop their innovative work? Do you want to become a better artist or designer by strengthening your creativity? What is "design process" and how can it be used to achieve maximum creativity? This lecture analyzes undergraduate-level design projects that are remarkable for their inventive and innovative design processes -- a critical stage that follows research exploration and precedes final design outcomes. This "middle stage" is arguably the MOST important stage: it is when your playful exploration and experimentation will produce optimal creativity. This presentation will show you how to innovate your own CREATIVITY through select examples. These inspiring methods may then be applied to your own art and design projects. Please note, this is a lecture presentation and not a workshop. Presented by Steven Faerm, Associate Professor at Parsons School of Design. (Closed) Section 1: Full! (max 200)

A14244: Design It. Brand It. Love It.: The future of design for Generation Z. Closed! In our current era of over-abundance, where material needs are met (and often over-met), what are we really seeking from the products we buy? How are these new, emerging consumer behaviors that prioritize the emotional over the merely material and aesthetic altering the purpose of design and design practices? In what ways will the attributes of 'Generation Z' affect the future design marketplace? This presentation explores the fundamentals of branding, emergent consumer behaviors, and the future role of design and designers. Due to consumers' increasingly nuanced emotional needs, designers must replace their traditional role as independent 'style dictators' -- in which they create products based on personal whims and biases in the hopes their work will appeal to consumers -- with that of 'designer-as-social-scientist', whose research into consumers' demographics and psychographics underpins all subsequent design proposals. Following this introduction, several case studies will be presented and analyzed to illustrate key ideas about branding, 'emotional value' in design, and the future design marketplace.

A14347: Voices of Years and Memories: The Lives and Music of Fanny Hensel and Florence Price Closed! Teachers: Julian Gau In the world of classical music, we tend to celebrate a narrow subset of the wide and diverse array of people who were creating and playing music. Specifically, a very white and male one. Let's talk about two artists: Fanny Hensel, a German pianist, composer, and sister to Felix Mendelssohn; and Florence Price, an American teacher, organist, and the first African-American woman to make it big as an orchestral composer. We'll learn about their lives and music, and how their circumstances affected the art they created.
Aside from words and pictures, there will also be live performances of their piano music. Finally, some important questions that bring us to today: whose music do we perform, and on a greater level, whose stories do we tell?

A14376: (Tiny) Paper Crafts Full! Teachers: Doreen Chin, Leslie Yan Do you enjoy making tiny things out of paper? Have you ever wanted to make a pop-up book or make kirigami art? Are you simply intrigued by the prospect of doing any of these things? Come join us for this workshop-style, interactive class! Be sure to have some paper and scissors on hand :) Cutting mats, tape, and X-Acto knives are helpful but not necessary.

A14255: Intro to Bollywood Fusion Dance Closed! Teachers: Amulya Aluru, Akshaj Kadaveru, Charvi Sharma Come learn some fun Bollywood choreography with the members of MIT Mirchi! We'll teach you basic Bollywood steps with a fusion of some other dance styles, like hip hop, contemporary, and classical Indian. We love to dance and want to share our passion with you! No dance experience needed, just come ready to have some fun!

A14310: Intro to Taekwondo Closed! Teachers: TAHIN SYED, Elizabeth Zou An introduction to taekwondo as a martial art. We will teach you basic techniques like the fast kick and the turning kick, and give an overview of taekwondo as a sport. If we have time, we'll also teach you how to do cool kicks like the flying side kick and the tornado kick. Wear clothes you're comfortable moving in, and be ready to have some fun!

A14336: MIT Bhangra Workshop Closed! Teachers: Aneesh Gupta, Aanish Puri We, MIT Bhangra, will be holding a workshop to let you all DANCE! Bhangra is a high-energy dance form and can be shown off to all your friends! No dancing experience required!

A14238: How to Dance to (Almost) Anything Closed! Teachers: Lindsay Brownell With enough practice, anyone can learn a TikTok dance, but how do you dance to a song that doesn't have a specific set of dance moves to follow? This interactive class will have you up and moving as we explore how to identify a song's rhythm, the fundamentals of moving to a beat, and simple moves that you can pull out whenever the need to dance arises! If you're looking to learn a formal dance like tango or salsa, this is NOT the class for you - this class is all about figuring out how your body likes to move to music, whether you're dancing alone in your bedroom or at a (socially distanced) party. This class is open to everyone regardless of dancing experience, gender identity, or physical ability.

C14227: Tech in Different Pathways Closed! Teachers: Yiran Bowman, Dawn Chen, Wenlin Gong, Yining Hua, Selina Wu Got interested in 'tech' but not sure what it's all about? You've landed in the right place! Join us to get familiar with exciting IT applications and cutting-edge research in the hottest areas. We'll take you to explore the achievements people have made, as well as the challenges still faced, in Artificial Intelligence, Big Data, Bioinformatics, AR and VR technology, and Computer Vision. Materials for this class include: Tech in Different Pathways Slides

C14178: Introduction to Verification with Dafny Closed! Teachers: Anastasiya Kravchuk-Kirilyuk After writing your perfect program, you suddenly find out it has a mysterious bug. You trace the program line by line, run dozens of tests, and realize it's unclear what the code was supposed to do in the first place. Does this sound familiar? Introducing: formal verification!
With formal verification, you can ensure that your program has the desired properties without writing a single test. No more frustration over missing edge cases, unclear program descriptions, and changing every line of code just-barely-so in hopes of finding and fixing the bug at random. Formal verification is here to help, and it can be automated, too. Dafny is a web-based automated verification tool that allows you to write your program directly in a web browser, specify very clear constraints (using assert statements and loop invariants, among other things), and run the verifier. If your code adheres to the constraints, Dafny will finish with 0 errors. However, if Dafny finds an issue, the line of buggy code will be underlined. The tool and the tutorial are available online completely free, so you can keep experimenting even after class is over. The class will be taught as a short introduction followed by hands-on exercises in Dafny, no setup required. Change your life and discover formal verification today! basic programming

C14355: Internet Privacy 101 Closed! Teachers: Shardul Chiplunkar, Robert Cunningham BIG BROTHER IS WATCHING. Governments across the world, tech companies, advertisers, fraudsters, scammers -- they're all doing their best to track your every step on the Internet. Some of them want to "increase consumer engagement". Others want to "monitor criminal activity". All of them want to infringe on your Right to Privacy. In this class, you'll learn how to protect your privacy online. You might be surprised at how easy it is to get started. Although we can't guarantee complete freedom from surveillance, by the end of this class you'll know much, much more than the average Internet user about how to easily and *legally* reclaim your Internet privacy. We'll also send out an extensive list of resources after the class that you can share with friends and family, as well as a free ebook about digital privacy and surveillance! None at all. No programming experience required!

C14176: An Introduction to Programming Languages Closed! Teachers: Sameer Pai, CJ Quines From Python and C and Haskell, to Java and Rust and Pascal, to Fortran and Perl and APL, ...from declarative and functional, to imperative and intentional, to object-oriented and procedural, ...from analyzing and parsing, to syntax and compiling, to semantics and typing, we'll talk about it all. This class is a whirlwind tour of programming languages, their history, how they work, and how they differ from each other. You'll get the most out of this class if you know at least one programming language.

C14311: Minecraft Fires, Social Networks, and Quantum Complexity Full! Teachers: Julia Balla, Sihao Huang Can we write a more efficient algorithm for fire propagation in Minecraft? What does it have in common with social movements, disease spread, and quantum complexity? Ideas in complexity theory manifest themselves in diverse, seemingly disconnected systems that come together to form a beautiful picture of how the universe functions. We'll be giving a whirlwind tour of this exciting subject and connecting the dots between the disparate fields of computer algorithms, physics, and social science. Some mathematical and programming maturity is helpful but certainly not required.

C14275: brain**** Closed! Teachers: Chris Viets Have you ever wanted to code using only 8 keys on your keyboard? If so, brain**** may be the perfect programming language for you. (For a taste, see the tiny interpreter sketch just below.)
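A minimal interpreter for those eight commands fits in a few lines of Python. This sketch is not class material, and input handling is simplified, but it runs real brain**** programs:

```python
# Minimal brain**** interpreter: the whole language is 8 commands.
# > < move the data pointer; + - change the current cell;
# . , output/input a byte; [ ] loop while the current cell is nonzero.
def bf(code, data=b""):
    tape, ptr, pc, out, inp = [0] * 30000, 0, 0, [], list(data)
    jumps, stack = {}, []
    for i, c in enumerate(code):        # pre-match the brackets
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):               # any other character is a no-op
        c = code[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",": tape[ptr] = inp.pop(0) if inp else 0
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)

print(bf("++++++++[>++++++++<-]>+."))   # 8*8 + 1 = 65, prints "A"
```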
This class will teach you everything there is to know about esoteric programming languages like brain****, which, while impractical, are fun and challenging to work with. We'll take a look at the theory behind esoteric programming languages, and we'll answer questions like, "What does it mean for a language to be Turing complete?" and "How do you create a programming language?" and "Why would someone put pineapple on pizza? Seriously, who thought that was a good idea?". After that, we'll study one language in particular: brain****. We'll take a look at the history and theory of BF, then attempt to code in BF ourselves! If time allows, we'll explore other esoteric programming languages more closely as well. You will do a little bit of coding during this class; basic coding knowledge is helpful but not required.

C14375: How to Get Stuff on The Internet Closed! Teachers: Cameron Kleiman, Elias Little We're going to be teaching an introductory class on making your first website, using only tools that are free and often open source. If you've always wanted a personal website or needed websites for your projects, this class can give you the inspiration and direction to get started. If you want to learn how to put stuff on the internet quickly and without hassle, come hang out with us as we show you how to do it. Knowledge of Git, HTML, and CSS is helpful but not required.

C14200: From Zero to One - Deep Learning with PyTorch Closed! Teachers: Francisco Massa, Joseph Spisak Deep learning is powering everything from AI assistants to autonomous vehicles, with many of these built using PyTorch. PyTorch is a Python library that is used by the world's leading experts in AI to build some of the most amazing applications at scale. Join Joe Spisak and Francisco Massa from Facebook AI as they take you from the very basics of deep learning and neural networks to using cutting-edge computer vision algorithms to build your own applications. No cloud knowledge or complicated setup will be required, as we will be using Google Colab, which provides free GPUs and interactive Jupyter notebooks for everything we will do. You can learn more about PyTorch at pytorch.org. Cheers! Basic programming with Python; some understanding of calculus would be helpful; and an open mind and a thirst to explore! Materials for this class include: Part 1 of the slides, Part 2 of the slides

C14243: Data Visualization in R Closed! Teachers: Elizabeth Kiel, Lucia Ann Van Hanken Data visualization is SUPER important!! (But what is it exactly?) In short, data visualization aims to take a big chunk of information and turn it into a picture -- AKA, an understandable, visual narrative to show trends and make generalizations. Data visualizations are everywhere -- news, textbooks, social media infographics -- so the ability to create, interpret, and critique them affects how we understand the world around us. The way an author chooses to present a certain set of data can completely change the way we think about it! We will look at several real-life case studies and decide whether data has been presented effectively. We will also introduce the programming language R, which has many applications in the world of data visualization and statistical computing! By the end of this class, you will be better equipped to critically analyze data visualizations and have the skills to create some of your own! No previous experience in programming or statistics is required.

C14225: Intro to Circuits and Coding with Arduino Closed!
Teachers: Sarah Flanagan, Zoe Klawans, Temitope Olabinjo Learn the basics of building a circuit on a breadboard and programming an Arduino. An Arduino is a small device that allows your code to come alive on your circuit. Program an LED light to blink in time with music and build a light-sensitive night light using an LED and photoresistor. Since this class is virtual, we will be using an online simulator to build our circuits. A computer is required for using the online simulator and joining the Zoom call. A working microphone (built-in laptop microphone is fine) is strongly preferred so that we can communicate effectively with you, and you can collaborate with classmates during the Zoom call. No prior knowledge of circuits, programming, or Arduino is required for this class. C14327: Who is Bobby Tables? Exploring Security with XKCD Closed! Teachers: Danna Nozik, Zachary Zagorski What makes a "good" password? Can you *really* explain recent security vulnerabilities in comic panels? And who is Bobby Tables? Join a software security engineer for a discussion of some computer security topics, loosely guided by Randall Munroe's XKCD comics (such as https://xkcd.com/1820/). We'll cover SQL injection (and other code injection) attacks, passwords (the good, the bad, and the ugly), and why older bugs like Heartbleed are still relevant today, plus additional topics as time permits. While this class is intended to be accessible to students with little or no technical background, those with some programming or other technical experience may find they get more out of it. Materials for this class include: Slides E14142: Sketching for Product Design Closed! Teachers: Charlene Xia A picture is worth a thousand words. Well, that actually depends on how well your idea is communicated with the picture. Sketching is used in all aspects of engineering. It is used to communicate ideas, to record your thoughts, to innovate, and sometimes used as evidence for patents! Come learn basic sketch skills for product design! Paper, Pencil, Color Marker E14370: Robots with Legs - How to Make Them Walk & Run Closed! Teachers: Matthew Chignoli Robots that can walk and run are no longer just ideas for science fiction movies. They exist and have the potential to make the world a safer, cleaner, and better place. In this class, we will cover basic concepts about walking and running for humans and animals and then discuss how these concepts relate to robots. We will compare animals/robots with 2 legs versus 4 and the challenges and benefits that each design offers. We will review some of the most popular legged robots seen on the internet and discuss how the designers get them to walk and run. Hopefully, after this class you will have learned (1) why walking and running is a useful capability for robots and (2) how robots "think" about walking and running as they are doing it. Watch some cool videos of robots walking and running (suggested: Boston Dynamics Spot Mini and Atlas robots) E14299: How Does a Solar Car Work? Closed! Teachers: Stephen Campbell, Sun Mee Choi, Christopher Kiel, Cameron Kokesh, Aditya Mehrotra MIT's Solar Electric Vehicle Team (SEVT) is faced with a unique challenge: design, manufacture, and construct a solar-powered vehicle from scratch to compete in a long distance endurance race. By long distance, we mean about 2,000 miles! Come check out how we accomplish such a task and how a solar car works! 
We'll cover topics from the construction of our carbon-fiber aeroshell to the arrangement of the 416 Li-ion cells in our battery to the machining of our custom steering system. The team is excited to meet you :)

E14186: Intro to Analog Transistor Circuits Closed! Teachers: Matthew Cox, Julian Espada Have you ever heard of transistors? Do you want to learn what transistors do and how to use them in circuits? In this class, we will go over how transistors operate and how to use them in some common analog circuits. You will learn how to design and analyze several types of transistor-based circuits, such as single-transistor amplifiers, differential pairs, and cascodes. We will consider parameters such as gain, input/output resistance, and frequency response. As time allows, we may move on to topics like temperature sensors, current mirrors/active loading, translinear circuits, op-amp architectures, or analog/digital converters. A solid understanding of circuits will be essential. In particular, we will assume that you are very familiar with the following: circuit schematics; Kirchhoff's laws; Ohm's law; capacitors and RC circuits; Norton & Thevenin equivalent circuits; and what op-amps do and how they're used in a circuit. You will get more out of this class if you have seen AC circuit analysis before (e.g., small-signal analysis, intuition about the frequency response of inductors and capacitors). You should know calculus; in particular, familiarity with derivatives is essential.

E14342: Fusion Energy: Recreating the Sun here on Earth Closed! Teachers: Abtin Ameri The Sun has been burning for 4.6 billion years, and will continue to do so for roughly another 5 billion years. How is it able to sustain such a process for so long? The answer is: fusion! If you want to understand how exactly the Sun is powered, or if you are curious to learn more about fusion energy, or if you want to know how a fusion reactor works, then you should attend this course! This course will cover everything relevant to fusion energy, from how it powers stars in our universe to how we are trying to achieve it here on Earth. The course will go over some physics concepts relevant to fusion, mainly plasma physics and some nuclear physics. From there on, we will delve deep into various designs -- such as magnetic confinement fusion and inertial confinement fusion -- that are currently being explored in order to make fusion energy a reality. Elementary knowledge of physics should be sufficient.

E14270: Yeeting Rockets Full! Teachers: Laura Schwendeman, Maggie Zheng Interested in rocket science and sending giant hunks of metal, carbon fiber, and explosive chemicals into space? MIT's Rocket Team has got you covered with an introductory lecture on the science behind yeeting rockets. We will cover some of the physics, chemistry, and engineering principles behind building rockets that go into space. We will also teach you how you yourself can get started on the journey of becoming a rocket scientist, and even how to start building your own rockets! Some introductory physics knowledge is preferable but not required.

E14147: Nano Satellite - Project and Design Closed! Teachers: marcelo anjos The Nano Satellite project aims at the design, production, installation, launch, tracking, and use of data from a working nano satellite. The two basic structures used for nano satellites are cubical and hexagonal. We begin by presenting an overview of nano satellites and their advantages over conventional satellites.
We include the structural layout and design of a hexagonal nano satellite, the thermal conditions that it has to bear in LEO, and the different kinds of materials suitable for its construction. Materials for this class include: Nano Satellite

E14306: Introduction to Microfabrication Closed! Teachers: Joy Cho, Matthew Yeh Have you ever wanted to build things on an atomic scale, or wondered how Intel and Samsung mass-produce the tiny electronics in their chips? Do you just think "nanotechnology" sounds cool? Then this is the perfect class for you! Topics include semiconductors, photolithography, thin-film deposition, ion implantation, etching, and transistors. As we cover more topics, we'll break into small groups where you'll have the chance to talk to your peers and figure out how you would put these steps together to build actual devices! Basic chemistry; physics helpful but not required. Materials for this class include: Lecture Slides

E14195: Explore Aerospace: Mission to another World! Full! Teachers: Rachel Morgan, Golda Nguyen, Sophia Vlahakis, Paula do Vale Pereira Learn how to engineer a mission to another planet! Students will be split into missions to Venus or Mars and guided through the design process for a space mission. MIT AeroAstro graduate students will lead the teams, teaching students about mission requirements and subsystems.

E14146: Practical in Humanoid Robotics Closed! Assembling, programming, and executing a choreographed dance with humanoid robots. Materials for this class include: Practical Humanoid Robot

E14193: Where does your electricity come from? Closed! Teachers: Andrew Lopreiato Electricity just comes from the outlet on the wall, right? Sure, but there's a lot that happens before that. This class will bring you through the basics of the primary methods of electric power generation used in the US, including both the traditional sources (fossil fuels) and some new and growing ones. We'll compare them on metrics like efficiency, capacity factor, cost, environmental impact, and more. We'll compare what the mix of sources looks like now to what it might look like in the future. This class will introduce a bit of engineering thermodynamics, but it won't be a super math-heavy class. The main focus will be on the concepts. A basic understanding of what work, energy, and conservation of energy are. Having taken a physics class before would be helpful.

E14160: Introduction to CMOS Full! Teachers: Maitreyi Ashok Ever wondered how all those electronic devices you have actually work? This class will start by explaining the basic building block of most of these - the MOS transistor. After understanding these, we will zoom out and look at how we can use these building blocks to represent digital logic and use them for practical applications. A basic understanding of what an atom is. A basic knowledge of voltage and current would be helpful, but will also be explained.

E14164: Idea Realization with Arduino and Rapid Prototyping Closed! Teachers: Fangzhou Xia Do you have a cool idea that you have been thinking about for a while but are not sure how to realize? With the development of modern technology, a number of rapid prototyping techniques such as 3D printing are within the grasp of hobbyists. Microcontrollers can be used to add intelligence to your project with easy programming.
This course shows you the tools to realize cool ideas even if you are not a trained engineer. Basic programming knowledge preferred; any language would be OK. Materials for this class include: Lecture Notes

H14350: Intro to JSTOR (and some other research tools) Closed! Teachers: Tobit Glenhaber, Grace Smith Do you ever have a question you want answered, and for whatever reason Google isn't scratching your itch? Do you want to be able to search archives of scholarly work? In this class, I will teach you how to use the research utility JSTOR, alongside some other useful tools for research; I will also go over how to even read a research paper. While this class is aimed more in the humanities direction (that's where I've been trained, largely), the skills should also carry over to reading scientific papers, if that's more your thing.

H14232: Assessing Arguments For God: A Crash Course in Counter-Apologetics Closed! Teachers: Alex Bookbinder, Sohan Dsouza We'll describe and critique several of the classic arguments for religious claims, as well as discuss the role of religion in our society today.

H14170: Circling - Intersubjective Mindfulness Meditation Closed! Teachers: Rivka Fleischman, Jason Gross Most of your Splash classes will be about objects and things. Some of your conversations will involve personal history: where you grew up, what you like and dislike. This class will be a third kind of conversation, about what our present experience is, as we're having it. There's a kind of magic to being deeply seen, and to being welcomed as you are. Circling is a practice about getting others' worlds, and sharing what it's really like to be you, and having that be seen and reflected. Come experience the magic. Openness to talking about your emotions and present, in-the-moment experience. Willingness to have your webcam on for the whole class and to participate verbally.

H14283: Queer Critical Theory: Gender Closed! Teachers: Anthony Davis-Pait, Hasan Khan Gender is a lie. It's not real beyond what other people tell us about it. Why do we identify with gender? What does it mean to transcend gender? If you've thought critically about gender in your own life or think you want to start exploring, sign up and join our open discussion of queer critical theorists and their ideas! Content warning: topics might include mentions of transphobia, homophobia, cissexism, and other oppressions of gender and sexual minorities (but it won't be the majority of the class). - An open mind and willingness to center the experience of trans and nonbinary peers - completion of a few readings (released a week in advance)

H14230: Chinese Dynasty Crash Course Closed! Teachers: Vincent Fan, Wayne Zhao Learn about $$n$$ Chinese dynasties in $$2n$$ minutes! This will be a roller coaster ride through every Chinese dynasty, from the semi-mythical Three Sovereigns and Five Emperors all the way up to the Qing dynasty. The catch? We will spend equal time discussing every dynasty, such as the renowned Han and Tang, but also the lesser-known Southern Chen.

H14169: Making deep friendships - Circling Closed! Teachers: Jason Gross, James Koppel Access to this level of conversation has a way of facilitating deep connections where you can feel deeply seen and welcomed. Circling is a practice about getting others' worlds, sharing what it's really like to be you, and having that be seen and reflected. Come experience the magic.

H14288: Queer Rebellion Full! Ever thought you were just different from everyone else?
Did recognizing your queerness change your life, or did nothing change at all? Have you ever considered why Pride is a thing? Do you wonder why (and/or hate it when) straight people say "I don't have a problem with gay people, I just hate it when it's their entire personality"? Queerness *literally* is rebellion -- specifically against capitalist cisheteropatriarchy, which is the structural foundation of our entire world. Queer existence subverts power. Queer existence is powerful. It's not time to rebel; you always have been. Come through to learn more about your queer power. (Content warning: we will be discussing mental, physical, and sexual health problems, including suicide, drug abuse, depression, a number of phobias and sexisms, and more topical issues that plague the LGBTQIA+ community on behalf of capitalist cisheteropatriarchy.) - An open mind and willingness to center the experiences of queer peers - completion of a few readings (released a week in advance)

H14206: The Digital Global Health Crisis: Social Media and Mental Health Closed! Teachers: Emmi Mills, Bhuvna Murthy, Reed Robinson, Melody Wu Our news, friends, and lives are on social media platforms. Protests, elections, conspiracy theories, and more are influenced, perpetuated, and catalyzed by social media. How do we address social media as a global health crisis, particularly in the context of mental health and public health? Come learn about how Instagram manipulates your brain chemistry, how Facebook struggles to prevent "infodemics," how TikTok sells tweens' attention, and strategies to protect ourselves. Want to learn more about global health? Check out MIT Global Health Alliance's other Splash classes: Systemic Racism in Healthcare and Social Determinants of Health, and Defining Global Health in 2020.

H14166: Crash Course on the Israeli-Palestinian Conflict Closed! Teachers: Hope Dargan, Ilaisaane Summers What is the Israeli-Palestinian conflict? Why did it start, why does it continue, and will it end? This course seeks to answer all of these questions by looking at history from both perspectives.

H14165: Ireland Uncovered: The American Irish Closed! Teachers: H. Alexander Chen This course will emphasize the influence of the Irish diaspora in America. The lecture begins with the mass immigration caused by the Great Famine in Ireland and examines the communities and impacts of those Irish immigrants in America. It will assess the social and political influence of American Irish communities in the rebellions, revolts, and revolutions that led to the establishment of an independent Ireland. The overarching idea is the strong ties between Ireland and the United States throughout history. Topics include the potato famine, cultural memory, Irish communities in America, the Young Ireland Movement, the Fenian Brotherhood and the Irish Republican Brotherhood, and the Easter Rising of 1916. If time allows, the course will address the cultural significance of St. Patrick's Day and late-generation Irish identity in America. None. However, students who have taken courses under the title "Irish Presence in America" at any ESP program should not register for this course due to overlap of content. Materials for this class include: Course Information

H14175: Should /r/Shoplifting be banned? and other questions Closed! Teachers: CJ Quines What can be posted? Who decides? Recent months have shown us how much social media can influence elections, incite violence, and increase misinformation. The solution should be moderation.
But this often stands at odds with promoting free speech, keeping moderation unbiased, and maintaining a commitment to consistency. We'll discuss case studies of how Facebook, Twitter, and Reddit deal with controversial content, and analyze whether they made the right calls or not.

H14249: How to Make a Book: Lessons from Medieval Scribes Closed! Teachers: Madison Sneve, Benjamin Sockol Ever wonder how books got made before the printing press was a thing? Turns out that's a pretty loaded question! Learn about the whole process, from making the parchment and ink to painstakingly handwriting each page. We'll share some of our favorite manuscripts, which feature gorgeous gold illumination, snarky remarks and corrections in the margins, and a curious abundance of snail doodles by bored monks.

H14322: Random Facts about Communist Countries Closed! Teachers: Ali Ghorashi, Saranesh Prembabu Did you know that Fidel Castro's favourite cow holds the world record for milk yield in a single day? That Burkina Faso vaccinated 2.5 million children in one week under Marxist rule? That an interview broadcast on Soviet TV revealed, with evidence, that Lenin was actually a mushroom? Come learn about intriguing things that happened in the communist world that you'd never find in a history book! This class does not endorse any political ideology, and won't be a comprehensive history lesson, just a bunch of miscellaneous facts. Since we're at MIT, we'll be particularly interested in scientific/technical topics, among others. None, but a standard knowledge of 20th-century history/geography may make it more entertaining.

H14239: The Persian Wars 500-480 BCE Closed! Teachers: Alessandra Guccione, Joshua Hoffman This lecture-based course will cover the legendary David-vs-Goliath conflict between a coalition of Greek city-states led by Athens and Sparta against Persia, the largest empire the world had ever seen.

H14286: Queer Critical Theory: Sex Closed! Sex is a lie. Both the act of sex and the idea of sex identity only have the meaning assigned to them by the people who gave or taught them to us. What does it mean to expand our definitions of sex? How can sex be used to affirm or question gender identities? Do we need or want either? If you've thought critically about sex in any capacity, consider joining our open discussion of queer critical theorists and their ideas!

H14240: Peloponnesian War: Athens VS Sparta. The Greek World Wars. 431-404 BCE. Closed! This lecture-based class will cover the great war between the city-states of Athens and Sparta. Essentially the Greek world wars, the Peloponnesian War lasted over 30 years and had a lasting impact on the future of Greece. In this conflict, Athens and Sparta each controlled half of Greece, and the winner would have the opportunity to become the first ruler over a united Greece.

H14366: Model United Nations conference - An Overview Closed! Teachers: George Abu Daoud, Stuti Khandwala MIT MUNC is a very happening student group that organises Model UN conferences for high school kids in the US as well as China. Our other goal is to increase awareness among the public about what MUN is, what benefits one can get from participating in MUNs, and how enjoyable the whole experience is. Along with informing you about these aspects, we will also hold a real debate in class, on a fun topic, after going through some basic MUN procedures. Hope you are hyped! None at all, everyone is welcome!

H14241: The Second Carthaginian War: Rome and Hannibal Closed!
This course is a lecture covering the Second Carthaginian War (and a quick summary of the first), the famous conflict between Rome and the Carthaginians.

H14217: Rhode Island History Closed! Teachers: Rosalind Lucier, Christopher Nadeau Rhode Island? Long Island? What's the difference? If you don't know, or want to learn more about the smallest state that formerly held the title for the longest name, this class is for you! Come learn about the quirky history of Rhode Island, a journey from escaping religious persecution to the gilded halls of Newport to the modern day.

H14356: The Art of Riddling Closed! Teachers: Shardul Chiplunkar, Rujul Gandhi You're in the palace of an Ancient and Powerful King, and the Master of Knowledge has just asked you an artful riddle to test your worth. You're smart: you deliver an impeccable answer in seconds. Of course, it is but courteous to return the exchange by posing a riddle of your own... oh. You don't know any. Now what? It has to be challenging, but solvable. It has to be majestically poetic, but not cringy. Its lines should resound in the halls and minds of the palace and thrust an irresistibly fascinating mystery upon them. Come learn the Art of Riddling.

H14242: The Greco-Roman Wars Closed! This lecture-based class will cover the Romans (fresh from fighting Carthage) clashing in the East with the successors of Alexander the Great. The West collides with the East in a series of wars that will shape the Greek world for centuries to come. The winner will become the undisputed ruler of the entire Mediterranean, from Portugal to Palestine.

H14339: Philosophy of Science Closed! Teachers: Abe Levitan What is the essential scienceness that makes science scientific? Answering this question is not straightforward. Yet our answers - individually and as a society - impact how we value scientific work, and even how scientists view their own research. In this class, we will discuss two famous perspectives - from Thomas Kuhn and Karl Popper - on the question, "What makes science sciencey?" Finally, we will consider how our own answers to this question influence our views on what scientists should be doing, what they shouldn't be doing, and what they actually do.

H14358: the Ideologies of Korra Full! Teachers: Alex Bookbinder, Tobit Glenhaber In this class, we will use the show The Legend of Korra as a medium through which to discuss the ideologies and historical context of its villains, and so of early-1900s political philosophies. Along the way, we will discuss which ideologies are represented faithfully in the show, and which ones less so, to put it mildly. Topics to be discussed include socialism, anarchism, the Red Scare and J. Edgar Hoover, and some methods of "close reading" and critical analysis that you can use to be a better consumer of media! You should have seen all four seasons of Korra, or be prepared for spoilers---we won't pull any punches, spoilers-wise.

H14154: President Madison on the Founding of the United States Closed! Teachers: Bil Lewis With the assistance of students reading appropriate dramatic scenes as Patrick Henry, George Washington, Thomas Jefferson, Alexander Hamilton, Dolley Madison, etc., we will lead investigations into events from the House of Burgesses, the Constitutional Convention, the "Dinner Party," etc., that marked the coming of age of the United States. More than a mere recitation of dates and facts, this will be an exploration of the underlying reasons that prompted them to act as they did.
Many of the issues they confronted then continue to be relevant today. • Should we be one Country? • What debts should be paid? • Who gets the power of Taxation? • Should a Private Bank issue money? • Should we be agrarian? Or a center of manufacturing? • How do we limit the influence of Great Corporations on our public life? • How can we protect the Common Man from the rapaciousness of the Rich and Powerful? • How do we eliminate Slavery? • How do we make real the "Spirit of '76," so we can truly say that "All Men are Created Equal"? The class tends to be noisy and raucous, as there are numerous requirements for Huzzahing and Yelling and Singing the songs of the age. History was loud, so we have to be loud, too.

Languages and Literatures

L14233: Thai Language, History and Culture Closed! Teachers: Krit Boonsiriseth, Arun Wongprommoon This course is an introduction to Thai language, history, and culture. Why am I teaching this? Because I was born and raised Thai, and I'm proud of it! Not many people *actually* know about this country and its people, outside of the beaches and the occasional puns about our names and our place names. Come for the idiosyncrasies of our language and stay for more Thainess! This is pretty much the same as last year's course, so if you were here last year, I'd recommend the other Thai course I'll be teaching!

L14234: Thai II: Electric Boogaloo Closed! Were you with me last year, registered for "Thai Language, History and Culture", or do you already have some knowledge about this Pad-Thai-loving, scorching country? Come explore even more of the Thai language, including orthography, evolution and vocabulary, phonology (or the lack thereof), and syntax! If time permits, I'll also do an AMA about the Thai language, or Thai people, or Thai history, or me! Thai Language, History and Culture would be great ;)

L14380: Close Reading in the World Closed! Teachers: Leslie Leonard This class employs the same tools that we use to closely read and engage with literature in order to closely read and engage with the world around us. Using advertisements, newspaper headlines, clips from popular television, and so on, this class shows how the skills and critical thinking of close reading, paying attention to detail, and acknowledging the audience/tone/goal/etc. of a piece help us to interpret the world around us. Starting from the belief that our perspectives are built from the media we consume, the language we use, and the representation that we witness, this class lays bare how interpretation and close reading are part of our everyday lives as consumers, global citizens, students, and so on.

L14153: Beginner Arabic Closed! Teachers: Yasmin Sharbaf Learn the Arabic alphabet and some basic greetings and pronunciations! Aimed at inspiring eventual learning of, and interest in, the Arabic language.

L14177: Invent a language! Closed! Teachers: Sagnik Anupam, Rujul Gandhi, Shinjini Ghosh Are you fascinated by languages? Do you ever wonder what went into making the languages in Star Trek, or The Lord of the Rings? Come learn how languages work and the cool stuff they do with sounds, words, structure, and much more - and build your own language along the way! Know at least one language (note that English is a language).

L14273: Wonders of Ancient Chinese Literature Closed! Teachers: Yiming Chen Read, translate, and analyze ancient Chinese poetry (古诗) and prose (古文). Learn about the history behind Chinese literature. Practice your pronunciation and conversation skills. It's gonna be lit!
:) Some background in Chinese and an interest in more!

L14379: Words! Etymology for the Uninitiated Closed! Teachers: Rujul Gandhi, Ali Mohammad What do Twitter trolls have to do with fish? Is a ladybug a cow? Can I have a piece of that PIE? Come travel the world of borrowed words, coined concepts, voyaging vocabulary, and more! Learn about why etymology, history, and linguistics are Fun.

L14341: Star Tales: Mythology Behind the Constellations Closed! Teachers: Utheri Wagura, Jessica Wu The night sky is full of stories that are as true to human experience today as they were two thousand years ago. Come learn about the myths and stories behind major constellations - and maybe even a bit of astronomy!

L14305: Make Your Own Language Closed! Teachers: Agnes Bi, Patrick Niedzielski Glidis, O studans! When you pick up a fantasy or sci-fi novel, do you flip to the back to look at the glossary for that alien language? Do you think the world would be a much better place if there were one neutral, easy-to-learn language that we all could speak? Maybe you've made a code or cypher for you and your friends. Or maybe you think language is too imprecise and really wish there were some unambiguous way of communicating. If any of these statements describe you, congratulations! You might just have what it takes to be a conlanger, someone who makes languages for fun (and for profit!). In this practicum, we'll create our own language, for fun (not for profit!), learning some interesting facts about conlangs and linguistics along the way.

L14219: Introduction to Esperanto Full! What's Esperanto? It's the most widely spoken invented language, actively spoken by around 200,000 people all over the world. It's really easy to learn! You'll learn more Esperanto in this hour than you'd learn German in ten hours. By the end of the class you'll be able to form basic sentences in Esperanto.

L14308: Oracle Bones to Spoken Tones: Comparative Linguistics with Mandarin Chinese Closed! Teachers: Caitlin Fukumoto, Alexander Greer, Jason Li Join us for a fun hour and a half as we dive into the evolution of the Chinese language! Through various ~*engaging*~ activities, we'll learn about how Chinese characters came to be and compare their development to that of English. We'll explain what you need to know along the way, so you don't need to know anything about linguistics or Chinese!

L14279: In Your Own Words Closed! Teachers: Caroline Bonnett, Nghiem Pham "Twas brillig, and the slithy toves / Did gyre and gimble in the wabe..." Why is it that when Lewis Carroll writes a bunch of nonsense like $$\textit{Jabberwocky}$$ he gets called a visionary, but when I hand in my original poem "Dyufrg Pcoihgbk Kiuygbnjhgf" the only reception I get is "it's just a bunch of keysmashes" and "you need to start taking this class seriously, Caroline"? What makes his fake words better than mine? The answer is linguistics! In this class, you'll learn what makes a good fake word work. Along the way, we'll discuss linguisticky stuff like phonology, morphology, and semantics. At the end of class you will have made several of your very own fake words, with which you can annoy your friends and family.

L14251: Magic Systems in Fantasy Stories Closed! Teachers: Sherene Raisbeck, Josh Shaine We'll spend a good chunk of this class exploring how magic is presented in a variety of books and movies, after which we will see about constructing our own, either individually or collectively, as the class prefers. Must love fantasy stories!
L14359: Summarizing Tolkien's Silmarillion Closed! Teachers: Eric Wooten Read The Lord of the Rings and loved it? Into world-building on the heroic scale? This class takes students through a rapid breakdown of events in J.R.R. Tolkien's Silmarillion, starting from the awakening of the Elves and finishing with the end of the First Age.

L14168: The Basics of Chinese Closed! Teachers: Jennifer Ai, Ruiying Zheng In this short class, we will teach you how to speak, read, and write some of the basic words (or any words you wish to know) in Chinese. From learning intonation to understanding character strokes, anyone is welcome to join and learn a little more about this beautiful East Asian language!

L14204: International Phonetic Alphabet Closed! Teachers: Yuru Niu, Alyssa Solomon Learn how to pronounce stuff and categorize the ways people pronounce stuff, and also how to read those strings of letters and other symbols that appear on Wikipedia pages. Also useful if you want to try making your own language at some point. Be able to pronounce words.

L14393: Lunch Period Section 1: 595 (max 1000000)

M14304: Card Games and Combinatorics Full! Teachers: Jennifer Choi, Yuyuan Luo, Joy Ma Learn all the tips and tricks of winning card games through combinatorics! Simply put, combinatorics is a branch of math about counting. Studying combinatorics can help us better understand how card games work and how to strategically make the best decisions in them.

M14274: Fractions Continued Closed! Teachers: Merrick Cai, Ezra Erives, Benjamin Wu, Jerry Zhao Do the inherent limitations of classical fractions keep you up at night? Do you wring your hands in frustration at the blandness that is the denominator? If you answered yes to both of these, one of these, or even neither of these, you'll be glad to know that continued fractions have you covered. In this class, we'll explore some surprising results about continued fractions and learn about their applications to rounding, approximation, tax fraud, and even Pell's equation. As a bonus, check out the cool example below! $$ \pi=3+\frac{1^2}{6+ \frac{3^2}{6+ \frac{5^2}{6+\dots }}} $$ Familiarity with fractions. The class will make heavy use of the first ten positive integers. Knowledge of eleven and beyond is recommended, but not required.

M14148: Intro to Cryptography and Encryption Closed! Teachers: Brooke Bensche, Sawyer Koetters What does encryption really mean? What makes it secure? What is a cipher? A block cipher? How do you use them? Come and learn the basics of one of the most fascinating and important subjects in mathematics! No background knowledge needed! We will cover message integrity, keys, one-time pads, and the beauty of prime numbers.

M14291: Quick Introduction to Graph Theory Full! Teachers: Pachara Sawettamalya Have you ever wondered how Google Maps determines the fastest route? How Expedia finds the cheapest flights? How Facebook makes friend suggestions? How to solve mazes without guessing? Or even how robots avoid obstacles? If any of these questions resonate with you, this class is for you! In this class, you will learn basic graph theory, a major field at the intersection of math and computer science. You will learn what a graph is and how to build one. You will learn some basic search algorithms and, essentially, how to use them to solve some real-life problems (for a taste, see the sketch just below)! In short, this class will give you a quick introduction to graph theory along with some applications that you can really use in daily life.
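Breadth-first search is the prototypical such algorithm: it finds a fewest-hops route in an unweighted graph, the same idea underneath maze solving and friend suggestions. A minimal sketch in Python (the toy map is invented):

```python
# Breadth-first search: shortest (fewest-hops) route in an unweighted graph.
from collections import deque

graph = {                      # a made-up neighborhood map
    "home":    ["cafe", "park"],
    "cafe":    ["home", "library"],
    "park":    ["home", "library", "gym"],
    "library": ["cafe", "park", "mit"],
    "gym":     ["park"],
    "mit":     ["library"],
}

def shortest_path(start, goal):
    parents, frontier = {start: None}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:                 # walk the parent links back
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in parents:       # first visit = fewest hops
                parents[nbr] = node
                frontier.append(nbr)

print(shortest_path("home", "mit"))      # ['home', 'cafe', 'library', 'mit']
```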
No technical prerequisite is required -- just come and have fun!
- very basic math (addition/subtraction)
- great observation skills
M14209: SET Theory Full! Teachers: Isabel Anderson
Have you ever wondered about how math relates to board games? This class will go over introductory combinatorics, modular arithmetic, and affine geometry and how they relate to the game SET. If you have never played SET before, that is fine. However, you will get more out of the class if you familiarize yourself with the game by reading the rules and trying one of the online versions of SET beforehand.
M14210: Introduction to Fractals Closed!
Fractals look cool, but have you ever wondered how they're constructed? In this class, we will be going over the definition of a fractal, the basics of complex numbers, and how the Mandelbrot and Julia sets are constructed. If there is time, we will also discuss what dimension fractals exist in. Know what a function is.
M14216: So you want to be a math major Full! Teachers: Isabel Anderson, Lloyd Page
Do you want to study math in college? Learn about what being a math major entails, what sorts of classes you'll take, different types of degrees you can get, and what employment options look like for math majors. At the end of class, we will answer any questions that people have.
M14320: Crash Course in Game Theory Full! Teachers: Avichal Goel, Mihir Singhal
In the prisoner's dilemma, two prisoners face a critical decision: to defect or to cooperate. While it would be best if both prisoners cooperated, each one has something to gain by defecting, and both end up getting a worse overall outcome than if they had both cooperated. Come learn about interesting problems like this in our crash course in game theory, which explores the science of optimal decision-making.
M14343: Philosophy of Probability Full! Teachers: Jesse Yang
What exactly is probability? Is it merely assigned to measure the tendency of a physical, objective reality, like rolling a die or flipping a coin? Does it measure the strength of our subjective beliefs, based on expectation and experience? With no regard for accuracy, we'll examine interpretations of probability based in math, frequency, propensity, and subjectivity, and most likely discover that each interpretation is, in its own manner, incomplete.
M14223: Splitting Cake with Sperner's Lemma Closed! Teachers: Nelson Niu
You and your friends have attained a large, multi-flavored cake. You would all like to eat some of that cake, but you each have slightly different preferences about what part of the cake you want. Some of you love the coconut shavings; some will avoid the chocolate icing at all costs. The large scoop of ice cream in the corner is particularly popular. Is there a way to split up the cake fairly amongst yourselves—without losing any friends in the process? It turns out there is, and we can prove it! All it takes is a cute little theorem about coloring points in triangles called Sperner's Lemma. In fact, not only does our theorem tell us that a fair division exists, it can even tell us exactly how—plus or minus a sprinkle. Come see how it all works here! Some experience with proofs is recommended; you should, at the very least, know how to prove a statement by induction. It would also help to know what the graph of $$ x + y + z = 1 $$ looks like.
M14173: Proving Löb's Theorem Full! Teachers: Jason Gross, Allison Strandberg
Löb's theorem is a beautiful theorem with a deceptively short proof.
It states that $$\square (\square P \to P) \to \square P$$ for all $$P$$---that if you can show that proving $$P$$ is sufficient to make $$P$$ true, then you can prove $$P$$. Löb's theorem has a variety of applications, from enabling robust cooperation in the prisoner's dilemma to curing social anxiety, and from proving Gödel's incompleteness theorem to proving that the halting problem is undecidable. I will present a few proofs of Löb's theorem, all of which are twisty in subtly different ways. We will spend the rest of the time working on wrapping our minds around these proofs, and discussing related topics. I will assume that you understand the difference between "P is provable" and "P is true".
M14303: Introduction to Logic Full! Teachers: Jennifer Choi, Laura Cui, Paige Dote, Aparna Ajit Gupte, Joy Ma
You might know 1+1=2, but have you ever wondered why it isn't equal to 3? How about what all those strange backwards E's and upside down A's mean? Come learn about the foundations of formal mathematical logic and learn how to prove why anything is true! We'll cover the basics of propositional logic, truth tables, and set theory.
M14386: How To Think About Four Dimensions, and Beyond Closed! Teachers: Zachary Steinberg
Thinking about four dimensions may sound scary, but it's actually surprisingly simple. You're probably used to graphing points using two coordinates, but what happens when those coordinates are circles instead of numbers? That simple question leads to a surprisingly flexible way to think about spaces our three-dimensional brains can't imagine. Along the way, we'll answer some questions. How do mathematicians think about four dimensions? Isn't the fourth dimension time? If so, how do we distinguish between possible universes? What's a "manifold" and why do mathematicians love them so much? Why is tying knots impossible in four dimensions? Why can't you glue a piece of paper into an origami Klein bottle? Come find out! By the power of 𝟛 𝔻 𝕘 𝕣 𝕒 𝕡 𝕙 𝕚 𝕔 𝕤, come learn how to think about dimensions beyond our imagination – visually! If you know what points, planes, and spheres are, you're good. Bring with you a love of cool visuals. Calculus might let you get more out of the class, but you don't need it!
M14302: Puzzle Hunt! (math-based) Closed! Teachers: Elisabeth Bullock, Jennifer Choi, Helen Hu, Joy Ma
Put your math skills to the test with this puzzle hunt! Open to everyone, and we'll help you form teams (or you can come with one!). Materials for this class include: Puzzle slides
M14352: The Mathematics of Pokemon Closed! Teachers: Jeffery Li, Andrew Lin
Have you ever wondered about the specific mechanics of Pokemon, like type matchups and damage calculations, that allow for competitive Pokemon battling or Pokemon speedrunning? Have you ever heard of the terms "EVs" and "IVs" and wondered what they meant or what they do? In this class, we'll go over some of these mechanics in theory, and a little bit of how they are put into practice!
M14172: Linear Logic Closed!
It's a well-known fact of logic that if from $$P$$ you can get $$Q$$, then from $$P$$ you can also get $$Q$$ and $$Q$$.* So since you can get two dimes and a nickel from a quarter, you can get two dimes and a nickel and two dimes and a nickel from a single quarter. Come learn about linear logic, a version of logic which doesn't claim that you can get infinite amounts of money from a quarter.
*For example, since $$n = 2$$ implies that $$n = 1 + 1$$, it follows that $$n = 2$$ implies both $$n = 1 + 1$$ and $$n = 1 + 1$$. You should know about truth tables, and the "and", "or", "not", and "implies" logical connectives.
M14213: Mathematical Modeling Closed! Teachers: Jessica Oehrlein
Math modeling is how we use mathematics to study open-ended questions about real-world phenomena. What's the best location for a food truck? How does an invasive species affect an ecosystem? How do we clean up space debris? These are all questions that we can start to answer with math modeling. The goal of this class is to introduce you to the modeling process. By the end, you'll have developed models to answer questions about a couple of different scenarios, and you'll know about some of the tools you can use to tackle more significant modeling problems. Comfort with algebra and a willingness to tackle very open-ended problems.
M14357: Wrong?? Math Closed! Teachers: Carl Schildkraut, Brandon Wang
Some things in math look true but are false, and some things in math look false but are true. No formal prerequisites. Being interested in math will improve the class experience, but anyone is welcome.
M14284: SET (& the mathematics of) Closed! Teachers: Lisa Kondrich, Hilary Zen
Learn how to play the card game SET, what it means when people call it a hypercube, and how to win the game without flipping over the last card (and the math behind why the trick works!) No math background required, though familiarity with how modulo works can be helpful :)
M14248: Beyond the Paper Airplane: Mathematics with Origami Full! Teachers: Crystal Owens, Samuel Tenka
Lots of people use pen and paper to do math. Come learn to do it with just paper: add, divide, multiply, and even solve quadratic equations for x, with a hint toward the context of Galois theory and fields. You should have paper and a ruler ready to go.
M14313: Building Quantum Circuits Closed! Teachers: Jordan Hines
How does a quantum computer work, and what does an algorithm for one look like? In this class, we'll talk about the fundamentals of quantum computing with the help of IBM's online circuit composer, which will allow you to build your own quantum circuits and see what they do! Comfort with matrix operations
M14265: How to Break Rules in Math Closed! Teachers: H Azzouz, Dustin Jamner
When do a triangle's angles add up to more than 180 degrees? When is 11 a multiple of 2? How many numbers are there? We'll take a look at some examples your math teacher probably didn't show you. This class is aimed at students with interest in math, but without advanced background. For extra entertainment, you can bring your own rule and we'll see if we can break it. Materials for this class include: Presentation
M14301: Women in Mathematics Closed! Teachers: Elisabeth Bullock, Katie Gravel
A historical and current survey of important women in mathematics, taught by women from MIT's Undergraduate Society of Women in Mathematics! Materials for this class include: pokemon_card_template
Pop (and not-so-pop) Culture
P14184: Learn about Chinese RAP!! Closed! Teachers: Ariel Mobius, Jessica Wu
Hi, I'm sure some of you listen to rap, but would you also like to listen to culturally-appropriated rap from countries you don't actually expect rap from? In this class, I shall educate you on the history of Chinese rap, the drama, the tea, and of course give some great song recommendations. Learning to rap may or may not be included, depending on network conditions. None!
But understanding Chinese well is very helpful for appreciating the lyrics.
P14235: History and Strategy of Minecraft Speedruns Closed! Teachers: Victor Luo, Arun Wongprommoon
This video, I made it so that students coming in will get to know about Minecraft speedrunning. This goes way back before Dream even started his YouTube channel. We'll get to see Illumina, Benex, NiceTwice, TheeSizzler, Dimeax, Korbanoes and other big-name speedrunners, and see their strategies throughout the years. Will they beat Minecraft (quick enough)? We're about to find out. Also, only a small percentage of people that watch my videos are actually subscribed, so if you end up liking this video, consider subscribing; it's free and you can always change your mind in the future. Enjoy the video.
P14181: Conspiracy Theories: The Fake from the Real Closed! Teachers: Subhash Kantamneni, Swathi Senthil
From the flat-earth theory to 9/11 being an inside job to Paul Pierce faking an injury during the 2008 NBA Finals, conspiracy theories on every scale run rampant in today's world. Many people universally roll their eyes at any and all of these theories, but then again, it's important to remember that they encourage a healthy skepticism about the world. After all, no one would ever think a sitting president could organize a robbery, but that's exactly what Watergate was. It always sounds unbelievable until it's proven true, but how do we separate the fake from the real when it comes to conspiracy theories? Take this class to find out.
P14353: Introduction to K-Pop Closed! Teachers: Jenny Cai, Philena Liu, Brandon Pho, Brandon Wang
Interested in kpop? Want to learn more about the idol life, group structure, and the entertainment industry in Korea? If you're a casual fan, or even completely new to kpop, join in to listen (and potentially fangirl/fanboy) to some kpop songs, possibly become *surprised pikachu face* at kpop's infamous scandals, and discuss anything (and maybe a tad bit of everything) about kpop!
P14236: Eurovision Song Contest Closed!
Good Evening Europe! I discovered the Eurovision Song Contest back in 2014 and have been watching the contest every year. What is this TV show? Why do I like it so much? Come learn about the contest, and join me in watching all-time favorite songs and performances from this huge spectacle!
P14362: Minecraft advancements Closed! Teachers: Victor Luo, Yuru Niu
Here we learn how to get every advancement in Minecraft. Know how to play Minecraft
P14229: Fugues! (FYOOgz) Closed! Teachers: Henry Hu
You may have heard of The Master, but are you interested in learning about his coolest works, the mighty fugues? Do you go on YouTube often to learn about stuff? Do you want to hear music in a new, more immersive way? Let's discuss what a fugue is, what makes it special, and then listen to a bunch of fugues with visual graphs from YouTube to help us understand. We will listen to everything from "child's play" to "in one ear and out the other" and everywhere in between! Figure out why people have been listening to this beloved work for centuries... Some ability to read music is preferred; curiosity and motivation are mandatory! And ears! Materials for this class include: Lesson Plan, Fugues: Further Material, Fugues! (FYOOgz): Presentation (view all 11)
P14290: Media Binds or Blinds? Eradicating Algorithmic Bias through Media Education Closed!
Teachers: Jeff Bennett, Melda Yildiz
This workshop investigates the role of algorithmic bias and injustice, integrating new technologies such as the Global Positioning System (GPS) while developing global competencies, geospatial intelligence, and computational thinking skills. It offers creative strategies and possibilities for eradicating myths and misconceptions in education. We will engage in a wide range of media literacy activities exploring geospatial and computational thinking skills. We will investigate alternative points of view on news, global issues, algorithmic bias, and social justice through media literacy education.
P14300: Hololive and Virtual Youtube: Why are anime streamers showing up in my recommended? Closed! Teachers: Brandon Pho
Enjoy the cute, moe aesthetic of anime? Wondering why people have been throwing thousands of dollars at livestreamers? Come learn about the intersection of these two phenomena, or come laugh at a weeb college student simping for anime waifus! This class will give an overview of the current trend of virtual streamers, its origins in idol culture, and why the genre is so appealing to many. No experience necessary, but you should probably at least have heard about anime to fully understand some of the context behind this genre.
P14262: Time Travel in Popular Media Closed! Teachers: Joe McCarty, Anthony Ou, Albert Qin
Do you like crazy time travel movies? Have you ever listened to a character exposit about time travel for 10 minutes and thought "Huh, I want to learn more about this"? Well then, this is the class for you. In this class, we are going to be talking about time-lines, time-loops, and paradoxes from some of our favorite stories. We'll talk about stories that make sense and stories that don't. It'll be a lot of fun! Warning: there are going to be spoilers for a lot of stuff. Here is a list of the spoiler warnings for those who care about that: Predestination, Avengers: Endgame, Terminator, Interstellar, Arrival, and Dark. Maybe watch the movies we put spoiler warnings on if you care about that.
P14174: A Herstory of RuPaul's Drag Race Reveals Closed! Teachers: CJ Quines, Jessica Wu
How are the Pokémon Trading Card Game and RuPaul's Drag Race alike? What are reveals and how did they start? And why is a two-second reveal so much more entertaining than a two-minute dance routine? Come watch some thrilling Drag Race clips with us, as we talk about what separates good reveals from bad ones. Content note: RuPaul's Drag Race is rated TV-14. There will be spoilers, as we'll be watching episode endings and season finales. None. Drag Race fans might've already seen most of these before. So you're encouraged to come *especially* if you haven't watched RuPaul's Drag Race before!
P14224: Revelio: What Harry Potter Teaches Us About Writing Shocking Plot Twists Closed!
Think of that one book or movie with a killer twist ending, a shocking reveal that blew you away and left you stunned in your seat, marveling at the storyteller's cunning and cheek. (Don't tell me what it is, though: I don't want to be spoiled.) Ever wondered how the writer or director managed to pull it off? Now think of that other twist ending that was absolutely atrocious, that left you feeling confused, cheated, or rolling your eyes because you'd seen it coming from a mile away. (Again, don't tell me what it is: spoilers for bad stories are spoilers, too.) What makes some twists work and others flop?
This is an intriguing topic that is very difficult to discuss because, as you've probably already noticed, talking about plot twists is difficult without, well, actually talking about those twists, thereby spoiling them. Fortunately, there's one series chock-full of excellent surprise endings that nearly everyone in our generation has already been exposed to: Harry Potter. We'll examine and discuss the techniques employed in the Harry Potter books to create some of the most shocking revelations, as well as the broader thematic ideas that a clever twist can convey. And perhaps you'll come out of this class ready to craft your very own mind-blowing plot twist. You should be familiar with all seven books in the Harry Potter series.
P14331: Intro to Traditional Astrology Closed! Teachers: Deon Mitchell, Megan Smith
Not sure what a birth chart is? Don't know your rising sign? Puzzled by the phrase "Mercury retrograde"? You've stumbled upon the right class. Join Deon Mitchell, a Real-Life astrologer, in discovering the sky and how its motions impact you, society, and the world. You'll learn the basics: the signs, the planets, and the houses, and you'll walk away able to make accurate chart interpretations, with new insight into your own chart. Astrology skeptics, beginners, and enthusiasts welcome. Knowledge of your own birth day, time, and location (or good approximations)
S14295: Formaldehyde: stories of pathology Closed! Teachers: Andrew Thompson
An introduction to pathology's favorite tissue fixative. We will explore the chemistry of the formaldehyde molecule and its role in tissue preservation and embalming, billiard balls, mirror manufacture, moonshine, and a variety of other areas of life. Then we will circle back to pathology with the discovery of antigen retrieval, and how this opened a new era of immunohistochemistry. Basic high school biology useful
S14201: Brain Injuries and How They Help Us Understand the Brain Closed! Teachers: Kian Attari, Athena Capo-Battaglia
This course will briefly introduce the main lobes of the brain (for some background) before going into different case studies that touch on cognitive neuroscience. The studies that will be discussed will range from people who had a complete change in personality after an accident to someone who became amnesic after surgery. I will also introduce sources for future reading so that students can continue learning beyond this course. Background knowledge not necessary! Come learn about some really interesting cases!
S14334: How Does Global Warming Work? An Introduction to The Greenhouse Effect and Other Mechanisms Closed! Teachers: Phoebe Lin, Lily Zhang
The Sun is constantly emitting radiation... but what happens after it reaches the Earth? Learn about the radiative processes that occur within our atmosphere to understand the mechanisms behind global warming. Basic understanding of algebra recommended
S14247: Microbiome 101: What's in your poop? Closed! Teachers: Izzy Goodchild-Michelman, Alexandra Poret, Tzu-Chieh Tang
Come learn about the microbiome and where you can find it! We will discuss current methods to study the microbiome, correlations with diseases and the environment, and how you (and your poop!) can help somebody through microbiome science. Who doesn't love talking about poop?!
S14212: COVID-19: What We Know Closed! Teachers: Charlotte Armitage, Camryn Kellogg
All of our lives are being impacted by COVID-19, and this class will explore what we know so far, how we know it, and what this means moving forward.
This is a class for people of all levels: we will cover the very basics of viruses, what makes COVID-19 unique, and how we can best protect ourselves. Anyone of any level is welcome!
S14260: CRISPR Crash Course Closed! Teachers: Angela Gao, Christina Yu
Ever heard of CRISPR, the biology class buzzword and purported future of human evolution? (One of these descriptions is more accurate than the other.) Come learn about the differences between the fact and fiction of this powerful gene-editing tool! We'll cover the basics of the discovery of CRISPR, its original functions in bacteria, and how it has been engineered for a range of different applications today. Introductory biology recommended (basic knowledge of genetics and biochemistry)
S14194: What Color is #TheDress? The Neuroscience of Human Vision Closed! Teachers: Zawad Chowdhury, Christina Wang
Is the dress blue/black or white/gold?!! Come hear us attempt to resolve the #dressgate that went viral on Twitter, learn about cool facts of human color vision, and pick up some methods scientists use to study the brain along the way!
S14269: Mystifying Tunes, Temperaments, and Overtone Singing Closed! Teachers: Jung Soo Chu, Adriano Hernandez, Sathwik Karnik, Shahir Rahman
Ever wondered if it's possible to sing two notes at once? It turns out yes, and much more! In this class, we will cover an overview of audio signals and how sound is physically produced and transmitted. This includes an introduction to spectrograms, waveforms, and frequencies. After this, we will explore how to hear and identify these interesting properties of music. Then a large part of the class will be spent activity-style on a fun application of this knowledge: the strange and mystifying art of unusual singing techniques, including overtone singing and Tuvan throat singing. We will record our voices and examine the frequencies to augment our singing. A computer/phone with a microphone is required for this course.
S14155: black holes! Closed! Teachers: Emma Batson, Anjali Nambrath
Black holes live in the middle of galaxies, spew out hot plasma, and gulp up stars. We'll talk about the history of black holes, what happens when you get too close to a black hole, what black holes do to space and time, and some cool black hole thought experiments. If you want to learn more about all sorts of astrophysical weirdness, this class is for you! If you've taken high school algebra, you should be okay. There'll be a little calculus, but don't worry!
S14192: Epidemiology: Disease Snapshots Closed! Teachers: Kenneth Cox, John Shackleton
Are you tired of hearing about COVID-19??? Do you want to learn more about diseases that are deadlier, more contagious, and less manageable than COVID??? In this class, we'll survey a few prominent pathogens with pandemic potential (and some without pandemic potential). A little basic biology would be helpful (e.g. knowing the difference between DNA and RNA)
S14371: Fantastic Semiconductors and Where to Find Them Closed! Teachers: Michelle Hsu, Mehrab Jamee
Want to know what's behind the computing power of your computer and smartphone? Microchips, logic gates, and even LED lights all depend on the semiconductor, and it's not just something that's okay at conducting electricity. Come find out what makes it all work!
S14333: Science and Economics of Climate Change Closed!
Climate change is always in the news, but what is the science behind how carbon dioxide, chlorofluorocarbons, and other human activities impact our climate?
What are the economic drivers behind pollution, and how successful has environmental policy been in addressing climate issues? This class will give you a unique understanding of the climate change issue from a scientific and economic perspective. We'll explore the acid rain problem and discuss what the success of the Acid Rain Program in the late 1900s says about how different policy instruments can address environmental issues. Some chemistry helpful, but not necessary.
S14252: Geobiology: What the Earth Teaches us about the History of Life Closed! Teachers: Justin Duffy, Kelvin Li
How do we know what life was like millions of years ago? In this class, we will discuss how life interacts with the Earth, and how studying rocks, fossils, and biogeochemical cycles can help us understand the history of life. Topics will include fossils and fossilization, relative dating, the Great Oxygenation Event, mass extinctions, and climate change. None required, although basic biology and chemistry knowledge may be helpful.
S14156: the standard model! Closed!
The Standard Model is (as far as we know) the best description of our universe. It summarizes the particles that make up everything around us (as far as we know), and also some things that aren't usually around us. If you want to unlock the mysteries of the universe (as far as we know), join the club and take our class! Caring about physics!
S14319: Do Fish Have Feelings? An Introduction to Marine Biology Closed! Teachers: Laura Cui, Katherine Lei
Come explore the ocean and learn everything you never thought you needed to know about fish and other marine creatures! It'll be a whale of a time :)
S14277: Become a BioMaker! Closed! Teachers: Rachel Shen, Janice Tjan, Melody Wu
The goal of this course is not to make you all biologists, but biology enthusiasts and biomakers. In other words, we hope to spark your interest in fields of technology where you can apply biology, or lead you to notice biology in your lives. We especially want to highlight the ways in which there can be low-tech solutions for conducting biological research, and ways to innovate within this realm! No background in biology required!
S14360: Wildfires Closed! Teachers: Clair Travis
Fall means fire season for the Western United States! Ever wonder why California always seems to be on fire? Or how a fire like this year's "August Complex" gets to be larger than Rhode Island? Or how fires like that are even fought? In this course we'll cover the basic ecology behind wildfires, how they are dealt with, their causes, and why they seem to be getting worse. We'll also go over resources at your disposal as a curious citizen for all large fires in the United States. Materials for this class include: Wildfires slides (annotated)
S14344: The Secret Life of the Periodic Table Closed! Teachers: Dhyey Gandhi, Mudita Goyal
The periodic table is a familiar sight in all chemistry classrooms. But did you ever realize that all those elements are much more than entries in a table? Each has its own exciting story of discovery, usage, and incidents connected to it. We will explore some of the most enthralling stories connected to these elements, which will help you better appreciate and enjoy the wonderful subject of chemistry. Nothing in particular. Just some enthusiasm and interest in science and chemistry
S14145: Debates in Bioethics Closed! Teachers: Talya Kramer
This class will introduce you to the exciting field of Bioethics!
Bioethics is a dynamic topic that discusses and debates ethical issues that relate to science and medicine. This class will first cover how bioethicists think about these problems and then will provide an interactive environment to discuss bioethical debates with your peers. Topics covered might include: at what age can teenagers reject medical care? How should we test a coronavirus vaccine? How do we determine how to organize organ donations?
S14293: Methylene blue: stories of pathology Closed!
The basic dye and redox indicator methylene blue will be the central character in a series of tales which will serve to explain several key concepts in human pathology. Antihistamines, antipsychotics, malaria, sickle cell disease, Viagra, fish tanks, and a variety of seemingly disparate topics in biology will be explained with the help of methylene blue. Understanding basic high school biology and acid-base chemistry useful.
S14335: Exploring the Solar System Closed! Teachers: Elizabeth Bitman, Becca Mastrola, Anna Plank
Are you interested in traveling across space from the comfort of your socially-distanced couch? If so, come join us on our mission to explore the solar system! On our journey, we'll investigate the workings of the Sun, terrestrial planets, and gas giants! Additionally, we will discuss some odd topics such as why Mercury is shrinking, possible life on Jupiter's moons, and why we live inside the Sun! If you're ready to accept our mission and learn about what might be the coolest star system ever, put on your helmets, strap in, and get ready for lift off in T-minus 3...2...1...!
S14158: From Engineering Genes to Fish Glowing Green: the Basics of the Biology and Ethics of Gene Editing Teachers: Mina Ghobrial, Anna Kolchinski
A quick dive into modern genetic engineering techniques and the ethical implications! 1 biology class
S14246: The science behind face reading: Physiognomy Closed! Teachers: Beyza Yurt
Do you ever think about learning about someone without needing to talk to them (or stalk them!)? Well, I can't teach you how to read someone's mind, but I can teach you how to interpret someone's personality just by looking at their face. If you're interested in diving deep into the science behind physical appearance and what information it gives you about someone, join my class!
S14211: Lung Health 101 Closed!
Are you pre-med? Interested in diseases of the airway and lung? Got asthma?? Then this is the class for you! We will cover everything to do with lungs, including anatomy, cellular biology, air quality, disease, pollution, and smoking. No pre-reqs! Everyone is welcome!
S14296: Stupid Human Tricks Closed!
We'll cover some of the more unusual examples of how the human body can be an example of exquisite functional design or a completely stupid fail. From here I hope to encourage discussion and Q&A about any bodily myth, mystery, or ailment. Topics include our weak backs, a penchant for atherosclerosis, the valgus knee, and why this tells you Bigfoot has to be a guy in a furry suit, etc.
S14298: The importance of chemical reactions in organic synthesis Full! Teachers: Ygor Moura
Have you heard about the Grignard reaction? Do you know why it is important? Why are couplings important? Why are reactions that form C-C bonds so special? There are many Nobel Prizes that have been awarded to people who work with these reactions. Come learn a little more about the underlying principles behind many syntheses!
General chemistry, and basic knowledge of (or interest in) organic reactions
S14191: The Science of Happiness Closed!
We all know a thing or two about happiness from experience. But what can science teach us about it? In this class, we will review research on human happiness and explore what the scientific method can - and can't - teach us. None, but to get the neural circuits firing, you may want to read some articles from Arthur C. Brooks' column in The Atlantic: https://www.theatlantic.com/category/how-build-life/
S14162: Mission: Climate Closed! Teachers: David Mazumder
Students will engage as stakeholders in a climate summit tasked with limiting global warming to less than a disastrous 2 degrees Celsius. Following a brief introduction to climate change physics and policy tools, students will take on roles in a simulated United Nations negotiation to chart the world's course to prevent climate catastrophe, using an interactive climate policy simulator based on the best available economic, climate, and energy models. We will cover what is causing climate change, what actions are required to halt warming, and how students can take action in their communities.
S14354: How to Find an Exoplanet! Closed! Teachers: Mehrab Jamee
So you sat down at your telescope for a few weeks straight and stared at a star, and noticed that it dimmed at regular intervals. You may have found an exoplanet, in the same way as NASA's Transiting Exoplanet Survey Satellite (TESS)! A project by the MIT Kavli Institute, the satellite launched in 2018 and finished its primary mission in July 2020, having scanned the northern sky for exoplanets using this transit method. How can you find an exoplanet this way? Just how much information can you find out about the planet with this method? Why do we even care about finding planets outside of our solar system? Inquire within!
S14326: Physics at the Atomic Scale and Beyond Closed! Teachers: Keith Phuthi
We try to explain why matter at the everyday scale behaves the way it does, starting from atoms all the way up to ordinary-sized objects. How does the interaction of atoms and molecules result in solids, liquids, and gases? Why do they behave the way they do? This will be done mainly through showing you different computer simulations and animations. A basic understanding of Newton's Laws of motion
S14345: Fundamentals of the CRISPR-Cas9 World Closed! Teachers: Mudita Goyal, Stuti Khandwala
Want to learn about gene-editing? Wondering why you should care? Well, if you've been hearing all the buzz about cutting-edge research efforts for cancer and disease treatment, or the controversial possibilities of creating designer babies, and are curious about the technology that is making it all a reality—you've come to the right place! CRISPR gets down to the very element encoding us, the DNA we carry in our cells, in order to change it and modify the organism, allowing for a host of applications researchers are keen to create. We'll take you through the entire process and relevance of modifying DNA and then do deep dives into how the CRISPR system operates. You will learn about how we discovered CRISPR and even where we're heading with the technology. Introductory high-school biology would be beneficial, but the lectures will touch on the basics too.
S14259: Color, Light, and Perception Closed! Teachers: Amy Lin, Andrew Maris
The old adage goes: "believe half of what you see and none of what you hear." Indeed, sight plays a central role in how we navigate our lives.
In this class, we explore the ways in which the nature of color and light can enable - or fool - our perception. We will touch on physics, philosophy, and aesthetics.
S14374: Organic Chemistry is fun and cool and approachable Full! Teachers: Zawad Chowdhury, RuiYang Guo
Does organic chemistry have a scary reputation in your mind? Have your village elders warned you of its difficulty? Come to this class to find out why that's all fake news and see that organic chemistry is fun and cool (TM). Learn about all its interesting applications in understanding and improving the world around us, and listen to my strong feelings about pushing arrows around! High School Chemistry
S14281: The Science behind Music Full!
What makes the sound of a guitar different from a piano? How do computers process audio to detect what song you are listening to? In this class, we will cover an overview of audio signals, the math behind them, and how this relates to the music you hear. This includes an introduction to spectrograms, waveforms, frequency analysis, and Fourier transforms. Basic math and program implementation will be covered. We will explore how these interesting properties of audio signals affect the sound that you hear. The next part of the class will be spent activity-style exploring audio processing and synthesis. This includes short lab activities for note detection/transcription, song matching (Shazam), and audio synthesis using software. We will also explore interesting musical phenomena including tuning systems, microtonality, audio quality, and harmonics. (A small code sketch of the kind of frequency analysis involved appears just after this block of listings.) Pre-calculus (basic trigonometry, sine, cosine) is required. Calculus knowledge will be helpful, although it is not required. A computer (Windows or Mac) is required for this course.
S14382: Your Brain on Love Closed!
Ever heard the phrase "love is a drug"? Sadly it never seems to be in stock at our local CVS... come learn about the brain chemistry of infatuation, emotional management strategies, ingredients for a healthy relationship, and more!
S14261: Protein Folding Closed!
https://xkcd.com/1430/ If you think origami is hard, imagine folding a chain of amino acids into a fully functional protein! Come learn about the twists and turns of the protein folding problem and what progress scientists have made so far, from Levinthal's paradox to the biennial CASP competition and more. Try it out for yourself during the second half of this class, where we'll introduce the protein folding game Foldit and finish with a mini-contest! Introductory biology recommended (basic knowledge of biochemistry and proteins)
S14264: A Brief Introduction to Time Travel Closed!
Time travel is everywhere in popular media, but have you ever wondered, "is this stuff real?" In this class we are going to be talking about both the classic paradoxes and physical theory. Topics include special relativity, the many-worlds theory, and general relativity.
S14228: Zooming in on Organic Molecules Closed! Teachers: Kelly Chen, Jarek Kwiecinski
Molecules are dynamic objects with diverse structural and electronic features that affect their chemical properties and reactivity. In this course, we'll introduce some concepts about the three-dimensional structure of simple organic molecules and discuss how that structure, as well as other properties such as electronegativity, can be used to explain and predict how molecules react. We'll end with some real-world examples that showcase the importance of organic chemistry in our everyday lives. High school general chemistry course.
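An aside for the curious reader (not part of any class listing): the frequency analysis mentioned in "The Science behind Music" (S14281) above can be sketched in a few lines of Python. This is a minimal illustration, assuming only the numpy library; it synthesizes a pure A4 tone and recovers its pitch with a Fourier transform, the same basic idea as the note-detection activity the blurb describes.

import numpy as np

RATE = 44100                                  # samples per second
t = np.arange(RATE) / RATE                    # one second of timestamps
tone = np.sin(2 * np.pi * 440.0 * t)          # a pure A4 (440 Hz) sine wave

spectrum = np.abs(np.fft.rfft(tone))          # magnitude of each frequency bin
freqs = np.fft.rfftfreq(len(tone), d=1/RATE)  # frequency (Hz) of each bin
print(f"Detected pitch: {freqs[np.argmax(spectrum)]:.1f} Hz")  # prints ~440.0 Hz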
S14309: Shockingly Scientific: Intro to Cardiac Electrophysiology Closed! Teachers: Rudy Gelb-Bicknell
No matter how strong you are, if you lift something heavy over and over again, you will get tired. Your arms will start to ache, and eventually your muscles will just stop listening to you. Your heart muscle doesn't have that luxury. It needs to make sure every one of the cells in your body gets fresh oxygenated blood in a single squeeze. It needs to do this without fail every single second from the moment you are born to the moment you die. Why don't our hearts ever need a break? What determines how fast the heart beats? What if our hearts pump too hard or too softly? Too fast or too slow? How does the heart squeeze in just the right way to pump blood all the way around the body with ease? Welcome to the world of cardiac electrophysiology, or the electricity of the heart. In this world you will encounter balloons filled with liquid nitrogen, laser-firing wands, and orbs that rip cells apart with electric fields. We'll first go through the basics of how the heart works, where it sends blood to and where it gets it back from. Then we'll talk about how regular muscles work and how their behaviors are controlled by electricity, as well as what makes heart muscle different from all other muscle in your body. Then we get to the really fun stuff: how electricity propagates through the heart, and what happens when things go wrong in that system. Finally, we'll finish up with a dive into the extremely cool field of 3D electrophysiology mapping and ablation, the tools that doctors use to understand and fix hearts with conduction abnormalities (that's where all those crazy tools come in!). I'm gonna try to cover a lot of ground in this class, so it'll definitely move pretty speedily. There will be some math and basic physics talked about, but I'll do my best to cover any prerequisite material for understanding it. I'll also make sure to stress the key takeaways from each section so that everyone can get the general idea of what is going on, and the people who want to can understand some of the math going on behind it. Any math and physics you have seen will help you understand the material on a more quantitative level, but my goal is that anyone who wants to will walk away from this course having a general idea of how the heart works, why electricity is central to its function, and what happens when things go wrong.
S14325: Quantum Encryption with Alice and Bob Closed! Teachers: Michael Haas
Quantum encryption allows for communication where it is physically impossible for eavesdroppers to listen in on a conversation, which has huge ramifications for secure communication in the future. Come learn how present research on nanoscale physics could lead to global impact! As part of the class, we will use IBM's Qiskit platform to demonstrate the advantage of quantum key distribution protocols. Very little; just some interest in learning about quantum states and entanglement.
S14231: Intro to Soft Matter and Fluids Closed! Teachers: Nicholas Cuccia, Ahmed Sherif
This class will be a short introduction to the physics of soft matter and fluids. Learn about everything from bubbles to colloids to turbulence!
S14292: ADME stories of p̶a̶t̶h̶o̶l̶o̶g̶y̶ pharmacology Closed!
Absorption, distribution, metabolism, and excretion -- an introduction to pharmacokinetics. How pharmaceuticals and other xenobiotics get in, get to where they need to be, and get removed from the body.
The first half will cover the basics by examining the behavior and fate of some common cold medications. The second half will reinforce the first by examining the adventures of adverse or unwanted interactions between medications, the host, and the environment. An understanding of basic high school biology and acid-base chemistry is useful.
S14289: Symmetry and Asymmetry: A Tale of Two Antipodes Closed! Teachers: Kylee Carden, Westley Wu
Come one, come all to the rootinest tootinest class this side of the Mississippi! The concept of symmetry shows up everywhere, from physics to biology and from architecture to mathematics. We will share perspectives on symmetry primarily from the points of view of physicists and biologists. Topics include chirality, biological fitness, stereochemistry, parity, and polarization. Previous experience with geometry, physics, and biology recommended
S14332: Radiation, Antennas, and Einstein Relativity: What They Won't Tell You in AP Physics Closed! Teachers: Michael Albrecht, Christian Ferko
When you shake an electron, it spits out electromagnetic radiation. This fact is the basis of all wireless communication, from radio to wifi to satellite navigation. But beyond these engineering applications, the behavior of moving charges is critical to pure theoretical physics. Einstein's 1905 paper "On the Electrodynamics of Moving Bodies," among the most important papers ever published, showed that the way a charge radiates contains the seeds of a remarkable new subject called special relativity. Come to this class to hear how the study of moving charges led to an idea which revolutionized our understanding of what space and time really are. It will help if you have heard things like "opposite charges attract and like charges repel." In some places we will use calculus in a conceptual and intuitive way, but it is not necessary to have taken a calculus course to follow the talk. Materials for this class include: Slides, Follow Up Email (Saturday), Follow Up Email (Sunday)
W14271: DIY Sustainability with UA Sustain! Closed! Teachers: Kayleigh Dugas
Come hang out with UA Sustain, one of MIT's undergraduate sustainability organizations! Learn how to make ecobricks, rate sustainability hacks on TikTok, and chill with us between your classes. THIS IS A WALK-INS CLASS. You can join it any time you want on the day of Splash, no registration required!
W14390: Minesweeper Closed! Teachers: Yuru Niu, Frank Wang
Minesweeper is a logic/puzzle game played on a grid, where the goal is to clear out all of the safe squares without hitting a mine. Come here to learn how to play, learn a few new tricks, or just have a good time playing Minesweeper! Open to both new and experienced players.
W14389: How to cut hair Closed! Teachers: Johnson Huynh, Sarah Weidman
Learn by watching me learn by doing! I will be cutting my roommate's hair (don't worry, this is 1) with permission and 2) not for the first time). I'll explain the basic techniques that I use to make a haircut look passable, especially for Zoom calls. Let's hope it goes well...
W14391: How to Write Calligraphy: Card Decorating and More! Closed! Teachers: Grace Sun, Daisy Wang
Ever wanted to learn how to write in calligraphy? Wanted to spice up notes or cards? Well, we'll show you! Sign up to learn some tips for modern calligraphy lettering, and by the end leave with a holiday card!
W14373: Science Bowl Closed!
Teachers: Paolo Adajar
Come play high school Science Bowl and try to win bragging rights and learn some awesome things about science! Know lots of science trivia, love hitting buzzers, or just want to try to answer questions ridiculously fast? This is the walk-in for you! You do not need to have familiarity/experience with this game to participate!
W14396: Sporcle! Teachers: Sarah Weidman
Come hang out and do some online quizzes via sporcle.com. Some favorites include:
- Name every Harry Potter character, in order of frequency of appearance
- Name every country and capital
- Fill in the lyrics to popular Disney songs
and many more! Sporcle quizzes will be chosen by request.
W14397: Mahjong Closed! Teachers: Yuru Niu
Play Mahjong with each other!
W14268: 7.4 Seconds for Every Pokémon Closed! Teachers: Jeffery Li
Game Freak has made a lot of Pokémon ever since Red and Blue came out - 8 generations' worth, in fact. Some are clearly more popular and liked, and some never see the light of day. Have you ever wanted to get a glimpse of every single one of them, and show them all some love? Come by and check out all of them in a ~mistake of a powerpoint~ gigantic slideshow featuring all of them, from the most Normal to the exotic, from the cutest to the most menacing, and from the derpiest to the most powerful! No prior knowledge of Pokémon needed, only a deep desire to appreciate all the Pokémon out there :P
W14388: Dominion Closed! Teachers: Matthew Benet, Laura Cui
Missing board game nights with friends? Come play Dominion with us virtually! No experience necessary, don't worry about memorizing what all of the cards do :)
X14324: Chess for First Time Players Closed!
"Every chess master was once a beginner." – Irving Chernev This course is designed to give first-time players an introduction to chess fundamentals. We will begin by going over the rules of the game, then we will cover some basic endgames, tactics, and briefly discuss opening theory. We'll end the day by having our own online tournament!
X14369: All About American Crosswords Closed! Teachers: Emily Yi, Wayne Zhao
You've probably done, or at least seen, a crossword at some point in your life, but have you ever stopped to think about how they became so ubiquitous? Come learn all about the history of crosswords, from their humble tiny origins, through a surprising starring role during WWII, all the way up to the modern history of computer-assisted cruciverbalism! Along the way, hopefully you'll pick up a few resources and some jargon from across all of crossword-dom.
X14280: Communication & consent Closed! Teachers: Valerie Chen, Mathieu Medina, Mariela Perez-Cabarcas
So much has changed this year. For many of us, there have been major shifts in how we interact with those we care about and build our relationships. Essential to any relationship (friends, family, romantic, professional) are respecting boundaries, communicating honestly, and seeking consent, whether together physically or virtually! Join us in a conversation to discuss boundary setting, what consent is, and how to promote healthy communication for yourself and those around you! Materials for this class include: Session 1 Community Document, Session 2 Community Document
X14226: Zooming through Italy: A Virtual Tour Closed! Teachers: Carolyn Johnson, Tatum Kawabata, Francelis Morillo Suarez
Have you ever wanted to eat gelato and pizza in front of the Trevi Fountain in Rome or ride a gondola in Venice? Then this course is for you!
Learn some Italian phrases and take a virtual tour through famous Italian cities and destinations. We will delve into different aspects of Italian culture, from the food to the language and art.
X14318: Intro to Business and Finance Closed! Teachers: Guangqi Cui, Joli Dou, Jacob Furfine, Michelle He, Michael Holcomb, Subhash Kantamneni, Anjalie Kini, Sofie Kupiec, Patrick Ryan, Haley Samuelsen, Christine Sanchez, Alexandra So, Elizabeth Tso, Emily Wang, Archer Wang, Jessica Wu, Angel Yang, Michael Zhao, Kathryn Zhao, Miriam Zuo
So you've heard about finance...but what does that ~actually~ mean? Come join our team of Sloan Business Club members to learn about general finance topics in an interactive 50-min session. After a brief lecture introducing you to the world of finance, you will engage in small groups with one of our members, who will provide you with some tools to conduct simple analyses on potential investments.
X14187: Baking Bao Bread Buns (from scratch!) Closed! Teachers: Brandon Pho, Madeleine Swartz
Bao: steamed buns with various tasty fillings. We're baking bao live for you to watch, with instructions to follow along! Come join us and learn how to make yummy bao. If you want to follow along with the recipe, make sure you have access to a kitchen and the following items:
- an open heating element
- a rolling pin
- a knife
- a steamer (and a pot that will fit it)
- ingredients for the recipe
Materials for this class include: Bao Recipes and Instructions
X14337: The Art and Science of Meditation Closed! Teachers: Aaron Schwartz
Life is busy. Life is stressful. Life is full of distractions. When was the last time you stepped back and lived in the present moment, spent a couple of minutes in the here and now? That's the idea behind meditation, and lucky for you, ANYONE can meditate and benefit from a daily mindfulness practice. In this class, you'll learn how! But that's not all – this is MIT, of course, so we're also going to take a deep dive into the scientific literature to understand how meditation affects the brain, and whether (according to peer-reviewed scientific journals) it really offers benefits to its practitioners. ABSOLUTELY NO previous meditation experience expected or necessary!
X14346: Intermediate Puzzle Logic Closed! Teachers: Walker Anderson, Yuru Niu
If you already know the rules of some logic puzzles, this class will teach some slightly more advanced techniques for those logic puzzles. Know the rules to some logic puzzles such as Masyu, Shakashaka, and LITS
X14377: Things You Should Know About Life Full! Teachers: Emily Caragay, Mayukha Vadari
There are things you should know about life. They won't teach you this stuff in school. Things like retirement and how to make your friends' parents like you. Featuring a Q&A at the end.
X14161: Planning Like a Pro: Learning to Bullet Journal Full! Teachers: Emily Levenson, Lily Zhang
Do you want to be more organized? Do traditional planners box you in? Bullet journaling is a do-it-yourself planning system that is customizable and has lots of room for creativity. You can even tailor your bullet journal to online learning :,) Bring a blank notebook or several sheets of paper, a pen, and a life to organize!
X14258: Save a Life 101 Full! Teachers: Andrew Motz, Alexandra So, Wen Ting Zheng
Learn how to save lives using hands-only CPR and Stop the Bleed! Join MIT EMS's certified EMTs to learn the basics of how to stop major bleeding, recognize and react to a cardiac arrest, and perform high-quality cardiopulmonary resuscitation.
Afterwards, take a virtual tour inside MIT EMS's custom-built ambulance!
X14256: Pokemon Showdown for Dummies Closed! Teachers: Michael Han, Raymond Li, Anton Ni
Do you want to be a Pokemon master? Do you want to crush your opponents on the Pokemon Showdown ladder? Do you want to learn how to make heat plays? If you answered yes to any of these questions, then this is the class for you! (This class will be focused on 6v6 singles formats.)
X14315: Chess gone Atomic Closed! Teachers: Jeffery Li, CJ Quines
We will be exploring a chess variant: atomic chess! In this variant, whenever a capture happens, all pieces (other than maybe pawns) within a one-square radius get blown up. In this class, we'll go over the rules in more depth, along with a few pointers to get started (as games can be very volatile, and it's very easy to lose instantly), and then play some games! You will likely need a basic understanding of how pieces move and capture in chess.
X14263: Interesting Conversations?!?! Closed! Teachers: Nika Silkin, Meghana Vemulapalli
Do you think tech companies have an obligation to do something about fake news, and if so, how?? If you were the CEO of a dating app, what would you design for? Should younger people have their votes weighted more heavily?? What is the purpose of American education?? Come with an interesting open-ended question and we can explore together!! (Closed) Section 1: Full! (max 5)
X14372: The Semi-Complete Beginner's Guide to Figure Skating Fanhood Closed! Teachers: Amy Li, Stacy Wang
Have you ever wanted to skate gracefully but found yourself with the coordination of a duck? Stepped onto ice and fallen flat on your butt? Well, that's okay, because you can register to become a figure skating fan instead! Learn to answer these questions from a figure skating fan's perspective: Who are the familiar faces you'll see? How do judges score figure skating programs? How do pairs skating and couples' ice dance differ? What are the Grand Prix, Four Continents, and the World Championships in the world of figure skating? How can you recognize each type of jump? Don't skate by this opportunity to learn what it takes to stan the world's coolest artistic athletes (or athletic artists)!
X14196: Intro to Rugby Closed! Teachers: Amelie Kharey, Abby McGee
Come learn what rugby is and how to play with MIT Women's Rugby!
X14203: (Virtual) Walking Tours of the Greater Boston Area Closed! Teachers: Samantha Webster
Boston is an awesome city, and I'll take you on a (virtual) walking tour to some of my favorite buildings and sights. You'll get quirky fun facts, fascinating history, and spooky cemetery stories. Buckle up, because we can cover a lot more ground on Zoom than if we were actually walking! (For those prone to motion sickness, please rest assured there will be minimal hand-held camera work.) Tennis shoes and sunscreen recommended for an authentic experience.
X14205: Riichi Mahjong Closed! Teachers: Yuru Niu, Wayne Zhao
Learn how to play Riichi Mahjong, which is like a combination of Rummy and Poker but with tiles instead of cards!
X14328: Introduction to thru hiking Closed! Teachers: Erin Reynolds
Come learn about the mysterious world of thru hiking! Thru hikers take it upon themselves to trek hundreds to thousands of miles in the wilderness on trails such as the Appalachian Trail, Pacific Crest Trail, or Continental Divide Trail. You'll learn about the required gear, the logistics of obtaining food and water, trail hygiene, and weird trail traditions.
Class will be lecture-style with some short YouTube videos to make the concepts come to life. We will end with a Q&A to answer all of your questions!
X14387: Ice Cream Full! Teachers: Matthew Cox, Jenny Gao
Come learn the chemistry behind making ice cream! We'll show you how to make ice cream at home with salt and ice, and we'll also talk about the theory of liquid nitrogen ice cream (which we'd normally have at in-person Splash). Ability to perform slight manual labor. Lactose tolerance preferred (optional, one of the instructors isn't)! :)
X14312: How to produce lo-fi hip hop Full! Teachers: Guangqi Cui, Adriano Hernandez, Karthik Nair, Christian Scarlett
Ever listened to "lofi beats to study/relax to" and wondered how the tracks are made? In this class, we'll teach you the basics of producing music, and in particular, how to make a lo-fi hip hop track. You'll learn what gives lo-fi that slow, vibey element and how to create that in your own track. At the end of the session, you'll have your very own lo-fi track, and the foundation to explore music production further.
X14202: Clean and Intuitive Design Closed!
Great ideas require the right execution to be appreciated. When we show our ideas to others, the audience initially interacts with them visually; they don't necessarily see the work behind them. Even the best ideas can be undermined by poor interface design. This course covers the thought process in designing interfaces, ranging from websites we see every day to phone apps to advertisements. We will aim to understand a bit better the magical process of how these interfaces we interact with capture our attention. This means understanding design principles, knowing your audience, and seeing many, many examples. The principles covered go beyond this course, applying to many fields.
X14392: n classes in 5n minutes Closed! Teachers: Paolo Adajar, Emily Caragay
Do you ever feel like there aren't enough time blocks at Splash to take classes? Wish you could learn more about *everything*? We'll be teaching you about 10 different topics, with 5 minutes for each! We don't know exactly what we'll teach yet (maybe we'll take suggestions from you all?). Perhaps we'll talk about Scrabble, meme-y music, gymnastics, mental math, firespinning, and much, much more :D
X14351: How to Run a Splash Closed! Teachers: Jenny Gao, Andrew Lin
Splash is run by undergraduate and graduate students at MIT. And beyond MIT, there are Splashes and other similar educational programs at universities and high schools nationwide. How does it all happen, and what are some of the things behind the scenes that you don't usually get to see? Come learn about all of the intricacies that go into running a massive program like Splash and find out how you can do it too! We'll cover what goes into organizing Splash at MIT, as well as resources and next steps if you want to run something like Splash at your own school. Presented by former Splash directors :)
X14330: Streets! Closed! Teachers: Katherine Lei, Alan Zhu
Streets! What are they? Well, hopefully you have some idea of what a street is, but perhaps you want to know more about highway systems, the Manual on Uniform Traffic Control Devices, or just big/long streets around the world. If so, look no further than this class.
X14395: Powerpoint Karaoke! Closed! Teachers: Jeffery Li, Wayne Zhao
Imagine that you are the speaker of a presentation, but you don't know what you are presenting on or what slides to expect.
You have to wing this presentation somehow, but how would you do so? Welcome to Powerpoint Karaoke, where even nonsense makes sense. Come learn a bit more about how Powerpoint Karaoke works, including some tips and tricks, and try it out for yourself! A desire to do some improv!

X14188: Powerpoint Karaoke! Full! Teachers: Vincent Huang, CJ Quines

X14282: Comedy Closed! Teachers: Amena Khatun, Karissa Wenger
Watch clips from well-known comedies, read some comedies, make some jokes, and enjoy an hour of comic relief.

X14237: Magic Eye and Beyond: Stereograms and Stereographs Closed! Teachers: Heidi Durresi, Timothy Nguyen
Have you always thought Magic Eye pictures were super cool and want to learn more about them? Do your eyes hurt so much from trying to view Magic Eye pictures that you want to finally know what all the fuss behind them is? Do you want to learn how to view (and create your own!) 3D illusions from 2D images using the pOwEr of your eyes alone? Then this is the class for you!!

X14215: European/World Football (Soccer) - Understanding and Following the Beautiful Game Closed! Teachers: Federico Ramirez
Welcome to a crash course about the beautiful game, football (aka soccer)! You'll learn about everything that happens during games, from rules and concepts to how teams line up. With that knowledge we can move on to actually following football in Europe and the world, and OH BOY is there a lot to go over there. By the end of the class, hopefully you'll be able to pick up following the current European leagues and look forward to future international cups and competitions. Interest in learning about football/soccer or any interest in sports.

X14285: Beginner's Crochet Closed! Teachers: Sruthi Parthasarathi, Alyssa Solomon
Want to make a scarf or hat to keep you warm this winter or give to a friend for the holidays? Do you ogle cute stuffed amigurumi? Try crocheting, one of the most accessible types of yarn craft! Intended for beginners; intermediate students also welcome if they want to learn something new or just hang out and relax. Unfortunately, due to the online nature, you do need to bring your own supplies. Crochet supplies: a hook and some yarn. Find these at Walmart, Joanns, Michaels, or another local crafting store!

X14267: Alpine Skiing! Full! Teachers: David Merchan
Calling all skiers (pros, weekend warriors, or winter Olympics watchers): MIT has a ski racing team! Come learn how, mathematically, the fastest way around poles down a ski hill is not to travel in straight lines, but sick carves. Wouldn't it be cool if we could ski near MIT in the summer, on, I don't know, a glacier? Do you know the rules of GNAR? If Nordic skiing is more your thing, Norwegian military skis provide the best of both worlds.

X14365: Intro to Puzzle Hunts Closed! Teachers: Walker Anderson, Wayne Zhao
Puzzle hunts are a fun way to solve puzzles together with friends! The goal is to solve several puzzles that culminate in a final challenge, called a 'metapuzzle'. After a presentation teaching common solving strategies, you'll have the opportunity to work on a small puzzle hunt with others. You don't need any specialized puzzle-solving knowledge to participate.

X14199: Gaming the System Closed! Teachers: Subhash Kantamneni, Joshua Curtis Kuffour
Heads or tails? The immortal question that has followed our lives from playgrounds to the workplace. You might think your answer to this question doesn't matter (50/50 odds, after all), but it does.
From penalty kicks to hurricanes to even coin flips, everything in nature that seems random really isn't. In this class, we look at the facts and statistics and break them down for you, showing you how to take advantage of this to "beat" our everyday lives and even make some money along the way.

X14287: The Magic of Macaron-Making Closed! Teachers: Allison Lam, Wilson Szeto
Are you tired of paying too much for cookies too small? Have you tried to make macarons only to be greeted by a tray of cracked shells, soggy cookies, and sadness? Come join us for a foolproof macaron recipe (with a nut-free alternative for those with allergies!) and the tricks we've learned over the years! (Students must provide their own oven and ingredients, a list of which will be provided.)

Z14253: Introduction to Positive Disintegration - Part 1 Closed!
Dr. Kazimierz Dabrowski's Theory of Positive Disintegration (TPD) provides a lot of explanations for why some of us feel as if we fit into this world so poorly. In this session, we will explore the basics of TPD, including OverExcitabilities, Dynamisms, and Levels of Development of personality. So, if you are looking for alternate explanations for why some things bother you far more than they bother most folks, join us!

Z14221: The History of Your Genetic Information Full! Teachers: Jenny Gao, Janice Tjan
The most personal type of data you can own is your genetic information. As sequencing becomes more common, your genome has become more accessible to you, researchers, the government, and companies looking to make a profit. Is your genome worth keeping private? Are there ways to protect it? This class will address these questions by delving into how genetic information has been utilized and exploited in recent history. No knowledge of biology required.

Z14316: Paradoxes of Democracy: Fair Elections & Voting Closed! Teachers: Hsiying Bau, Stephen M. Hou
What if, in hypothetical two-way races during the 2020 primaries, Biden beats Sanders, Sanders beats Warren, and Warren beats Biden? Is this even possible? (Yes.) What would then be a fair way to decide the "best" preferences of Democrats? Whether it's a T-shirt design contest or a presidential election, voting converts the preferences of individuals into a single preference for the community. We'll discuss Arrow's Impossibility Theorem, which states that there is no "perfect" way of doing so. We'll demonstrate a few of the mind-boggling flaws that every voting method must have. Comfort with arithmetic; interest in voting, political science, decision-making, and/or economics.

Z14197: Social Determinants of Health and Acknowledging Systemic Racism in Healthcare Closed! Teachers: Thomas Cheng, Bhuvna Murthy, Reed Robinson, Melody Wu
This is an installment of the MIT Global Health Alliance Splash Lesson series focused on structural racism and its impact on social determinants of health, access to healthcare, and bias impacting quality of health. Want to learn more about global health? Why does it matter now? Besides COVID-19, what are other questions about global health and pressing challenges we need to address? Check out our other classes as well: The Digital Global Health Crisis: Social Media and Mental Health, and Defining Global Health in 2020.

Z14198: Defining Global Health in 2020 Closed! Teachers: Ameena Iqbal, Bhuvna Murthy, Reed Robinson, Melody Wu
This is an installment of the MIT Global Health Alliance Splash Lesson series focused on an introduction to global health and the pandemic's impact on it.
Get ready to have discussions about why it is important to remedy healthcare disparities. Why does it matter now? Besides COVID-19, what are other questions about global health and pressing challenges we need to address? Check out our other classes as well: Systemic Racism in Healthcare and Social Determinants of Health, and The Digital Global Health Crisis: Social Media and Mental Health.

Z14250: Non-linear Thinking in a Linear World Closed!
Does doing one thing at a time drive you batty? Do people frequently tell you to pay attention or to 'stay on topic'? Do you think in pictures instead of words? Does the whole "You have to do it in the right order" concept bother you? Join us for an exploration of the Hows and Whys of non-linear thinking. We'll talk about how to recognize and develop strengths, not just how to 'fit in.' Open-mindedness.

Z14317: The Life of Shelter Animals Closed! Teachers: Michelle He, Lin Hou
There are millions of pets being abandoned and sent to shelters. Globally, there are over 200 million stray dogs. According to the ASPCA, about 6.5 million animals are sent to shelters every year in the US. Over 1.5 million shelter animals are euthanized ('good death') every year, while 3.2 million may get adopted. In China, due to the lack of formal shelters, there are millions of street animals in cities and the countryside. Local organizations and many individuals have spent years trying to save them from harm and death. In this class, you will learn about the lives of animals in US shelters and globally, see some statistics and facts, watch some videos (some maybe heartbreaking…), and learn about their future.

Z14294: Us vs Them: The Psychology of Prejudice Full! Teachers: Kira Conte, Nicole Johnson, Melissa Santos
How do humans learn to discriminate? Why are we loyal to groups, whether it be our family, school, country, or home sports team? Discover the psychological insights into human divisiveness: we'll discuss stereotyping, implicit biases, social identity, and more, and apply this knowledge to key case studies and current events. None! Just an open mind.

Z14214: History of Ballet: 1900-Present Closed!
This class will cover the past century of ballet, focusing on the westward spread of Russian-style classical ballet, the establishment of new major ballet traditions in England and the United States, and the rise of contemporary ballet. We'll look at photos and video and discuss as a class how different techniques and styles emerged in different parts of the world. We'll also talk about where ballet is headed now.

Z14266: Introduction to Criminology Closed! Teachers: Steven Swee
In Among Us terms, beyond saying "Red Sus," perhaps another question we should ask is: why did red kill in this particular location and time? In formal terms, beyond asking "Who dun it?," perhaps another question we should ask is: why do people commit crime? This class dives into some criminological theories to get a better understanding of why people engage in such deviant behavior. No strict prerequisites, but an interest in crime and in psychology will make this class more enjoyable!

Z14180: Climate Change and Public Policy Closed!
Are you concerned about climate change but unsure exactly what public policy can do about it, or what the government currently is doing? Here's your chance to learn! We'll start with what economics tells us is the best way to reduce emissions, explore the benefits and challenges of that approach, and look into different variations of it.
We'll go through what state governments and the federal government are already doing to reduce emissions in the US, and the main proposals for what they should do going forward.

Z14361: How to Win an Argument Teachers: Jonathan Haber
Interested in learning the tools of critical thinking and how they can be applied in school, in politics, and in life? This short course will introduce you to the tools of argumentation: how to create convincing and compelling arguments, and how to know when what you're being told is valid or bunk. Materials for this class include: How to Win an Argument Presentation

Z14179: Good and Evil in Superhero Comics Closed!
What makes someone a hero or a villain? Who gets to decide what is right or wrong? In this class, we will read excerpts of comic books and watch clips from superhero movies. We will discuss our own ideas of equality and equity and support our opinions with real-life examples. No prior knowledge of superhero stories is necessary.

Z14297: Linux's Moral Kernel Closed!
In the "long 60s" and early 70s, a new technology was fundamentally changing the way people thought, and fundamentally changed by the way people had been thinking. The early adopters of the computer formed new subcultures around this new technology, and participatory hacker culture flourished. As Marshall McLuhan states, the medium is the message, and that dictum applies here: in this class, we will delve into how the medium of technology affected hacker cultures (like the early days of Unix, the infamous MIT hackers, and the Cult of the Dead Cow) and the New Left, laying the groundwork for the open-source software community. Because of these movements you have access to incredible software written by thousands of people for free. Join us to learn more!

Z14257: Fashion in Everyday Life Full! Teachers: Shakira Acosta
Fashion is all around us, but how exactly does it affect us? In this course, we will be diving deep into the meanings of different types of fashion, and also the cultural significance behind them.

Z14149: Help solve climate change Closed! Teachers: John Gage
From climate science we know the technical changes that are needed for a relatively safe climate future: reduce man-made greenhouse gas emissions to net zero by 2050 and draw down CO2 in the air to 350 ppm well before 2100. From economics we know that when pollution is free we get too much of it, but climate pollution is still free in most countries. Economists say that the cheapest and fairest way to address the problem is to put a steadily increasing price on climate pollution when it enters the economy, give the money collected to the third party being harmed (all households equally), and use border adjustments to push our price around the world. A strong carbon price signal will incentivize efficiency, innovation, transition, and drawdown. It sounds so easy! What's preventing us from fixing this? Is it possible to break the logjam and save ourselves? Can any of us do anything to help solve this, the biggest existential crisis human civilization has ever faced? Yes we can. Come find out how, play a Kahoot, and take action at a pivotal moment in the history of mankind. Watch https://youtu.be/9oyguP4nLv0

Z14254: The Psychology of Learning Closed! Teachers: Kira Conte, Nicole Johnson
How do we know what we know? How can we unlock our potential as students and lifelong learners? We will learn about how we grow and acquire new knowledge.
Learning about the inner workings of the brain can give us insightful and unexpected strategies for learning. We will explore these strategies to help you develop your mindful journey as a lifelong learner.
A metal–organic framework for efficient water-based ultra-low-temperature-driven cooling

Dirk Lenzen, Jingjing Zhao, Sebastian-Johannes Ernst, Mohammad Wahiduzzaman, A. Ken Inge, Dominik Fröhlich, Hongyi Xu, Hans-Jörg Bart, Christoph Janiak, Stefan Henninger, Guillaume Maurin, Xiaodong Zou & Norbert Stock

Nature Communications volume 10, Article number: 3025 (2019)

Efficient use of energy for cooling applications is a very important and challenging field in science. Ultra-low-temperature-actuated (Tdriving < 80 °C) adsorption-driven chillers (ADCs) with water as the cooling agent are one environmentally benign option. The nanoscale metal-organic framework [Al(OH)(C6H2O4S)], denoted CAU-23, was discovered; it possesses favorable properties, including a water adsorption capacity of 0.37 gH2O/gsorbent around p/p0 = 0.3 and cycling stability over at least 5000 cycles. Most importantly, the material can be driven by temperatures as low as 60 °C, which allows for the exploitation of yet mostly unused temperature sources and a more efficient use of energy. These exceptional properties are due to its unique crystal structure, which was unequivocally elucidated by single crystal electron diffraction. Monte Carlo simulations were performed to reveal the water adsorption mechanism at the atomic level. With its green synthesis, CAU-23 is an ideal material to realize ultra-low-temperature-driven ADC devices.

Cooling devices, such as air conditioning (AC) units, are expected to become one of the largest contributors to global energy consumption. According to the International Energy Agency, the use of air conditioners and electric fans currently accounts for 10% of global energy consumption1. The number of AC units is expected to triple over the next 30 years, particularly due to increased income in developing countries, many of which are located in tropical or subtropical regions of the world2,3,4. These widely used devices mostly consist of compressors using partially hydrogenated chlorofluorocarbons. These materials are typically flammable, toxic, or harmful to the environment, and are being phased out under international regulations5,6. To address the problem of hazardous, energy-intensive cooling devices, new energy-efficient and green approaches must be developed. Although adsorption-driven chillers have been used for over a century, they are reemerging as cutting-edge devices. This revival is due to the development of new classes of efficient materials that use low driving temperatures, which were previously not achievable, and maintain high cooling output while employing only water as the working fluid7,8,9.

Adsorption-driven chillers (ADCs) function through the endothermic process of evaporation (Supplementary Fig. 1)10,11. In these sealed devices, water, as the most common fluid, is stored at a heat exchanger in the liquid phase. The first (working) step consists of evaporation, which produces the cooling effect (desired cooling temperature). It is induced by the presence of a porous active material, which adsorbs water vapor from the water reservoir. As adsorption is an exothermic process, this step generates heat, which must be dissipated (heat rejection temperature). The adsorbent must subsequently be regenerated after it has been saturated.
This is the crucial step regarding efficiency, and it is performed by heating the active material to desorb water (driving temperature). It is the only step that consumes thermal energy in the cycle. The desorbed water is condensed at the heat exchanger again, where heat is generated and has to be dissipated (back cooling temperature). Heat rejection and back cooling are typically performed at the same temperature level, as they are both cooled against the outside temperature. Using two or more of these devices working phase-shifted enables continuous cooling. The efficiency of this cycle depends highly on the amount of water that can be exchanged between the adsorption and the desorption stage. In order to achieve high power density, short cycling times are mandatory, which requires extended cycling stability of the material. Conventional adsorbents require either very high desorption temperatures (e.g., hydrophilic zeolites like NaA or 13X) or feature an unwanted linear isotherm shape (e.g., silica gel) that limits the exchangeable amount of water12. In contrast, an S-shaped water adsorption isotherm can be seen as beneficial because the system is switched between the working and regeneration cycle by adjusting the temperature, which leads to a change of the relative pressure (pwater/psaturation or p/p0). With smaller temperature differences between the adsorption and desorption step, the switching process becomes faster and more energy efficient, since less energy is consumed to overcome the thermal capacities (adsorbent, binder, heat exchanger, piping…) while heating and cooling. The choice of the optimal material in such applications depends on a number of variables, such as the desired cooling temperature and the possibility to reject the heat of adsorption and condensation. In addition, to improve the efficiency of the adsorbent, both an increase of its uptake capacity and a decrease of the driving temperature are targeted. A lower driving temperature has two major benefits. First, it allows the utilization of new heat sources (e.g., district heating, geothermal heating, data centers), and second, the available energy is more efficiently exploited (Supplementary Note 1). ADC systems can be categorized according to the needed driving temperature as high (>200 °C), medium (120–200 °C), low (80–120 °C), and ultra-low (<80 °C) temperature systems. Low and ultra-low temperature systems require the lowest driving energy, thus extending the range of usable energy sources, and they can be realized, for example, by using metal-exchanged aluminum phosphates such as SAPO-34 (~90 °C)13,14 or metal-organic frameworks (MOFs)15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34. The water sorption properties of the latter family of adsorbents are attractive for this application; however, high stability, one of the many prerequisites for long-term use under real conditions, is mandatory, and only a few suitable MOFs have been reported. In particular, some aluminum-containing compounds such as MIL-160 (90 °C)24,25, CAU-10-BDC (70 °C)26,27, and Al-fum (90 °C)28,33,34 have been tested for their applicability in ADC systems. A driving temperature of 65–70 °C has been the lower limit for such materials until now, which was realized by CAU-10-BDC and MIP-20016,26.
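Since the working and regeneration states of an ADC are set purely by temperature levels, it can help to see how a given temperature set maps onto the relative pressures seen by the adsorbent. The following minimal Python sketch is our own illustration (not part of the original article); it uses the Antoine equation for the water saturation pressure with standard literature constants, and the 60/30/10 °C temperature set discussed later for CAU-23 as an example.

```python
# Minimal sketch (ours, not from the paper): translating ADC cycle temperatures
# into the relative pressures p/p0 experienced by the adsorbent bed, using the
# Antoine equation for water (standard constants, valid roughly 1-100 degC).

def p_sat_kpa(t_celsius: float) -> float:
    """Saturation pressure of water via the Antoine equation, in kPa."""
    p_mmhg = 10 ** (8.07131 - 1730.63 / (233.426 + t_celsius))
    return p_mmhg * 0.133322  # mmHg -> kPa

def cycle_relative_pressures(t_evap, t_rejection, t_condenser, t_driving):
    """p/p0 at the adsorbent bed during adsorption and desorption.

    Adsorption: vapor comes from the evaporator (p = p_sat(t_evap)) while the
    bed sits at the heat rejection temperature (p0 = p_sat(t_rejection)).
    Desorption: vapor condenses at the condenser (p = p_sat(t_condenser))
    while the bed is heated to the driving temperature (p0 = p_sat(t_driving)).
    """
    ads = p_sat_kpa(t_evap) / p_sat_kpa(t_rejection)
    des = p_sat_kpa(t_condenser) / p_sat_kpa(t_driving)
    return ads, des

# Temperature set discussed for CAU-23: driving/back cooling/cooling = 60/30/10 degC
ads, des = cycle_relative_pressures(t_evap=10, t_rejection=30,
                                    t_condenser=30, t_driving=60)
print(f"adsorption p/p0 ~ {ads:.2f}, desorption p/p0 ~ {des:.2f}")
# -> adsorption p/p0 ~ 0.29, desorption p/p0 ~ 0.21: the bed is cycled across
#    the uptake step of an S-shaped isotherm lying between these two values.
```

This also makes the benefit of an S-shaped isotherm tangible: a step located between the two computed relative pressures lets a small temperature swing exchange almost the full capacity.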
The driving temperature of a MOF can be tuned by variation of the linker molecule, which has been demonstrated using a series of CAU-10 type structures ([Al(O2C-R-CO2)(OH)]; R = aryl or heterocycle). The replacement of isophthalate by the more hydrophilic furandicarboxylate ions led to the CAU-10 type compound known as MIL-160, which exhibits a steep water uptake at a lower relative humidity value (p/p0 = 0.08, 25 °C) and consequently requires a higher driving temperature. Damasceno Borges et al. predicted that the thiophenedicarboxylate (TDC2−) analog of CAU-10 should be more hydrophobic, and thus it is expected to be applicable at lower driving temperatures compared with CAU-10-BDC35. Even if these differences are rather small (+0.02 p/p0), every step towards reducing the driving temperature, which is accomplished by shifting the uptake to higher p/p0 values, can potentially open up an enormous number of additional heat sources for ADC applications36. Following this concept, we discovered a highly stable Al-MOF using H2TDC as the linker. The material was successfully obtained under green, potentially scalable synthesis conditions with high yield and water as the sole solvent. Our results emphasize its outstanding ADC properties at very low driving temperatures.

Green synthesis and structure determination

The green, high-yield synthesis of the compound [Al(C6H2O4S)(OH)] ∙ xH2O, named CAU-23, was achieved by a reaction between AlCl3 and NaAlO2 as the metal sources and sodium thiophenedicarboxylate (Na2TDC) as the linker source. Upon mixing, a white X-ray amorphous precipitate is formed, which subsequently crystallizes under reflux conditions at ambient pressure. Thus, 4.30 g H2TDC (25 mmol) was mixed with 2.0 g (50 mmol) sodium hydroxide in 100 mL distilled water until a clear solution of Na2TDC was obtained. After adding 18.75 mL of aqueous aluminum chloride solution (1 mol/L, 18.75 mmol) and 12.5 mL of aqueous sodium aluminate solution (0.5 mol/L, 6.25 mmol), the slurry was stirred under reflux conditions for 6 h, then filtered off and dried at 100 °C for 4 h. Only an aqueous sodium chloride solution is formed as the other reaction product. After an additional washing step with 200 mL water under stirring and reflux, filtration, and drying, 4.5 g of a white powder was obtained (84% yield based on H2TDC). Thermogravimetric analysis, infrared spectroscopy, elemental analysis, and nitrogen sorption measurements are in good agreement with the expected values for the ideal composition (Supplementary Figs. 2–4, Supplementary Table 2). Temperature-dependent powder X-ray diffraction measurements and thermogravimetric analysis demonstrate thermal stability of CAU-23 in air up to 400 °C (Supplementary Fig. 5).
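As a quick sanity check on the reported yield (our own back-of-the-envelope arithmetic, not part of the article), the theoretical product mass from 25 mmol of linker can be computed from the formula weight of the anhydrous framework unit:

```python
# Back-of-the-envelope yield check (ours, not from the paper) for the CAU-23
# synthesis: 25 mmol H2TDC (and 25 mmol Al in total: 18.75 mmol AlCl3 +
# 6.25 mmol NaAlO2) gave 4.5 g of [Al(OH)(C6H2O4S)].

M = {"Al": 26.98, "O": 16.00, "H": 1.008, "C": 12.011, "S": 32.06}

# Molar mass of the anhydrous framework unit Al(OH)(C6H2O4S)
m_unit = (M["Al"] + (M["O"] + M["H"])                          # mu-OH bridge
          + 6 * M["C"] + 2 * M["H"] + 4 * M["O"] + M["S"])     # TDC(2-) linker

n_linker = 0.025                               # mol H2TDC (limiting basis)
m_theoretical = n_linker * m_unit              # g of product at 100% yield
yield_pct = 4.5 / m_theoretical * 100

print(f"M(Al(OH)(TDC)) = {m_unit:.1f} g/mol")              # ~214 g/mol
print(f"theoretical mass = {m_theoretical:.2f} g, yield = {yield_pct:.0f}%")
# -> ~5.35 g theoretical, i.e. ~84%, consistent with the reported value.
```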
Acquisition of single crystal X-ray diffraction data of adequate quality for structure determination was not possible due to the nanoscale particle size of only 200 nm, determined by scanning electron microscopy (SEM, Fig. 1a). The large unit cell, peak overlap, and the complicated structure also prevented structure determination from powder X-ray diffraction data.

Fig. 1: Determination and details of the crystal structure of CAU-23. a SEM image of CAU-23. b Reconstructed 3D reciprocal lattice from cRED data of a nano-sized single crystal (inset) of CAU-23. c The repetition of cis and trans corner-sharing AlO6 polyhedra forming the inorganic building unit of CAU-23. d The full structure of CAU-23 projected along [010]; water molecules are omitted for clarity.

On the other hand, 3D single crystal electron diffraction data can be obtained from powder samples and used for the determination of unknown structures37. Therefore, we performed continuous rotation electron diffraction (cRED, Fig. 1b) for structure determination of CAU-23. The data collection was challenging because of the small crystal sizes (200 × 200 × 100 nm³) and the weakly scattering elements in CAU-23. To push the size limit of MOF crystals and reduce electron beam damage, we applied low-dose illumination and fast data acquisition to collect cRED data using a Timepix hybrid electron detector with high sensitivity, low background, and fast readout time38. Each high-resolution (1.13 Å) cRED dataset was collected in ca. 3 min, from which the unit cell parameters and space group were determined. The structure was solved and refined from the cRED data at a resolution of 1.13 Å, and all framework atoms were located with high precision and reasonable agreement values (Supplementary Fig. 6b, Supplementary Table 3). To the best of our knowledge, these are among the smallest MOF crystals ever used for structure solution and refinement by TEM electron diffraction. Although the cRED data were clearly sufficient for determining the framework structure, it was not possible to locate the water molecules in the pores. Therefore, structure refinement was successfully performed against PXRD data for a dry and a wet sample, using the model from cRED as the starting model (Supplementary Figs. 7–8, Supplementary Table 4). Both forms of CAU-23 crystallize in a non-centrosymmetric space group, forming a chiral structure (Fig. 1c, d). The rod-shaped inorganic building unit is formed by alternating units of four consecutive trans and four consecutive cis corner-sharing AlO6 polyhedra, which correspond to straight and helical sections, respectively, in the Al–O chain. Interestingly, these sections resemble a combination of the inorganic building units observed in MIL-53 (only trans) and CAU-10 (only cis) (Supplementary Fig. 10). Square channels with a side length of 7.6 Å are formed through the interconnection by TDC2− ions and propagate along the b-axis. A detailed topological description is provided in Supplementary Note 2. The MOF exhibits permanent porosity, and N2 sorption measurements resulted in a BET area of 1250 m2/g, which corresponds well to the theoretical accessible surface area of 1330 m2/g.

Water sorption behavior

Preliminary water stability tests at room and elevated temperatures did not show any degradation of the crystallinity. Hence, detailed water adsorption studies were carried out. The water sorption isotherms of CAU-23 were recorded at three different temperatures (Fig. 2a). The measurements show an S-shaped isotherm without hysteresis, with a steep uptake of 0.375 gH2O/gsorbent up to p/p0 = 0.33 at 25 °C. With increasing temperature, the sharp uptake shifts to higher relative pressure and the uptake capacity is lowered, which is expected for microporous compounds (0.351 gH2O/gsorbent at p/p0 = 0.29 and 40 °C; 0.331 gH2O/gsorbent at p/p0 = 0.33 and 60 °C).

Fig. 2: Water sorption properties of CAU-23. a Water sorption isotherms recorded at three different temperatures (filled symbols = adsorption; empty symbols = desorption). b PXRD patterns of CAU-23 at different relative water pressure values.

PXRD measurements carried out on CAU-23 to investigate the influence of relative humidity at 25 °C show only minor shifts in peak positions and only changes in the relative intensities due to pore filling and evacuation (Fig. 2b). A detailed discussion of the crystal structure–water sorption property relationship in Al-MOFs is presented in Supplementary Note 4.
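The shift of the uptake step with temperature can be rationalized with the adsorption potential A = −RT ln(p/p0) used in the Methods (Dubinin–Astakhov picture): if the step sits at an approximately constant A, its position in relative pressure must drift upward as the temperature rises. The sketch below is our own illustration; treating the step as a single iso-potential point at p/p0 ≈ 0.30 (25 °C) is a deliberate simplification.

```python
# Our illustration (not from the paper): if the uptake step of an S-shaped
# isotherm sits at a roughly constant adsorption potential A = -R*T*ln(p/p0)
# (the Polanyi/Dubinin picture used in the Methods), its position in relative
# pressure drifts upward with temperature.
from math import exp, log

R = 8.314  # J/(mol K)

# Assume the step sits at p/p0 ~ 0.30 at 25 degC (close to the measured value).
A_step = -R * 298.15 * log(0.30)  # ~3.0 kJ/mol

for t_c in (25, 40, 60):
    T = t_c + 273.15
    rel_p = exp(-A_step / (R * T))
    print(f"{t_c:>3} degC: predicted step at p/p0 ~ {rel_p:.2f}")
# -> 0.30, 0.32, 0.34: a modest upward shift, qualitatively consistent with
#    the measured isotherms (the real step position reflects further effects).
```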
Overall, both the linker (its size, geometry, and hydrophilicity) and the exact structure of the rod-like IBUs determine the properties of the Al-MOF. The rational design of better MOFs for ADC applications therefore seems hard to accomplish, and hence exploratory investigations such as the one presented here will still be necessary. To establish a deeper understanding of the water uptake mechanism, the water adsorption isotherm of CAU-23 was simulated using force-field-based grand canonical Monte Carlo (GCMC) simulations at 25 °C. The experimental adsorption isotherm is relatively well reproduced, particularly the steep increase in the water uptake at p/p0 in the range of 0.20–0.30 (Supplementary Fig. 13). Furthermore, we explored the adsorption mechanism at the microscopic level. The adsorption of water occurs initially between two hydroxyl groups (μ-OH) of the inorganic building unit (Fig. 3a). On analyzing the radial distribution functions (RDFs) of the corresponding intermolecular pairs (Supplementary Fig. 14), we also found that the adsorbed water molecules (Ow) interact more strongly with the µ-OH sites than with other adsorbed water molecules, as substantiated by (i) a shorter mean Oμ-OH···Ow distance of 2.72 Å vs. an Ow···Ow distance of 2.83 Å, and (ii) a shorter mean Hμ-OH···Ow distance of 1.75 Å vs. an Ow···Hw distance of 1.88 Å. In addition, the relatively higher intensity of the Hμ-OH···Ow over the Oμ-OH···Hw RDF plots (Supplementary Fig. 14) further suggests that the μ-OH groups favorably act as H-donors while forming hydrogen bonds. A detailed statistical breakdown of the occurrences of different hydrogen bond configurations (Supplementary Figs. 15–16) supports that the Hμ-OH···Ow configuration is the driving force that initiates the adsorption process in CAU-23. The population of adsorbed water molecules continues to rise, corresponding to a monotonic increase of the number of water molecules in the region near the µ-OH sites (Supplementary Fig. 17). Upon coverage of these primary hydrophilic sites, there is a sudden increase in the water content above p/p0 ≈ 0.30, and the water molecules tend to occupy the 1D square-shaped channels. The entire pore system of CAU-23 is almost completely filled at a relative pressure of p/p0 ≈ 0.35. This pore-filling mechanism is characterized by the aggregation of water molecules interacting with each other through strong hydrogen bonds, as evidenced in Fig. 3b, c. Accordingly, the fraction of hydrogen bonds formed among the adsorbed water molecules, relative to the total number of hydrogen bonds, drastically increases from ~20% to ~90% as the relative pressure (p/p0) increases from 0.20 to 0.35 (Supplementary Fig. 15). The average number of hydrogen bond connections associated with a single water molecule is ~3.65, a number very close to that of bulk water and also consistent with our previous findings in other MOFs and related materials19,22,39,40,41.

Fig. 3: Preferential arrangement of the adsorbed water molecules within the channel of CAU-23. a The very first adsorbed water molecule interacting with two adjacent µ-OH sites, forming strong hydrogen bonds. b Aggregation of hydrogen-bonded water molecules within the channel of CAU-23, plotted along the cross-section of the channel. c Top view of the water-loaded channel.

Stability tests of CAU-23 at working conditions

To confirm the promising properties of CAU-23 in ADC applications, long-term cycling tests were carried out in order to prove the required high stability at working conditions.
Hence, CAU-23 was subjected to an extended test of 5000 water sorption cycles42. An aluminum sheet was coated with the nanocrystalline powder, and the sample was cycled between 20 and 120 °C in a saturated water atmosphere to promote fast adsorption and desorption (see the Methods section for details). Cycling stability was confirmed by PXRD and water adsorption capacity measurements through a comparison of the original coated sample and the sample after the tests (Fig. 4a, b, Supplementary Fig. 19, Supplementary Tables 8–9). Le Bail fits of the PXRD data demonstrate persistent crystallinity of the compound. The water uptake capacities are very consistent and within the error limit of the measurement. Uptakes of 0.393, 0.389, and 0.387 gH2O/gsorbent were recorded for the pure nanoscale powder and the coating before and after testing, respectively, taking the amount of binder into account. The volumetric uptake capacity of a MOF/binder (83.3/16.7 wt%) composite was determined to be 0.14 gH2O/cm³composite (composite density 0.46 g/cm³, Supplementary Table 10). It should be kept in mind that although MOFs are intensively discussed for water adsorption applications, such long-term stability studies involving a few thousand cycles have rarely been reported27,33. Thus, the water adsorption properties of CAU-23 in combination with its high stability prompted us to determine its performance at low and ultra-low driving temperatures.

Fig. 4: Proof of integrity of coated CAU-23 before and after the 5000 cycle stability measurement. a PXRD patterns of CAU-23 before (red) and after 5000 sorption cycles (blue). A calculated pattern (black) based on the crystal structure is also provided. b Gravimetric determination of the water uptake capacity of CAU-23 coatings before and after the 5000 cycle stability measurement, in comparison with the value of the pure nanoscale powder.

ADC calculations for CAU-23

The various temperatures within the ADC unit, i.e., the desired cooling, back cooling/heat rejection, and driving temperatures, define the working conditions of the ADC setup. The efficiency of ADCs strongly depends on these temperatures, and so an understanding of the properties of the compound with respect to the set of temperatures is required. The heat of adsorption of the water molecules was first determined from the adsorption isotherms collected at several temperatures (Supplementary Fig. 20). The resulting moderate value of −48.2 kJ/mol strongly suggests that water desorption should be easily achieved at relatively low driving temperatures, since its magnitude is only slightly higher than the evaporation enthalpy of water, 44.19 kJ/mol (Supplementary Fig. 21)43.
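For intuition, the order of magnitude of such a heat of adsorption can be recovered from the isotherms alone via the van't Hoff relation given in the Methods. The sketch below is ours, not the paper's procedure: it crudely takes the position of the uptake step as an approximate iso-loading point, whereas the proper analysis interpolates iso-loading lines across full isotherms.

```python
# Rough sketch (ours, not the paper's procedure): estimating the isosteric
# heat of adsorption from two isotherms via the van't Hoff relation
#   d(ln p)/d(1/T) = -q_st / R   at constant loading,
# crudely using the position of the uptake step as the iso-loading point.
from math import log

R = 8.314  # J/(mol K)

def p_sat_kpa(t_celsius):
    """Antoine equation for water (standard constants, ~1-100 degC), in kPa."""
    return 10 ** (8.07131 - 1730.63 / (233.426 + t_celsius)) * 0.133322

# (T in K, p/p0 of the step) read off the measured isotherms quoted above
T1, rel1 = 298.15, 0.33   # 25 degC
T2, rel2 = 333.15, 0.33   # 60 degC

p1 = rel1 * p_sat_kpa(T1 - 273.15)   # absolute pressures in kPa
p2 = rel2 * p_sat_kpa(T2 - 273.15)

q_st = R * log(p2 / p1) / (1 / T1 - 1 / T2)  # J/mol, heat of desorption (> 0)
print(f"estimated q_st ~ {q_st / 1000:.0f} kJ/mol")
# -> ~43 kJ/mol, i.e. close to the evaporation enthalpy of water and in the
#    same range as the reported adsorption enthalpy of -48.2 kJ/mol.
```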
In addition, the loading of the compound during adsorption (Fig. 5a) and desorption (Fig. 5b) cycles was further calculated from the sorption isotherms. To identify suitable temperature boundaries (Tdriving/Tcondenser/heat rejection/Tevaporator) where use of the full sorption capacity of CAU-23 of >0.35 gH2O/gsorbent is possible, the temperatures that lead to complete adsorption in the adsorption step (Fig. 5a, dark red area) and complete desorption in the desorption step (Fig. 5b, dark blue area) must be chosen. A selection of possible temperature boundaries with driving temperatures of 60 and 55 °C is given in Supplementary Fig. 22. These results demonstrate that at low desired cooling temperatures above 7 °C, the full loading capacity (>0.35 gH2O/gsorbent) can be used, but with the drawback of a low heat rejection temperature below 30 °C. With higher heat rejection temperatures above 30 °C, cooling temperatures of 10 °C can still be obtained, which is low enough, for example, to run a domestic cooling system. The greatest advantage of this particular compound is its desorption behavior. At 60 °C, or even lower if back cooling temperatures are below 30 °C, the material can be emptied completely and reset to the initial state. So, with a temperature set of, e.g., 60/30/10 °C (driving temperature/back cooling or heat rejection/desired cooling), the full loading capacity can be used.

Fig. 5: Calculation of adsorption-driven chiller temperature boundaries for CAU-23 and coefficient of performance values for cooling in comparison with selected state-of-the-art materials. a, b Calculated loading of CAU-23 for different temperatures used in an ADC setup for adsorption (a) and desorption (b) cycles. c Calculation of the COP values for different driving temperatures (assumed desired cooling of 10 °C and back cooling temperature of 30 °C). d Water adsorption curves at 40 °C of selected compounds.

To compare the efficiency, a material-related coefficient of performance for cooling (COP) was calculated (see Methods) according to the methodology of de Lange et al. and Ernst et al.8,44 for CAU-23 and selected materials (MIP-200, CAU-10-H, Al-fum, and MIL-160)16,24,26,28 that have been proposed in the past as outperforming conventional adsorbents (Fig. 5c, Supplementary Fig. 23). The Zr-MOF MOF-801, also known as Zr-fum45, is not discussed in this study, since its long-term cycling stability has not been demonstrated yet22,23,31. Even at the temperature boundaries of 30 °C for back cooling and 10 °C for desired cold, CAU-23 outperforms other best-in-class adsorbents not only in terms of its very high COP of 0.8 but, more importantly, by maintaining this high COP even at very low desorption temperatures of nearly 50 °C. Only Al-fum shows similar theoretical COP values, but its water sorption isotherm deviates distinctly from the optimal S-shape and exhibits a significant hysteresis, which will lower the efficiency in devices (Fig. 5d). This makes CAU-23, to the best of our knowledge, the material with the lowest reported driving temperature (<60 °C) combining high capacity (0.37 gH2O/gsorbent) and proven long-term stability (>5000 cycles). The low driving temperature allows the utilization of heat sources that are currently not used, and simultaneously the available energy is more efficiently exploited, leading to an outstanding efficiency. The next step, the design, construction, and testing of a suitable prototype giving meaningful conclusions at the technology level, is by far more complex. So far, the proof of concept of this technology has been validated in the laboratory by some of the authors26,28. Based on the previous studies, a good correlation between lab-based calculations of performance and the experimental performance of the coated heat exchanger (the heart of an ADC) was found (for details see SI). CAU-23 has been demonstrated to be an ideal material for ultra-low-temperature adsorption-driven chillers. It allows the required driving temperatures of such devices to be lowered to 60 °C, while possessing high uptake capacities of 0.37 gH2O/gsorbent and providing a low cooling temperature of 10 °C. The most significant advantage of CAU-23 is that driving temperatures of 60–70 °C can be easily achieved without any loss of performance, paving the way for more efficient use of waste heat or solar heat.
CAU-23 outperforms all other microporous materials considered so far for ADC applications in terms of having a low driving temperature while exhibiting high water uptake capacity, outstanding coefficient of performance values at low driving temperature, and proven excellent stability. The green synthesis, which can potentially be scaled up easily, opens viable perspectives towards applications at the industrial level.

Methods

Elemental analysis was carried out on an Elementar vario Micro cube (CHNS). Thermogravimetric measurements were recorded in air on a Linseis STA PT 1600 (heating rate = 4 K min−1, gas flow = 20 mL/min). Infrared spectra were recorded on a Bruker Alpha-P IR spectrometer (drying of the sample was carried out at 100 °C for 16 h). Scanning electron microscopy micrographs were collected using a JEOL JSM-7000F equipped with an Everhart-Thornley detector. Temperature-dependent powder X-ray diffraction (T-PXRD) patterns were recorded on a Stoe Stadi P Combi diffractometer in transmission geometry, equipped with Mo-Kα1 radiation, a curved germanium monochromator, a linear MYTHEN detector with an aperture angle of 17°, and a furnace. Nitrogen sorption experiments were carried out on a BEL Japan Inc. Belsorpmax at 77 K. The sample was activated at 150 °C for 16 h under reduced pressure (<0.1 mbar).

Single crystal electron diffraction using cRED

The sample of CAU-23 was crushed in a mortar and dispersed in ethanol. A few drops of hydrochloric acid (0.1 mol/L) were added to break the agglomeration of the CAU-23 nanocrystals. The suspension was then treated by ultrasonication for about 30 s. Three droplets of the suspension were applied to a lacey carbon TEM grid (Okenshoji Co., Ltd, Tokyo, Japan). cRED data were collected on a 200 kV JEOL JEM-2100 LaB6 TEM at room temperature. The rotation speed of the goniometer, exposure time, spot size, and camera length were 0.45°/s, 1 s, 2, and 40 cm, respectively. Video frames of electron diffraction patterns were recorded by a high-speed hybrid electron camera (Timepix QTPX-262). The total rotation angle of each data set ranged between 70 and 90°, and the total collection time was between 2.5 and 3.5 min.

cRED data processing and structure determination

cRED data were first processed using REDp46,47 for the initial determination of the unit cell and space group. For structure solution, the cRED data were then processed by DIALS48, where the instrumental parameters (rotation axis, beam position, and beam direction), unit cell, orientation matrix, and intensity profiles were refined and the intensities were integrated. Four data sets were merged together by XSCALE in XDS49,50 and used for structure solution and refinement by SHELX. Isotropic structure refinement was performed against the cRED data without adding any restraints. The crystallographic data of CAU-23 are given in Supplementary Table 2.

High resolution powder X-ray diffraction measurements

The high resolution powder X-ray diffraction data were recorded at beamline I11 at Diamond Light Source, Didcot, UK51. A 0.5 mm capillary was filled, evacuated, heated to 100 °C for 2 h, and then sealed. The measurements were performed on a setup consisting of 9 Dectris® Mythen 2 detectors with an overall opening angle of 90° 2θ at room temperature.

Rietveld refinement

Rietveld refinement was subsequently performed with TOPAS 6 against the synchrotron PXRD data52. All atoms were refined with isotropic displacement parameters. The ligand was modeled using a Z-matrix to constrain some of the geometry.
Aluminum–oxygen distances and oxygen–aluminum–oxygen angles were allowed to refine with bond distance and angle restraints. The positions of the water molecules in the pores were established by simulated annealing using antibump restraints, followed by refinement with restraints. Hydrogen atoms were excluded in the Rietveld refinement.

Water sorption experiments

The water adsorption was studied by measuring isotherms at 25, 40, and 60 °C with a Quantachrome VStar. Prior to the measurements, the MOF was outgassed at 120 °C under vacuum in a MasterPrep® Degasser.

Computational methods

The CAU-23 crystal structure was further geometry optimized (keeping the lattice parameters constant) by applying tight convergence criteria for atomic displacements (<2 ∙ 10^−5 bohr), maximum forces acting on individual atoms (2 ∙ 10^−5 hartree/bohr), and SCF energies (10^−8). These DFT calculations were performed using the general gradient approximation (GGA) to the exchange-correlation functional according to Perdew–Burke–Ernzerhof (PBE)53, in combination with Grimme's DFT-D3 semi-empirical dispersion corrections54,55. Triple-ζ plus valence polarized Gaussian-type basis sets (TZVP-MOLOPT) were considered for all atoms, except for the Al centers, where short-ranged double-ζ plus valence polarization functions (DZVP-MOLOPT) were employed56. The interaction between core electrons and valence shells of the atoms was described by the pseudopotentials derived by Goedecker, Teter, and Hutter (GTH)57,58,59. The auxiliary plane wave basis sets were truncated at 400 Ry. Single point energy calculations were performed to extract the atomic partial charges of CAU-23, applying the Restrained Electrostatic Potential (RESP) fitting strategy for the periodic system as implemented in the CP2K code (Supplementary Fig. 16 for atom types). The DFT geometry optimized model of CAU-23 was further employed in grand canonical Monte Carlo (GCMC) calculations to determine the water adsorption properties of the MOF at 25 °C using the Complex Adsorption and Diffusion Simulation Suite (CADSS) code60. We considered a simulation box of 18 conventional unit cells (3 × 2 × 3) of CAU-23, maintaining the atoms at their initial positions. The water molecules were described by the TIP4P/2005 potential model, corresponding to a microscopic representation of four Lennard–Jones (LJ) sites (Supplementary Table 4)61. This potential model was demonstrated to lead to good agreement between the experimental and simulated adsorption isotherms for a series of MOFs15,16,26,28, especially at very low pressure, which is the adsorption domain most affected by the choice of the potential model. The interactions between the guest water molecules and the MOF structure were described by a combination of site-to-site LJ contributions and Coulombic terms. A mixed set of universal force field (UFF)62 and DREIDING force field63 parameters was adopted to describe the LJ parameters for the atoms in the inorganic and organic parts of the framework, respectively (Supplementary Table 5). However, following the treatment adopted in other well-known force fields64,65, the hydrogen atoms of the µ-OH moieties and organic linkers, as well as the Al and S atoms, are allowed to interact with the adsorbate water molecules via the Coulombic potential only, as justified in previous studies on similar Al-based MOF topologies25,35.
Short-range dispersion forces were truncated at a cutoff radius of 12 Å, while the interactions between unlike force field centers were treated by means of the Lorentz–Berthelot combination rule. The long-range electrostatic interactions were handled using the Ewald summation technique. For each point of the adsorption isotherm, typically 2 ∙ 10^8 MC steps were used for equilibration and 3 ∙ 10^8 MC steps for production runs. The adsorption enthalpy at low coverage (∆H) was calculated through configurational-bias Monte Carlo simulations performed in the NVT ensemble using the revised Widom's test particle insertion method66. In addition, in order to gain insight into the configurational distributions of the adsorbed species in CAU-23, some additional data were calculated at different pressures, including the hydrogen bond networks and the radial distribution functions (RDF) of the intermolecular atomic pairs of the guests and the host. The detection of hydrogen bonds was conducted by assuming the following criteria: (i) the distance between the donor and acceptor oxygen centers (D−A) is shorter than 3.5 Å, and (ii) the corresponding D-H-A angle (formed by the intramolecular O−H vector of the donor molecule and the intermolecular H−O vector of the donor and acceptor molecules) is greater than 120°.
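These two geometric criteria are straightforward to implement when post-processing GCMC or MD snapshots. The following is a minimal sketch of our own (not the CADSS implementation); it assumes coordinates in Å and ignores periodic boundary wrapping for brevity.

```python
# Minimal sketch (ours, not the CADSS implementation) of the geometric
# hydrogen-bond criterion stated above: donor-acceptor O...O distance < 3.5 A
# and D-H...A angle > 120 deg. Coordinates are assumed to be in Angstrom;
# periodic boundary conditions are ignored here for brevity.
import numpy as np

def is_hydrogen_bond(o_donor, h_donor, o_acceptor,
                     d_max=3.5, angle_min=120.0) -> bool:
    """Check one candidate bond O_donor-H_donor ... O_acceptor."""
    o_d, h, o_a = map(np.asarray, (o_donor, h_donor, o_acceptor))
    if np.linalg.norm(o_a - o_d) >= d_max:   # criterion (i): O...O distance
        return False
    v_hd = o_d - h                            # H -> donor O
    v_ha = o_a - h                            # H -> acceptor O
    cos_dha = np.dot(v_hd, v_ha) / (np.linalg.norm(v_hd) * np.linalg.norm(v_ha))
    # D-H-A angle at the hydrogen; 180 deg corresponds to a linear H-bond
    angle = np.degrees(np.arccos(np.clip(cos_dha, -1.0, 1.0)))
    return angle > angle_min                  # criterion (ii)

# toy example: a near-linear O-H...O arrangement at ~2.8 A O...O separation
print(is_hydrogen_bond([0, 0, 0], [0.96, 0, 0], [2.8, 0.2, 0]))  # True
```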
Cycling stability test

Advanced long-term hydrothermal stability over 5000 full cycles was tested within a custom-made cycle apparatus as described elsewhere42. The sample was alternatingly tempered at 120 °C and 20 °C under a pure water vapor atmosphere. Prior to and after the cycle experiment, the sample was subjected to a PXRD measurement to identify possible degradation of the framework. For the 5000 cycle experiment, the MOF was coated on a 50 × 50 mm aluminum sheet using a coating recipe described elsewhere (binder mass = 15 wt%)67. Powder X-ray diffraction (PXRD) analysis was performed on a Bruker D8 Advance with DaVinci™ design, using Cu-Kα radiation from a Cu anode tube at 40 kV/40 mA with a Ni filter in Bragg–Brentano geometry. An MRI TC-humidity chamber, coupled to a humidified nitrogen flow generated by an Ansyco® humidifier, was used for the controlled-humidity PXRD experiments.

Calculations of water sorption properties for ADC applications

To assess the potential of CAU-23 in adsorption heat transformation, the measured water adsorption data were fitted using a weighted dual-site Langmuir approach (wDSL)67,68:

$$X(p,T) = X_L(p,T)\,\bigl(1 - w(p,T)\bigr) + X_U(p,T)\,w(p,T)$$

$$X_L(p,T) = X_{L,\infty}\,\frac{b_L p}{1 + b_L p}$$

$$X_U(p,T) = X_{U,\infty}\,\frac{b_U p}{1 + b_U p} + b_H p$$

$$b_\alpha = b_{\alpha,\infty}\exp\left(\frac{E_\alpha}{RT}\right),\quad \alpha = L, U, H$$

$$w(p,T) = \left(\frac{\exp\left(\frac{\ln p - \ln p_{\mathrm{step}}(T)}{\sigma(T)}\right)}{1 + \exp\left(\frac{\ln p - \ln p_{\mathrm{step}}(T)}{\sigma(T)}\right)}\right)^{\gamma}$$

$$\sigma(T) = \chi_1\exp\left(\chi_2\left(\frac{1}{T_0} - \frac{1}{T}\right)\right)$$

$$p_{\mathrm{step}}(T) = p_{\mathrm{step},0}\exp\left(\frac{-H_{\mathrm{step}}}{R}\left(\frac{1}{T_0} - \frac{1}{T}\right)\right)$$

The water uptake at a certain pressure and temperature, X(p, T), is calculated from two Langmuir terms (XL and XU), representing the adsorption before and after the step in the uptake. w(p, T) is a weighting function that depends on the pressure p, the temperature T, and the pressure pstep at which the uptake step occurs. The further symbols X∞, bα, Eα, and χ1,2 represent fit parameters68,69. This model was then used for the calculation of the uptake capacity as a function of the temperatures applied for adsorption, desorption, condensation, and evaporation. The heat of adsorption was calculated from the measured adsorption isotherms, as well as from the thermodynamic fit, using the van't Hoff equation70:

$$\frac{\Delta H_{\mathrm{ads}}(X,T)}{RT^2} = -\left(\frac{\partial \ln p}{\partial T}\right)_{X(p,T)}$$

COP calculation

The coefficient of performance (COP) for cooling can be defined as the ratio of the evaporation enthalpy of the liquid phase and the heat consumed for the desorption process8,44:

$$\mathrm{COP}_C = \frac{Q_{\mathrm{evap}}}{Q_{\mathrm{des}} + Q_{\mathrm{IH}}}$$

Here, the numerator contains the evaporation enthalpy of water (44.19 kJ/mol)44. In the denominator, the amounts of heat to be applied for desorption (Qdes) and isosteric heating (QIH) of the adsorbent are summed. These amounts of heat can be calculated from energy balances:

$$dQ_{\mathrm{IH}} = m_{\mathrm{ads}}\left(c_{p,\mathrm{ads}} + X_{\max}\,c_{p,\mathrm{fl}}\right)dT$$

$$dQ_{\mathrm{des}} = m_{\mathrm{ads}}\left(c_{p,\mathrm{ads}} + X(p,T)\,c_{p,\mathrm{fl}}\right)dT - m_{\mathrm{ads}}\,q_{\mathrm{st}}(T)\,dX$$

$$Q_{\mathrm{evap}} = m_{\mathrm{ads}}\left(\Delta h_{\mathrm{vap}}(T_{\mathrm{evap}}) - c_{p,\mathrm{g}}\left(\bar T - T_{\mathrm{evap}}\right)\right)\left(X_{\max} - X_{\min}\right)$$

Here, mads refers to the adsorbent mass; cp,ads, cp,fl, and cp,g to the isobaric heat capacities of the adsorbent, water, and water vapor; and \(\bar T\) to the arithmetic mean temperature during desorption:

$$\bar T = 0.5\left(T_{\mathrm{des,max}} + T_{\mathrm{des,min}}\right)$$

The uptake X(p, T) is calculated either from the wDSL approach described above or using the approach of Dubinin–Astakhov, depending on which fits best. Within the latter, briefly described, an adsorption potential A(p, T) is defined:

$$A(p,T) = -RT\ln\left(\frac{p}{p_s(T)}\right)$$

Further, the water uptake is expressed as an adsorbed volume W of liquid water by dividing the mass-specific uptake by the liquid density of the adsorptive:

$$W = \frac{X(p,T)}{\rho_L(T)}$$

The plot W(A) is referred to as the characteristic curve and can be described, for the adsorption of water on many conventional adsorbents, by a semi-empirical approach:

$$W(A) = W_0\exp\left(-\left(\frac{A}{E_A}\right)^n\right)$$

Here, W0, EA, and n are parameters free to fit to experimental data. As proposed, for instance, by de Lange et al., W(A) can be defined in sections8. Within the Dubinin–Astakhov approach, the isosteric heat of adsorption qst can be calculated as follows:

$$q_{\mathrm{st}} = \Delta h_{\mathrm{vap}}(T) + A(T)$$

Using the above-described set of equations, the COP for cooling was calculated for a back cooling/heat rejection temperature of 30 °C, a desired cold temperature of 10 °C, and a variation of driving temperatures lower than 95 °C. As suggested by de Lange et al., and for the sake of comparability, the heat capacity of the adsorbent cp,ads was assumed to be 1 kJ/(kg K)8.
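To make the fitting model concrete, here is a small sketch of our own of the wDSL uptake function and the loading swing it implies over one cycle. All parameter values below are made-up placeholders for illustration, not the fitted CAU-23 parameters (which are reported in the Supplementary Information).

```python
# Sketch (ours) of the wDSL uptake model defined above. All parameter values
# are made-up placeholders; the actual fitted CAU-23 parameters are in the SI.
from math import exp, log

R = 8.314e-3   # kJ/(mol K)
T0 = 298.15    # K, reference temperature

# hypothetical parameter set (illustrative only)
par = dict(XL_inf=0.05, XU_inf=0.35,              # g/g
           bL_inf=1e-6, EL=45.0,                  # 1/Pa, kJ/mol
           bU_inf=1e-7, EU=45.0, bH=0.0,
           p_step0=800.0, H_step=-44.0,           # Pa, kJ/mol
           chi1=0.05, chi2=0.0, gamma=2.0)

def uptake(p, T, q=par):
    """wDSL water uptake X(p, T) in g/g; p in Pa, T in K."""
    bL = q["bL_inf"] * exp(q["EL"] / (R * T))
    bU = q["bU_inf"] * exp(q["EU"] / (R * T))
    XL = q["XL_inf"] * bL * p / (1 + bL * p)
    XU = q["XU_inf"] * bU * p / (1 + bU * p) + q["bH"] * p
    p_step = q["p_step0"] * exp(-q["H_step"] / R * (1 / T0 - 1 / T))
    sigma = q["chi1"] * exp(q["chi2"] * (1 / T0 - 1 / T))
    z = exp((log(p) - log(p_step)) / sigma)
    w = (z / (1 + z)) ** q["gamma"]               # weighting function
    return XL * (1 - w) + XU * w

# Loading swing for the 60/30/10 degC set: adsorption state at the bed
# (p = p_sat(10 C) ~ 1228 Pa, T = 303 K) vs. desorption state
# (p = p_sat(30 C) ~ 4246 Pa, T = 333 K).
dX = uptake(1228, 303.15) - uptake(4246, 333.15)
print(f"exchanged loading ~ {dX:.2f} g/g (with these placeholder parameters)")
# -> ~0.26 g/g here; the shape (near-full swing across the step) is the point.
```

Feeding such an uptake function into the energy balances above is what yields the COP curves of Fig. 5c; the placeholder parameters here only demonstrate the mechanics of the model.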
Data availability

The X-ray crystallographic data for CAU-23 have been deposited at the Cambridge Crystallographic Data Centre (CCDC, free of charge at https://www.ccdc.cam.ac.uk) under deposition number CCDC 1878820. Further data that support the findings of this study are available from the corresponding authors upon reasonable request.

References

1. Dean, B., Dulac, J., Morgan, T., Remme, U. & Motherway, B. The future of cooling: opportunities for energy-efficient air conditioning. Int. Energy Agency (2018). https://www.iea.org/futureofcooling/.
2. Sivak, M. Potential energy demand for cooling in the 50 largest metropolitan areas of the world: implications for developing countries. Energy Policy 37, 1382–1384 (2009).
3. Isaac, M. & van Vuuren, D. P. Modeling global residential sector energy demand for heating and air conditioning in the context of climate change. Energy Policy 37, 507–521 (2009).
4. Ürge-Vorsatz, D., Cabeza, L. F., Serrano, S., Barreneche, C. & Petrichenko, K. Heating and cooling energy trends and drivers in buildings. Renew. Sust. Energ. Rev. 41, 85–98 (2015).
5. Handbook for the Montreal Protocol on Substances that Deplete the Ozone Layer. Twelfth edition (2018). https://ozone.unep.org/sites/default/files/MP_handbook-english-2018.pdf.
6. Kim, K.-H., Shon, Z.-H., Nguyen, H. T. & Jeon, E.-C. A review of major chlorofluorocarbons and their halocarbon alternatives in the air. Atmos. Environ. 45, 1369–1382 (2011).
7. Henninger, S. K., Jeremias, F., Kummer, H., Schossig, P. & Henning, H.-M. Novel sorption materials for solar heating and cooling. Energy Procedia 30, 279–288 (2012).
8. de Lange, M. F., Verouden, K. J. F. M., Vlugt, T. J. H., Gascon, J. & Kapteijn, F. Adsorption-driven heat pumps: the potential of metal-organic frameworks. Chem. Rev. 115, 12205–12250 (2015).
9. Meunier, F. Adsorption heat powered heat pumps. Appl. Therm. Eng. 61, 830–836 (2013).
10. Jeremias, F., Fröhlich, D., Janiak, C. & Henninger, S. K. Water and methanol adsorption on MOFs for cycling heat transformation processes. New J. Chem. 38, 1846–1852 (2014).
11. Henninger, S. K., Jeremias, F., Kummer, H. & Janiak, C. MOFs for use in adsorption heat pump processes. Eur. J. Inorg. Chem. 16, 2625–2634 (2012).
12. Henninger, S. K., Schmidt, F. P. & Henning, H.-M. Water adsorption characteristics of novel materials for heat transformation applications. Appl. Therm. Eng. 30, 1692–1702 (2010).
13. Pastore, H. O., Coluccia, S. & Marchese, L. Porous aluminophosphates: from molecular sieves to designed acid catalysts. Annu. Rev. Mater. Res. 35, 351–395 (2005).
14. Freni, A. et al. SAPO-34 coated adsorbent heat exchanger for adsorption chillers. Appl. Therm. Eng. 82, 1–7 (2015).
15. Seo, Y.-K. et al. Energy-efficient dehumidification over hierarchically porous metal-organic frameworks as advanced water adsorbents. Adv. Mater. 24, 806–810 (2012).
16. Wang, S. et al. A robust large-pore zirconium carboxylate metal–organic framework for energy-efficient water-sorption-driven refrigeration. Nat. Energy 3, 985–993 (2018).
17. AbdulHalim, R. G. et al. A fine-tuned metal–organic framework for autonomous indoor moisture control. J. Am. Chem. Soc. 139, 10715–10722 (2017).
18. Zhou, H.-C., Long, J. R. & Yaghi, O. M. Introduction to metal–organic frameworks. Chem. Rev. 112, 673–674 (2012).
19. Yuan, S. et al. Stable metal–organic frameworks: design, synthesis and applications. Adv. Mater. 30, 1704303 (2018).
20. Cui, S. et al. Metal-organic frameworks as advanced moisture sorbents for energy-efficient high temperature cooling. Sci. Rep. 8, 15284 (2018).
21. Abtab, S. M. T. et al. Reticular chemistry in action: a hydrolytically stable MOF capturing twice its weight in adsorbed water. Chem 4, 94–105 (2018).
22. Fathieh, F. et al. Practical water production from desert air. Sci. Adv. 4, https://doi.org/10.1126/sciadv.aat3198 (2018).
23. Solovyeva, M. V., Gordeeva, L. G., Krieger, T. A. & Aristov, Y. I. MOF-801 as a promising material for adsorption cooling: equilibrium conversion and management. Energy Convers. Manag. 174, 356–363 (2018).
24. Permyakova, A. et al. Synthesis optimization, shaping, and heat reallocation evaluation of the hydrophilic metal-organic framework MIL-160(Al). ChemSusChem 10, 1419–1426 (2017).
25. Cadiau, A. et al. Design of hydrophilic metal organic framework water adsorbents for heat reallocation. Adv. Mater. 27, 4775–4780 (2015).
26. Lenzen, D. et al. Scalable green synthesis and full-scale test of the metal–organic framework CAU-10-H for use in adsorption-driven chillers. Adv. Mater. 30, 1705869 (2018).
27. Fröhlich, D. et al. Water adsorption behaviour of CAU-10-H: a thorough investigation of its structure-property relationships. J. Mater. Chem. A 4, 11859–11869 (2016).
28. Kummer, H. et al. A functional full-scale heat exchanger coated with aluminum fumarate metal–organic framework for adsorption heat transformation. Ind. Eng. Chem. Res. 56, 8393–8398 (2017).
29. Tannert, N. et al. Evaluation of the highly stable metal–organic framework MIL-53(Al)-TDC (TDC = 2,5-thiophenedicarboxylate) as a new and promising adsorbent for heat transformation applications. J. Mater. Chem. A 6, 17706–17712 (2018).
30. Tschense, C. B. L. et al. New Group 13 MIL-53 derivates based on 2,5-thiophenedicarboxylic acid. Z. Anorg. Allg. Chem. 643, 1600–1608 (2017).
31. Kim, H. et al. Water harvesting from air with metal-organic frameworks powered by natural sunlight. Science 356, 430–434 (2017).
32. Rieth, A. J., Yang, S., Wang, E. N. & Dincă, M. Record atmospheric fresh water capture and heat transfer with a material operating at the water uptake reversibility limit. ACS Cent. Sci. 3, 668–672 (2017).
Record atmospheric fresh water capture and heat transfer with a material operating at the water uptake reversibility limit. ACS Cent. Sci. 3, 668–672 (2017). Jeremias, F., Fröhlich, D., Janiak, C. & Henninger, S. K. Advancement of sorption-based heat transformation by a metal coating of highly-stable, hydrophilic aluminium fumarate MOF. RSC Adv. 4, 24073–24082 (2014). Alvarez, E. et al. The structure of the aluminum fumarate metal–organic framework A520. Angew. Chem. Int. Ed. 127, 3735–3739 (2015). Damasceno Borges, D., Maurin, G. & Galvão, D. S. Design of porous metal-organic frameworks for adsorption driven thermal. Batter. MRS Adv. 2, 519–524 (2017). Thekdi, A. & Nimbalkar, S. Industrial Waste Heat Recovery: Potential Applications, Available Technologies and Crosscutting R&D Opportunities. (Oak Ridge National Laboratory, 2014). ORNL/TM-2014/622. https://info.ornl.gov/sites/publications/files/Pub52987.pdf. Wang, B. et al. A porous cobalt tetraphosphonate metal–organic framework. accurate structure and guest molecule location determined by continuous-rotation electron diffraction. Chem. Eur. J. 24, https://doi.org/10.1002/chem.201804133 (2018). Nederlof, I., van Genderen, E., Li, Y.-W. & Abrahams, J. P. A Medipix quantum area detector allows rotation electron diffraction data collection from submicrometre three-dimensional protein crystals. Acta Crystallogr. D. 69, 1223–1230 (2013). Damasceno Borges, D. et al. Computational exploration of the water concentration dependence of the proton transport in the porous UiO–66(Zr)–(CO2H)2 metal–organic framework. Chem. Mater. 29, 1569–1576 (2017). Damasceno Borges, D. et al. Proton transport in a highly conductive porous zirconium-based metal–organic framework. molecular insight. Angew. Chem. Int. Ed. 55, 3919–3924 (2016). Mileo, P. G. M. et al. Highly efficient proton conduction in a three-dimensional titanium hydrogen phosphate. Chem. Mater. 29, 7263–7271 (2017). Henninger, S. K., Munz, G., Ratzsch, K.-F. & Schossig, P. Cycle stability of sorption materials and composites for the use in heat pumps and cooling machines. Renew. Energ. 36, 3043–3049 (2011). Wagner, W. & Kretzschmar, H.-J. International Steam Tables-2008. Properties of water and steam based on the industrial formulation IAPWS-IF97. 2nd edn. (Springer, Heidelberg, 2008). Ernst, S.-J., Jeremias, F., Bart, H.-J. & Henninger, S. K. Methanol adsorption on HKUST-1 coatings obtained by thermal gradient deposition. Ind. Eng. Chem. Res. 55, 13094–13101 (2016). Wißmann, G. et al. Modulated synthesis of Zr-fumarate MOF. Micro. Mesopor. Mat. 152, 64–70 (2012). Zhang, D., Oleynikov, P., Hovmöller, S. & Zou, X. Collecting 3D electron diffraction data by the rotation method. Z. Krist. Cryst. Mater. 225, 94–102 (2010). Wan, W., Sun, J., Su, J., Hovmöller, S. & Zou, X. Three-dimensional rotation electron diffraction. Software /it RED for automated data collection and data processing. J. Appl. Crystallogr. 46, 1863–1873 (2013). Waterman, D. G. et al. Diffraction-geometry refinement in the /it DIALS framework. Acta Crystallogr. D. 72, 558–575 (2016). Kabsch, W. XDS. Acta Crystallogr. D. 66, 125–132 (2010). Kabsch, W. Integration, scaling, space-group assignment and post-refinement. Acta Crystallogr. D. 66, 133–144 (2010). Thompson, S. P. et al. Beamline I11 at Diamond. A new instrument for high resolution powder diffraction. Rev. Sci. Instrum. 80, 75107 (2009). Coelho, A. A. TOPAS and TOPAS-academic: an optimization program integrating computer algebra and crystallographic objects written in C++. J. 
Appl. Crystallogr. 51, 210–218 (2018). Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868 (1996). Grimme, S. Accurate description of van der Waals complexes by density functional theory including empirical corrections. J. Comput. Chem. 25, 1463–1473 (2004). Grimme, S., Antony, J., Ehrlich, S. & Krieg, H. A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu. J. Chem. Phys. 132, 154104 (2010). VandeVondele, J. & Hutter, J. Gaussian basis sets for accurate calculations on molecular systems in gas and condensed phases. J. Chem. Phys. 127, 114105 (2007). Goedecker, S., Teter, M. & Hutter, J. Separable dual-space Gaussian pseudopotentials. Phys. Rev. B 54, 1703–1710 (1996). Hartwigsen, C., Goedecker, S. & Hutter, J. Relativistic separable dual-space Gaussian pseudopotentials from H to Rn. Phys. Rev. B 58, 3641–3662 (1998). Krack, M. Pseudopotentials for H to Kr optimized for gradient-corrected exchange-correlation functionals. Theor. Chem. Acc. 114, 145–152 (2005). Yang, Q. & Zhong, C. Understanding hydrogen adsorption in metal−organic frameworks with open metal sites: a computational study. J. Phys. Chem. B 110, 655–658 (2006). Abascal, J. L. F. & Vega, C. A general purpose model for the condensed phases of water. TIP4P/2005. J. Chem. Phys. 123, 234505 (2005). Rappe, A. K., Casewit, C. J., Colwell, K. S., Goddard, W. A. & Skiff, W. M. UFF, a full periodic table force field for molecular mechanics and molecular dynamics simulations. J. Am. Chem. Soc. 114, 10024–10035 (1992). Mayo, S. L., Olafson, B. D. & Goddard, W. A. DREIDING. A generic force field for molecular simulations. J. Phys. Chem. 94, 8897–8909 (1990). Rai, N. & Siepmann, J. I. Transferable potentials for phase equilibria. 9. explicit hydrogen description of benzene and five-membered and six-membered heterocyclic aromatic compounds. J. Phys. Chem. B 111, 10790–10799 (2007). Jorgensen, W. L., Maxwell, D. S. & Tirado-Rives, J. Development and testing of the OPLS all-atom force field on conformational energetics and properties of organic liquids. J. Am. Chem. Soc. 118, 11225–11236 (1996). Vlugt, T. J. H., García-Pérez, E., Dubbeldam, D., Ban, S. & Calero, S. Computing the heat of adsorption using molecular simulations. The effect of strong coulombic interactions. J. Chem. Theory Comput. 4, 1107–1118 (2008). Kummer, H., Füldner, G. & Henninger, S. K. Versatile siloxane based adsorbent coatings for fast water adsorption processes in thermally driven chillers and heat pumps. Appl. Therm. Eng. 85, 1–8 (2015). Hefti, M., Joss, L., Bjelobrk, Z. & Mazzotti, M. On the potential of phase-change adsorbents for CO2 capture by temperature swing adsorption. Faraday Discuss. 192, 153–179 (2016). Ernst, S.-J., Baumgartner, M., Fröhlich, D., Bart, H.-J. & Henninger, S. K. Adsorbentien für sorptionsgestützte Klimatisierung, Entfeuchtung und Wassergewinnung. Chem. Ing. Tech. 89, 1650–1660 (2017). Ruthven, D. M. Principles of Adsorption and Adsorption Processes. (Wiley, New York, 1984). We thank Dr. Helge Reinsch for his support during the synthesis development. The authors gratefully acknowledge Dr. Michael Wharmby for the possibility to collect high resolution PXRD data at Diamond Light Source beam line I11 (EE16502) and Prof. Chiu C. Tang for support during the experiment. S.-J.E. gratefully acknowledges the financial support of the Federal German Ministry of Education and Research (BMBF) under grant no. 
03SF0492A. A.K.I. is grateful for support from the Swedish Foundation for Strategic Research (SSF) and the Knut and Alice Wallenberg Foundation (KAW 2016.0072). Institut für Anorganische Chemie, Christian-Albrechts-Universität Kiel, Max-Eyth-Str. 2, 24118, Kiel, Germany Dirk Lenzen & Norbert Stock Department of Materials and Environmental Chemistry, Stockholm University, SE-106 91, Stockholm, Sweden Jingjing Zhao , A. Ken Inge , Hongyi Xu & Xiaodong Zou Department Heating and Cooling Technologies, Group Sorption Materials, Fraunhofer-Institut für Solare Energiesysteme ISE, Heidenhofstrasse 2, 79110, Freiburg, Germany Sebastian-Johannes Ernst , Dominik Fröhlich & Stefan Henninger Institut Charles Gerhardt Montpellier, Université Montpellier, UMR 5253 CNRS ENSCM UM, 34095 Montpellier, France Mohammad Wahiduzzaman & Guillaume Maurin TU Kaiserslautern, Chair of Separation Science and Technology, P.O. Box 3049, 67653, Kaiserslautern, Germany & Hans-Jörg Bart Institut für Anorganische Chemie und Strukturchemie I, Heinrich-Heine-Universität Düsseldorf, Universitätsstraße 1, 40225, Düsseldorf, Germany Christoph Janiak Search for Dirk Lenzen in: Search for Jingjing Zhao in: Search for Sebastian-Johannes Ernst in: Search for Mohammad Wahiduzzaman in: Search for A. Ken Inge in: Search for Dominik Fröhlich in: Search for Hongyi Xu in: Search for Hans-Jörg Bart in: Search for Christoph Janiak in: Search for Stefan Henninger in: Search for Guillaume Maurin in: Search for Xiaodong Zou in: Search for Norbert Stock in: D.L. investigated the synthesis, performed the general characterization of CAU-23 and contributed to the writing of the paper. J.Z. and H.X. performed TEM measurements and structure determination of CAU-23 by single crystal electron diffraction. S.-J.E. and H.-J.B calculated the COP and the ADC related values and measured the water sorption data. M.W. modeled the water sorption mechanism of CAU-23. A.K.I. refined the structure of CAU-23 and performed the topological analysis; J.Z. also participated in this part. D.F. and C.J. performed the stability cycling measurements and the relative water pressure dependent PXRD study. S.H. supervised the COP and ADC calculations and the stability measurements. G.M. supervised the modeling effort and contributed to the writing of the paper. X.Z. supervised the structure determination and refinement process and led the writing of the paper. N.S. coordinated the study, supervised the synthesis and general characterization of CAU-23 and led the writing of the paper. Correspondence to Stefan Henninger or Guillaume Maurin or Xiaodong Zou or Norbert Stock. Peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available. Transparent Peer Review File Lenzen, D., Zhao, J., Ernst, S. et al. A metal–organic framework for efficient water-based ultra-low-temperature-driven cooling. Nat Commun 10, 3025 (2019). 
DOI: https://doi.org/10.1038/s41467-019-10960-0 Shaping of MOFs via freeze-casting method with hydrophilic polymers and their effect on textural properties Emrah Hastürk, Simon-Patrick Höfert, Burhan Topalli, Carsten Schlüsener & Christoph Janiak Microporous and Mesoporous Materials (2019) Air-Con Metal–Organic Frameworks in Binder Composites for Water Adsorption Heat Transformation Systems Serkan Gökpinar, Sebastian-Johannes Ernst, Emrah Hastürk, Marc Möllers, Ilias El Aita, Raphael Wiedey, Niels Tannert, Sandra Nießing, Soheil Abdpour, Alexa Schmitz, Julian Quodbach, Gerrit Füldner, Stefan K. Henninger Industrial & Engineering Chemistry Research (2019) Rapid Cycling and Exceptional Yield in a Metal-Organic Framework Water Harvester Nikita Hanikel, Mathieu S. Prévot, Farhad Fathieh, Eugene A. Kapustin, Hao Lyu, Haoze Wang, Nicolas J. Diercks, T. Grant Glover & Omar M. Yaghi ACS Central Science (2019) Integration of Metal–Organic Frameworks on Protective Layers for Destruction of Nerve Agents under Relevant Conditions Zhijie Chen, Kaikai Ma, John J. Mahle, Hui Wang, Zoha H. Syed, Ahmet Atilgan, Yongwei Chen, John H. Xin, Timur Islamoglu, Gregory W. Peterson & Omar K. Farha Journal of the American Chemical Society (2019) Unraveling the Water Adsorption Mechanism in the Mesoporous MIL-100(Fe) Metal–Organic Framework Paulo G. M. Mileo, Kyung Ho Cho, Jaedeuk Park, Sabine Devautour-Vinot, Jong-San Chang The Journal of Physical Chemistry C (2019)
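Returning to the Dubinin–Astakhov equations in the Methods above, the following is a minimal Python sketch of the characteristic-curve model. The function names, the Antoine-type saturation-pressure correlation for water, and all numeric parameter values are illustrative assumptions, not the fitted values used in the paper.

```python
import math

R = 8.314  # J/(mol K), universal gas constant

def p_sat(T):
    """Saturation pressure of water in Pa (Antoine-type correlation).

    Coefficients are an assumption, valid roughly between 1 and 100 degC;
    the Antoine form gives pressure in mmHg, converted to Pa by 133.322.
    """
    A, B, C = 8.07131, 1730.63, 233.426
    return 133.322 * 10 ** (A - B / (C + (T - 273.15)))

def adsorption_potential(p, T):
    """A(p, T) = -R T ln(p / p_s(T)), in J/mol."""
    return -R * T * math.log(p / p_sat(T))

def uptake(p, T, W0, E_A, n, rho_L=997.0):
    """Mass-specific uptake X = W * rho_L, with W(A) = W0 exp(-(A/E_A)^n).

    W0 in m^3/kg and rho_L in kg/m^3 give X in kg water per kg adsorbent.
    """
    A = adsorption_potential(p, T)
    W = W0 * math.exp(-((A / E_A) ** n))
    return W * rho_L

def isosteric_heat(p, T, dH_vap):
    """q_st = dH_vap(T) + A(T), all terms in J/mol."""
    return dH_vap + adsorption_potential(p, T)
```

A piecewise definition of W(A), as proposed by de Lange et al., could replace the single exponential wherever one branch does not fit the full characteristic curve.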
Investigating the possibility of producing animal feed from sugarcane bagasse using oyster mushrooms: a case in rural entrepreneurship Mojtaba Mahmood Molaei Kermani1, Samaneh Bahrololoum1 & Farzaneh Koohzadi1 Journal of Global Entrepreneurship Research volume 9, Article number: 52 (2019) Cite this article Pleurotus florida is an edible mushroom that has commercial potential in the food industry. The objective of this study was to investigate the possibility of producing animal feed from sugarcane bagasse using P. florida. To this aim, sugarcane bagasse was processed with P. florida. The experiment was designed in a completely randomized design with four treatments: processed sugarcane bagasse, raw sugarcane bagasse, wheat straw, and barley straw. This research was carried out under two in vitro and in vivo conditions. In case of in vitro condition, it can be concluded that the amount of dry matter, neutral detergent fiber (P < 0.01), and acidic detergent fiber (P < 0.05) significantly decreased in processed sugarcane bagasse, and the amount of crude protein (P < 0.05), organic matter, and crude ash (P < 0.01) significantly increased. The result of in vivo condition showed that as a result of biological processing of sugarcane bagasse with P. florida, the indices such as digestibility, voluntary feed intake, and relative palatability index increased. The results of this study suggests that treated bagasse could be used as an alternative roughage source for ruminant feeding. Currently there has been an increasing tendency towards more efficient utilization of agro-industrial residues such as sugar cane bagasse, cassava bagasse sugar beet pulp, apple pomace, etc. So far, various mechanical, physical, chemical, and biological methods have been developed to utilize these residues as raw materials for the production of chemicals, ethanol, single-cell protein, enzymes, amino acids, biologically active secondary metabolites, etc. (Pandey, Soccol, Nigam, & Soccol, 2000; Pandey et al., 1988). Agro-industrial residues, which are semiarid and actually contain lignocellulosic material, are unsuitable for direct consumption by animals, and so they must be treated both mechanically and chemically to become edible. Also, fiber residue is low in nutritional value and need supplements to improve their nutritional value (Chaudhry, 1998; Chaudhry & Miller, 1996). Cellulose and hemicellulose have a high digestibility, but lignin gives them hard physical structure and resistance to digestive enzymes. So far, a lot of research has been conducted to isolate lignocellulosic bonds. Common methods are breaking down lignin components through chemical and biological methods (Moyson, 1991). However, there are many reports that show feeding alkali-treated sugarcane bagasse (SB) increased feed intake and body weight of ruminant (Carvalho et al., 2013). In recent decades, more attention has been given to enrichment of lignocellulose materials as a means of biological method for enrichment. Reports indicate that wheat straw processing using ligninolytic fungus, Pleurotus ostreatus, can reduce lignin, increase crude protein, and improve its digestibility. Oyster mushrooms are primary saprophytes, and it can decompose dead plant and animal tissue by releasing enzymes (Ardon, Kerem, & Hadar, 1996). SB is one of many fibrous residues remaining after the extraction of juice from cane stem and could be used as feed for animals (Yu et al., 2013). 
It has been reported that SB contains low protein (< 3% DM), high cellulose (> 40% DM), hemicellulose (> 35% DM), and lignin (> 15% DM), and has low digestibility (20–30% DM) (da Costa, de Souza, Saliba, & Carneiro, 2015; Gunun et al., 2016). These attributes result in poor animal performance, but through the development of physical, chemical, and biological treatments to disrupt the ligno-cellulose complex, the potential use of SB as a feed may be realized (Okano et al., 2006; Balgees et al., 2007). Over the years, a large number of microorganisms such as yeasts, bacteria, and fungi have been used to improve its feed value. Among these, filamentous fungi, especially basidiomycetes, are the preferred choice for enzyme production and protein enrichment and have been widely employed (Pandey et al., 2000). Abdullah et al. (2006) developed a system whereby lignin biodegradation in lignocellulosic units is optimized with minimum loss of cellulose and maximum loss of lignin during a 21-day solid-state fermentation of SB with Termitomyces sp., fortified with different concentrations of sucrose and glucose. The maximum lignin loss of 27.0% and the minimum cellulose loss of 8.5% occurred when the substrate was fortified with 5% glucose. Also, the highest cow reticulorumen digestibility of 24.4%, compared with 11.5% for the raw SB, was observed in this study. According to Zhang, Gong, and Li (1995), under solid-state fermentation the crude protein contents were increased from 24.1 to 32.3% and from 28.4 to 36.7% for Pleurotus ostreatus and Lentinus edodes spent compost media, respectively. The crude fiber contents of the composts were significantly decreased, and the in vitro digestibility of the crude protein was as high as 70%; the total and essential amino acid contents made up 73.3 and 37.1% of the crude protein, respectively. Therefore, mushroom substrate is a potential source of nitrogen for poultry and animals. Producing processed bagasse Processed SB was prepared by adding lime and oyster mushroom (Pleurotus florida) spawn seed to raw SB. Raw SB is slightly acidic, so lime was added to adjust the pH; the SB was then pasteurized for 12 h at 60 °C under water vapor. After 12 h of pasteurization, for pre-fermentation, the SB was kept in the pasteurization room for 48 h at 48 °C. The pasteurized compost was inoculated with 5% spawn at 30 °C, then transferred to nylon bags with dimensions of 65 × 40 cm. After 30 days of inoculation, the bags containing mycelium were transferred to the drying room. Over the course of the bagasse processing, dried layers were collected and stored (Wanapat and Pimpa 1999; Wanapat et al., 2009). Chemical composition and in vitro digestibility In order to evaluate the digestibility of SB processed with P. florida and compare it with other available forages (raw SB, wheat straw, and barley straw), gas production experiments were carried out using the Menke and Steingass (1988) method. In order to determine the digestibility, in vitro gas production (IVGP), crude protein (CP), and animal waste ashes (AWA) were measured in the following ways: the crude protein was determined by the Kjeldahl method described by the AOAC (1998), and animal waste ashes were determined by the protocol described by the AOAC (1998). Then, incubations were completed using 40 ml of buffered rumen fluid. Approximately 200 mg of feed was weighed and placed into a 120-ml graduated glass syringe according to Menke and Steingass (1988).
Buffer and mineral solution was prepared and placed in a water bath at 39 °C under continuous flushing with CO2. Rumen fluid was collected from animals into a pre-warmed thermos flask, and then filtered and flushed with CO2. The mixed and CO2-flushed rumen fluid was added to the buffered mineral solution (1:2 (v/v)). Buffered rumen fluid (40 ml) was pipetted into each syringe containing feed samples, and the syringes were immediately placed into an incubator with a rotating disc, as described in Menke and Steingass (1988). Then, gas production was recorded at 2, 4, 8, 12, 16, and 24 h of incubation. The volume of producing gas at any time was calculated according to Theodorou, Williams, Dhanoa, McAllan, and France (1994). Finally, in vitro organic matter digestibility (IVOMD) was calculated as described by Menke and Steingass (1988): $$ \mathrm{IVOMD}\ \left(g/100g\ \mathrm{DM}\right)=14.88+\left(0.889\times \mathrm{IVGP}\right)+\left(0.448\times \mathrm{CP}\right)+\left(0.651\times \mathrm{XA}\right) $$ where IVGP is the total volume of produced gas from 200 gr of feed sample after 24 h (ml/200 gr), CP is the crude protein content (gr/100 gr of dry matter), and XA is the amount of ash (gr/100 gr of dry matter). Voluntary feed intake and in vivo digestibility In this experiment, five Baluchi male sheep with an average weight of 37 ± kg were used. These sheep were placed separately in five locations, each location with four mangers containing four separate forages: 300 gr of alfalfa, 200 gr straw, 200 gr raw bagasse, and 200 gr processed bagasse per daily serving. The positions of the mangers were changed daily and randomly in order to achieve best possible randomization. The experiment was conducted in the 21-day period, and feed intake was measured daily. In the last 5 days of the period, in addition to measuring the amount of feed remaining, the stool of each livestock was collected during 24 h; also, water intake was measured every 24 h. The relative palatability index (Pi) was measured based on the amount of alfalfa being consumed as a standard feedstock according to Kaitho et al. (1996). $$ \mathrm{Pi}=\left(\mathrm{T}\mathrm{i}/\mathrm{A}\mathrm{i}\right)/\left(\mathrm{T}1/\mathrm{A}1\right) $$ where Pi is the relative palatability index, T1 is the amount of eaten alfalfa, A1 is the amount of given alfalfa, Ti is the amount of consumed feed, Ai is the amount of given feed, and i is the other feed such as raw bagasse, processed bagasse, wheat straw, and barley straw. Chemical composition and in vitro digestibility were calculated by measuring the amount of dry matter (DM) as described by the AOAC method (AOAC, 1998); organic matter (OM), crude protein (CP), crude protein digestibility, and crude ash (ASH) were determined by the Kjeldahl method as described by the AOAC (1998). Acid detergent fiber (ADF) and neutral detergent fiber (NDF) was calculated by the method as described by Van Soest, Robertson, and Lewis (1991). The acid-insoluble ash (AIA) of feed and the stool was calculated by the method as described by Vogtmann, Pfirter, and Prabucki (1975). $$ \mathrm{Digestibility}=100-\left(\left(\mathrm{feed}\ \mathrm{AIA}/\mathrm{stool}\ \mathrm{AIA}\right)\times 100\right) $$ where AIA is acid-insoluble ash. The experiment was conducted in a completely randomized design (CRD) with four treatments and five replications. 
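As an illustration only, the three formulas above (IVOMD, the relative palatability index, and AIA-based digestibility) can be translated directly into code; the function and variable names below are ours, and the numbers in the example call are placeholders.

```python
def ivomd(gas_24h_ml, crude_protein, ash):
    """IVOMD (g/100 g DM) = 14.88 + 0.889*IVGP + 0.448*CP + 0.651*XA."""
    return 14.88 + 0.889 * gas_24h_ml + 0.448 * crude_protein + 0.651 * ash

def palatability_index(eaten_feed, given_feed, eaten_alfalfa, given_alfalfa):
    """Pi = (Ti/Ai) / (T1/A1), with alfalfa as the standard feedstock."""
    return (eaten_feed / given_feed) / (eaten_alfalfa / given_alfalfa)

def digestibility(feed_aia, stool_aia):
    """Digestibility (%) = 100 - (feed AIA / stool AIA) * 100."""
    return 100.0 - (feed_aia / stool_aia) * 100.0

# Example: 30 ml gas in 24 h, CP of 8 g/100 g DM, ash of 6 g/100 g DM
print(ivomd(30.0, 8.0, 6.0))
```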
Differences between treatment means were determined by Duncan's new multiple range tests (Steel & Torrie, 1980), and significant effects were identified at P < 0.01 and P < 0.05 levels. Results and discussions Table 1 shows that there was a significant difference between the four fibrous materials, processed bagasse, raw bagasse, wheat straw, and barley straw, in the case of the content of crude protein (CP), neutral detergent fiber (NDF), acidic detergent fiber (ADF), crude ash (ASH), dry matter (DM), and in vitro organic matter digestibility (IVOMD). In this experiment, the amount of CP content of treated SB was significantly increased (P < 0.01); these results were in agreement with Ardon et al. (1996). Since fungus has a fairly high protein content, typically 20–30% of crude protein as a percentage of dry matter, the observed increase in CP content could be attributed to the addition of mycelial protein to the SB and the secretion of protease enzymes by the fungus (Ardon et al., 1996; Chahal & Khan, 1991). Table 1 The effect of four fibrous materials in CP, NDF, ADF, ASH, DM, and IVOMD content The in vitro organic matter digestibility (IVOMD) was significantly (P < 0.01) increased by processing with fungi, a large part of the raw fiber, and insoluble compounds were degraded by the fungus enzymatic process and converted into soluble materials. According to result, due to the reduction in the dry weight, neutral detergent fiber (NDF), and acidic detergent fiber (ADF), the degradable and water-soluble portion increased (Table 1), which is in agreement with the results of Fazaeli et al. (2004), who reported that IVOMD increased in the wheat straw that was treated with the P. florida mushroom. Also, the results showed that there is a significant difference (P < 0.01) between various fibrous materials and the amount of in vitro gas production (IVGP). In this experiment, the least amount of IVGP was observed in raw bagasse and the highest was observed in processed bagasse. Menke and Steingass (1988) and Liu, Orskov, and Chen (1999) have shown that there is a relationship between gas production and soluble carbohydrates. Voluntary feed intake and relative palatability Inoculation sugarcane bagasse with P. florida caused a significant increase in the voluntary feed intake compared with raw bagasse. This result can be due to chemical changes such as cell wall decomposition in bagasse during the solid fermentation process and consequently increasing in bagasse palatability (Table 2). This result corroborates with Ardon et al. (1996), Dhanda, Garcha, Kakkar, and Makkar (1996), and Fazaeli (2008), who reported that the ruminant fed with fungus-infected bagasse as roughage sources had a higher voluntary intake compared to raw bagasse feeding (P < 0.01), and this could be due to the high fiber content in processed bagasse which affected the intake of animals. This means that treated SB could improve the nutritive value and potentially be used as a high-quality roughage source for the ruminant. Due to processing bagasse with mushrooms, this effect avails the rumen microbes to attack the structural carbohydrates more easily, improve digestion in the rumen, increase feed passage through the digestive system, and increase feed intake and digestibility, as well as the palatability of treated bagasse (Bakshi, Gupta, & Langar, 1985). The result of Table 2 indicates that the relative palatability index (pi) of processed bagasse has increased as a result of biological processing with P. 
florida compared to raw bagasse. Table 2 The result of dietary treatments on voluntary feed intake The results of apparent digestibility of dry matter, organic matter, and protein were calculated and the results are presented in Table 3. Table 3 Apparent digestibility of dry matter, organic matter and protein and chemical composition The highest percentage of processed bagasse is assigned to diets no. 5 (25%), 2 (24%), 1 (23%), 4 (21%), and 3 (19.5%) respectively. On the other hand, diets 4 (9%), 2 (7%), 3 (3%), 5 (2%), and 1 (1/5%), respectively, have the highest relative percentages of raw bagasse in the diet. Table 3 shows that the highest percentage of apparent digestibility of dry matter, organic matter, and protein were obtained in diets no. 2 and 5 with the highest percentage of relative processed bagasse, and the least percentage was obtained in diets no. 3 with the least percentage of relative processed bagasse. The results of Table 3, as well as the correlation analysis, show that there is a positive correlation between the percentage of processed bagasse in all diets and the apparent digestibility of dry matter, organic matter, and protein, as increasing the percentage of processed bagasse in two rations of 5 and 2, the highest apparent digestibility of dry matter, organic matter, and protein was observed. On the other hand, diet no. 3 with the lowest percentage of processed bagasse in all diets, although, has the lowest apparent digestibility of dry matter, organic matter, and protein, despite having the highest percentage of alfalfa. In conclusion, inoculation of sugarcane bagasse with P. florida could improve the nutritive value and increase digestibility, voluntary feed intake, and relative palatability index (pi). This study suggests that processed bagasse could be used as an alternative roughage source for ruminant feeding. The results show that the best economical and nutritional value was obtained with processed bagasse compared with wheat straw and barley straw. Considering the current prices, as the grain costs twice as much as the processed bagasse, so using the processed bagasse with P. florida is economically viable. The datasets used and analyzed during the current study are available from the corresponding author upon request. ADF: Acid detergent fiber AIA: Acid-insoluble ash AOAC: Association of Official Analytical Chemists ASH: Crude ash AWA: Animal waste ashes Dry matter IVGP: In vitro gas production IVOMD: In vitro organic matter digestibility NDF: Neutral detergent fiber OM: Pi: Palatability index SB: Sugarcane bagasse Abdullah, N., Ejaz, N., Abdullah, M., Nisa, A. U., & Firdous, S. (2006). Lignocellulosic degradation in solid-state fermentation of sugar cane bagasse by Termitomyces sp. Micología Aplicada International, 18(2), 15–19. AOAC (Association of Official Analytical Chemists). (1998). Official methods of analysis of theAOAC International (16th ed.). Gaithersburg: AOAC International. Ardon, O., Kerem, Z., & Hadar, Y. (1996). Enhancement of laccase activity in liquid cultures of the ligninolytic fungus Pleurotus ostreatus by cotton stalk extract. Journal of Biotechnology, 51(3), 201–207. Bakshi, M. P. S., Gupta, V. K., & Langar, P. N. (1985). Acceptability and nutritive evaluation of Pleurotus harvested spent wheat straw in buffaloes. Agricultural Wastes, 13(1), 51–57. Balgees, A., Elmnan, A., Fadel Elseed, A. M. A., & Salih, A. M. (2007). Effect of ammonia and urea treatments on the chemical composition and rumen degradability of bagasse. J. Appl. Sci. 
Res, 3(11), 1359–1362. Carvalho, M. L., Sousa Jr, R., Rodriguez-Zuniga, U. F., Suarez, C. A. G., Rodrigues, D. S., Giordano, R. C., & Giordano, R. L. C. (2013). Kinetic study of the enzymatic hydrolysis of sugarcane bagasse. Brazilian Journal of Chemical Engineering, 30(3), 437–447. Chahal, D. S., & Khan, S. M. (1991). Production of mycelial biomass of oyster mushrooms on rice straw. In Mushroom Science XIII. Volume 2. Proceedings of the 13th international congress on the science and cultivation of edible fungi (pp. 709–716). Dublin: Irish Republic. Chaudhry, A. S. (1998). Nutrient composition, digestion and rumen fermentation in sheep of wheat straw treated with calcium oxide, sodium hydroxide and alkaline hydrogen peroxide. Animal feed science and technology, 74(4), 315–328. Chaudhry, A. S., & Miller, E. L. (1996). The effect of sodium hydroxide and alkaline hydrogen peroxide on chemical composition of wheat straw and voluntary intake, growth and digesta kinetics in store lambs. Animal feed science and technology, 60(1-2), 69–86. da Costa, D. A., de Souza, C. L., Saliba, E. D. O. S., & Carneiro, J. D. (2015). By-products of sugar cane industry in ruminant nutrition. International Journal Advance Agriculture Research, 3, 1–9. Dhanda, S., Garcha, H. S., Kakkar, V. K., & Makkar, G. S. (1996). Improvement in feed value of paddy straw by Pleurotus cultivation. Mushroom Research, 5, 1. Fazaeli, H. (2008). Digestibility and voluntary intake of fungal-treated wheat straw in sheep and cow. JWSS-Isfahan University of Technology, 12(43), 523–531. Fazaeli, H., Mahmodzadeh, H., Azizi, A., Jelan, Z. A., Liang, J. B., Rouzbehan, Y., & Osman, A. (2004). Nutritive value of wheat straw treated with Pleurotus fungi. Asian-australasian journal of animal sciences, 17(12), 1681–1688. Gunun, N., Wanapat, M., Gunun, P., Cherdthong, A., Khejornsart, P., & Kang, S. (2016). Effect of treating sugarcane bagasse with urea and calcium hydroxide on feed intake, digestibility, and rumen fermentation in beef cattle. Tropical animal health and production, 48(6), 1123–1128. Kaitho, R. J., Umunna, N. N., Nsahlai, I. V., Tamminga, S., Van Bruchem, J., Hanson, J., & Van De Wouw, M. (1996). Palatability of multipurpose tree species: effect of species and length of study on intake and relative palatability by sheep. Agroforestry systems, 33(3), 249–261. Liu, J. X., Orskov, E. R., & Chen, X. B. (1999). Optimization of steam treatment as a method for upgrading rice straw as feeds. Animal feed science and technology, 76(3-4), 345–357. Menke, K., & Steingass, H. (1988). Estimation of the energetic feed value from chemical composition and in vitro gas production using rumen fluid. Animal Research and Development, 28, 7–55. Moyson, E., & Verachtert, H. (1991). Growth of higher fungi on wheat straw and their impact on the digestibility of the substrate. Applied Microbiology and Biotechnology, 36(3), 421–424. Okano, K., Iida, Y., Samsuri, M., Prasetya, B., Usagawa, T., & Watanabe, T. (2006). Comparison of in vitro digestibility and chemical composition among sugarcane bagasses treated by four white‐rot fungi. Animal Science Journal, 77(3), 308–313. Pandey, A., & Soccol, C. R. (1998). Bioconversion of biomass: a case study of ligno-cellulosics bioconversions in solid state fermentation. Brazilian Archives of Biology and Technology, 41(4), 379–390. Pandey, A., Soccol, C. R., Nigam, P., & Soccol, V. T. (2000). Biotechnological potential of agro-industrial residues. I: sugarcane bagasse. Bioresource Technology, 74(1), 69–80. Steel, R. G. 
D., & Torrie, J. H. (1980). Duncan's new multiple range test. Principles and procedures of statistics, 187–188. Theodorou, M. K., Williams, B. A., Dhanoa, M. S., McAllan, A. B., & France, J. (1994). A simple gas production method using a pressure transducer to determine the fermentation kinetics of ruminant feeds. Animal feed science and technology, 48(3-4), 185–197. Van Soest, P. V., Robertson, J. B., & Lewis, B. A. (1991). Methods for dietary fiber, neutral detergent fiber, and nonstarch polysaccharides in relation to animal nutrition. Journal of dairy science, 74(10), 3583–3597. Vogtmann, H., Pfirter, H. P., & Prabucki, A. L. (1975). A new method of determining metabolisability of energy and digestibility of fatty acids in broiler diets. Wanapat, M., & Pimpa, O. (1999). Effect of ruminal NH3-N levels on ruminal fermentation, purine derivatives, digestibility and rice straw intake in swamp buffaloes. Asian-Australasian Journal of Animal Sciences, 12(6), 904–907. Wanapat, M., Polyorach, S., Boonnop, K., Mapato, C., & Cherdthong, A. (2009). Effects of treating rice straw with urea or urea and calcium hydroxide upon intake, digestibility, rumen fermentation and milk yield of dairy cows. Livestock Science, 125(2-3), 238–243. Yu, Q., Zhuang, X., Lv, S., He, M., Zhang, Y., Yuan, Z., et al. (2013). Liquid hot water pretreatment of sugarcane bagasse and its comparison with chemical pretreatment methods for the sugar recovery and structural changes. Bioresource technology, 129, 592–598. Zhang, C. K., Gong, F., & Li, D. S. (1995). A note on the utilisation of spent mushroom composts in animal feeds. Bioresource Technology, 52(1), 89–91. Thanks to everyone who have contributed to the collection and analysis of the data as experts. Department of Biotechnology and Plant Breeding, Ferdowsi University of Mashhad, Mashhad, Iran Mojtaba Mahmood Molaei Kermani, Samaneh Bahrololoum & Farzaneh Koohzadi Mojtaba Mahmood Molaei Kermani Samaneh Bahrololoum Farzaneh Koohzadi MMMK is the lead author of the study and performed the analysis of the data. SB contributed to the analysis of the data and discussion. FK contributed to the analysis and interpretation and drafted the manuscript. All authors read and approved the final manuscript. Correspondence to Samaneh Bahrololoum. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Mahmood Molaei Kermani, M., Bahrololoum, S. & Koohzadi, F. Investigating the possibility of producing animal feed from sugarcane bagasse using oyster mushrooms: a case in rural entrepreneurship. J Glob Entrepr Res 9, 52 (2019). https://doi.org/10.1186/s40497-019-0174-2 Agricultural waste
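Returning to the statistical design described in the Methods above (a CRD with four treatments and five replications, analyzed with Duncan's new multiple range test), the sketch below runs the corresponding one-way ANOVA with SciPy. Duncan's test itself is not available in SciPy, so the post-hoc grouping step is only indicated in a comment, and the data values here are placeholders rather than the paper's measurements.

```python
from scipy import stats

# Placeholder digestibility values for the four treatments (five replicates each)
processed_bagasse = [62.1, 60.8, 63.0, 61.5, 62.4]
raw_bagasse       = [41.3, 40.2, 42.0, 39.8, 41.1]
wheat_straw       = [48.5, 47.9, 49.2, 48.0, 48.7]
barley_straw      = [50.1, 49.4, 51.0, 50.3, 49.8]

# One-way ANOVA over the completely randomized design
f_stat, p_value = stats.f_oneway(processed_bagasse, raw_bagasse,
                                 wheat_straw, barley_straw)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# If p < 0.05, a post-hoc test (Duncan's new multiple range test in the paper)
# would then separate the treatment means into homogeneous groups.
```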
Akbulut, Ali ; Guliyev, Vagif ; Mustafayev, Rza On the boundedness of the maximal operator and singular integral operators in generalized Morrey spaces. (English). Mathematica Bohemica, vol. 137 (2012), issue 1, pp. 27-43 MSC: 42B20, 42B25, 42B35 | MR 2978444 | Zbl 1250.42038 | DOI: 10.21136/MB.2012.142786 generalized Morrey space; maximal operator; Hardy operator; singular integral operator In the paper we find conditions on the pair $(\omega _1,\omega _2)$ which ensure the boundedness of the maximal operator and the Calderón-Zygmund singular integral operators from one generalized Morrey space $\mathcal {M}_{p,\omega _1}$ to another $\mathcal {M}_{p,\omega _2}$, $1<p<\infty $, and from the space $\mathcal {M}_{1,\omega _1}$ to the weak space $W\mathcal {M}_{1,\omega _2}$. As applications, we get some estimates for uniformly elliptic operators on generalized Morrey spaces. [1] Burenkov, V. I., Guliyev, H. V.: Necessary and sufficient conditions for boundedness of the maximal operator in the local Morrey-type spaces. Studia Mathematica 163 (2004), 157-176. DOI 10.4064/sm163-2-4 | MR 2047377 [2] Burenkov, V. I., Guliyev, H. V., Guliyev, V. S.: Necessary and sufficient conditions for boundedness of the fractional maximal operator in the local Morrey-type spaces. J. Comput. Appl. Math. 208 (2007), 280-301. DOI 10.1016/j.cam.2006.10.085 | MR 2347750 [3] Burenkov, V. I., Guliyev, V. S., Serbetci, A., Tararykova, T. V.: Necessary and sufficient conditions for the boundedness of genuine singular integral operators in local Morrey-type spaces. Doklady Ross. Akad. Nauk. 422 (2008), 11-14. MR 2475077 [4] Burenkov, V. I., Gogatishvili, A., Guliyev, V. S., Mustafayev, R. Ch.: Boundedness of the fractional maximal operator in Morrey-type spaces. Complex Var. Elliptic Equ. 55 (2010), 739-758. MR 2674862 [5] Burenkov, V., Gogatishvili, A., Guliyev, V., Mustafayev, R.: Boundedness of the fractional maximal operator in local Morrey-type spaces. Preprint, Institute of Mathematics, AS CR, Praha (2008), 20. MR 2674862 [6] Calderón, A. P., Zygmund, A.: Singular integral operators and differential equations. Amer. J. Math. 79 (1957), 901-921. DOI 10.2307/2372441 | MR 0100768 [7] Carro, M., Pick, L., Soria, J., Stepanov, V. D.: On embeddings between classical Lorentz spaces. Math. Ineq. & Appl. 4 (2001), 397-428. MR 1841071 | Zbl 0996.46013 [8] Chiarenza, F., Frasca, M.: Morrey spaces and Hardy-Littlewood maximal function. Rend. Math. 7 (1987), 273-279. MR 0985999 | Zbl 0717.42023 [9] Fazio, G. D., Ragusa, M. A.: Interior estimates in Morrey spaces for strong solutions to nondivergence form equations with discontinuous coefficients. J. Funct. Anal. 112 (1993), 241-256. DOI 10.1006/jfan.1993.1032 | MR 1213138 | Zbl 0822.35036 [10] Guliyev, V. S.: Integral operators on function spaces on homogeneous groups and on domains in ${\mathbb R}^n$. Doctoral dissertation, Moskva, Mat. Inst. Steklov (1994), 329 Russian. [11] Guliyev, V. S.: Function spaces, integral operators and two weighted inequalities on homogeneous groups. Some applications. Baku, Elm. (1999), 332 Russian. [12] Guliyev, V. S.: Boundedness of the maximal, potential and singular operators in the generalized Morrey spaces. J. Inequal. Appl. 2009, Art. ID 503948 20. MR 2579556 | Zbl 1193.42082 [13] Kurata, K., Sugano, S.: A remark on estimates for uniformly elliptic operators on weighted $L_p$ spaces and Morrey spaces. Math. Nachr. 209 (2000), 137-150. 
DOI 10.1002/(SICI)1522-2616(200001)209:1<137::AID-MANA137>3.0.CO;2-3 | MR 1734362 | Zbl 0939.35036 [14] Mizuhara, T.: Boundedness of some classical operators on generalized Morrey spaces. Harmonic Analysis S. Igari ICM 90 Satellite Proceedings, Springer, Tokyo (1991), 183-189. MR 1261439 | Zbl 0771.42007 [15] Morrey, C. B.: On the solutions of quasi-linear elliptic partial differential equations. Trans. Amer. Math. Soc. 43 (1938), 126-166. DOI 10.1090/S0002-9947-1938-1501936-8 | MR 1501936 | Zbl 0018.40501 [16] Murata, M.: On construction of Martin boundaries for second order elliptic equations. Pub. Res. Instit. Math. Sci. 26 (1990), 585-627. DOI 10.2977/prims/1195170848 | MR 1081506 | Zbl 0726.31009 [17] Nakai, E.: Hardy-Littlewood maximal operator, singular integral operators and Riesz potentials on generalized Morrey spaces. Math. Nachr. 166 (1994), 95-103. DOI 10.1002/mana.19941660108 | MR 1273325 [18] Li, H. Q.: Estimations $L_p$ des opérateurs de Schrödinger sur les groupes nilpotents. J. Funct. Anal. 161 (1999), 152-218. DOI 10.1006/jfan.1998.3347 | MR 1670222 | Zbl 0929.22005 [19] Peetre, J.: On convolution operators leaving ${\mathcal L}^{p,\lambda}$ spaces invariant. Ann. Mat. Appl. IV. Ser. 72 (1966), 295-304. MR 0209917 [20] Shen, Z. W.: $L_p$ estimates for Schrödinger operators with certain potentials. Ann. Inst. Fourier (Grenoble) 45 (1995), 513-546. DOI 10.5802/aif.1463 | MR 1343560 [21] Smith, H. F.: Parametrix construction for a class of subelliptic differential operators. Duke Math. J. 63 (1991), 343-354. DOI 10.1215/S0012-7094-91-06314-3 | MR 1115111 | Zbl 0777.35002 [22] Stein, E. M.: Harmonic analysis: Real variable methods, orthogonality, and oscillatory integrals. Princeton Univ. Press, Princeton, NJ (1993). MR 1232192 | Zbl 0821.42001 [23] Sugano, S.: Estimates for the operators $V^{\alpha} (-\Delta+V)^{-\beta}$ and $V^{\alpha} \nabla (-\Delta+V)^{-\beta}$ with certain nonnegative potentials $V$. Tokyo J. Math. 21 (1998), 441-452. MR 1663618 [24] Thangavelu, S.: Riesz transforms and the wave equations for the Hermite operators. Commun. Partial Differ. Equations 15 (1990), 1199-1215. DOI 10.1080/03605309908820720 | MR 1070242 [25] Zhong, J. P.: Harmonic analysis for some Schrödinger type operators. PhD thesis, Princeton University (1993). MR 2689454
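As orientation for the abstract above, one common way to normalize the generalized Morrey norm (following the convention used in Guliyev's works cited in the reference list) is $$\|f\|_{\mathcal {M}_{p,\omega }} = \sup _{x\in \mathbb {R}^n,\, r>0} \omega (x,r)^{-1}\, |B(x,r)|^{-1/p}\, \|f\|_{L_p(B(x,r))},$$ where $B(x,r)$ is the open ball centered at $x$ with radius $r$; up to constants, the classical Morrey space $\mathcal {L}^{p,\lambda }$ is recovered by the choice $\omega (x,r)=r^{(\lambda -n)/p}$. The exact normalization used in the article may differ, so this definition should be read only as background.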
Heritage Science BEGL: boundary enhancement with Gaussian Loss for rock-art image segmentation Chuanping Bai1,2, Yangyang Liu1,3, Pengbo Zhou4, Xiaofeng Wang1,3 & Mingquan Zhou1,3,4 Heritage Science volume 11, Article number: 17 (2023) Cite this article Rock-art has been scratched, carved, and pecked into rock panels all over the world, resulting in a huge number of engraved figures on natural rock surfaces that record ancient human life and culture. To preserve and recognize these valuable artifacts of human history, 2D digitization of rock surfaces has become a suitable approach, owing to the development of powerful 2D image processing techniques in recent years. In this article, we present a novel systematic framework for the segmentation of different petroglyph figures from 2D high-resolution images. A novel boundary enhancement with Gaussian loss (BEGL) function is proposed, aiming at refining and smoothing the rock-art boundaries in the basic UNet architecture. Several experiments on the 3D-pitoti dataset demonstrate that our proposed approach achieves more accurate boundaries and superior results compared with other loss functions. The comprehensive framework for petroglyph segmentation from 2D high-resolution images provides the foundation for recognizing multiple petroglyph marks, and it can easily be extended to other cultural-heritage digital protection domains. Petroglyphs are the most widespread, ancient and long-lasting rock-art in the world, having been incised, pecked, scratched or carved into rock surfaces [1]. Many figures and significant marks are present on rock surfaces. Rock-art is an important way of recording and exhibiting ancient human life and culture. Because rock paintings have a long history, natural weathering and man-made destruction have been threatening the survival of petroglyphs [1]. There is thus an urgent need to protect and identify petroglyphs. Traditionally, rock-art around the world has been recorded and preserved using a broad variety of approaches, including manual contact tracing, casting with plaster, and frottage [2]. Owing to the large quantity of petroglyphs found so far, and because some rock-art panels are located on cliffs, many manual documentation methods become infeasible [2,3,4]. Furthermore, documenting these pre-historic resources is extremely time-consuming and repetitive work [4]. With the advances of digital photography and automatic image processing techniques, the number of digital images of complete petroglyphs will grow steadily [2, 5]. The automatic segmentation of petroglyph shapes is a basic upstream task for recognizing rock-art and distinguishing rock painting artistic styles [6]. Segmentation of rock-art means, firstly, classifying the image into pecked and unpecked regions and, secondly, segmenting the different figures as well as the different symbols in detail. Related research mainly pays attention to interactive segmentation with an appropriate combination of different visual features and to automated classification of rock surfaces in terms of feature descriptors [2, 7]. Existing works also consider petroglyph shape similarity measures for data mining and shape retrieval [6, 8, 9]. Recent work has mainly focused on surface segmentation utilizing native 3D attributes of rock surfaces and on discriminating pecking styles in a hybrid 2D/3D method [10,11,12]. Also, a publicly available dataset has been published for 2D and 3D rock-art surface segmentation [4].
Besides, valuable information acquired from automated tracing can be added to a rock-art inventory, which can improve the interpretation of rock-art artistic styles [13]. Although those methods achieved promising performance on petroglyph segmentation, the complexity of petroglyphs makes it a very challenging problem. The automated segmentation of rock-art shapes, a significant pre-processing step in this field, is still unsolved and has even been considered infeasible [6]. Only a little work on the pixel-wise classification of petroglyph shapes has been done. Zhu et al. [9] proposed a collaborative manual segmentation approach that utilizes a completely automated public Turing test to tell computers and humans apart (CAPTCHA) for rock-art image segmentation. Seidl and Breiteneder [2] developed a method for the pixel-wise classification of petroglyphs directly from images of natural rock panels. An ensemble of support vector machine (SVM) classifiers was trained on an appropriate combination of many visual features; they then devised a fusion of the classification results that allowed interactive refinement of the segmentation by the user. Vincenzo and Paolino [14] proposed a novel method for the segmentation of rock-art figures and the recognition of carving symbols. A shape descriptor derived from the 2D Fourier transform is applied to identify petroglyph figures; it is insensitive to shape deformations and robust to scale and rotation. Recently, the work [4] presented the 3D-pitoti dataset of high-resolution surface reconstructions, which contains complete geometric as well as color information. The 3D scanner acquired both the tactile and visual appearance of the rock panels at a millimetre scale. Of course, intelligent segmentation methods benefit strongly from full 3D geometric information in contrast to only 2D textures [15]. Furthermore, they tested and verified various tasks on this dataset [4], which should serve as a first public baseline in the rock-art field. They evaluated the performance of semantic segmentation for petroglyphs with common approaches based on random forests (RF) and fully convolutional networks (FCN). In contrast to these previous approaches to rock-art image segmentation, we focus on a fully automatic segmentation framework based on convolutional neural networks (CNNs). The objective, or loss, function is especially significant when devising complicated deep-learning-based image segmentation models, as it guides the learning step by step [16]. Binary cross entropy loss [17] is the most universal objective function in the domain of semantic image segmentation. The cross entropy loss achieves good results on balanced datasets but not on imbalanced ones, so variants of cross entropy have been devised, such as weighted cross entropy (WCE) [18] and balanced cross entropy (BCE) [19]. Focal loss (FL) [20] assigns different weights to foreground and background pixels in order to handle the case in which foreground pixels are overlapped or surrounded by many background pixels; however, it introduces hyperparameters that have to be tuned during training. Dice loss (DL) [21] is designed to measure the overlap between prediction and ground truth: at prediction time, each category is evaluated separately, and the final result is obtained by averaging.
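To make the loss functions surveyed above concrete, here is a small NumPy sketch of binary cross entropy, focal loss, and Dice loss for sigmoid outputs. The hyperparameter values are common defaults from the cited papers, and the exact reductions used by any given implementation may differ.

```python
import numpy as np

EPS = 1e-7  # numerical guard for log and division

def bce(y, p):
    """Binary cross entropy averaged over pixels."""
    p = np.clip(p, EPS, 1 - EPS)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def focal_loss(y, p, alpha=0.25, gamma=2.0):
    """Focal loss: down-weights easy, well-classified pixels."""
    p = np.clip(p, EPS, 1 - EPS)
    pt = np.where(y == 1, p, 1 - p)         # probability of the true class
    w = np.where(y == 1, alpha, 1 - alpha)  # class-balance weight
    return -np.mean(w * (1 - pt) ** gamma * np.log(pt))

def dice_loss(y, p):
    """1 - Dice coefficient; measures overlap between prediction and mask."""
    inter = np.sum(y * p)
    return 1 - (2 * inter + EPS) / (np.sum(y) + np.sum(p) + EPS)
```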
The boundary enhancement (BE) loss [22] was introduced to concentrate on boundary regions during training, further improving segmentation performance for samples with many blurred boundaries. Deep learning has tremendously advanced the performance of image segmentation models, usually attaining the highest accuracy on popular benchmarks in recent years [23]. The milestone deep-learning-based segmentation model is the FCN, proposed by Long et al. [24] in 2015; its variants subsequently created a boom in the field of image segmentation. The FCN model consists only of convolutional layers, without fully connected layers, which enables it to produce a segmentation map of the same size as the input image. Badrinarayanan et al. [25] proposed SegNet, which contains an encoder network and a symmetric decoder network and uses the pooling indices computed in the max-pooling step of the corresponding encoder to perform unpooling in the decoder network. UNet is one of the most distinguished architectures for medical image segmentation, introduced by Ronneberger et al. [26] using the principle of deconvolution. The UNet architecture consists of two components: a contracting branch to extract features, and a symmetric expanding branch that focuses on precise localization. The most important property of UNet is the skip connections between layers of the same resolution in the encoding and decoding paths. These shortcut connections carry local detail, providing crucial high-resolution features to the deconvolution layers. Moreover, the UNet training strategy relies on data augmentation to learn effectively from very little labeled data. Finally, UNet is also much faster to train than most other segmentation architectures owing to its globally based learning strategy [27]. Our rock-art segmentation network is based on UNet [26], which has won first place in many international segmentation and tracking contests. Owing to the vast diversity of signs and symbols, the many carving styles, pecking tools and pecking styles, the various forms of rock surfaces, and the different degrees of deterioration and scribble noise, the segmentation of rock drawings is especially difficult [4]. One of the main challenges for rock drawing segmentation is that components of the rock-art lack sharp boundaries with the surrounding degraded regions. The lack of adequate training data is another major challenge, which makes it difficult to train complex networks fully, since sufficient labeled data is a critical pillar of the success of convolutional neural networks (CNNs). This work makes an effort to solve the aforementioned difficulties and challenges: a comprehensive petroglyph segmentation framework is proposed for pixel-wise classification of extremely deteriorated data, especially blurred and superimposed figures in petroglyph data. Moreover, to help the rock drawing segmentation network converge rapidly to the segmentation boundaries, we propose a novel boundary enhancement with Gaussian loss (BEGL) as the supervised loss of the petroglyph segmentation network. The segmentation results show that our framework can achieve better and more precise masks when segmenting blurred boundaries. For evaluation, we demonstrate our method on the 3D-pitoti dataset benchmark [4].
We also compare BEGL to other state-of-the-art loss functions utilized in the proposed framework on the benchmark dataset [4]. The innovative contributions of the proposed method can be summarized as follows: We propose a systematic petroglyph segmentation framework for accurate surface segmentation of complex rock-art. We propose a novel loss function named BEGL, aiming at refining and smoothing the rock-art boundaries, which can easily be implemented and plugged into any backbone network. The new framework designed for rock-art segmentation is an exploration in the cultural-heritage digital protection domain. The remainder of this paper is organized as follows: "Methods" section describes the methods in detail. "Overview on framework of petroglyph segmentation" section lays out the experimental setup, objective, design and evaluation metrics. We introduce the results and discussion in "BEGL loss function" section. Finally, several concluding remarks are drawn in "Segmentation network" section. In this section, we first introduce the framework of our ancient rock-art segmentation, which heavily augments the training dataset and employs a novel BEGL loss function for emphasizing rock-art boundaries in UNet. Moreover, the novel BEGL loss function aiming at enhancing and refining the rock-art boundaries is described. Finally, we describe the segmentation network architecture in detail. Overview on framework of petroglyph segmentation Segmentation of rock-art is an incredibly challenging task due to the different levels of degradation of petroglyph boundaries and the abundant scribble noise on rock panels. For more efficient rock-art segmentation, we concentrate on a systematic framework for petroglyph segmentation. The proposed boundary-enhancement-based rock-art image segmentation framework is presented in Fig. 1. It comprises two phases, namely the image preprocessing and segmenting phases. Because petroglyph orthophotos are generally tilted, it is necessary to apply image rotation correction based on the Fourier transform. The principle of the 2D discrete Fourier transform (2D-DFT) can be defined as Eq. (1): $$\begin{aligned} \mathrm{{y}}(k,l)= & {} \sum \limits _{i = 0}^{\mathrm{{M}} - 1} {\sum \limits _{j = 0}^{N - 1} {x(i,j){e^{ - i2\pi \left( \frac{{ki}}{\mathrm{{M}}} + \frac{{lj}}{N}\right) }}} }. \end{aligned}$$ $$\begin{aligned} {e^{\mathrm{{iz}}}}= & {} \cos z + i\sin z. \end{aligned}$$ where x(i, j) is the value in the image spatial domain, i and j are the indices of the image position, \(\mathrm{{y}}(k,l)\) is the value in the image frequency domain, k and l are the discrete spatial frequencies, and M and N are the numbers of pixels in the 2D image space. Eq. (2) is Euler's formula, which establishes a connection between the complex exponential functions and the trigonometric functions. In essence, the application of the 2D-DFT conveniently converts signals from the spatial domain into the frequency domain. The Fourier spectrum is comprised of the magnitudes of the 2D-DFT complex coefficients, which are proportional to the strength of the spatial frequencies. Next, the corrected petroglyph images are sliced into small patches which can be fed into the ResNet classifier as input. Because large background regions often exist on ancient rock-art panels, which introduces great class imbalance, ResNet is selected as the classifier of the framework to filter out rock-art patches with no pecking marks.
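A possible implementation of the Fourier-based rotation correction described above is sketched here. The way the dominant orientation is extracted from the magnitude spectrum (thresholding the brightest coefficients and fitting their principal axis) is our assumption, since the paper does not spell out this step, and the threshold value is arbitrary.

```python
import numpy as np
from scipy import ndimage

def estimate_rotation(image):
    """Estimate the dominant orientation (degrees) from the 2D-DFT spectrum."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    log_mag = np.log1p(spectrum)
    # Keep only the strongest frequencies (threshold choice is an assumption)
    ys, xs = np.nonzero(log_mag > log_mag.mean() + 3 * log_mag.std())
    ys = ys - image.shape[0] / 2.0
    xs = xs - image.shape[1] / 2.0
    # Principal axis of the bright coefficients via their covariance matrix;
    # note the spectrum's major axis is perpendicular to dominant stripes in
    # the spatial image, so the sign convention may need flipping in practice
    cov = np.cov(np.stack([xs, ys]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    return np.degrees(np.arctan2(major[1], major[0]))

def correct_rotation(image):
    """Rotate the orthophoto so its dominant axis becomes horizontal."""
    return ndimage.rotate(image, -estimate_rotation(image), reshape=False)
```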
Figure 2 shows a class activation map (CAM) obtained from the ResNet, in which pecked regions are selected in red. Finally, in order to extract and emphasize the geometric patterns and boundaries related to the pecked marks that make up the petroglyph shapes, image reversal and adaptive histogram equalization are applied in the framework. The second phase is based on a UNet [26], which is an auto-encoder network with skip connections between layers of the same shape. We modify the network by introducing a novel loss function named BEGL, allowing it to better learn rock-art boundary features. Overview of the rock-art segmentation framework This is a CAM which illustrates how the ResNet selects pecked regions in red BEGL loss function In order to emphasize the boundary regions, we apply the Sobel operator, which generates strong responses around the boundary areas and little response elsewhere, to each point in a 2D image x, as in Eqs. (3) and (4). $$\begin{aligned} {\mathrm{{S}}_\mathrm{{h}}}= & {} {\mathrm{{T}}_\mathrm{{h}}}*x \end{aligned}$$ $$\begin{aligned} {\mathrm{{S}}_\mathrm{{v}}}= & {} {\mathrm{{T}}_\mathrm{{v}}}*x \end{aligned}$$ It is useful to express this as weighted density summations using the following weighting functions for the h and v components. The two templates \({\mathrm{{T}}_\mathrm{{h}}}\) and \({\mathrm{{T}}_\mathrm{{v}}}\) used by Sobel are shown in Fig. 3a, b. The filters can be applied individually to the input image to generate separate measures of the gradient component in each orientation. These can then be combined to obtain the absolute magnitude of the gradient at every point. The orientation of the spatial gradient is given by Eq. (5): $$\begin{aligned} \theta = \mathrm{{arctan}}\left( \frac{{{S_\mathrm{{h}}}}}{{{S_\mathrm{{v}}}}}\right) \end{aligned}$$ The gradient magnitude \(\mathrm{{S}}\) is given by Eq. (6): $$\begin{aligned} \vert \mathrm{{S}} \vert = \sqrt{S_\mathrm{{h}}^2 + S_\mathrm{{v}}^2} \end{aligned}$$ Gaussian kernels are the most widely used smoothing filters. These filters have been shown to play an important role in edge detection in the human visual system and to be very useful as detectors for edge and boundary detection [28]. The 2D Gaussian filter is also the only rotationally symmetric filter that is separable in Cartesian coordinates. Separability is important for computational efficiency when implementing the smoothing operation by convolutions in the spatial domain. The Gaussian filter in two dimensions can be defined as Eq. (7): $$\begin{aligned} \mathrm{{G}}(i,j) = \frac{1}{{2\pi {\sigma ^2}}}{e^{ - \left( \frac{{{i^2} + {j^2}}}{{2{\sigma ^2}}}\right) }} \end{aligned}$$ where \(\sigma = 0.8\) is the standard deviation of the Gaussian function and \(\left( {i,j} \right) \) are the Cartesian coordinates of the image. A standard 2D convolution operation can be used to compute the discrete Gaussian filter. Hence, we can easily obtain the difference between the filtered output of the CNN predictions and the filtered output of the ground truth labels. Minimizing the divergence between the two filtered outputs closes the gap between the CNN results and the ground truth labels. Following the analyses above, the boundary enhancement with Gaussian loss is defined as an \(L_2\)-norm, shown in Eq. (8):
Following the analysis above, the boundary enhancement with Gaussian loss is defined as the \(L_2\)-norm in Eq. (8):

$$L_G = \left\| G(S(y)) - G(S(\hat{y})) \right\|_2$$

where \(y\) are the ground truth labels, \(\hat{y}\) are the predicted labels, \(S(\cdot)\) is the Sobel operator, and \(G(\cdot)\) is the Gaussian filter. Meanwhile, \(L_{BCE}\) effectively suppresses false positives and remote outliers that lie far from the boundary regions. \(L_{BCE}\) is defined in Eq. (9):

$$L_{BCE}(y,\hat{y}) = -\left(\beta\, y \log \hat{y} + (1-\beta)(1-y)\log(1-\hat{y})\right)$$

Here, \(y\) are the ground truth labels, \(\hat{y}\) are the predicted labels, \(\beta\) is defined as \(1 - \frac{y}{H \times W}\), and H and W are the height and width of the image. The overall BEGL loss function, derived from Eqs. (8) and (9), is defined in Eq. (10):

$$L_{BEGL} = \lambda_1 L_G + \lambda_2 L_{BCE}$$

where \(\lambda_1\) is 0.001 and \(\lambda_2\) is 1. The BEGL loss function is thus the combination of the BCE loss [19] and the Gaussian loss.

Sobel operator (filtering templates)
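Putting Eqs. (8)-(10) together, the combined objective can be sketched as below. This is a simplified numpy illustration on soft prediction maps, reusing boundary_map from the previous sketch, with \(\lambda_1 = 0.001\) and \(\lambda_2 = 1\) as stated; the mean reduction of the cross-entropy term and the reading of \(\beta\) as one minus the foreground fraction are our own assumptions, and the authors' TensorFlow implementation may differ in such details:

```python
import numpy as np

def begl_loss(y, y_hat, lambda_1=1e-3, lambda_2=1.0, eps=1e-7):
    """Sketch of L_BEGL = lambda_1 * L_G + lambda_2 * L_BCE, Eq. (10)."""
    # Boundary enhancement with Gaussian loss, Eq. (8): L2 distance between
    # the smoothed Sobel maps of the ground truth and the prediction.
    l_g = np.linalg.norm(boundary_map(y) - boundary_map(y_hat))
    # Balanced cross entropy, Eq. (9); here beta = 1 - sum(y) / (H * W),
    # i.e. one minus the foreground fraction (an assumption).
    beta = 1.0 - y.sum() / y.size
    y_hat = np.clip(y_hat, eps, 1.0 - eps)
    l_bce = -np.mean(beta * y * np.log(y_hat)
                     + (1.0 - beta) * (1.0 - y) * np.log(1.0 - y_hat))
    return lambda_1 * l_g + lambda_2 * l_bce
```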
Segmentation network

The details of the segmentation network used in our work are provided in this section. To fully leverage the spatial context and boundary information of pecked rock-art data and accurately segment petroglyph images, the new BEGL loss function is built into a rock-art image segmentation network (BEGL-UNet), with inspiration from the work in [22]. The BEGL-UNet architecture is shown in Fig. 4. It consists of an encoder-decoder structure forming a U-shape. The encoder applies max-pooling and a double convolution, which halve the image size and double the number of feature maps, respectively. The decoder comprises three parts: a bilinear upsampling operation that doubles the feature map size, a concatenation of the encoder feature maps onto the corresponding layers of the decoder path, and lastly a double convolution that halves the number of feature maps. The skip connections enable the model to use multiple scales of the input to generate the output. This aids the network by propagating more semantic information between the two paths, thereby enabling it to segment images more accurately.

This is an overview of the BEGL-UNet architecture based on the basic UNet

The proposed methods are implemented with the open-source deep learning library TensorFlow 1.10 [29] and Python 3.5. Each model is trained end-to-end with the Adam optimization method. In the training phase, the learning rate is initially set to 0.0001 and decreased by a weight decay of \(1.0 \times 10^{-6}\) after each epoch. The experiments were carried out on an NVIDIA GTX 2080 Ti GPU with 12 GB of memory; because of the GPU memory limit, we chose a batch size of 2. In the testing phase, the segmented maps were stitched together again.

Experimental objective

The first aim of the experiments is to test the viability of the systematic rock-art segmentation framework. The further experiments then examine the effectiveness of the BEGL loss function in image segmentation for ancient petroglyphs; the performance of the BEGL loss function is tested against that of other loss functions.

The public 3D-pitoti dataset [22] consists of 26 high-resolution surface reconstructions of natural rock surfaces bearing a large number of petroglyphs. The dataset provides orthophotos of all surface reconstructions with pixel-accurate ground truth. To alleviate the problem of extremely scarce training data, we use a sliding window to crop the original high-resolution images into 512 \(\times\) 512 patches without overlap, which are also easy for BEGL-UNet to process. This yields an augmented dataset of 548 images for training and evaluation. Experiments are conducted with two kinds of data splits, each setting aside 10\(\%\) of the total images for the test set and the remaining 90\(\%\) for training. Normalization with the standard mean and deviation is employed to further condition the image data. As rock-art orthophotos are usually not aligned, Fourier-transform-based rotation correction is applied to the original images. Furthermore, the ResNet classifier is used to eliminate small unpecked rock patches. Finally, we apply data augmentation, in which images are reversed and equalized with adaptive histogram equalization.

Evaluation metrics

Evaluation metrics play an important role in assessing the outcomes of segmentation models. In this work, we analyze our results using pixel accuracy, average precision, recall, F1-score, mean intersection over union (MIoU), and the dice similarity coefficient (DSC). Pixel accuracy is the ratio of correctly classified pixels to the total number of pixels. Average precision measures the average percentage of correct positive predictions among all positive predictions made. Recall is the proportion of ground-truth pixels that are marked correctly in the segmentation result. The F1 score is a harmonic balance between precision and recall. MIoU is the mean, over classes, of the intersection of the predicted segmentation mask with the ground truth mask divided by their union. DSC, also known as the overlap index, measures the overlap between the ground truth and the predicted output.
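For reference, the two overlap metrics can be computed as follows. This is a minimal numpy sketch for binary (foreground/background) masks, not the exact evaluation script used in the experiments:

```python
import numpy as np

def miou_and_dsc(y_true, y_pred):
    """Mean IoU over the two classes and the Dice similarity coefficient."""
    t, p = y_true.astype(bool), y_pred.astype(bool)

    def iou(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 1.0

    miou = 0.5 * (iou(t, p) + iou(~t, ~p))  # average over foreground/background
    denom = t.sum() + p.sum()
    dsc = 2.0 * np.logical_and(t, p).sum() / denom if denom else 1.0
    return float(miou), float(dsc)
```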
Comparison with other loss functions

The results in Table 1 give quantitative comparisons on the test set, which does not overlap the training set. The table reports the rock-art segmentation performance of various loss functions on the same basic UNet architecture. From Table 1, we see that our approach achieves the best results on accuracy (0.935), F1 (0.865), MIoU (0.840), and DSC (0.865), with slightly lower precision and recall that remain competitive with the best results. The results in Table 1 clearly show that the BEGL loss function is necessary to obtain refined and precise results on average. In addition, Fig. 5 visualizes the MIoU metric, on which our method makes a clear advance over the other loss functions; the segmentation results of the proposed BEGL loss have much smaller variance and fewer outliers than the others. Figure 6 shows the segmented maps produced with the various loss functions. From these results we observe that BE-UNet, DL-UNet, and BCE-UNet are sensitive to noise, whereas BEGL-UNet yields more consistent and more refined segmentation results. In particular, the BEGL loss function helps enhance the performance of the petroglyph segmentation network. The FL-UNet correctly detects small, thin pecked regions but misses larger ones. Fig. 7 shows that BEGL-UNet achieves smoother and more refined segmented maps than the other loss functions in the zoomed-in views. Furthermore, the zoomed-in views in Fig. 7 illustrate that the rock-art boundary is the vital element for petroglyph segmentation.

Mean intersection over union (MIoU) across the UNet architecture with various loss functions

Visualization of segmented maps from UNet with various loss functions

Qualitative comparisons of the different loss functions in zoomed-in views

Table 1 Comparison of the rock-art segmentation performance of BEGL-UNet with various loss functions on the test set

In this paper, we have presented a novel framework for the segmentation of petroglyph shapes from 2D high-resolution images. The novel BEGL loss function is deployed in the basic UNet architecture. It addresses two challenges in rock-art image segmentation: the lack of clear boundaries and the lack of sufficient annotated data for training CNNs. Several experiments on the 3D-pitoti dataset demonstrate that the proposed method obtains more accurate boundaries and achieves superior results compared with other loss functions. In future work, we will extend the proposed method to segment petroglyphs from other imaging modalities.

Data used in this research is publicly available at https://www.tugraz.at/institute/icg/research/team-bischof/learning-recognition-surveillance/downloads/3dpitotidataset/.

BEGL: Boundary enhancement with Gaussian Loss
CNNs: Convolutional neural networks
SVM: Support vector machine
RF: Random forests
FCN: Fully convolutional networks
WCE: Weighted cross entropy
BCE: Balanced cross entropy
FL: Focal loss
DL: Dice loss
BE: Boundary enhancement
2D-DFT: 2D discrete Fourier transform
CAM: Class activation map
MIoU: Mean intersection over union
DSC: Dice similarity coefficient

Bendicho VML-M, Gutiérrez MF. Holistic approaches to the comprehensive management of rock art in the digital age. In: Vincent ML, López-Menchero Bendicho VM, Ioannides M, Levy TE, editors. Heritage and Archaeology in the Digital Age. Quantitative Methods in the Humanities and Social Sciences. Cham: Springer; 2017. p. 27–47. Seidl M, Breiteneder C. Automated petroglyph image segmentation with interactive classifier fusion. In: Proceedings of the Eighth Indian Conference on Computer Vision, Graphics and Image Processing, 2012; pp. 1–8. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/2425333.2425399 Zeppelzauer M, Poier G, Seidl M, Reinbacher C, Schulter S, Breiteneder C, Bischof H. Interactive 3D segmentation of rock-art by enhanced depth maps and gradient preserving regularization. JOCCH. 2016;9(4):1–30. Poier G, Seidl M, Zeppelzauer M, Reinbacher C, Schaich M, Bellandi G, Marretta A, Bischof H. The 3D-pitoti dataset: a dataset for high-resolution 3D surface segmentation. In: Proceedings of the 15th International Workshop on Content-Based Multimedia Indexing, 2017; pp. 1–7. Fiorucci M, Khoroshiltseva M, Pontil M, Traviglia A, Del Bue A, James S. Machine learning for cultural heritage: a survey. Pattern Recognit Lett. 2020;133:102–8. Zhu Q, Wang X, Keogh E, Lee S-H. Augmenting the generalized Hough transform to enable the mining of petroglyphs. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Association for Computing Machinery, New York, NY, USA. 2009; pp. 1057–1066. https://doi.org/10.1145/1557019.1557133 Seidl M, Wieser E, Alexander C.
Automated classification of petroglyphs. DAACH. 2015;2(2–3):196–212. Seidl M, Wieser E, Zeppelzauer M, Pinz A, Breiteneder C. Graph-based shape similarity of petroglyphs. In: Agapito L, Bronstein MM, Rother C, editors. Computer Vision - ECCV 2014 Workshops. Cham: Springer; 2015. p. 133–48. Qiang Z, Wang X, Keogh E, Lee SH. An efficient and effective similarity measure to enable data mining of petroglyphs. Data Min Knowl Discov. 2011;23(1):91–127. Zeppelzauer M, Poier G, Seidl M, Reinbacher C, Breiteneder C, Bischof H, Schulter S. Interactive segmentation of rock-art in high-resolution 3D reconstructions. In: 2015 Digital Heritage, vol. 2, 2015; pp. 37–44. https://doi.org/10.1109/DigitalHeritage.2015.7419450 Seidl M, Zeppelzauer M. Towards distinction of rock art pecking styles with a hybrid 2D/3D approach. In: 2019 International Conference on Content-Based Multimedia Indexing (CBMI), 2019; pp. 1–4. https://doi.org/10.1109/CBMI.2019.8877469 Horn C, Ivarsson O, Lindhé C, Potter R, Green A, Ling J. Artificial intelligence, 3D documentation, and rock art-approaching and reflecting on the automation of identification and classification of rock art images. J Archaeol Method Theory. 2022;29(1):188–213. Jalandoni A, Shuker J. Automated tracing of petroglyphs using spatial algorithms. DAACH. 2021;22:e00191. https://doi.org/10.1016/j.daach.2021.e00191. Deufemia V, Paolino L. Segmentation and recognition of petroglyphs using generic Fourier descriptors. Lect Notes Comput Sci. 2014;8509:487–94. Poier G, Seidl M, Zeppelzauer M, Reinbacher C, Bischof H. PetroSurf3D - a high-resolution 3D dataset of rock art for surface segmentation. 2016. https://arxiv.org/pdf/1610.01944 Jadon S. A survey of loss functions for semantic segmentation. In: 2020 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), 2020; pp. 1–7. https://doi.org/10.1109/CIBCB48159.2020.9277638 Yi-de M, Qing L, Zhi-Bai Q. Automated image segmentation using improved PCNN model based on cross-entropy. In: Proceedings of 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing, IEEE, 2004, pp. 743–746. Pihur V, Datta S, Datta S. Weighted rank aggregation of cluster validation measures: a Monte Carlo cross-entropy approach. Bioinformatics. 2007;23(13):1607. Xie S, Tu Z. Holistically-nested edge detection. In: 2015 IEEE International Conference on Computer Vision (ICCV), 2015. Lin TY, Goyal P, Girshick R, He K, Dollár P. Focal loss for dense object detection. IEEE Trans Pattern Anal Mach Intell. 2017;PP(99):2999–3007. Milletari F, Navab N, Ahmadi SA. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV), 2016. Yang D, Roth H, Wang X, Xu Z, Xu D. Enhancing foreground boundaries for medical image segmentation. 2020. https://doi.org/10.48550/arXiv.2005.14355 Minaee S, Boykov Y, Porikli F, Plaza A, Kehtarnavaz N, Terzopoulos D. Image segmentation using deep learning: a survey. IEEE Trans Pattern Anal Mach Intell. 2022;44(7):3523–42. https://doi.org/10.1109/TPAMI.2021.3059968. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015; pp. 3431–3440. Badrinarayanan V, Kendall A, Cipolla R. Segnet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell. 2017;39(12):2481–95. Ronneberger O, Fischer P, Brox T.
U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-assisted Intervention. Springer, 2015; pp. 234–241. Siddique N, Paheding S, Elkin CP, Devabhaktuni V. U-net and its variants for medical image segmentation: a review of theory and applications. IEEE Access. 2021;9:82031–57. Basu M. Gaussian-based edge-detection methods: a survey. IEEE Trans Syst Man Cybern C. 2002;32(3):252–60. Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, Devin M, Ghemawat S, Irving G, Isard M, et al. TensorFlow: a system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016; pp. 265–283.

We acknowledge the National Key Research and Development Program of China (No. 2019YFC1521103, No. 2020YFC1523301 and No. 2020YFC1523303), the Key Research and Development Project of Qinghai Province (No. 2020-SF-142), the National Natural Science Foundation of China under grant (No. 62262054), and the Key Research and Development Program of Shaanxi Province (No. 2021GY-171) for supporting our work. This work is mainly supported by the National Key Research and Development Program of China under grant (No. 2019YFC1521103, No. 2020YFC1523301 and No. 2020YFC1523303). Besides, this study is also partly supported by the Key Research and Development Project of Qinghai Province under grant (No. 2020-SF-142), the National Natural Science Foundation of China under grant (No. 62262054), and the Key Research and Development Program of Shaanxi Province under grant (No. 2021GY-171).

School of Information Science and Technology, Northwest University, 1 Xuefu Avenue, Guodu Education Technology Industrial Park, Chang'an District, Xi'an, 710127, China Chuanping Bai, Yangyang Liu, Xiaofeng Wang & Mingquan Zhou School of Mathematics and Computer Science, Ningxia Normal University, Guyuan, China Chuanping Bai National and Local Joint Engineering Research Center for Cultural Heritage Digitization, Northwest University, Xi'an, China Yangyang Liu, Xiaofeng Wang & Mingquan Zhou Virtual Reality Research Center of Ministry of Education, Beijing Normal University, Beijing, China Pengbo Zhou & Mingquan Zhou

CB mainly contributed to the design and implementation of the research, and to the analysis of the experiments and results. YL, XW and PZ contributed to some experiments and data curation. CB wrote the main manuscript in consultation with MZ. All authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. Correspondence to Mingquan Zhou.

Bai, C., Liu, Y., Zhou, P. et al. BEGL: boundary enhancement with Gaussian Loss for rock-art image segmentation. Herit Sci 11, 17 (2023). https://doi.org/10.1186/s40494-022-00857-5 Petroglyph segmentation Rock-art
Climate Change: How Can AI Help? ICML 2019 Workshop

Many in the ML community wish to take action on climate change, yet feel their skills are inapplicable. This workshop aims to show that in fact the opposite is true: while no silver bullet, ML can be an invaluable tool both in reducing greenhouse gas emissions and in helping society adapt to the effects of climate change. Climate change is a complex problem, for which action takes many forms - from designing smart electrical grids to tracking deforestation in satellite imagery. Many of these actions represent high-impact opportunities for real-world change, as well as being interesting problems for ML research. A recording of the workshop (split into 4 sessions) is available at: https://slideslive.com/38917142 Ask and upvote questions during the workshop (if directed at a particular speaker, write @SpeakerName at the beginning of your question): https://tinyurl.com/ICMLClimateQA Yoshua Bengio (Mila) Chad Frischmann (Project Drawdown) Jack Kelly (Open Climate Fix) Claire Monteleoni (UC Boulder) John Platt (Google) Sims Witherspoon (DeepMind) Andrew Ng (Stanford) Karthik Mukkavilli (Mila) 8:30 - 8:45 - Welcome and Opening Remarks 8:45 - 9:20 - John Platt (Google AI): AI for Climate Change: the Context (Keynote talk) 9:20 - 9:45 - Jack Kelly (Open Climate Fix): Why it's hard to mitigate climate change, and how to do better (Invited talk) 9:45 - 10:10 - Andrew Ng (Stanford): Tackling climate change challenges with AI through collaboration (Invited talk) 10:10 - 10:20 - Volodymyr Kuleshov: Towards a Sustainable Food Supply Chain Powered by Artificial Intelligence (Spotlight talk) 10:20 - 10:30 - Clement Duhart: Deep Learning for Wildlife Conservation and Restoration Efforts (Spotlight talk) 10:30 - 11:00 - Coffee break + Poster Session 11:00 - 12:00 - Chad Frischmann (Project Drawdown): Achieving Drawdown (Keynote talk) 12:00 - 1:30 - Networking lunch (provided) + Poster Session 1:30 - 1:55 - Yoshua Bengio (Mila): Personalized Visualization of the Impact of Climate Change (Invited talk) 1:55 - 2:30 - Claire Monteleoni (CU Boulder): Advances in Climate Informatics: Machine Learning for the Study of Climate Change (Invited talk) 2:30 - 2:40 - Duncan Watson-Parris: Detecting anthropogenic cloud perturbations with deep learning (Spotlight talk) 2:40 - 2:50 - Chaopeng Shen: Evaluating aleatoric and epistemic uncertainties of time series deep learning models for soil moisture predictions (Spotlight talk) 2:50 - 3:00 - Mohammad Mahdi Kamani: Targeted Meta-Learning for Critical Incident Detection in Weather Data (Spotlight talk) 3:00 - 3:30 - Coffee break + Poster Session 3:30 - 3:45 - Karthik Mukkavilli (Mila): Geoscience data and models for the Climate Change AI community (Invited talk) 3:45 - 4:20 - Sims Witherspoon (DeepMind): ML vs. Climate Change, Applications in Energy at DeepMind (Invited talk)
4:20 - 4:30 - Lynn Kaack: Truck Traffic Monitoring with Satellite Images (Spotlight talk) 4:30 - 4:40 - Neel Guha: Machine Learning for AC Optimal Power Flow (Spotlight talk) 4:40 - 4:50 - Christian Clough, Gopal Erinjippurath: Planetary Scale Monitoring of Urban Growth in High Flood Risk Areas (Spotlight talk) 4:50 - 5:15 - "Ideas" mini-spotlights 5:15 - 6:00 - Panel discussion: Yoshua Bengio, Andrew Ng, Raia Hadsell, John Platt, Claire Monteleoni

David Rolnick (UPenn) Alexandre Lacoste (ElementAI) Tegan Maharaj (MILA) Jennifer Chayes (Microsoft) Narmada Balasooriya (ConscientAI) Di Wu (MILA) Priya Donti (CMU) Lynn Kaack (ETH Zürich) Manvitha Ponnapati (MIT)

Works are submitted to one of three tracks: Research, Deployed, or Ideas.

(1) Policy Search with Non-uniform State Representations for Environmental Sampling pdf Sandeep Manjanna (McGill University); Herke van Hoof (University of Amsterdam); Gregory Dudek (McGill University) Abstract: Surveying fragile ecosystems like coral reefs is important to monitor the effects of climate change. We present an adaptive sampling technique that generates efficient trajectories covering hotspots in the region of interest at a high rate. A key feature of our sampling algorithm is the ability to generate action plans for any new hotspot distribution using the parameters learned on other similar looking distributions.

(2) Modelling GxE with historical weather information improves genomic prediction in new environments Jussi Gillberg (Aalto University); Pekka Marttinen (Aalto University); Hiroshi Mamitsuka (Kyoto University); Samuel Kaski (Aalto University) Abstract: Interaction between the genotype and the environment ($G \times E$) has a strong impact on the yield of major crop plants. Recently $G \times E$ has been predicted from environmental and genomic covariates, but existing works have not considered generalization to new environments and years without access to in-season data. We study in silico the viability of $G \times E$ prediction under realistic constraints. We show that the environmental response of a new generation of untested Barley cultivars can be predicted in new locations and years using genomic data, machine learning and historical weather observations. Our results highlight the need for models of $G \times E$: non-linear effects clearly dominate linear ones, and the interaction between the soil type and daily rain is identified as the main driver for $G \times E$. Our study implies that genomic selection can be used to capture the yield potential in $G \times E$ effects for future growth seasons, providing a possible means to achieve yield improvements. $G \times E$ models are also needed to select for varieties that react favourably to changing climate conditions. For this purpose, the historical weather observations could be replaced by climate simulations to study the yield potential under various climate scenarios. This abstract summarizes the findings of a recently published article.

(3) Machine Learning empowered Occupancy Sensing for Smart Buildings pdf Han Zou (UC Berkeley); Hari Prasanna Das (UC Berkeley); Jianfei Yang (Nanyang Technological University); Yuxun Zhou (UC Berkeley); Costas Spanos (UC Berkeley) Abstract: Over half of the global electricity consumption is attributed to buildings, which are often operated poorly from an energy perspective.
Significant improvements in energy efficiency can be achieved via intelligent building control techniques. To realize such advanced control schemes, accurate and robust occupancy information is highly valuable. In this work, we present a cutting-edge WiFi sensing platform and state-of-the-art machine learning methods to address longstanding occupancy sensing challenges in smart buildings. Our systematic solution provides comprehensive fine-grained occupancy information in a non-intrusive and privacy-preserving manner, which facilitates eco-friendly and sustainable buildings.

(4) Focus and track: pixel-wise spatio-temporal hurricane tracking pdf Sookyung Kim (Lawrence Livermore National Laboratory); Sunghyun Park (Korea University); Sunghyo Chung (Korea University); Yunsung Lee (Korea University); Hyojin Kim (LLNL); Joonseok Lee (Google Research); Jaegul Choo (Korea University); Mr Prabhat (Lawrence Berkeley National Laboratory) Abstract: We tackle the extreme climate event tracking problem, which poses unique challenges compared with other visual object tracking problems, including a wider range of spatio-temporal dynamics, blurred boundaries of the target, and a shortage of labeled datasets. In this paper, we propose a simple but robust end-to-end model based on multi-layered ConvLSTM, suitable for the climate event tracking problem. It first learns to imprint the location and appearance of the target at the first frame in a one-shot auto-encoding fashion, and then the learned feature is consumed by the tracking module to track the target in subsequent time frames. To tackle the data shortage problem, we propose data augmentation based on Social GAN. Extensive experiments show that the proposed framework significantly improves tracking performance on the hurricane tracking task over several state-of-the-art methods.

(5) Recovering the parameters underlying the Lorenz-96 chaotic dynamics pdf Soukayna Mouatadid (University of Toronto); Pierre Gentine (Columbia University); Wei Yu (University of Toronto); Steve Easterbrook (University of Toronto) Abstract: Climate projections suffer from uncertain equilibrium climate sensitivity. The reason behind this uncertainty is the resolution of global climate models, which is too coarse to resolve key processes such as clouds and convection. These processes are approximated using heuristics in a process called parameterization. The selection of these parameters can be subjective, leading to significant uncertainties in the way clouds are represented in global climate models. Here, we explore three deep network algorithms to infer these parameters in an objective and data-driven way. We compare the performance of a fully-connected network, a one-dimensional and a two-dimensional convolutional network to recover the underlying parameters of the Lorenz-96 model, a non-linear dynamical system that behaves similarly to the climate system.

(6) Using Bayesian Optimization to Improve Solar Panel Performance by Developing Antireflective, Superomniphobic Glass Sajad Haghanifar (University of Pittsburgh); Bolong Cheng (SigOpt); Mike Mccourt (SigOpt); Paul Leu (University of Pittsburgh) Abstract: Photovoltaic solar panel efficiency is dependent on photons transmitting through the glass sheet covering and into the crystalline silicon solar cells within. However, complications such as soiling and light reflection degrade performance.
Our goal is to identify a fabrication process to produce glass which promotes photon transmission and is superomniphobic (repels fluids), for easier cleaning. In this paper, we propose adapting Bayesian optimization to efficiently search the space of possible glass fabrication strategies; in this search we balance three competing objectives (transmittance, haze and oil contact angle). We present the glass generated from this Bayesian optimization strategy and detail its properties relevant to photovoltaic solar power.

(7) A quantum mechanical approach for data assimilation in climate dynamics pdf Dimitrios Giannakis (Courant Institute of Mathematical Sciences, New York University); Joanna Slawinska (University of Wisconsin-Milwaukee); Abbas Ourmazd (University of Wisconsin-Milwaukee) Abstract: A framework for data assimilation in climate dynamics is presented, combining aspects of quantum mechanics, Koopman operator theory, and kernel methods for machine learning. This approach adapts the Dirac-von Neumann formalism of quantum dynamics and measurement to perform data assimilation (filtering) of climate dynamics, using the Koopman operator governing the evolution of observables as an analog of the Heisenberg operator in quantum mechanics, and a quantum mechanical density operator to represent the data assimilation state. The framework is implemented in a fully empirical, data-driven manner, using kernel methods for machine learning to represent the evolution and measurement operators via matrices in a basis learned from time-ordered observations. Applications to data assimilation of the Nino 3.4 index for the El Nino Southern Oscillation (ENSO) in a comprehensive climate model show promising results.

(8) Data-driven Chance Constrained Programming based Electric Vehicle Penetration Analysis pdf Di Wu (McGill); Tracy Cui (Google NYC); Doina Precup (McGill University); Benoit Boulet (McGill) Abstract: Transportation electrification has been growing rapidly in recent years. The adoption of electric vehicles (EVs) could help to reduce the dependency on oil and cut greenhouse gas emissions. However, increasing EV adoption will also impose a high demand on the power grid and may jeopardize the grid network infrastructure. In areas of high EV penetration, the EV charging demand may lead to transformer overloading at peak hours, which makes maximal EV penetration analysis an urgent problem to solve. This paper proposes a data-driven chance constrained programming based framework for maximal EV penetration analysis. Simulation results are presented for a real-world neighborhood-level network. The proposed framework could serve as guidance for utility companies to schedule infrastructure upgrades.

(9) (Spotlight: 4:30PM) Machine Learning for AC Optimal Power Flow pdf Neel Guha (Carnegie Mellon University); Zhecheng Wang (Stanford University); Arun Majumdar (Stanford University) Abstract: We explore machine learning methods for AC Optimal Power Flow (ACOPF) - the task of optimizing power generation in a transmission network while respecting physical and engineering constraints. We present two formulations of ACOPF as a machine learning problem: 1) an end-to-end prediction task where we directly predict the optimal generator settings, and 2) a constraint prediction task where we predict the set of active constraints in the optimal solution. We validate these approaches on two benchmark grids.
(10) (Spotlight: 2:50PM) Targeted Meta-Learning for Critical Incident Detection in Weather Data pdf Mohammad Mahdi Kamani (The Pennsylvania State University); Sadegh Farhang (Pennsylvania State University); Mehrdad Mahdavi (Penn State); James Z Wang (The Pennsylvania State University) Abstract: Due to the imbalanced or heavy-tailed nature of weather- and climate-related datasets, the performance of standard deep learning models significantly deviates from their expected behavior on test data. Classical methods to address these issues are mostly data or application dependent, hence burdensome to tune. Meta-learning approaches, on the other hand, aim to learn hyperparameters in the learning process using different objective functions on training and validation data. However, these methods suffer from high computational complexity and are not scalable to large datasets. In this paper, we aim to apply a novel framework named targeted meta-learning to rectify this issue, and show its efficacy in dealing with the aforementioned biases in datasets. This framework employs a small, well-crafted target dataset that resembles the desired nature of test data in order to guide the learning process in a coupled manner. We empirically show that this framework can overcome the bias issue, common to weather-related datasets, in a bow echo detection case study.

(11) ClimateNet: Bringing the power of Deep Learning to weather and climate sciences via open datasets and architectures Karthik Kashinath (Lawrence Berkeley National Laboratory); Mayur Mudigonda (UC Berkeley); Kevin Yang (UC Berkeley); Jiayi Chen (UC Berkeley); Annette Greiner (Lawrence Berkeley National Laboratory); Mr Prabhat (Lawrence Berkeley National Laboratory) Abstract: Pattern recognition tasks such as classification, object detection and segmentation have remained challenging problems in the weather and climate sciences. While there exist many empirical heuristics for detecting weather patterns and extreme events, the disparities between the output of these different methods even for a single event are large and often difficult to reconcile. Given the success of Deep Learning in tackling similar problems in computer vision, we advocate a DL-based approach. However, DL works best in the context of supervised learning, when labeled datasets are readily available. Reliable, labeled training data is scarce in climate science. 'ClimateNet' is an effort to solve this problem by creating open, community-sourced expert-labeled datasets that capture information pertaining to class or pattern labels, bounding boxes and segmentation masks. In this paper we present the motivation, design and status of the ClimateNet dataset and associated model architecture.

(12) Improving Subseasonal Forecasting in the Western U.S. with Machine Learning Paulo Orenstein (Stanford); Jessica Hwang (Stanford); Judah Cohen (AER); Karl Pfeiffer (AER); Lester Mackey (Microsoft Research New England) Abstract: Water managers in the western United States (U.S.) rely on long-term forecasts of temperature and precipitation to prepare for droughts and other wet weather extremes. To improve the accuracy of these long-term forecasts, the Bureau of Reclamation and the National Oceanic and Atmospheric Administration (NOAA) launched the Subseasonal Climate Forecast Rodeo, a year-long real-time forecasting challenge, in which participants aimed to skillfully predict temperature and precipitation in the western U.S.
two to four weeks and four to six weeks in advance. We present and evaluate our machine learning approach to the Rodeo and release our SubseasonalRodeo dataset, collected to train and evaluate our forecasting system. Our predictive system is an ensemble of two regression models, and it exceeds the top Rodeo competitor as well as the government baselines for each target variable and forecast horizon.

(13) Unsupervised Temporal Clustering to Monitor the Performance of Alternative Fueling Infrastructure pdf Kalai Ramea (PARC) Abstract: Zero Emission Vehicles (ZEV) play an important role in the decarbonization of the transportation sector. For a wider adoption of ZEVs, providing a reliable infrastructure is critical. We present a machine learning approach that uses an unsupervised temporal clustering algorithm along with survey analysis to determine infrastructure performance and reliability of alternative fuels. We illustrate this approach for the hydrogen fueling stations in California, but it can be generalized to other regions and fuels.

(14) A Flexible Pipeline for Prediction of Tropical Cyclone Paths pdf Niccolo Dalmasso (Carnegie Mellon University); Robin Dunn (Carnegie Mellon University); Benjamin LeRoy (Carnegie Mellon University); Chad Schafer (Carnegie Mellon University) Abstract: Hurricanes and, more generally, tropical cyclones (TCs) are rare, complex natural phenomena of both scientific and public interest. The importance of understanding TCs in a changing climate has increased as recent TCs have had devastating impacts on human lives and communities. Moreover, good prediction and understanding of the complex nature of TCs can mitigate some of these human and property losses. Though TCs have been studied from many different angles, more work is needed from a statistical approach of providing prediction regions. The current state-of-the-art in TC prediction bands comes from the National Hurricane Center at NOAA, whose proprietary model provides "cones of uncertainty" for TCs through an analysis of historical forecast errors. The contribution of this paper is twofold. We introduce a new pipeline that encourages transparent and adaptable prediction band development by streamlining cyclone track simulation and prediction band generation. We also provide updates to existing models and novel statistical methodologies in both areas of the pipeline, respectively.

(15) Mapping land use and land cover changes faster and at scale with deep learning on the cloud pdf Zhuangfang Yi (Development Seed); Drew Bollinger (Development Seed); Devis Peressutti (Sinergise) Abstract: Policymakers rely on Land Use and Land Cover (LULC) maps for evaluation and planning. They use these maps to plan climate-smart agriculture policy, improve housing resilience (to earthquakes or other natural disasters), and understand how to grow commerce in small communities. A number of institutions have created global land use maps from historic satellite imagery. However, these maps can be outdated and are often inaccurate, particularly in their representation of developing countries. We worked with the European Space Agency (ESA) to develop a LULC deep learning workflow on the cloud that can ingest Sentinel-2 optical imagery for large-scale LULC change detection. It's an end-to-end workflow that sits on top of two comprehensive tools, SentinelHub and eo-learn, which seamlessly link earth observation data with machine learning libraries.
It can take in labeled LULC data and the associated AOI as shapefiles, and set up a task to fetch cloud-free, time-series imagery stacks within the time interval defined by the users. It will pair each satellite imagery tile with its labeled LULC mask for supervised deep learning model training on the cloud. Once a well-performing model is trained, it can be exported as a Tensorflow/Pytorch serving docker image to work with our cloud-based model inference pipeline. The inference pipeline can automatically scale with the number of images to be processed. Changes in land use are heavily influenced by human activities (e.g. agriculture, deforestation, human settlement expansion) and have been a great source of greenhouse gas emissions. Sustainable forest and land management practices vary from region to region, which means having flexible, scalable tools will be critical. With these tools, we can empower analysts, engineers, and decision-makers to see where contributions to climate-smart agricultural, forestry and urban resilience programs can be made.

(16) Achieving Conservation of Energy in Neural Network Emulators for Climate Modeling pdf Tom G Beucler (Columbia University & UCI); Stephan Rasp (Ludwig-Maximilian University of Munich); Michael Pritchard (UCI); Pierre Gentine (Columbia University) Abstract: Artificial neural-networks have the potential to emulate cloud processes with higher accuracy than the semi-empirical emulators currently used in climate models. However, neural-network models do not intrinsically conserve energy and mass, which is an obstacle to using them for long-term climate predictions. Here, we propose two methods to enforce linear conservation laws in neural-network emulators of physical models: constraining (1) the loss function or (2) the architecture of the network itself. Applied to the emulation of explicitly-resolved cloud processes in a prototype multi-scale climate model, we show that architecture constraints can enforce conservation laws to satisfactory numerical precision, while all constraints help the neural network better generalize to conditions outside of its training set, such as global warming.

(17) The Impact of Feature Causality on Normal Behaviour Models for SCADA-based Wind Turbine Fault Detection pdf Telmo Felgueira (IST) Abstract: The cost of wind energy can be reduced by using SCADA data to detect faults in wind turbine components. Normal behavior models are one of the main fault detection approaches, but there is a lack of work on how different input features affect the results. In this work, a new taxonomy based on the causal relations between the input features and the target is presented. Based on this taxonomy, the impact of different input feature configurations on the modelling and fault detection performance is evaluated. To this end, a framework that formulates the detection of faults as a classification problem is also presented.

(18) Predicting CO2 Plume Migration using Deep Neural Networks pdf Gege Wen (Stanford University) Abstract: Carbon capture and sequestration (CCS) is an essential climate change mitigation technology for achieving the 2 degree C target. Numerical simulation of CO2 plume migration in the subsurface is a prerequisite to effective CCS projects. However, stochastic high spatial resolution simulations are currently limited by computational resources.
We propose a deep neural network approach to predict CO2 plume migration in high-dimensional systems with complex geology. Upon training, the network is able to give accurate predictions that are 6 orders of magnitude faster than traditional numerical simulators. This approach can easily be adopted for history-matching and uncertainty analysis problems to support the scale-up of CCS deployment.

(19) (Spotlight: 4:20PM) Truck Traffic Monitoring with Satellite Images Lynn Kaack (ETH Zurich); George H Chen (Carnegie Mellon University); Granger Morgan (Carnegie Mellon University) Abstract: The road freight sector is responsible for a large and growing share of greenhouse gas emissions, but reliable data on the amount of freight that is moved on roads in many parts of the world are scarce. Many low- and middle-income countries have limited ground-based traffic monitoring and freight surveying activities. In this proof of concept, we show that we can use an object detection network to count trucks in satellite images and predict average annual daily truck traffic from those counts. We describe a complete model, test the uncertainty of the estimation, and discuss the transfer to developing countries.

(20) (Spotlight: 2:40PM) Evaluating aleatoric and epistemic uncertainties of time series deep learning models for soil moisture predictions pdf Chaopeng Shen (Pennsylvania State University) Abstract: Soil moisture is an important variable that determines floods, vegetation health, agricultural productivity, and land surface feedbacks to the atmosphere, among others. The recently available satellite-based observations give us a unique opportunity to directly build data-driven models to predict soil moisture instead of using land surface models, but previously there was no uncertainty estimate. We tested Monte Carlo dropout with an aleatoric term (MCD+A) for our long short-term memory models for this problem, and ask whether the uncertainty terms behave as they were argued to. We show that MCD+A indeed gave a good estimate of our predictive error, provided we tune a hyperparameter and use a representative training dataset. The aleatoric term responded strongly to observational noise, and the epistemic term clearly acted as a detector for physiographic dissimilarity from the training data. However, when the training and test data are characteristically different, the aleatoric term could be misled, undermining its reliability. We will also discuss some of the major challenges for which we anticipate the geoscientific communities will need help from computer scientists in applying AI to climate or hydrologic modeling.

(21) (Spotlight: 2:30PM) Detecting anthropogenic cloud perturbations with deep learning pdf Duncan Watson-Parris (University of Oxford); Sam Sutherland (University of Oxford); Matthew Christensen (University of Oxford); Anthony Caterini (University of Oxford); Dino Sejdinovic (University of Oxford); Philip Stier (University of Oxford) Abstract: One of the most pressing questions in climate science is that of the effect of anthropogenic aerosol on the Earth's energy balance. Aerosols provide the 'seeds' on which cloud droplets form, and changes in the amount of aerosol available to a cloud can change its brightness and other physical properties such as optical thickness and spatial extent. Clouds play a critical role in moderating global temperatures, and small perturbations can lead to significant amounts of cooling or warming.
Uncertainty in this effect is so large that it is not currently known whether it is negligible, or provides a large enough cooling to largely negate present-day warming by CO2. This work uses deep convolutional neural networks to look for two particular perturbations in clouds due to anthropogenic aerosol and assess their properties and prevalence, providing valuable insights into their climatic effects.

(22) Data-driven surrogate models for climate modeling: application of echo state networks, RNN-LSTM and ANN to the multi-scale Lorenz system as a test case pdf Ashesh K Chattopadhyay (Rice University); Pedram Hassanzadeh (Rice University); Devika Subramanian (Rice University); Krishna Palem (Rice University); Charles Jiang (Rice University); Adam Subel (Rice University) Abstract: Understanding the effects of climate change relies on physics-driven, computationally expensive climate models which are still imperfect owing to ineffective subgrid-scale parametrization. An effective way to treat the ineffective parametrization of largely uncertain subgrid-scale processes is data-driven surrogate modeling with machine learning techniques. These surrogate models train on observational data, capturing either the embeddings of the subgrid-scale processes' underlying dynamics on the large-scale processes, or simulating the subgrid processes accurately enough to be fed into the large-scale processes. In this paper an extended version of the Lorenz 96 system is studied, which consists of three equations for a set of slow, intermediate, and fast variables, providing a fitting prototype for multi-scale, spatio-temporal chaos and, in particular, the complex dynamics of the climate system. In this work, we have built a data-driven model based on echo state networks (ESN) aimed specifically at climate modeling. This model can predict the spatio-temporal chaotic evolution of the Lorenz system for several Lyapunov timescales. We show that the ESN model outperforms, in terms of the prediction horizon, a deep learning technique based on a recurrent neural network (RNN) with long short-term memory (LSTM) and an artificial neural network by factors between 3 and 10. The results suggest that ESN has the potential for being a powerful method for surrogate modeling and data-driven prediction for problems of interest to the climate community.

(23) Learning Radiative Transfer Models for Climate Change Applications in Imaging Spectroscopy pdf Shubhankar V Deshpande (Carnegie Mellon University), Brian D Bue (NASA JPL/Caltech), David R Thompson (NASA JPL/Caltech), Vijay Natraj (NASA JPL/Caltech), Mario Parente (UMass Amherst) Abstract: According to a recent investigation, an estimated 33-50% of the world's coral reefs have undergone degradation, believed to be a result of climate change. A strong driver of climate change and the subsequent environmental impact are greenhouse gases such as methane. However, the exact relation climate change has to the environmental condition cannot be easily established. Remote sensing methods are increasingly being used to quantify and draw connections between rapidly changing climatic conditions and environmental impact. A crucial part of this analysis is processing spectroscopy data using radiative transfer models (RTMs), which is a computationally expensive process and limits their use with high-volume imaging spectrometers.
This work presents an algorithm that can efficiently emulate RTMs using neural networks, leading to a multifold speedup in processing time and yielding multiple downstream benefits.

(24) (Spotlight: 4:40PM) Planetary Scale Monitoring of Urban Growth in High Flood Risk Areas pdf Christian F Clough (Planet); Ramesh Nair (Planet); Gopal Erinjippurath (Planet); Matt George (Planet); Jesus Martinez Manso (Planet) Abstract: Climate change is increasing the incidence of flooding. Many areas in the developing world are experiencing strong population growth but lack adequate urban planning. This represents a significant humanitarian risk. We explore the use of high-cadence satellite imagery provided by Planet, whose flock of over one hundred 'Dove' satellites images the entire earth's landmass every day at 3-5m resolution. We use a deep learning-based computer vision approach to measure flood-related humanitarian risk in 5 cities in Africa.

(25) Efficient Multi-temporal and In-season Crop Mapping with Landsat Analysis Ready Data via Long Short-term Memory Networks pdf Jinfan Xu (Zhejiang University); Renhai Zhong (Zhejiang University); Jialu Xu (Zhejiang University); Haifeng Li (Central South University); Jingfeng Huang (Zhejiang University); Tao Lin (Zhejiang University) Abstract: Global crop analysis from plentiful satellite images yields state-of-the-art results for estimating climate change impacts on agriculture with modern machine learning technology. Generating accurate and timely crop mapping across years remains a scientific challenge, since existing non-temporal classifiers are hardly capable of capturing complicated temporal links from multi-temporal remote sensing data and adapting to interannual variability. We developed an LSTM-based model trained on previous years to distinguish corn and soybean for the current year. The results showed that LSTM outperformed the random forest baseline in both in-season and end-of-the-season crop type classification. The improved performance is a result of the cumulative effect of remote sensing information that has been learned by the LSTM model structure. The work provides a valuable opportunity for estimating the impact of climate change on crop yield and early warning of extreme weather events in the future.

Deployed Track

(26) Autopilot of Cement Plants for Reduction of Fuel Consumption and Emissions pdf Prabal Acharyya (Petuum Inc); Sean D Rosario (Petuum Inc); Roey Flor (Petuum Inc); Ritvik Joshi (Petuum Inc); Dian Li (Petuum Inc); Roberto Linares (Petuum Inc); Hongbao Zhang (Petuum Inc) Abstract: The cement manufacturing industry is an essential component of the global economy and infrastructure. However, cement plants inevitably produce hazardous air pollutants, including greenhouse gases, and heavy metal emissions as byproducts of the process. Byproducts from cement manufacturing alone account for approximately 5% of global carbon dioxide (CO2) emissions. We have developed "Autopilot" - a machine learning based Software as a Service (SaaS) that learns manufacturing process dynamics and optimizes the operation of cement plants - in order to reduce the overall fuel consumption and emissions of cement production. Autopilot is able to increase the ratio of alternative fuels (including biowaste and tires) to petroleum coke, while optimizing operation of pyro, the core process of cement production that includes the preheater, kiln and cooler.
Emissions of gases such as NOx and SOx, and heavy metals such as mercury and lead, which are generated through burning petroleum coke, can be reduced through the use of Autopilot. Our system has been proven to work in real-world deployments, and an analysis of cement plant performance with Autopilot enabled shows energy consumption savings and a decrease of up to 28,000 metric tons of CO2 produced per year.

(27) (Spotlight: 10:10AM) Towards a Sustainable Food Supply Chain Powered by Artificial Intelligence Volodymyr Kuleshov (Stanford University) Abstract: About 30-40% of food produced worldwide is wasted. This puts a severe strain on the environment and represents a $165B loss to the US economy. This paper explores how artificial intelligence can be used to automate decisions across the food supply chain in order to reduce waste and increase the quality and affordability of food. We focus our attention on supermarkets — combined with downstream consumer waste, these contribute to 40% of total US food losses — and we describe an intelligent decision support system for supermarket operators that optimizes purchasing decisions and minimizes losses. The core of our system is a model-based reinforcement learning engine for perishable inventory management; in a real-world pilot with a US supermarket chain, our system reduced waste by up to 50%. We hope that this paper will bring the food waste problem to the attention of the broader machine learning research community.

(28) PVNet: A LRCN Architecture for Spatio-Temporal Photovoltaic Power Forecasting from Numerical Weather Prediction pdf Johan Mathe (Frog Labs) Abstract: Photovoltaic (PV) power generation has emerged as one of the leading renewable energy sources. Yet, its production is characterized by high uncertainty, being dependent on weather conditions like solar irradiance and temperature. Predicting PV production, even in the 24-hour forecast, remains a challenge and leads energy providers to leave idling - often carbon-emitting - plants. In this paper, we introduce a Long-Term Recurrent Convolutional Network using Numerical Weather Predictions (NWP) to predict PV production in the 24-hour and 48-hour forecast horizons. This network architecture fully leverages both temporal and spatial weather data, sampled over the whole geographical area of interest. We train our model on a prediction dataset from the National Oceanic and Atmospheric Administration (NOAA) to predict spatially aggregated PV production in Germany. We compare its performance to the persistence model and state-of-the-art methods.

(29) Finding Ship-tracks Using Satellite Data to Enable Studies of Climate and Trade Related Issues Tianle Yuan (NASA) Abstract: Ship-tracks appear as long winding linear features in satellite images and are produced by aerosols from ship exhausts changing low cloud properties. They are one of the best examples of aerosol-cloud interaction experiments, which is currently the largest source of uncertainty in our understanding of climate forcing. Manually finding ship-tracks in satellite data on a large scale is prohibitively costly, while a large number of samples are required to better understand aerosol-cloud interactions. Here we train a deep neural network to automate finding ship-tracks. The neural network model generalizes well, as it not only finds ship-tracks labeled by human experts, but also detects those that are occasionally missed by humans.
It increases our sampling capability of ship-tracks by orders of magnitude and produces a first global map of ship-track distributions using satellite data. Major shipping routes that are mapped by the algorithm correspond well with available commercial data. There are also situations where commercial data are missing shipping routes that are detected by our algorithm. Our technique will enable studying aerosol effects on low clouds using ship-tracks on a large scale, which will potentially narrow the uncertainty of the aerosol-cloud interactions. The product is also useful for applications such as coastal air pollution and trade.

(30) Using Smart Meter Data to Forecast Grid Scale Electricity Demand Abraham Stanway (Amperon Holdings, Inc); Ydo Wexler (Amperon) Abstract: Highly accurate electricity demand forecasts represent a major opportunity to create grid stability in light of the concurrent deployment of distributed renewables and energy storage, as well as the increasing occurrence of extreme weather events caused by climate change. We present an overview of a deployed machine learning system that accomplishes this task by using smart meter data (AMI) within the region governed by the Electric Reliability Council of Texas (ERCOT).

(31) (Spotlight: 10:30AM) Deep Learning for Wildlife Conservation and Restoration Efforts pdf Clement Duhart (MIT Media Lab) Abstract: Climate change and environmental degradation are causing species extinction worldwide. Automatic wildlife sensing is an urgent requirement to track biodiversity losses on Earth. Recent improvements in machine learning can accelerate the development of large-scale monitoring systems that would help track conservation outcomes and target efforts. In this paper, we present one such system we developed. 'Tidzam' is a Deep Learning framework for wildlife detection, identification, and geolocalization, designed for the Tidmarsh Wildlife Sanctuary, the site of the largest freshwater wetland restoration in Massachusetts.

Ideas Track

(32) (Spotlight: 4:50PM) Reinforcement Learning for Sustainable Agriculture pdf Jonathan Binas (Mila, Montreal); Leonie Luginbuehl (Department of Plant Sciences, University of Cambridge); Yoshua Bengio (Mila) Abstract: The growing population and the changing climate will push modern agriculture to its limits in an increasing number of regions on earth. Establishing next-generation sustainable food supply systems will mean producing more food on less arable land, while keeping the environmental impact to a minimum. Modern machine learning methods have achieved super-human performance on a variety of tasks, simply learning from the outcomes of their actions. We propose a path towards more sustainable agriculture, considering plant development an optimization problem with respect to certain parameters, such as yield and environmental impact, which can be optimized in an automated way. Specifically, we propose to use reinforcement learning to autonomously explore and learn ways of influencing the development of certain types of plants, controlling environmental parameters, such as irrigation or nutrient supply, and receiving sensory feedback, such as camera images, humidity, and moisture measurements. The trained system will thus be able to provide instructions for optimal treatment of a local population of plants, based on non-invasive measurements, such as imaging.
(33) (Spotlight: 4:55PM) Stratospheric Aerosol Injection as a Deep Reinforcement Learning Problem
Christian A Schroeder (University of Oxford); Thomas Hornigold (University of Oxford)
Abstract: As global greenhouse gas emissions continue to rise, the use of geoengineering in order to artificially mitigate climate change effects is increasingly considered. Stratospheric aerosol injection (SAI), which reduces solar radiative forcing and thus can be used to offset excess radiative forcing due to the greenhouse effect, is both technically and economically feasible. However, naive deployment of SAI has been shown in simulation to produce highly adversarial regional climatic effects in regions such as India and West Africa. Wealthy countries would most likely be able to trigger SAI unilaterally, i.e. China, Russia or the US could decide to fix their own climates and, as collateral damage, dry India out by disrupting the monsoon or induce termination effects with rapid warming. Understanding both how SAI can be optimised and how to best react to rogue injections is therefore of crucial geostrategic interest. In this paper, we argue that optimal SAI control can be characterised as a high-dimensional Markov Decision Process. This motivates the use of deep reinforcement learning in order to automatically discover non-trivial, and potentially time-varying, optimal injection policies or identify catastrophic ones. To overcome the inherent sample inefficiency of deep reinforcement learning, we propose to emulate a Global Circulation Model using deep learning techniques. To our knowledge, this is the first proposed application of deep reinforcement learning to the climate sciences.

(34) (Spotlight: 5:00PM) Using Natural Language Processing to Analyze Financial Climate Disclosures
Sasha Luccioni (Mila); Hector Palacios (Element AI)
Abstract: According to U.S. financial legislation, companies traded on the stock market are obliged to regularly disclose risks and uncertainties that are likely to affect their operations or financial position. Since 2010, these disclosures must also include climate-related risk projections. These disclosures therefore present a large quantity of textual information on which we can apply NLP techniques in order to pinpoint the companies that divulge their climate risks and those that do not, the types of vulnerabilities that are disclosed, and to follow the evolution of these risks over time.

(35) Machine Learning-based Maintenance for Renewable Energy: The Case of Power Plants in Morocco
Kris Sankaran (Montreal Institute for Learning Algorithms); Zouheir Malki (Polytechnique Montréal); Loubna Benabou (UQAR); Hicham Bouzekri (MASEN)
Abstract: In this project, the focus will be on the reduction of the overall electricity cost through the reduction of operating expenditures, including maintenance costs. We propose a predictive maintenance (PdM) framework for multi-component systems in renewable power plants based on machine learning (ML) and optimization approaches. This project would benefit from a real database acquired from the Moroccan Agency of Sustainable Energy (MASEN), which owns and operates several wind, solar and hydro power plants spread over Moroccan territory. Since 2009, Morocco has pursued an ambitious energy strategy that aims to ensure the energy security of the country, diversify its sources of energy and preserve the environment.
Ultimately, Morocco has set a target of 52% renewables by 2030, backed by a capital investment of USD 30 billion. To this end, Morocco will install 10 GW allocated as follows: 45% for solar, 42% for wind and 13% for hydro. Through the commitment of many actors, in particular in research and development, Morocco intends to become a regional leader and a model to follow in its climate change efforts. MASEN is investing in several strategies to reduce the cost of renewables, including the cost of operations and maintenance. Our project will provide a ML predictive maintenance framework to support these efforts.

(36) GainForest: Scaling Climate Finance for Forest Conservation using Interpretable Machine Learning on Satellite Imagery
David Dao (ETH); Ce Zhang (ETH); Nick Beglinger (Cleantech21); Catherine Cang (UC Berkeley); Reuven Gonzales (OasisLabs); Ming-Da Liu Zhang (ETHZ); Nick Pawlowski (Imperial College London); Clement Fung (University of British Columbia)
Abstract: Designing effective REDD+ policies, assessing their GHG impact, and linking them with the corresponding payments is a resource-intensive and complex task. GainForest leverages video prediction with remote sensing to monitor and forecast forest change at high resolution. Furthermore, by viewing payment allocation as a feature selection problem, GainForest can efficiently design payment schemes based on the Shapley value.

(37) Machine Intelligence for Floods and the Built Environment Under Climate Change
Kate Duffy (Northeastern University); Auroop Ganguly (Northeastern University)
Abstract: While intensification of precipitation extremes has been attributed to anthropogenic climate change using statistical analysis and physics-based numerical models, understanding floods in a climate context remains a grand challenge. Meanwhile, an increasing volume of Earth science data from climate simulations, remote sensing, and Geographic Information System (GIS) tools offers an opportunity for data-driven insight and action plans. Defining Machine Intelligence (MI) broadly to include machine learning and network science, here we develop a vision and use preliminary results to showcase how scientific understanding of floods can be improved in a climate context and translated to impacts, with a focus on Critical Lifeline Infrastructure Networks (CLIN).

(38) Predicting Marine Heatwaves using Global Climate Models with Cluster Based Long Short-Term Memory
Hillary S Scannell (University of Washington); Chris Fraley (Tableau Software); Nathan Mannheimer (Tableau Software); Sarah Battersby (Tableau Software); LuAnne Thompson (University of Washington)
Abstract: Marine heatwaves make human and natural systems vulnerable to disaster risk through the disruption of ecological services and biological function. These extreme warming events in sea surface temperature are expected to become more frequent and longer lasting as a result of climate change. Large ensembles of global climate models now provide petabytes of climate-relevant data and an opportunity to probe machine learning to glean new insights about the climate conditions that cause marine heatwaves. Here we propose a k-means cluster-based learning objective to map the geography of marine heatwave drivers globally and to build a forecast for extreme sea surface temperatures using Long Short-Term Memory.
We describe our machine learning approach to predict when and where future marine heatwaves will occur while leveraging the massive output of data from global climate models, where traditional forecasting approaches fall short. This work could help warn coastal communities by providing a forecast for marine heatwaves, which would mitigate the negative effects on fishery productivity, ecosystem health, and tourism.

(39) (Spotlight: 5:05PM) ML-driven search for zero-emissions ammonia production materials
Kevin McCloskey (Google)
Abstract: Ammonia (NH3) production is an industrial process that consumes between 1% and 2% of global energy annually and is responsible for 2-3% of greenhouse gas emissions (Van der Ham et al., 2014). Ammonia is primarily used for agricultural fertilizers, but it also conforms to the US-DOE targets for hydrogen storage materials (Lan et al., 2012). Modern industrial facilities use the century-old Haber-Bosch process, whose energy usage and carbon emissions are strongly dominated by the use of methane as the combined energy source and hydrogen feedstock, not by the energy used to maintain elevated temperatures and pressures (Pfromm, 2017). Generating the hydrogen feedstock with renewable electricity through water electrolysis is an option that would allow retrofitting the billions of dollars of invested capital in Haber-Bosch production capacity. Economic viability is however strongly dependent on the relative regional prices of methane and renewable energy; renewables have been trending lower in cost, but forecasting methane prices is difficult (Stehly et al., 2018; IRENA, 2017; Wainberg et al., 2017). Electrochemical ammonia production, which can use aqueous or steam H2O as its hydrogen source (first demonstrated ~20 years ago), is a promising means of emissions-free ammonia production. Its viability is also linked to the relative price of renewable energy versus methane, but in principle it can be significantly more cost-effective than Haber-Bosch (Giddey et al., 2013) and can also downscale to developing areas lacking ammonia transport infrastructure (Shipman & Symes, 2017). However, to date it has only been demonstrated at laboratory scales, with yields and Faradaic efficiencies insufficient to be economically competitive. Promising machine-learning approaches to fix this are discussed.

(40) (Spotlight: 5:10PM) Low-carbon urban planning with machine learning
Nikola Milojevic-Dupont (Mercator Research Institute on Global Commons and Climate Change (MCC)); Felix Creutzig (Mercator Research Institute on Global Commons and Climate Change (MCC))
Abstract: Widespread climate action is urgently needed, but current solutions do not sufficiently account for local differences. Here, we take the example of cities to point to the potential of machine learning (ML) for generating high-resolution information on energy use and greenhouse gas (GHG) emissions at scale, and for making this information actionable for concrete solutions. We map the existing relevant ML literature and articulate ML methods that can make sense of spatial data for climate solutions in cities. Machine learning has the potential to find solutions that are tailored for each settlement, and to transfer solutions across the world.
(41) The Grid Resilience & Intelligence Platform (GRIP)
Ashley Pilipiszyn (Stanford University)
Abstract: Extreme weather events pose an enormous and increasing threat to the nation's electric power systems and the associated socio-economic systems that depend on reliable delivery of electric power. The US Department of Energy reported in 2015 that almost a quarter of unplanned grid outages were caused by extreme weather events and variability in the environment. Because climate change increases the frequency and severity of extreme weather events, communities everywhere will need to take steps to better prepare for, and if possible prevent, major outages. While utilities have software tools available to help plan their daily and future operations, these tools do not include capabilities to help them plan for and recover from extreme events. Software for resilient design and recovery is not available commercially, and research efforts in this area are preliminary. In this project, we are developing and deploying a suite of novel software tools to anticipate, absorb and recover from extreme events. The innovations in the project include the application of artificial intelligence and machine learning for distribution grid resilience, specifically by using predictive analytics, image recognition and classification, and increased learning and problem-solving capabilities for the anticipation of grid events.

(42) Meta-Optimization of Optimal Power Flow
Mahdi Jamei (Invenia Labs); Letif Mones (Invenia Labs); Alex Robson (Invenia Labs); Lyndon White (Invenia Labs); James Requeima (Invenia Labs); Cozmin Ududec (Invenia Labs)
Abstract: The planning and operation of electricity grids is carried out by solving various forms of constrained optimization problems. With the increasing variability of system conditions due to the integration of renewable and other distributed energy resources, such optimization problems are growing in complexity and need to be repeated daily, often limited to a 5-minute solve time. To address this, we propose a meta-optimizer that is used to initialize interior-point solvers. This can significantly reduce the number of iterations required to converge to optimality.

(43) Learning representations to predict landslide occurrences and detect illegal mining across multiple domains
Aneesh Rangnekar (Rochester Institute of Technology); Matthew J Hoffman (Rochester Institute of Technology)
Abstract: Modelling landslide occurrences is challenging due to a lack of valuable prior information on the trigger. Satellites can provide crucial insights for identifying landslide activity and characterizing patterns spatially and temporally. We propose to analyze remote sensing data from affected regions using deep learning methods, find correlations in the changes over time, and predict future landslide occurrences and their potential causes. The learned networks can then be applied to generate task-specific imagery, including but not limited to illegal mining detection and disaster relief modelling.

(44) Harness the Power of Artificial Intelligence and -Omics to Identify Soil Microbial Functions in Climate Change Projection
Yang Song (Oak Ridge National Lab); Dali Wang (Oak Ridge National Lab); Melanie Mayes (Oak Ridge National Lab)
Abstract: Contemporary Earth system models (ESMs) omit one of the significant drivers of the terrestrial carbon cycle: soil microbial communities.
Soil microbial communities not only directly emit greenhouse gases into the atmosphere through respiration, but also release diverse enzymes that catalyze the decomposition of soil organic matter and determine nutrient availability for aboveground vegetation. Therefore, soil microbial communities control terrestrial carbon dynamics and their feedbacks to climate. Currently, the inadequate representation of soil microbial communities in ESMs has introduced significant uncertainty into current terrestrial carbon-climate feedbacks. Mitigating this uncertainty requires identifying the functions, diversity, and environmental adaptation of soil microbial communities under global climate change. The revolution in -omics technology allows high-throughput quantification of diverse soil enzymes, enabling large-scale studies of microbial functions in climate change. Such studies may lead to revolutionary solutions for predicting microbial-mediated climate-carbon feedbacks at the global scale based on gene-level environmental adaptation strategies of the microbial community. A key initial step in this direction is to identify the biogeography and environmental adaptation of soil enzyme functions based on the massive amount of data generated by -omics technologies. Here we propose to make this step. Artificial intelligence is a powerful, ideal tool for this leap forward. Our project is to integrate artificial intelligence technologies and global -omics data to represent climate controls on microbial enzyme functions and to map the biogeography of soil enzyme functional groups at the global scale. The outcome of this study will allow us to improve the representation of microbial function in Earth system modeling and mitigate uncertainty in current climate projections.

About ICML

ICML is one of the premier conferences on machine learning, and includes a wide audience of researchers and practitioners in academia, industry, and related fields. It is possible to attend the workshop without either presenting or attending the main ICML conference. Those interested should register for the Workshops component of ICML at https://icml.cc/ while tickets last (a number of spots will be reserved for accepted submissions).

About the workshop

Location: Long Beach, California, USA
Submission deadline: April 30, 11:59 PM Pacific Time
Notification: May 15 (early notification possible on request)
Submission website: https://cmt3.research.microsoft.com/CCAI2019
Contact: [email protected]

We invite submission of extended abstracts on machine learning applied to problems in climate mitigation, adaptation, or modeling, including but not limited to the following topics:
Power generation and grids
Industrial optimization
Agriculture, forestry and other land use
Disaster management and relief
Societal adaptation
Ecosystems and natural resources
Data presentation and management

Accepted submissions will be invited to give poster presentations at the workshop, of which some will be selected for spotlight talks. Please contact [email protected] with questions, or if visa considerations make earlier notification important. Dual submissions are allowed, and the workshop does not record proceedings. All submissions must be made through the website. Submissions will be reviewed double-blind; do your best to anonymize your submission, and do not include identifying information for authors in the PDF. We encourage, but do not require, use of the ICML style template (please do not use the "Accepted" format as it will deanonymize your submission).
Submission tracks

Extended abstracts are limited to 3 pages for the Deployed and Research tracks, and 2 pages for the Ideas track, in PDF format. An additional page may be used for references. All machine learning techniques are welcome, from kernel methods to deep learning. Each submission should make clear why the application has (or could have) positive impacts regarding climate change. There are three tracks for submissions:

Work that is already having an impact
Submissions for the Deployed track are intended for machine learning approaches which are impacting climate-relevant problems through consumers or partner institutions. This could include implementations of academic research that have moved beyond the testing phase, as well as results from startups/industry. Details of methodology need not be revealed if they are proprietary, though transparency is encouraged.

Work that will have an impact when deployed
Submissions for the Research track are intended for machine learning research applied to climate-relevant problems. Submissions should provide experimental or theoretical validation of the method proposed, as well as specify what gap the method fills. Algorithms need not be novel from a machine learning perspective if they are applied in a novel setting. Datasets designed to permit machine learning research (e.g. formatted with clear benchmarks for evaluation) may also be submitted to this track. In this case, baseline experimental results on the dataset are preferred but not required.

Future work that could have an impact
Submissions for the Ideas track are intended for proposed applications of machine learning to solve climate-relevant problems. While the least constrained, this track will be subject to a very high standard of review. No results need be demonstrated, but ideas should be justified as extensively as possible, including motivation for the problem being solved, an explanation of why current tools or methods are inadequate, and details of how tools from machine learning are proposed to fill the gap (i.e. it is important to justify the use of machine learning in your approach).

Q: How can I keep up to date on this kind of stuff?
A: Sign up to our mailing list! https://www.climatechange.ai/Mailing_list.html

Q: What is the date of the workshop / when will we know?
A: Friday, June 14 was recently confirmed as the date.

Q: I'm not in machine learning. Can I still submit?
A: Yes, absolutely! We welcome submissions from many fields. Do bear in mind, however, that the majority of attendees of the workshop will have a machine learning background; therefore, other fields should be introduced sufficiently to provide context for the work.

Q: What if my submission is accepted but I can't attend the workshop?
A: You may ask someone else to present your work in your stead, or we can also print a poster for you and put it up during the poster session.

Q: Do I need to use LaTeX or the ICML style files?
A: No, although we encourage it.

Q: What do I do if I need an earlier decision for visa reasons?
A: Contact us at [email protected], explain your situation and the date by which you require a decision, and we will do our best to be accommodating.

Q: Can I send submissions directly by email?
A: No, please use the CMT website to make submissions.

Q: The submission website is asking for my name. Is this a problem for anonymization?
A: You should fill out your name and other info when asked on the submission website; CMT will keep your submission anonymous to reviewers.
Q: I don't know whether to submit my work in the Deployed or Research track. What's the difference?
A: Deployed means it's "really being used" in a real-world setting (i.e. not just that you verify your method on real-world data). If you are still unsure, just pick whichever track you would prefer your method to be evaluated in.

Q: Do submissions for the Ideas track need to have experimental validation?
A: No, although some initial experiments or citation of published results would strengthen your submission.

Q: The submission website never sent me a confirmation email. Is this a problem?
A: No, the CMT system does not send automatic confirmation emails after a submission, though the submission should show up on the CMT page once submitted. If in any doubt regarding the submission process, please contact the organizers. Also, please avoid making multiple submissions of the same article to CMT.

Contact us at: [email protected].
Browsing Faculty of Science and Engineering by Publication Date

A Framework for Web-Based Immersive Analytics
John, Nigel; Ritsos, Panagiotis; Butcher, Peter W. S. (University of Chester, 2020-08-17)
The emergence of affordable Virtual Reality (VR) interfaces has reignited the interest of researchers and developers in exploring new, immersive ways to visualise data. In particular, the use of open-standards Web-based technologies for implementing VR experiences in a browser aims to enable their ubiquitous and platform-independent adoption. In addition, such technologies work in synergy with established visualization libraries, through the HTML Document Object Model (DOM). However, creating Immersive Analytics (IA) experiences remains a challenging process, as the systems that are currently available require knowledge of game engines, such as Unity, and are often intrinsically restricted by their development ecosystem. This thesis presents a novel approach to the design, creation and deployment of Immersive Analytics experiences through the use of open-standards Web technologies. It presents <VRIA>, a Web-based framework for creating Immersive Analytics experiences in VR that was developed during this PhD project. <VRIA> is built upon WebXR, A-Frame, React and D3.js, and offers a visualization creation workflow which enables users of different levels of expertise to rapidly develop Immersive Analytics experiences for the Web. The aforementioned reliance on open standards and the synergies with popular visualization libraries make <VRIA> ubiquitous and platform-independent in nature. Moreover, by using WebXR's progressive enhancement, the experiences <VRIA> is able to create are accessible on a plethora of devices. This thesis presents an elaboration on the motivation for focusing on open-standards Web technologies, presents the <VRIA> visualization creation workflow and details the underlying mechanics of our framework. It reports on optimisation techniques, integrated into <VRIA>, that are necessary for implementing Immersive Analytics experiences with the necessary performance profile on the Web. It discusses scalability implications of the framework and presents a series of use case applications that demonstrate the various features of <VRIA>. Finally, it describes the lessons learned from the development of the framework, discusses current limitations, and outlines further extensions.

2^n Bordered Constructions of Self-Dual Codes from Group Rings
Dougherty, Steven; Gildea, Joe; Kaya, Abidin; University of Scranton; University of Chester; Sampoerna Academy (Elsevier, 2020-08-04)
Self-dual codes, which are codes that are equal to their orthogonal, are a widely studied family of codes. Various techniques involving circulant matrices and matrices from group rings have been used to construct such codes. Moreover, families of rings have been used, together with a Gray map, to construct binary self-dual codes. In this paper, we introduce a new bordered construction over group rings for self-dual codes by combining many of the previously used techniques. The purpose of this is to construct self-dual codes that were missed using classical construction techniques, by constructing self-dual codes with different automorphism groups. We apply the technique to codes over finite commutative Frobenius rings of characteristic 2 and several group rings and use these to construct interesting binary self-dual codes.
In particular, we construct some extremal self-dual codes of lengths 64 and 68, including 30 new extremal self-dual codes of length 68.

Optimization of anti-wear and anti-bacterial properties of beta TiNb alloy via controlling duty cycle in open-air laser nitriding
Chang, Xianwen; Smith, Graham; Quinn, James; Carson, Louise; Chan, Chi-Wai; Lee, Seunghwan; Technical University of Denmark (Chang, Lee), University of Chester (Smith), Queens University Belfast (Quinn, Chan) (Elsevier, 2020-07-09)
A multifunctional beta TiNb surface, featuring wear-resistant and antibacterial properties, was successfully created by means of open-air fibre laser nitriding. Beta TiNb alloy was selected in this study as it has a low Young's modulus, is highly biocompatible, and thus can be a promising prosthetic joint material. It is, however, necessary to overcome the intrinsically weak mechanical properties and poor wear resistance of beta TiNb in order to extend its range of applications to load-bearing and/or shearing parts. To this end, the open-air laser nitriding technique was employed. Control of a single processing parameter, namely the duty cycle (between 5% and 100%), led to substantially different structural and functional properties of the processed beta TiNb surfaces, as analyzed by an array of analytical tools. The TiNb samples nitrided at the DC condition of 60% showed the most enhanced performance in terms of improving surface hardness, anti-friction, anti-wear and anti-bacterial properties in comparison with other conditions. These findings are expected to be highly important and useful when TiNb alloys are considered as materials for hip/knee articular joint implants.

An analysis of the L1 scheme for stochastic subdiffusion problem driven by integrated space-time white noise
Yan, Yubin; Yan, Yuyuan; Wu, Xiaolei; University of Chester, Lvliang University, Jimei University (Elsevier, 2020-06-02)
We consider the strong convergence of numerical methods for solving a stochastic subdiffusion problem driven by an integrated space-time white noise. The time fractional derivative is approximated by using the L1 scheme, and the time fractional integral is approximated with Lubich's first-order convolution quadrature formula. We use the Euler method to approximate the noise in time and use a truncated series to approximate the noise in space. The spatial variable is discretized by using the linear finite element method. Applying the idea in Gunzburger et al. (Math. Comp. 88 (2019), pp. 1715-1741), we express the approximate solutions of the fully discrete scheme as the convolution of a piecewise constant function and the inverse Laplace transform of a resolvent-related function. Based on such convolution expressions of the approximate solutions, we obtain the optimal convergence orders of the fully discrete scheme in spatial multi-dimensional cases by using the Laplace transform method and the corresponding resolvent estimates.

New binary self-dual codes via a generalization of the four circulant construction
Gildea, Joe; Kaya, Abidin; Yildiz, Bahattin; University of Chester; Sampoerna University; Northern Arizona University (Croatian Mathematical Society, 2020-05-31)
In this work, we generalize the four circulant construction for self-dual codes. By applying the constructions over the alphabets $\mathbb{F}_2$, $\mathbb{F}_2+u\mathbb{F}_2$, $\mathbb{F}_4+u\mathbb{F}_4$, we were able to obtain extremal binary self-dual codes of lengths 40 and 64, including new extremal binary self-dual codes of length 68. More precisely, 43 new extremal binary self-dual codes of length 68, with rare new parameters, have been constructed.
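For context, the classical four circulant construction being generalised in this last paper works as follows (a standard summary from the coding-theory literature, not text from the abstract): given two $n \times n$ circulant matrices $A$ and $B$, one forms the generator matrix

$$G = \left[\; I_{2n} \;\middle|\; \begin{array}{cc} A & B \\ B^{T} & A^{T} \end{array} \;\right],$$

and over $\mathbb{F}_2$, since circulant matrices commute, the resulting $[4n, 2n]$ code is self-dual precisely when $AA^{T} + BB^{T} = I_n$. Generalisations of this kind typically vary the block matrix or the underlying alphabet while preserving a condition of this form.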
Higher Order Time Stepping Methods for Subdiffusion Problems Based on Weighted and Shifted Grünwald–Letnikov Formulae with Nonsmooth Data
Yan, Yubin; Wang, Yanyong; Yan, Yuyuan; Pani, Amiya K.; University of Chester, Lvliang University, Jimei University, Indian Institute of Technology Bombay (Springer Link, 2020-05-19)
Two higher order time stepping methods for solving subdiffusion problems are studied in this paper. The Caputo time fractional derivatives are approximated by using the weighted and shifted Grünwald-Letnikov formulae introduced in Tian et al. [Math. Comp. 84 (2015), pp. 2703-2727]. After correcting a few starting steps, the proposed time stepping methods have the optimal convergence orders $O(k^2)$ and $O(k^3)$, respectively, for any fixed time $t$, for both smooth and nonsmooth data. The error estimates are proved by directly bounding the approximation errors of the kernel functions. Moreover, we also briefly present the applicability of our time stepping schemes to various other fractional evolution equations. Finally, some numerical examples are given to show that the numerical results are consistent with the proven theoretical results.
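For reference, the weighted and shifted Grünwald–Letnikov formulae mentioned in this abstract can be summarised as follows (our paraphrase of the operators in the cited Tian et al. paper; the notation is ours). The shifted Grünwald–Letnikov operator with step size $k$ and integer shift $p$ is

$$A_{k,p}\,u(t) = k^{-\alpha} \sum_{j=0}^{\infty} g_j^{(\alpha)}\, u\bigl(t-(j-p)k\bigr), \qquad g_j^{(\alpha)} = (-1)^j \binom{\alpha}{j},$$

which is only first-order accurate on its own; a weighted combination of two shifted operators cancels the leading error term, so that for the shifts $(p,q)=(1,0)$ one obtains the second-order approximation

$$D^{\alpha} u(t) = \frac{\alpha}{2}\, A_{k,1}u(t) + \frac{2-\alpha}{2}\, A_{k,0}u(t) + O(k^{2}).$$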
A Modified Bordered Construction for Self-Dual Codes from Group Rings
Kaya, Abidin; Tylyshchak, Alexander; Yildiz, Bahattin; Gildea, Joe; University of Chester; Sampoerna University; Uzhgorod State University; Northern Arizona University (Jacodesmath Institute, 2020-05-07)
We describe a bordered construction for self-dual codes coming from group rings. We apply the constructions coming from the cyclic and dihedral groups over several alphabets to obtain extremal binary self-dual codes of various lengths. In particular, we find a new extremal binary self-dual code of length 78.

Interactive Three-Dimensional Simulation and Visualisation of Real Time Blood Flow in Vascular Networks
John, Nigel; Pop, Serban; Holland, Mark I. (University of Chester, 2020-05)
One of the challenges in cardiovascular disease management is the clinical decision-making process. When a clinician is dealing with complex and uncertain situations, the decision on whether or how to intervene is made based upon distinct information from diverse sources. There are several variables that can affect how the vascular system responds to treatment. These include: the extent of the damage and scarring, the efficiency of blood flow remodelling, and any associated pathology. Moreover, the effect of an intervention may lead to further unforeseen complications (e.g. another stenosis may be "hidden" further along the vessel). Currently, there is no tool for predicting or exploring such scenarios. This thesis explores the development of a highly adaptive real-time simulation of blood flow that considers patient-specific data and clinician interaction. The simulation models blood realistically and accurately through complex vascular networks in real time, developing robust flow scenarios that can be incorporated into the medical decision-making and planning tool set. The focus is on specific regions of the anatomy, where accuracy is of the utmost importance and the flow can develop into specific patterns, with the aim of better understanding a patient's condition and predicting factors of its future evolution. Results from the validation of the simulation showed promising comparisons with the literature and demonstrated its viability for clinical use.

The past, present and future of indoor air chemistry
Bekö, Gabriel; Carslaw, Nicola; Fauser, Patrick; Kauneliene, Violeta; Nehr, Sascha; Phillips, Gavin; Saraga, Dikaia; Schoemaecker, Coralie; Wierzbicka, Aneta; Querol, Xavier; et al. (Wiley, 2020-04-25)
This is an editorial contribution to the journal Indoor Air on the future direction of indoor air chemistry research.

Graphene Oxide Bulk Modified Screen-Printed Electrodes Provide Beneficial Electroanalytical Sensing Capabilities
Rowley-Neale, Samuel; Brownson, Dale; Smith, Graham; Banks, Craig; Manchester Metropolitan University; University of Chester (MDPI, 2020-03-19)
We demonstrate a facile methodology for the mass production of graphene oxide (GO) bulk modified screen-printed electrodes (GO-SPEs) that are economical, highly reproducible and provide analytically useful outputs. Through fabricating GO-SPEs with varying percentage mass incorporations (2.5, 5, 7.5 and 10%) of GO, an electrocatalytic effect towards the chosen electroanalytical probes is observed that increases with greater GO incorporation, compared to bare/graphite SPEs. The optimum mass ratio of 10% GO to 90% carbon ink displays an electroanalytical signal towards dopamine (DA) and uric acid (UA) which is ca. ×10 greater in magnitude than that achievable at a bare/unmodified graphite SPE. Furthermore, 10% GO-SPEs exhibit a competitively low limit of detection (3σ) towards DA at ca. 81 nM, which is superior to that of a bare/unmodified graphite SPE at ca. 780 nM. The improved analytical response is attributed to the large number of oxygenated species inhabiting the edge and defect sites of the GO nanosheets, which are available to exhibit electrocatalytic responses towards inner-sphere electrochemical analytes. Our reported methodology is simple, scalable, and cost-effective for the fabrication of GO-SPEs, which display highly competitive LODs, and is of significant interest for use in commercial and medicinal applications.

Mathematical Modelling of DNA Methylation
Roberts, Jason; Zagkos, Loukas (University of Chester, 2020-03-09)
DNA methylation is a key epigenetic process which has been intimately associated with gene regulation. In recent years growing evidence has associated DNA methylation status with a variety of diseases including cancer, Alzheimer's disease and cardiovascular disease. Moreover, changes to DNA methylation have also recently been implicated in the ageing process. The factors which underpin DNA methylation are complex, and remain to be fully elucidated. Over the years mathematical modelling has helped to shed light on the dynamics of this important molecular system. Although the existing models have contributed significantly to our overall understanding of DNA methylation, they fall short of fully capturing the dynamics of this process. In this work DNA methylation models are developed and improved, and their suitability is demonstrated through mathematical analysis and computational simulation. In particular, a linear and a nonlinear deterministic model are developed which capture more fully the dynamics of the key intracellular events which characterise DNA methylation. Furthermore, uncertainty is introduced into the model to describe the presence of intrinsic and extrinsic cell noise. In this way a stochastic model is constructed and presented which accounts for the stochastic nature of cell dynamics.
One of the key predictions of the model is that DNA methylation dynamics do not alter when the quantity of DNA methylation enzymes changes. In addition, the nonlinear model predicts DNA methylation promoter bistability, which is commonly observed experimentally. Moreover, a new way of modelling DNA methylation uncertainty is introduced.

G-codes over Formal Power Series Rings and Finite Chain Rings
Dougherty, Steven; Gildea, Joe; Korban, Adrian; University of Scranton; University of Chester (2020-02-29)
In this work, we define $G$-codes over the infinite ring $R_\infty$ as ideals in the group ring $R_\infty G$. We show that the dual of a $G$-code is again a $G$-code in this setting. We study the projections and lifts of $G$-codes over the finite chain rings and over the formal power series rings, respectively. We extend known results of constructing $\gamma$-adic codes over $R_\infty$ to $\gamma$-adic $G$-codes over the same ring. We also study $G$-codes over principal ideal rings.

New Extremal Self-Dual Binary Codes of Length 68 via Composite Construction, F2 + uF2 Lifts, Extensions and Neighbors
Dougherty, Steven; Gildea, Joe; Korban, Adrian; Kaya, Abidin; University of Scranton; University of Chester; University of Chester; Sampoerna Academy (Inderscience, 2020-02-29)
We describe a composite construction from group rings where the groups have orders 16 and 8. This construction is then applied to find extremal binary self-dual codes with parameters [32, 16, 8] or [32, 16, 6]. We also extend this composite construction by expanding the search field, which enables us to find more extremal binary self-dual codes with the above parameters and with different orders of automorphism groups. These codes are then lifted to F2 + uF2 to obtain extremal binary images of codes of length 64. Finally, we use the extension method and neighbor construction to obtain new extremal binary self-dual codes of length 68. As a result, we obtain 28 new codes of length 68 which were not known in the literature before.

Terahertz Probing Irreversible Phase Transitions Related to Polar Clusters in Bi0.5Na0.5TiO3-based Ferroelectric
Yang, Bin; University of Chester (Wiley-VCH, 2020-02-16)
Electric-field-induced phase transitions in Bi0.5Na0.5TiO3 (BNT)-based relaxor ferroelectrics are essential to the control of their electrical properties and consequently to revolutionizing their dielectric and piezoelectric applications. However, the fundamental understanding of these transitions is a long-standing challenge due to their complex crystal structures. Given the structural inhomogeneity at the nanoscale or sub-nanoscale in these materials, dielectric response characterization based on terahertz (THz) electromagnetic-probe beam-fields is intrinsically coordinated to lattice dynamics during DC-biased poling cycles. The complex permittivity reveals the field-induced phase transitions to be irreversible. This profoundly counters the claim of reversibility, the conventional support for which is based upon the peak that is manifest in each of four quadrants of the current-field curves. The mechanism of this irreversibility is solely attributed to polar clusters in the transformed lattices. These represent an extrinsic factor which is quiescent in the THz spectral domain.
Modified Quadratic Residue Constructions and New Extremal Binary Self-Dual Codes of Lengths 64, 66 and 68
Gildea, Joe; Hamilton, Holly; Kaya, Abidin; Yildiz, Bahattin; University of Chester; University of Chester; Sampoerna University; Northern Arizona University (Elsevier, 2020-02-10)
In this work we consider modified versions of quadratic double circulant and quadratic bordered double circulant constructions over the binary field and the rings F2 + uF2 and F4 + uF4 for different prime values of p. Using these constructions with extensions and neighbors, we are able to construct a number of extremal binary self-dual codes of different lengths with new parameters in their weight enumerators. In particular, we construct 2 new codes of length 64, 4 new codes of length 66 and 14 new codes of length 68. The binary generator matrices of the new codes are available online at [8].

High-order ADI orthogonal spline collocation method for a new 2D fractional integro-differential problem
Yan, Yubin; Qiao, Leijie; Xu, Da; University of Chester, UK; Guangdong University of Technology, P.R. China; Hunan Normal University, P.R. China (John Wiley & Sons Ltd, 2020-02-05)
We use the generalized L1 approximation for the Caputo fractional derivative, the second-order fractional quadrature rule approximation for the integral term, and a classical Crank-Nicolson alternating direction implicit (ADI) scheme for the time discretization of a new two-dimensional (2D) fractional integro-differential equation, in combination with a space discretization by an arbitrary-order orthogonal spline collocation (OSC) method. The stability of the Crank-Nicolson ADI OSC scheme is rigorously established, and an error estimate is also derived. Finally, some numerical tests are given.

Additively Manufactured Graphitic Electrochemical Sensing Platforms
Foster, Christopher W; El Bardisy, Hadil M; Down, Michael P; Keefe, Edmund M; Smith, Graham C; Banks, Craig E; Manchester Metropolitan University (Foster, El Bardisy, Down, Keefe, Banks), University of Chester (Smith) (Elsevier, 2020-02-01)
Additive manufacturing (AM)/3D printing technology provides a novel platform for the rapid prototyping of low-cost 3D platforms. Herein, we report for the first time the fabrication, characterisation (physicochemical and electrochemical) and application (electrochemical sensing) of bespoke nanographite (NG)-loaded (25 wt. %) AM printable (via fused deposition modelling) NG/PLA filaments. We have optimised and tailored a variety of NG-loaded filaments and their AM counterparts in order to achieve optimal printability and electrochemical behaviour. Two AM platforms, namely AM macroelectrodes (AMEs) and AM 3D honeycomb (macroporous) structures, are benchmarked against a range of redox probes and the simultaneous detection of lead (II) and cadmium (II). This proof-of-concept demonstrates the impact that AM can have within the area of electroanalytical sensors.

Effects of obesity on cholesterol metabolism and its implications for healthy ageing
Mc Auley, Mark Tomás; University of Chester (Cambridge University Press, 2020-01-27)
The last few decades have witnessed a global rise in the number of older people. Despite this demographic shift, morbidity within this population group is high. Many factors influence healthspan; however, an obesity pandemic is emerging as a significant determinant of older people's health. It is well established that obesity adversely affects several metabolic systems.
However, due to its close association with overall cardiometabolic health, the impact obesity has on cholesterol metabolism needs to be recognised. The aim of this review is to critically discuss the effects obesity has on cholesterol metabolism and to reveal its significance for healthy ageing.

Constructing Self-Dual Codes from Group Rings and Reverse Circulant Matrices
Gildea, Joe; Kaya, Abidin; Korban, Adrian; Yildiz, Bahattin; University of Chester; Sampoerna Academy; Northern Arizona University (American Institute of Mathematical Sciences, 2020-01-20)
In this work, we describe a construction for self-dual codes in which we employ group rings and reverse circulant matrices. By applying the construction directly over different alphabets, and by employing the well-known extension and neighbor methods, we were able to obtain extremal binary self-dual codes of different lengths, some of which have parameters that were not known in the literature before. In particular, we constructed three new codes of length 64, twenty-two new codes of length 68, twelve new codes of length 80 and four new codes of length 92.

Columnar self-assembly, electrochemical and luminescence properties of basket-shaped liquid crystalline derivatives of Schiff-base-moulded p-tert-butyl-calix[4]arene
Sharma, Vinay S.; Sharma, Anuj S.; Worthington, Sheena J. B.; Shah, Priyanka A.; Shrivastav, Pranav S. (Royal Society of Chemistry (RSC), 2020)
A new family of blue-light-emitting supramolecular basket-shaped liquid crystalline compounds based on a p-tert-butyl-calix[4]arene core, which self-assemble to form a columnar hexagonal phase.
Examining the importance of existing relationships for co-offending: a temporal network analysis in Bogotá, Colombia (2005–2018)

Alberto Nieto, Toby Davies & Hervé Borrion

This study aims to improve our understanding of criminal accomplice selection by studying the evolution of co-offending networks—i.e., networks that connect those who commit crimes together. To this end, we tested four growth mechanisms (popularity, reinforcement, reciprocity, and triadic closure) on three components observed in a network connecting criminal investigations (\(M = 286\) K) with adult offenders (\(N = 274\) K) in Bogotá (Colombia) between 2005 and 2018. The first component had 4286 offenders (component 'A'), the second 227 ('B'), and the third component 211 ('C'). The evolution of these components was examined using temporal information in tandem with discrete choice models and simulations to understand the mechanisms that could explain how these components grew. The results show that they evolved differently during the period of interest. Popularity yielded negative statistically significant coefficients for 'A', suggesting that having more connections reduced the odds of connecting with incoming offenders in this network. Reciprocity and reinforcement yielded mixed results, as we observed negative statistically significant coefficients in 'C' and positive statistically significant coefficients in 'A'. Moreover, triadic closure produced positive, statistically significant coefficients in all the networks. The results suggest that a combination of growth mechanisms might explain how co-offending networks grow, highlighting the importance of considering offenders' network-related characteristics when studying accomplice selection. Besides adding evidence about triadic closure as a universal property of social networks, this result indicates that further analyses are needed to better understand how accomplices shape criminal careers.

Crimes can be committed either by individuals or by groups of people acting together. While there are some contexts in which the involvement of multiple offenders is incidental—it plays no material role in the commission of the crime—there are others where it is a crucial ingredient: a crime could not, or would not, take place without it (Tremblay 1993). The study of co-offending, therefore, has both theoretical and practical value: as well as providing insights into criminal behaviour, understanding how criminal collaborations arise may suggest ways to disrupt the conditions that facilitate crime-related activities. Within this, a particular topic of interest is accomplice selection—i.e., how offenders choose their criminal partners. While this has been discussed extensively from a qualitative perspective, there has been little attempt to examine it using a quantitative networked approach (for an exception see, e.g., Cornish and Clarke 2002; McCarthy et al. 1998; Weerman 2003, 2014).

Several theories have been proposed to explain accomplice selection and, in particular, to suggest which factors influence the choice of co-offender (see Weerman 2014; van Mastrigt 2017 for reviews). Some of these focus on the role of personal characteristics—such as age, gender and criminal aptitude—or discuss the influence of the social environment more generally; the idea that offenders tend to commit crimes with others from their social circle, for example.
Others, however, relate to previous offending behaviour: individuals may be more likely to form new collaborations if they have already co-offended with multiple individuals in the past, for example, while others may tend to repeatedly offend with the same accomplices (Charette and Papachristos 2017). Hypotheses such as these relate to the influence of prior co-offending relationships on the formation of new ones, and it is these that we focus on in this study.

Networks provide a natural framework for studying these effects. Social network analysis has helped revive interest in co-offending in recent decades by providing tools and theories to study the interactions between individuals systematically (Bright and Whelan 2020; Carrington 2014; Papachristos 2011). In co-offending networks, individuals are linked based on the crimes they have co-executed: the network is composed of nodes, representing individuals, and any pair of offenders who have co-offended are connected by an edge, representing the criminal event in which they participated. Since edges represent co-offending relationships, understanding the mechanisms which drive network formation is equivalent to understanding how these relationships arise.

In this study, we seek to gain insights into the principles that drive co-offending by analysing the growth of three network components representing co-offending relationships in Colombia's capital city, Bogotá, between 2005 and 2018. In previous work (Nieto et al. 2022), this network was shown to display structural regularities—in particular, triadic closure—when studied as a static network. Here we go further by examining its evolution. In particular, we examine the links formed due to each criminal event during the study period. Each link formation represents the selection of an accomplice: conceptually, this selection might represent an explicit choice (e.g. recruitment), or it might reflect a more passive process (e.g. shared circumstance). In either case, identifying regularities in how these selections occur will offer insights into how co-offending relationships develop.

Understanding how co-offending networks evolve over time is crucial for identifying the mechanisms which drive their formation. Apart from a few contributions (e.g., Sarnecki 2001; Charette and Papachristos 2017; Iwanski and Frank 2013; Brantingham et al. 2011), the studies that have adopted a network approach to study co-offending have analysed static networks. Static networks are snapshots that aggregate co-offending relationships into a single network, regardless of when the crimes were executed (Faust and Tita 2019). The analysis of such networks can allow us to better understand the properties of co-offending networks; however, they cannot reveal how these networks evolve through the decisions made by offenders when creating new relationships. As has been shown for networks in general, different underlying formation processes can lead to graphs with indistinguishable properties when analysed in the aggregate (Mitzenmacher 2004).

This article starts to fill this gap by studying how co-offending networks evolve over time. Specifically, it applies a recently developed approach in network science that considers the formation of social networks as the result of choices made by nodes (offenders, in our case) when joining a network (Opsahl and Hogan 2011; Overgoor et al. 2019; Feinberg et al. 2020).
When a node joins a network—or, if it is already part of it, creates a new connection—it selects a 'target' from the pool of nodes that are already part of the network. Discrete choice modelling examines whether the features of the potential targets influence this selection by comparing the characteristics of chosen nodes to those that were not. Identifying these influences can shed light on offenders' decisions when selecting accomplices for new criminal ventures. In this study, the features we focus on are network-related, such as the number of existing links or the presence of reciprocal connections. Since these features reflect prior offending connections, they can be used to make inferences about the role of existing relationships in guiding new ones.

At least four mechanisms can explain the growth of networks in terms of nodes' preferences for particular network-related properties. These have been examined in discrete choice studies of other social networks (Opsahl and Hogan 2011; Overgoor et al. 2019). Here, we focus on four of these—popularity, reciprocity, reinforcement, and triadic closure—each of which can be interpreted in terms of offender behaviour. Popularity refers to the tendency of offenders to form links (i.e., co-offend) with those who already have many connections (i.e., recurrent or prolific co-offenders). Reciprocity refers to offenders selecting individuals who have previously selected them, while reinforcement describes the situation in which one individual re-selects another. Triadic closure describes the tendency to create links with the associates of prior associates ('co-offending with the accomplice of my accomplice').

In our analysis, we employed a discrete choice model with network features corresponding to these four growth mechanisms to study their relative roles in accomplice selection. We applied this approach to analyse three components observed in a co-offending network in Bogotá (Colombia) between 2005 and 2018, containing 4286 (component 'A'), 227 (component 'B'), and 211 (component 'C') individuals. These components were derived from criminal investigation records and included all crime types defined by Colombia's criminal law; therefore, they reflect criminal collaboration in a general sense rather than in the context of any particular offence.

Theories of co-offending conceptualise accomplice selection as a fundamentally directional process in which individuals acting as recruiters instigate collaborations with others (Reiss 1988); indeed, directionality is implicit in the four mechanisms outlined above. For this reason, our underlying model of the co-offending network is a directed graph, with orientation reflecting recruitment. This, however, presents an analytical challenge since our data does not contain information about which offenders acted as recruiters. We address this by adopting a procedure in which the analysis is repeated multiple times, with the directionality of edges randomised in each case: any findings robust to the choice of orientation can be assumed to apply generally. We follow this approach because a method that disregarded directionality would not reflect the nature of co-offending (as per the mechanisms identified above) and would be of limited theoretical value.
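Formally, the workhorse in this family of models is the conditional logit (this is the generic specification used in discrete choice studies of network growth; the notation here is ours, not reproduced from the article). When a new link is formed at time $t$, the probability that candidate $j$ is selected from the choice set $C_t$ of existing nodes is

$$P(j \mid C_t) = \frac{\exp\bigl(\theta^{\top} x_{j,t}\bigr)}{\sum_{m \in C_t} \exp\bigl(\theta^{\top} x_{m,t}\bigr)},$$

where $x_{j,t}$ collects candidate $j$'s network features at time $t$ (here, measures of popularity, reciprocity, reinforcement and triadic closure), and the sign of each fitted coefficient in $\theta$ indicates whether the corresponding feature raises or lowers the odds of selection.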
Theoretical and practical implications derive from this article. From a theoretical perspective, this study suggests that a combination of growth mechanisms might explain how co-offending networks grow, highlighting the importance of considering offenders' network-related characteristics when studying accomplice selection. It also highlights the importance of former accomplices, as the results suggest that they may act as sources of information for potential new accomplices.

From a methodological perspective, this study demonstrates a recently-developed approach to studying how networks evolve over time which has not previously been applied in criminology. Researchers interested in studying co-offending or covert networks might employ this approach to study the formation of crime-related networks. Practitioners can also benefit from this study, as it shows how to exploit existing information to identify and track the evolution of co-offending networks. From a strategic perspective, understanding the mechanisms at play in the evolution of particular networks can offer practitioners insights which may inform the design of crime prevention strategies. Similarly, studying the evolution of co-offending networks can assist practitioners in assessing the effectiveness of their interventions: the proposed approach can help evaluate an intervention's effectiveness by analysing the behaviours a network displays after the intervention.

Co-offending is a topic that has been discussed extensively within criminology, and several theories—often based on qualitative studies—have been proposed to explain the features and dynamics of group offending (Weerman 2014). On the particular topic of accomplice selection, several perspectives have been advanced, discussing how offenders become aware of potential partners and how they evaluate their value as prospective co-offenders (van Mastrigt 2017). These theories lie along a continuum: at one end are those which discuss collaborations that arise spontaneously, while others conceptualise accomplice selection as a rational process in which offenders seek to choose partners who will be of maximum benefit. This section discusses some of these theoretical principles, which relate to the influence of prior collaborative behaviour on accomplice selection. In a network context—where links represent instances of co-offending—these theoretical principles correspond to models of link formation based on network-related features. In each case, we discuss the principle from a criminological perspective and its interpretation regarding network growth. In doing so, our conceptualisation of a co-offending network is as a directed multigraph; that is, a network which can have multiple links between any pair of nodes and where each link has an orientation. Multiple links represent distinct instances of co-offending, and the orientation reflects the initiation of the collaboration (i.e., recruitment).

One suggestion that has been put forward in the literature is that individuals are more likely to be chosen as accomplices if they already have multiple co-offending connections (e.g., Sarnecki 2001). Such individuals can be considered 'popular' from a co-offending perspective because they have frequently been selected as accomplices. The mechanism is analogous to the 'rich get richer' principle for social networks, whereby individuals forming new links preferentially attach to those who are already well-connected (Newman 2018). There are two reasons why popular co-offenders may be preferred as potential accomplices.
First, their popularity may be attractive in itself: it implies that the individual is an experienced co-offender, and their existing co-offending relationships may be seen as a form of endorsement. Alternatively, popularity may act as a marker of the individual's underlying utility as a criminal partner: it is not popularity per se that is attractive, but rather that individuals become popular because of their aptitude for crime. Certain characteristics or assets can affect the value that an individual can contribute to a potential criminal collaboration. These characteristics—referred to as 'criminal capital'—may include information, skills, contacts and personality traits (e.g., trustworthiness) deemed beneficial for the execution of a crime (Reiss 1986; McCarthy et al. 1998; McCarthy and Hagan 2001; Hochstetler 2014). Those with these features will, in principle, be more attractive as potential accomplices and therefore selected more frequently. In this way, the popularity of an offender may simply be a proxy for their criminal capital.

When popularity plays a role in the growth of a network, a small subset of individuals is likely to form disproportionately many connections. This can, however, be manifested in two ways. In the first scenario, the connections are formed with distinct individuals, meaning that popular nodes have many neighbours. In the second, some of the connections involve the same co-offenders (i.e., they are multi-edges), reflecting the fact that the pair have interacted on multiple occasions. In the first scenario, popular nodes have numerous neighbours, but their connections tend to be 'weak', as they represent single events. In the second, popular nodes may not have as many neighbours, but their connections will be 'stronger', or heavier. This second scenario echoes the finding of McGloin et al. (2008) that frequent offenders create stable co-offending relationships. If popularity were a prevalent mechanism in co-offending networks, we would expect to see a small subset of offenders forming many links, either with different associates (first scenario) or with the same ones repeatedly (second scenario). As explained in the following section, we considered both scenarios when analysing the growth of co-offending networks.

However, popularity might also be unattractive to potential accomplices. As offenders make more connections, their visibility increases, and with it their risk of arrest (Morselli 2009). Accordingly, popularity may have a negative effect on accomplice selection in some circumstances, and its overall role in the growth of co-offending networks is not straightforward. Offenders with ample criminal capital will make more, or stronger, connections, but their attractiveness as potential accomplices may be short-lived, since their popularity makes them prone to removal by law enforcement agencies.

Reciprocity and reinforcement

Two further mechanisms that may play a role in the growth of co-offending networks are reciprocity and reinforcement. The two are similar in that they both refer to the formation of multiple links between pairs of offenders, but they differ in their directionality. Reciprocity refers to situations in which offenders select accomplices who have previously selected them—i.e., A selects B, having previously been selected by B.
In network terms, this corresponds to the tendency for pairs of nodes linked in one direction to be linked in the opposite direction. Such situations may arise when pairs of offenders repeatedly collaborate, with each offender instigating on different occasions. Research has found that offenders do not have fixed roles throughout their criminal careers; rather, they alternate between the roles of 'recruiters' and 'followers' (van Mastrigt and Farrington 2011). Conceptually, reciprocity refers to the likelihood of observing two individuals exchanging benefits or services over time—'doing for others if they have done for them' (Plickert et al. 2007; Gouldner 1960). This exchange is not mediated by an explicit negotiation or a power imbalance between participants. Instead, it is an exchange explained by social norms or the self-interest of the parties involved (Molm 1997). Reciprocity in co-offending has been analysed from the perspective of individual events, but not as a predictor of new co-offending relationships. For example, the social exchange theory of co-offending proposed by Weerman (2003) describes co-offending as a reciprocal interaction between co-offenders: they exchange material and immaterial goods to access rewards that are hard to obtain through solo offending. However, this mutual interaction primarily relates to the collaborations themselves rather than to accomplice selection.

Reinforcement, on the other hand, refers to instances where individuals repeatedly re-select the same accomplices (Grund and Morselli 2017; McGloin et al. 2008), thereby strengthening existing connections between connected pairs (Gouldner 1960). From a network perspective, these repeated interactions can be represented by multiple links—or 'heavier' links—connecting pairs of nodes. This tendency might be expected due to the cost or risk of forming new co-offending relationships: when committing an offence, it is easier and safer to renew a previous collaboration than to initiate a new one. Reinforcing existing relationships might also be expected when accomplice selection is viewed as a rational process (van Mastrigt 2017). Offenders liaise with those whose criminal capital matches the needs of the crime at hand; reinforcing existing relationships might therefore reduce the costs of finding new associates with the skills needed to exploit new criminal opportunities. Moreover, trust builds among those who co-execute a crime (Charette and Papachristos 2017). Hence, initial interactions between previously unacquainted offenders can help them gain trust in their accomplices, allowing them to stick together.

Reinforcement overlaps to some degree with popularity, because repeated re-selection can result in the selected accomplice having a high number of in-links. The mechanisms are nevertheless distinct. The principle of popularity is that an individual X may favour an accomplice simply because of their number of prior connections; whether those connections are from X themselves is immaterial. With reinforcement, on the other hand, the preference is specifically for individuals whom X has chosen previously (with the multiplicity of those prior choices irrelevant). Reciprocity and reinforcement are similar in that they refer to the formation of multiple links between pairs of individuals but differ in directionality. As mentioned, the directionality in these relationships is closely related to the idea of recruitment (or instigation).
Co-offending relationships are created when a person, acting as a recruiter, brings together other motivated offenders, who act as followers, to execute a crime (Reiss 1986). Hence, these mechanisms are distinct, and both may play a role in explaining the empirical findings on how co-offending relationships are created and the behaviours offenders exhibit throughout their criminal careers. We included both mechanisms for these reasons and used simulation analysis as a robustness check.

While both reciprocity and reinforcement represent plausible hypotheses, there is a large body of evidence concerning the instability of co-offending relationships. Numerous studies show that offenders are more likely to co-offend with new accomplices than to stick with the same associates (e.g., Weerman 2003, 2014; Warr 2002, 1996; Carrington 2002; McGloin and Thomas 2016; McGloin and Piquero 2010; van Mastrigt 2017). If this is the case, reciprocity and reinforcement may not play a role in how co-offending networks evolve; on the contrary, we may expect the presence of existing links to have a negative effect on accomplice selection. However, research by Grund and Morselli (2017) has suggested that the instability of criminal partnerships has been overestimated in the literature due to a measurement issue. Using a method which adjusts for this, they find that the chance of a pair of offenders being arrested together again is as high as 50%, contradicting prior studies and suggesting that reciprocity and reinforcement may play a role.

Triadic closure

Triadic closure refers to the tendency for new links to form between two unconnected individuals who share a common neighbour (Wasserman and Faust 1994; Holland and Leinhardt 1971). If A co-offends separately with B and C, triadic closure predicts that B and C are likely to co-execute a crime. Trust is often cited as a reason why social networks display this trait. Burt (2005) explained that when two people trust each other, there is a commitment to a relationship without knowing how the other person will behave. Two individuals sharing a connection to the same person will therefore have a basis to trust one another, increasing the chances of a new connection forming (Easley and Kleinberg 2010). In these situations, trust emerges from the possibility of using informal sanctions to discipline a person who breaks social norms (Coleman 1988)—gossip, for example, can serve as such a sanction. Similarly, trust plays a vital role in co-offending, since offenders need to act together as planned without the need to supervise one another (Gambetta 2011). Two willing offenders sharing an accomplice have a basis for trusting each other, since the shared accomplice can arbitrate between them. Accordingly, information about offenders' trustworthiness is essential for accomplice selection. McCarthy et al. (1998) contended that offenders rely on the information circulating in social networks to evaluate the trustworthiness of potential accomplices. Thrasher (1963) introduced a similar idea when referring to the underworld's 'grapevine system', in which information about offenders and their reputations circulates. In this sense, we can hypothesise that triadic closure plays a prominent role in the evolution of co-offending networks: two individuals with an accomplice in common are more likely to co-offend, since the shared contact can vouch for each individual and act as a mediator if needed.
Feld's (1981) focus theory provides an additional explanation for triadic closure in social networks. According to this theory, there are elements in the environment that act as social foci—settings in which individuals organise their social activities (e.g., families, workplaces, neighbourhoods). Feld suggested that two individuals sharing a connection to a third person might also share the same social foci, facilitating new relationships between unconnected pairs. Felson's (2003) notion of offender convergence settings resembles Feld's theory: in these settings, offenders contact potential accomplices to seize criminal opportunities and design criminal plans. Individuals sharing social foci (or offender convergence settings) are more likely to create new co-offending relationships than those not sharing these settings. The idea that people who share settings have more chances of creating new connections aligns with Granovetter's (1973) explanation of triadic closure: if two individuals spend time with a third, they will likely encounter each other and potentially form a connection. The potential role of offender convergence settings is another reason we expect triadic closure to be prevalent in co-offending networks. Relatively few studies have explicitly investigated the presence of triadic closure in co-offending networks; however, those that have done so (e.g., Grund and Densley 2015; Nieto et al. 2022) have found evidence that it is indeed present. Nevertheless, the underlying mechanisms that give rise to it—for example, whether it is an effect in itself or a by-product of convergence settings—remain unknown.

Figure 1 illustrates the four mechanisms described in this section.

Fig. 1 Based on the existing relationships (black solid lines), popularity, reinforcement, reciprocity, and triadic closure predict new connections between the nodes (dashed lines)

The review presented in this section suggests that we should expect popularity to play a relatively limited role in the growth of co-offending networks. Similarly, reinforcement and reciprocity might only partially explain the evolution of these networks, as most evidence suggests that co-offenders tend to find new accomplices as they continue their criminal careers. Triadic closure, however, seems likely to play a substantial role, since there is a clear overlap between the mechanisms which typically give rise to this trait in social networks and the principles driving accomplice selection.

Before going further, it is worth noting that these four mechanisms only consider nodes' network-related properties, such as the number of connections and position in the network; they exclude individual-level properties such as age, sex, or criminal history. The reason for focusing on these was partly practical—offender-level characteristics were not available in the dataset we studied—but also theoretical: our primary interest is in how prior co-offending behaviours shape accomplice selection. Nevertheless, individual-level characteristics will also play a role (Robins 2009), and their inclusion will be an essential topic for future work. It is also worth noting that the directionality of edges is crucial to better understanding co-offending: it allows us to model co-offending relationships as being initiated by motivated offenders who recruit those willing to participate in the criminal venture, and it allows us to differentiate between reciprocity and reinforcement. The sketch following this paragraph illustrates how each of the four mechanisms can be read off a directed multigraph.
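To make the four mechanisms concrete, the following is a minimal illustration, assuming a networkx MultiDiGraph in which an edge (u, v) means that u recruited v for a co-offence; the toy graph and helper function are ours, not part of the study:

```python
import networkx as nx

# Toy directed multigraph: an edge (u, v) means u recruited v.
G = nx.MultiDiGraph()
G.add_edges_from([("A", "B"), ("A", "B"), ("B", "A"), ("A", "C")])

def candidate_features(G, i, j):
    """Network features of candidate j from the perspective of chooser i."""
    return {
        # Popularity: how often j has been chosen before (in-weight,
        # counting multi-edges) and by how many distinct offenders.
        "in_weight": G.in_degree(j),
        "in_degree": len({u for u, _ in G.in_edges(j)}),
        # Reinforcement: has i chosen j before?
        "reinforcement": int(G.has_edge(i, j)),
        # Reciprocity: has j chosen i before?
        "reciprocity": int(G.has_edge(j, i)),
        # Triadic closure: number of accomplices i and j share,
        # ignoring edge direction.
        "shared_accomplices": len(
            (set(G.predecessors(i)) | set(G.successors(i)))
            & (set(G.predecessors(j)) | set(G.successors(j)))
        ),
    }

print(candidate_features(G, "A", "B"))
```

In this toy graph, B and C share accomplice A, so the triadic closure count for the pair (B, C) would be 1, while B's in-weight (2) exceeds its in-degree (1) because A has selected B twice.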
Below we discuss how we addressed the directionality of edges in the co-offending components studied here.

Prior studies of network evolution

Although relatively little research has formally examined the evolution of co-offending networks from the perspective of accomplice selection, several studies offer important context. For example, Charette and Papachristos (2017), using a dynamic approach to analyse co-offending dyads, showed that the longevity of co-offending relationships—measured through the number of times pairs of offenders were co-arrested—tended to be short. However, they observed that a small proportion of relationships persisted. According to their findings, homophily (i.e., the tendency to create connections with similar others), experience (i.e., criminal capital), and transitivity (i.e., shared accomplices) might explain why some co-offenders stick together despite having previously been arrested together.

Another group of studies has analysed the evolution of covert criminal networks. Bright et al. (2019) found that triadic closure explained the structural changes experienced by a drug trafficking network comprising 86 participants in Australia between 1991 and 1996. Bright and Delaney (2013) also observed that drug trafficking networks were flexible and adaptive, as central offenders became peripheral when new individuals joined the network. Similarly, Morselli and Petit (2007), using information from a criminal investigation conducted in Canada between 1994 and 1996, observed that drug trafficking networks could become less centralised as law enforcement agencies try to disrupt them. While these are important insights, they are of limited relevance in the present context, since the networks studied are primarily organisational rather than reflecting instances of co-offending: they model communication patterns between individuals participating in illegal activities as part of a wider enterprise, not the co-execution of individual crimes. As far as we know, no studies have adopted a dynamic approach to analyse the evolution of co-offending networks, particularly in terms of the mechanisms that guide the formation of links. The question that we study here, of how this evolution offers insight into the principles guiding accomplice selection, therefore remains unanswered.

Data and network construction

This study used data from the Colombian Attorney General's Office (AGO) regarding all criminal investigations—either closed or ongoing—of crimes committed in Bogotá (Colombia) between 01/01/2005 and 31/12/2018 by adult offenders (>18 years). These investigations originated either from the actions of the National Police (e.g., an arrest executed by a police officer) or from the investigators working alongside prosecutors (e.g., new investigations derived from ongoing cases). A criminal investigation does not necessarily imply an arrest: a case could be at an early stage of the process, on trial, or concluded with a sentence. However, all arrests result in an investigation. A single investigation could involve one or more offenders and multiple crime types. We included information about all types of crime defined by Colombia's Criminal Law. Each record in our dataset corresponds to an offender's involvement in a particular criminal investigation. Offenders were identified using the (encrypted) national identity number (NIN), and the Criminal Investigation Record Number (CIRN)—a code assigned by the AGO—acted as a unique identifier for criminal investigations.
Each CIRN had an associated timestamp corresponding to the date the AGO started investigating a crime. We used this as a proxy for the date on which the offenders executed the crime(s). Unfortunately, no information about offenders' attributes (e.g., sex, ethnicity or criminal history) was available.

We used these data to construct a bipartite network representing the associations between offenders (N = 274,689) and investigations (M = 286,591). Each link corresponds to a unique record in our dataset, representing a connection between an offender and a criminal investigation. We took the one-mode projection of this network to derive a separate (undirected) network of associations between offenders (i.e., a co-offending network). In this projected network, a link is placed between any pair of offenders connected to the same investigation; that is, two offenders have a co-offending relationship if they are both associated with the same CIRN. If two offenders shared more than one investigation, multiple edges were placed between them. This same network was used in a previous study of the measurement of triadic closure in bipartite co-offending networks (Nieto et al. 2022).

Of the individuals included in our network, 92,376 (34%) were co-offenders (i.e., had at least one link to another offender), with solo offenders (182,313) accounting for the remainder (66%). The network had 32,348 components with two or more offenders. Of all investigations, 38% included a crime against private property, 27% a crime against people's physical integrity (e.g., assault), and 9% a crime against public safety (e.g., arms trafficking). On average, each offender was connected to 1.8 investigations, and each investigation included, on average, 2.5 offenders.

The largest component observed at the end of the study period included 4286 individuals (component 'A'), followed by two others with 227 ('B') and 211 ('C') offenders. The proportion of offenders in these components is small relative to the total number of offenders in the network—less than 1%, as expected given the low network density. Still, they constitute a substantial group when placed in a broader context: the offenders in these components are equivalent to 44% of Bogotá's prison capacity or, given the prison overcrowding in this city, 30% of the actual prison population as of December 2018 (INPEC 2021). Based on their significance in terms of the number of offenders, we decided to study the growth of these components.

Table 1 presents some descriptive statistics of these components, and they are plotted in Figs. 2, 3, and 4. Each plot also includes a histogram showing the degree distribution and some descriptive statistics: the average degree centrality, diameter, density and clustering coefficient.

Table 1 Descriptive network statistics for components 'A', 'B' and 'C': number of nodes, number of edges, number of investigations in the underlying bipartite structure, the mean number of offenders per investigation, the mean number of investigations per offender, and the proportion of investigations linked to specific crime types

Several similarities and differences can be seen in the structure of the three components.
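Before turning to those comparisons, the construction step described above can be sketched in code. This is a minimal sketch assuming pandas and networkx; the records and column names are hypothetical, not drawn from the AGO data:

```python
import itertools
import networkx as nx
import pandas as pd

# Hypothetical records: one row per offender-investigation association.
records = pd.DataFrame({
    "nin":  ["p1", "p2", "p3", "p1", "p2"],   # encrypted offender IDs
    "cirn": ["c1", "c1", "c1", "c2", "c2"],   # investigation IDs
})

# Bipartite network: offenders on one side, investigations on the other.
B = nx.Graph()
B.add_nodes_from(records["nin"], bipartite="offender")
B.add_nodes_from(records["cirn"], bipartite="investigation")
B.add_edges_from(records.itertuples(index=False, name=None))

# One-mode projection: a multi-edge between every pair of offenders
# associated with the same investigation.
P = nx.MultiGraph()
P.add_nodes_from(records["nin"])
for cirn, group in records.groupby("cirn"):
    for u, v in itertools.combinations(group["nin"], 2):
        P.add_edge(u, v, cirn=cirn)

print(P.number_of_edges("p1", "p2"))  # 2: they share investigations c1 and c2
```

A real analysis would also retain the investigation timestamps so that edge formations can later be ordered in time.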
Despite the difference in the number of nodes and investigations in the underlying bipartite structure, 'A' and 'B' had a relatively similar mean number of offenders per investigation (1.89 and 2.3, respectively), though the average degree in the latter (10.8) was slightly higher than in the former (8.5). However, it is notable that 'B' contains some densely connected clusters of nodes, which are likely to be partly a by-product of certain investigations with large numbers of participants. This is even more apparent in 'C', in which two particularly large clusters can be seen: again, this indicates the presence of large offending groups. This component had fewer investigations (128), a higher mean number of offenders per investigation (3.1), and a higher average degree (53). On the whole, however, 'A', 'B', and 'C' are relatively sparse networks and exhibit some transitivity in their connections (the clustering coefficients ranged between 0.77 and 0.95). Regarding the proportion of investigations linked to specific crime types, almost half of the investigations in these components were related to crimes against private property. Offences against public safety (e.g., arms trafficking) and public administration (e.g., obstruction of justice) were also present in nearly one-quarter of the investigations in each component (see footnote 2).

We did not select these components as representative of the complete network; we chose them based on their size, as co-offending networks in their own right. Nevertheless, there was a resemblance between the entire network and the three components studied here: all had a considerable proportion of criminal investigations related to crimes against private property, and there was little variation in the average number of offenders per investigation and the average number of investigations per offender.

To get an initial understanding of the components' evolution, we partitioned the dataset into landmark windows that encompass all data between a fixed start-point and a sliding end-point (Cordeiro et al. 2018; Gehrke et al. 2001). Figure 5 shows a graphical representation of this approach, with the total number of offenders in the full network at each window. For the three components we studied, Table 2a–c shows the number of offenders, components, incoming nodes, and investigations observed in each sliding window. These components resulted from the coalescence of smaller clusters, which merged into a large connected component as new links formed 'bridges' between smaller fragments. The pace and proportion of incoming nodes varied between components (NB: the term 'incoming nodes' in these tables refers to new nodes joining the components; it excludes existing offenders creating new co-offending relationships).

Fig. 5 A landmark-window approach to partitioning the dataset. The y-axis displays the sliding windows, starting in 2005 (the landmark) at the top. The black bars represent the number of co-offenders (given in brackets), and the grey bars represent the total number of offenders per window. Between 2005 and 2018, there were 274,689 offenders, of whom 92,376 were involved in at least one co-offence
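A landmark-window partition of this kind can be sketched as follows; this is a minimal sketch in which the record values, column names and year boundaries are illustrative rather than taken from the study:

```python
import pandas as pd

# Hypothetical timestamped records of offender-investigation associations.
records = pd.DataFrame({
    "nin":  ["p1", "p2", "p1", "p3"],
    "cirn": ["c1", "c1", "c2", "c2"],
    "date": pd.to_datetime(["2005-03-01", "2005-03-01",
                            "2009-07-15", "2017-11-30"]),
})

LANDMARK = pd.Timestamp("2005-01-01")  # fixed start-point

# Each window runs from the landmark to a sliding end-point (end of year),
# so each window contains all earlier windows.
windows = {
    year: records[(records["date"] >= LANDMARK)
                  & (records["date"] < pd.Timestamp(f"{year + 1}-01-01"))]
    for year in range(2005, 2019)
}

for year in (2005, 2009, 2018):
    print(year, windows[year]["nin"].nunique(), "offenders")
```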
Table 2 The number of offenders, components, incoming nodes, and investigations per window for each of the three networks considered here: (a) network 1, (b) network 2, and (c) network 3

Analytical framework

For the main part of our analysis, we examined the formation of individual links in the co-offending networks to gain insight into the mechanisms via which offenders select accomplices. We did this by employing a discrete choice approach first proposed by Opsahl and Hogan (2011) and similar to that used in subsequent work by Overgoor et al. (2019), Feinberg et al. (2020), and Overgoor et al. (2020). The model is an example of the more general discrete choice framework, which seeks to describe or predict the choices made by individuals from a discrete set of alternatives (McFadden 1981). A random utility approach assumes that rational actors make choices by considering the attributes of their options and choosing the one that maximises their utility (McFadden 1974). These models have been used extensively to explain decisions such as which college people attend, how they travel, or whether they enter the workforce (e.g., Train 2009; Ben-Akiva et al. 1985; Simonson and Tversky 1992).

In the discrete choice approach, the growth of the network is viewed as a sequence of link formations ordered in time. The key principle is to consider the formation of each link to be the outcome of a choice process, whereby one node has selected another to connect with from the set of all available nodes. By comparing the characteristics of the selected node to those of the nodes that were not selected, it is possible to infer which characteristics are favoured (or otherwise) when forming new links. In this context, where each association created represents a co-offence, any insights into the influence of particular network features can be interpreted in terms of the mechanisms driving accomplice selection. This approach captures two essential processes in co-offending: the creation of new criminal relationships, and the reinforcement or reciprocation of existing connections.

Formally, the model considers the formation of each edge in the network in sequence. For an edge (i, j) created at time t, it is assumed that node i could have chosen to form a tie with any node already present in the network at time t. This set of possible nodes, denoted \(A_t\), constitutes the choice set in discrete choice terminology. It is assumed that each node in this choice set has an associated utility, which is a function of its attributes and represents its quality as a potential connection. In the framework used here, this utility is assumed to take a linear form: if the attributes of each node k at time t are represented by a vector \(Z_{k,t}\), then k's utility as a potential connection is a linear function of \(Z_{k,t}\). Under some assumptions about the random components of the utility (McFadden 1974), it can be shown that the probability that j is chosen by i is given by

$$\begin{aligned} P\left\{ J_t = j \mid Z_t \right\} = \frac{\exp \left( \beta ' Z_{j,t} \right) }{\sum _{k \in A_t} \exp \left( \beta ' Z_{k,t} \right) } \end{aligned}$$ (1)

where \(\beta\) is a vector of coefficients. These can then be estimated via maximum likelihood (Hosmer et al. 2013).
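As an illustration of this estimation step, the sketch below implements the log-likelihood of Eq. 1 directly and maximises it numerically. It is a minimal sketch with toy data of our own; a real analysis would have one choice situation per edge formation and far larger choice sets:

```python
import numpy as np
from scipy.optimize import minimize

# Each choice situation: a feature matrix Z with one row per candidate
# node in the choice set A_t, plus the index of the candidate chosen.
# Toy data (ours): two situations, three candidates, two features each.
situations = [
    (np.array([[2.0, 1.0], [0.0, 0.0], [1.0, 1.0]]), 0),
    (np.array([[1.0, 0.0], [3.0, 1.0], [0.0, 0.0]]), 2),
]

def neg_log_likelihood(beta):
    """Conditional logit: -sum over situations of log P(chosen | Z)."""
    ll = 0.0
    for Z, chosen in situations:
        u = Z @ beta              # linear utility of each candidate
        u = u - u.max()           # shift for numerical stability
        ll += u[chosen] - np.log(np.exp(u).sum())
    return -ll

fit = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
print(fit.x)  # maximum-likelihood estimate of the coefficient vector
```

Exponentiating a fitted coefficient gives the multiplicative effect of a one-unit increase in the corresponding feature on a candidate's odds of being chosen, which is how the results are reported below.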
Node attributes

The attributes incorporated in the model represent nodes' features; these can be either individual-level characteristics (e.g., age, sex) or network-related properties (e.g., the number of existing connections). Our analysis focuses only on the latter, corresponding to the mechanisms we seek to test. We leave the integration of individual-level features as an avenue for future research, when such data become available.

Two variables were used to reflect nodes' popularity. A node's in-weight is the total number of links directed to it, i.e., the number of times that the offender has previously been chosen as an accomplice (including multiple times by the same individual). The in-degree of a node, in contrast, counts only the number of distinct nodes connected to it, however many links (i.e., crimes) each of those connections comprises. Reinforcement was coded as a binary variable: 1 if there was an existing link from i to j and 0 otherwise. Similarly, we operationalised reciprocity as a binary variable: it took the value 1 if there was an existing link from j to i and 0 otherwise. Triadic closure was measured as the number of accomplices shared by a pair of co-offenders. Note that these variables can evolve: if i forms multiple edges with j, for example, j's reinforcement value will be 0 the first time and 1 thereafter. For each choice, the values used for \(Z_{k,t}\) were those at the point t when the corresponding choice was made.

Applying this discrete choice framework to our data requires meeting the assumptions of the underlying model. Discrete choice models assume that individuals act rationally when making a choice, i.e., that they select the option that maximises their utility from those available. Based on the theoretical arguments reviewed in the previous section, this is justifiable in the context of accomplice selection. A common feature of accomplice selection theories is that offenders seek to maximise benefits and reduce costs when selecting accomplices (e.g., Cornish and Clarke 2002; Tremblay 1993; Weerman 2003). In doing so, they evaluate potential partners based on their perceived trustworthiness (to minimise the risk of betrayal) and the likelihood that the individual will maximise the expected benefits of the criminal venture. This evaluation implies judging the 'criminal capital' of potential accomplices: the skills, information, and contacts deemed beneficial for successfully executing a crime (McCarthy and Hagan 2001; McCarthy et al. 1998). Since there is a strong foundation for the notion that accomplice selection is a rational process, discrete choice models are suitable for studying the growth of co-offending networks.

We faced three technical challenges while completing this study. The first was computational. Each time a node makes a new connection, every other node currently in the component could be selected, creating a large and imbalanced choice set (i.e., one dominated by non-selected nodes). This makes the estimation of the models computationally expensive and can lead to biased estimates (Opsahl and Hogan 2011). It can be addressed via negative sampling: rather than including all non-chosen alternatives in the choice set, only a smaller, randomly selected sample—referred to by Opsahl and Hogan (2011) as 'control cases'—is included (Train 2009). As long as this sample is chosen randomly, parameter estimates can be shown to be unbiased and consistent with those derived from the complete set.
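Negative sampling of this kind reduces each choice situation to the chosen node plus a handful of randomly drawn control cases. A minimal sketch (the function and names are ours):

```python
import random

def sample_choice_set(chosen, candidates, n_controls, rng=random):
    """Return the chosen node plus a random sample of non-chosen
    'control cases', in place of the full choice set."""
    controls = [k for k in candidates if k != chosen]
    return [chosen] + rng.sample(controls, min(n_controls, len(controls)))

# Hypothetical example: 'j' was chosen from ~5,000 candidates; keep 30
# control cases, as used for component 'A' below.
candidates = [f"v{n}" for n in range(5000)] + ["j"]
print(len(sample_choice_set("j", candidates, n_controls=30)))  # 31
```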
There is no general rule for the appropriate number of control cases; instead, it can be established via sensitivity analysis, which we present in Appendix A. In our analysis, we used 30 control cases for 'A' and 10 for 'B' and 'C'.

The second challenge related to the nature of our data. The discrete choice framework outlined above assumes—in line with accomplice selection theory—that the links in the network are directional; that is, an edge (i, j) refers specifically to i choosing j, rather than vice versa. In our co-offending network, however, links were undirected: we knew that a pair of offenders participated in an offence, but not who instigated the link. To address this, we estimated the model described in Eq. 1 1,000 times, randomly assigning the direction of each link at each iteration. The logic of this approach was that, if findings were consistent across the iterations, then the underlying principles were not sensitive to the directionality of the edges.

A third challenge related to investigations comprising three or more offenders (see Table 1 for the mean number of offenders per investigation). Such investigations result in the simultaneous formation of multiple links; while in reality these links are formed simultaneously, the order in which they are listed in the dataset affects the statistical analysis. For each choice scenario, the attributes of candidate nodes were calculated based on the current state of the network, before the connection between i and j was created; these attributes might change as more links are added, since j can be selected more times or form connections with i's neighbours. To mitigate this, we shuffled the ordering of links associated with each investigation at each iteration of the analysis. Figure 6 illustrates the randomisation process that we followed. Again, the rationale was to remove any dependency of the findings on an arbitrary ordering of links. Overall, these randomisation procedures—of directionality and of the ordering of links—meant that the model in Eq. 1 ran for 1,000 realisations of each of our three components of interest. Given the computational demands, we used UCL's Cluster Computing Services to perform the analysis.

Fig. 6 Starting with an undirected component (left), the order of the nodes in the edge list and the directionality of the edges between pairs of nodes were randomly assigned in each of the 1,000 iterations. The model described in Eq. 1 was estimated in each iteration using the resulting random version of the edge list, for each of the three components considered here

Alternative approaches

Using discrete choice models to study how social networks grow differs from other approaches that seek to identify the mechanisms driving networks' evolution. In some methods, the structural properties of the network at a single point of observation are used to deduce the process by which it reached its current state (Overgoor et al. 2019); for example, the degree distribution is commonly used to infer whether a network has grown via preferential attachment. This approach has shortcomings, however, since different formation processes can lead to networks that are structurally indistinguishable (Mitzenmacher 2004). This can be avoided when the ordering of edge formations is known. Here we used the timestamps attached to each criminal investigation to formulate each instance of accomplice selection as a choice.

We also favoured the proposed approach over stochastic actor-oriented models (SAOM) (Snijders et al. 2010). SAOM use panel data (i.e.,
snapshots of a relatively stable group of nodes) to model network dynamics using computer simulations. Since there was considerable variation in the number of offenders observed in each temporal window (see Table 2a–c), we deemed the panel-data approach prescribed by SAOM unsuitable.

The proposed analytical framework also differs from dynamic network actor models for relational events (DyNAM) (Stadtfeld and Block 2017). The statistical models in DyNAM investigate coordination ties found in diverse social settings (e.g., scientific collaboration, international trade, and friendship). The idea of a mutual agreement between i and j to form a relationship is at DyNAM's core: the decision is two-sided, and the directionality of who chooses whom is irrelevant. This principle, however, contradicts how co-offending relationships are formed since, as outlined above, instigation is essential to understanding co-offending and gaining insights into crime prevention (Reiss 1988). Accordingly, the directionality of the relationships between offenders is vital when studying co-offending, limiting the applicability of DyNAM.

Results

We jointly tested popularity, reciprocity, reinforcement, and triadic closure to understand the evolution of three components of the co-offending network. To analyse their growth, we ran models with two different specifications: one with in-degree as a proxy for popularity and the other with in-weight. In each case, we iterated the estimation of our model 1,000 times, in line with the randomisation procedure outlined above. The results across all models are summarised in Table 3, with coefficients shown for each component studied and each measure of popularity (see Appendix B for a graphical representation of the results). This table also includes the minimum and maximum C-statistic observed for each model (also known as the 'concordance' statistic or C-index), which is a measure of goodness of fit. According to the classification proposed by Hosmer et al. (2013), the models used to describe the largest network can be considered 'strong' models, as their C-statistics are above 0.8. The models used for the other two networks are 'good', since their C-statistics are above the 0.7 mark—especially so for 'C'. Accordingly, these models represent a good fit to the observed data. Since the results are similar for both measures of popularity (in-degree and in-weight), we use the values for in-degree to illustrate our findings in the remainder of this section.

The results reveal that the components evolved in different ways. Reciprocity, reinforcement, and triadic closure yielded positive, statistically significant coefficients for 'A', while in-degree yielded negative ones. Accordingly, the odds of an offender connecting with a former recruiter (reciprocity) were 44 times higher than those of connecting with an offender who had not previously selected them (median coefficient \({\tilde{x}} = 3.78\); exp(3.78) = 43.8). Similarly, the odds of observing an offender co-offending with a former associate (reinforcement) were 13 times higher (\({\tilde{x}} = 2.6\)) than those of co-offending with an offender with whom they had no previous connection. Likewise, the odds of co-offending with someone with whom an incoming offender had a mutual associate were four times higher for each additional accomplice they shared (triadic closure, \({\tilde{x}} = 1.46\)).

Table 3 The median coefficient observed in the 1,000 simulations and the minimum and maximum values of the C-statistic

Component 'B' displayed a different behaviour.
Triadic closure was the only mechanism that yielded statistically significant coefficients for 'B': the odds of an offender committing a new crime with someone with whom they had a mutual associate were roughly four times higher for each additional accomplice they shared (\({\tilde{x}} = 1.33\)).

'C' was the only component that yielded negative, statistically significant coefficients for reciprocity and reinforcement. Former recruiters were 21 times less likely to be selected by a former associate (\({\tilde{x}} = -3.03\); exp(−3.03) = 0.048; 1/0.048 = 20.83). Likewise, incoming offenders were 16 times less likely to co-offend with someone they had previously selected (\({\tilde{x}} = -2.79\); exp(−2.79) = 0.061; 1/0.061 = 16.4).

These results challenge the importance attributed to popular offenders in explaining how co-offending networks evolve (Sarnecki 2001; Englefield and Ariel 2017; Malm and Bichler 2011; Bichler and Malm 2018). Based on prior findings, we would have expected positive, statistically significant coefficients for both proxies of popularity. Instead, popularity yielded negative, statistically significant results only for the largest component, suggesting that having multiple connections reduced the odds of an individual being selected for a subsequent collaboration. The increased visibility experienced by popular offenders could explain this outcome: as mentioned, those who have been subject to multiple criminal investigations are naturally more visible to law enforcement and have a track record of being 'caught'. Potential recruiters may therefore view them as a risky prospect from the perspective of co-offending. In turn, these results point to the limited predictive power of popularity in forecasting how co-offending networks might evolve.

It is worth noting that this limited predictive power could not have been identified by considering only the degree distribution of a network snapshot (see, for example, Fig. 2). This distribution shows that a large number of offenders had few links but that a small proportion had a large number of accomplices. On its own, this might suggest that some form of preferential attachment played a role in the network's evolution. However, by continuously observing its growth using the analytical strategy adopted here, we showed that, despite the skewed distribution, popular offenders had only a marginal role in explaining the evolution of the components considered here. Note also that our data relate to 'failed' (i.e., detected) co-offending relationships; popular offenders who have not been arrested or prosecuted could still have prominent roles in expanding these components or linking unconnected components in the observed network. This is an inherent limitation of studies relying on official records to study crime, and we discuss it further in the final section.

Based on the results reported elsewhere about co-offending relationships being unstable (Weerman 2003, 2014; Warr 2002, 1996; Carrington 2002; McGloin and Thomas 2016; McGloin and Piquero 2010; van Mastrigt 2017), we would have expected results similar to those for 'C' in all cases. However, previous interactions increased the odds of former associates executing new crimes together in 'A'. Moreover, neither reciprocity nor reinforcement yielded statistically significant coefficients for 'B'. These outcomes suggest that in some networks, offenders are likely to re-offend with known associates. (The odds figures quoted in this section follow directly from the fitted coefficients, as the short check below illustrates.)
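The conversions between median coefficients and odds can be verified directly; the small rounding differences arise because the quoted ratios use rounded intermediate odds:

```python
import math

# Median coefficients quoted above for components 'A' and 'C'.
coefficients = {
    "reciprocity ('A')": 3.78,
    "reinforcement ('A')": 2.60,
    "triadic closure ('A')": 1.46,
    "reciprocity ('C')": -3.03,
    "reinforcement ('C')": -2.79,
}

for label, beta in coefficients.items():
    odds = math.exp(beta)  # multiplicative effect on the odds
    if beta >= 0:
        print(f"{label}: exp({beta}) = {odds:.1f} times higher")
    else:
        print(f"{label}: exp({beta}) = {odds:.3f}, "
              f"i.e. {1 / odds:.1f} times lower")
```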
From a rational perspective, re-offending with the same accomplice might reduce the costs linked to the search for new accomplices. Likewise, previous interactions might create and increase trust between pairs of offenders. While we cannot test these explanations here, future research could combine the analysis conducted here with the approach proposed by Charette and Papachristos (2017) to understand the factors that might explain why some co-offenders decide to stick together.

The mixed results for reciprocity indicate that this mechanism might operate in contrasting ways. On the one hand, it can bring offenders together, giving rise to interactions of the sort 'offender A selects offender B', followed by 'offender B selects offender A', as seen in component 'A'. This sequence of events aligns with our earlier comments about how offenders alternate between the roles of 'recruiter' and 'follower'. Alternatively, recruitment can act as a 'repellent' between known associates, as seen in 'C'. Again, this effect could be explained by the transient nature of individuals' roles in co-offending relationships. Once an individual has been instigated into a crime, they can become embedded in a criminogenic network of potential accomplices. As part of this network, the person can change roles based on the criminal expertise they acquire. Criminal expertise can help reduce the inherent risks of co-offending, as people might feel less uncertain when committing a crime with a seasoned offender (McGloin and Nguyen 2012).

In interpreting our results with respect to reciprocity and reinforcement, however, it is important to mention some caveats. The first is that our analysis is based on information about 'failed' co-offending relationships, i.e., those detected by law enforcement. This might explain why recruitment acts as a repellent: followers could be more inclined to seek new accomplices and avoid former recruiters on the basis of their unsuccessful ventures. Accordingly, followers might look for seasoned criminals who could reduce detection risks. The second caveat is analytical. Because our analytical procedure involved the randomisation of link directions, we cannot confidently discriminate between reinforcement and reciprocity: we do not know the recruiters in each case. However, the persistence of the findings across our 1,000 iterations suggests that they are not spurious. In addition, the results for the two mechanisms mirror each other—the direction and significance of the effects are the same in all models—indicating that they operate (or not) in tandem.

Triadic closure plays a consistent role in explaining the emergence of co-offending relationships across all components. This result suggests that former accomplices might be essential in procuring potential associates. It also supports the importance of the information circulating in the 'grapevine system' (McCarthy et al. 1998; Thrasher 1963), which facilitates finding partners and, ultimately, the execution of a crime (Tremblay 1993).

The analytical strategy employed here shows how multiple mechanisms can be considered when the data at hand allow researchers to observe how new connections are created in an ordered sequence. According to our results, the evolution of co-offending networks, as of other social networks, could be partly explained by the interaction of multiple mechanisms (Hedström and Swedberg 1998).
Specifically, popularity was found either to be unattractive or to play no role at all, while there were mixed outcomes for reciprocity and reinforcement. Triadic closure, on the other hand, showed consistently positive results. Moreover, the results indicate that the models used provided either a 'strong' (for the largest component considered here) or 'good' (for the other two) fit to the data.

Using a discrete choice approach to study the evolution of networks offers an alternative to previous techniques that relied on aggregated information (e.g., the degree distribution) to examine how co-offending networks grow; a static network analysis might mask essential drivers of their growth. Despite the limited number of networks considered here, this paper contributes to the scarce literature that has included a temporal dimension in the analysis of criminal networks (Bright and Delaney 2013; Charette and Papachristos 2017; Bright et al. 2019). We believe this work provides a basis for future analyses of similar covert networks to grasp the mechanics of their evolution.

This analysis included four network-growth mechanisms but, as explained by Overgoor et al. (2019), the integration of discrete choice models and network evolution is flexible enough to accommodate more, and more complex, mechanisms and to integrate information such as node-level characteristics (e.g., age, sex, or prior history in the criminal justice system). Future research could incorporate precise information about who selected whom and use node-level information to verify findings about recruiters' characteristics. For example, van Mastrigt and Farrington (2011) reported that recruiters tend to be older than followers in juvenile co-offending relationships; however, there are no reports about recruiters' traits in adult co-offending. Future research could also include geographical information (e.g., place of residence and where offenders committed their crimes) to gain more insight into how adult co-offenders select their accomplices. Including such information would be especially useful for mechanisms, such as triadic closure, with a geographical component as an underlying explanation (i.e., social foci/offender convergence settings). Incorporating more information when analysing the evolution of co-offending networks will help us better understand crime's aetiology and how to prevent the emergence of new co-offending relationships.

Apart from including node-level and geographical information, future research could examine co-offending networks through a multilayer approach. Multilayer networks consist of a fixed set of nodes connected by several different types of connection, represented by multiple layers (Newman 2018), and have been studied across a wide range of contexts (Kivelä et al. 2014). However, studying criminal networks as multilayer networks is rare and has largely been limited to organised crime research (e.g., Ficara et al. 2021, 2022). In the present context, co-offending networks could be studied using a multilayer approach by using different layers to represent specific crime types or time frames. Disaggregation by crime type has particular potential in this regard (see the sketch below): it could be used to examine whether co-offending patterns differ across crime types, or whether individuals tend to repeatedly collaborate on particular types of crime (i.e., specialise).
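A crime-type disaggregation of this kind might look as follows; this is a minimal sketch assuming networkx, with hypothetical records and layer labels:

```python
import networkx as nx

# Hypothetical co-offending records tagged with a crime type.
records = [("p1", "p2", "property"), ("p1", "p2", "property"),
           ("p2", "p3", "arms"), ("p1", "p3", "property")]

# One layer per crime type, over a fixed set of nodes.
nodes = {u for u, v, _ in records} | {v for u, v, _ in records}
layers = {}
for u, v, crime_type in records:
    layer = layers.setdefault(crime_type, nx.MultiGraph())
    layer.add_nodes_from(nodes)
    layer.add_edge(u, v)

# Repeated collaboration on one crime type ('specialisation') shows up
# as multi-edges within a single layer.
print(layers["property"].number_of_edges("p1", "p2"))  # 2
```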
Comparing and contrasting the layers in such networks can shed light on co-offenders' behaviours, which is an opportunity to refine this work.

Our analysis is subject to some limitations, primarily relating to data availability. Although our underlying model framed accomplice selection as a directional process, with relationships initiated by offenders acting as recruiters, the data used here did not capture this trait. Future research could incorporate precise information about directionality once it becomes available. Furthermore, we did not have access to the individual-level attributes of offenders (e.g., age, sex, ethnicity), meaning that our analysis focused exclusively on the role of prior co-offending relationships. While this addresses several theoretical mechanisms, it is clear that individual-level features will also play a role in determining 'criminal capital' and therefore in influencing accomplice selection (Robins 2009).

Information on the incarceration or death of individuals was also missing from our dataset. Both would have implications for our analysis, since they would mean that such individuals are not available for selection by others (i.e., they should be excluded from the choice set). In particular, this constitutes a caveat to our results concerning popularity, since popular (i.e., prolific) offenders are more likely to be unavailable. The issue is less likely to be problematic for reinforcement and reciprocity, since any incarceration due to a prior offence is likely to affect both partners simultaneously. It is also important to note that the fact that an offender was subject to an investigation did not preclude them from making new connections: while under investigation or on trial, they may still commit crimes.

More generally, since our study is based on law enforcement data, it suffers from the inherent limitations of official records used to study criminal networks (Campana and Varese 2020). Most notably, attrition at various stages of the criminal justice system means that officially recorded crime represents only a subset of all crime that takes place, the remainder constituting the 'dark figure'. Victims may fail to report crimes they suffer, or law enforcement agencies may overlook crimes once victims come forward (Carrington 2014; Campana and Varese 2020). Furthermore, prosecutors might fail to identify any or all of those involved in a criminal event, resulting in a closed investigation or missing connections between offenders (i.e., co-offending networks with missing links). This issue is common to all studies of crime which rely on official records; however, such records are the only viable source of data concerning co-offending at a large scale and are used as the basis for almost all research on the topic.

Some decisions were taken to minimise the impact of these data issues. We included information about all ongoing and closed investigations over a relatively long period (14 years). Moreover, our data resemble two sources of information commonly used to study co-offending—arrest records and court files—which are typically used separately and rarely combined. The dataset resembled arrest records in that every person arrested in Colombia must be linked to a criminal investigation, so ongoing investigations play that role; it also resembled court records in that it included information about closed cases with a guilty verdict and those in which the offenders pleaded guilty.
Furthermore, we included information on all possible crimes, capturing the organisational practices of the AGO as a whole rather than those of a particular task force. Nevertheless, we must bear these limitations in mind when interpreting our findings. It is possible, for example, that under-recording means that the popularity of some offenders was underestimated, in which case the role they played in network formation is not captured. Furthermore, missing links may connect components in our network, meaning that its fragmentation is not as great as it appears. In simple terms, our findings may provide an incomplete picture of co-offending relationships. However, it should also be borne in mind that, from a practical point of view, findings relating to officially recorded offending are still of value, even if not wholly representative of the overall situation. Law enforcement agencies can only disrupt offending of which they are aware—if an offence never comes to their attention, it cannot be a target for prevention—and so, to some extent, recorded crime is a population of interest in itself.

Apart from the theoretical contributions that this sort of analysis might produce, temporal analyses of co-offending networks such as the one conducted here can provide new insights to law enforcement agencies by showing the different behaviours networks display as they evolve. Using the data collected about people arrested or sentenced, these agencies can apply the framework proposed here to study the growth of particular co-offending networks. Based on the results, they might be able to design interventions to prevent the expansion of these networks (Bright 2015; Cavallaro et al. 2020). If popularity is a strong predictor of how a co-offending network grows, law enforcement agencies should target hubs and seek to understand why these individuals attract new accomplices. If triadic closure explains the evolution of co-offending networks, crime prevention strategies should disrupt the underlying processes that allow offenders to converge in specific locations, for example; such interventions could also interfere with the information circulating in the 'grapevine system' to increase offenders' costs when searching for trustworthy accomplices. Recent findings about the effects of police-led interventions, or crackdowns (e.g., Smith 2021), should be integrated to predict the dynamics that co-offending networks would display after such interventions.

The unintended consequences of these interventions should also be considered (Diviák et al. 2022; Morselli et al. 2007). If the police target a popular offender whose notoriety derives from a leadership role within a criminal organisation, violence might increase as other members of the organisation fight to gain control. Removing a leader could also increase violence by signalling an opportunity for a rival group to attack the organisation whose leader has been removed (Braga et al. 2018; Felbab-Brown 2013). There are, in short, strategic and ethical considerations when intervening in a covert criminal network. Here, we have presented one way to improve understanding of the dynamics displayed by co-offending networks, which could support the decisions adopted by the relevant stakeholders.

The datasets used and/or analysed during the current study are available from the corresponding author upon reasonable request.

Footnote 1: In this article, we use the terms growth and evolution interchangeably.
Footnote 2: We relied on the classification used by Colombian Criminal Law to group the crimes linked to each investigation. This Law groups crimes into broader categories based on the civil or human rights each crime type is intended to protect.

References

Ben-Akiva M, Litinas N, Tsunokawa K (1985) Continuous spatial choice: the continuous logit model and distributions of trips and urban densities. Transp Res Part A 19(2):119–154. https://doi.org/10.1016/0191-2607(85)90022-6
Bichler G, Malm A (2018) Social network analysis. In: Wortley R, Sidebottom A, Tilley N, Laycock G (eds). Routledge
Braga AA, Weisburd D, Turchan B (2018) Focused deterrence strategies and crime control: an updated systematic review and meta-analysis of the empirical evidence. Criminol Public Policy 17(1):205–250
Brantingham PL, Ester M, Frank R, Glässer U, Tayebi MA (2011) Co-offending network mining. In: Counterterrorism and open source intelligence. Springer, pp 73–102
Bright D (2015) Disrupting and dismantling dark networks: lessons from social network analysis and law enforcement simulations. In: Gerdes LM (ed) Illuminating dark networks: the study of clandestine groups and organizations. Cambridge University Press, pp 39–51. https://doi.org/10.1017/CBO9781316212639.004
Bright D, Delaney JJ (2013) Evolution of a drug trafficking network: mapping changes in network structure and function across time. Global Crime 14(2–3):238–260
Bright D, Koskinen J, Malm A (2019) Illicit network dynamics: the formation and evolution of a drug trafficking network. J Quant Criminol 35(2):237–258
Bright D, Whelan C (2020) Organised crime and law enforcement: a network perspective. Routledge, London
Burt RS (2005) Brokerage and closure: an introduction to social capital. Oxford University Press, Oxford
Campana P, Varese F (2020) Studying organized crime networks: data sources, boundaries and the limits of structural measures. Soc Netw
Carrington PJ (2002) Group crime in Canada. Can J Criminol 44:277
Carrington PJ (2014) Co-offending. In: Bruinsma G, Weisburd D (eds) Encyclopedia of criminology and criminal justice. Springer, New York, pp 548–558. https://doi.org/10.1007/978-1-4614-5690-2_108
Cavallaro L, Ficara A, De Meo P, Fiumara G, Catanese S, Bagdasar O, Liotta A (2020) Disrupting resilient criminal networks through data analysis: the case of Sicilian Mafia. PLoS One 15(8):e0236476
Charette Y, Papachristos AV (2017) The network dynamics of co-offending careers. Soc Netw 51:3–13. https://doi.org/10.1016/j.socnet.2016.12.005
Coleman JS (1988) Social capital in the creation of human capital. Am J Sociol 94:S95–S120. https://doi.org/10.1086/228943
Cordeiro M, Sarmento RP, Brazdil P, Gama J (2018) Evolving networks and social network analysis methods and techniques. Soc Media J: Trends Connect Implications 101(2)
Cornish DB, Clarke RV (2002) Crime as a rational choice: bridging the past to the future. In: Criminological theories, pp 77–96
Diviák T, van Nassau CS, Dijkstra JK, Snijders TA (2022) Dynamics and disruption: structural and individual changes in two Dutch jihadi networks after police interventions. Soc Netw 70:364–374
Easley D, Kleinberg J (2010) Networks, crowds, and markets, vol 8. Cambridge University Press, Cambridge
Englefield A, Ariel B (2017) Searching for influencing actors in co-offending networks: the recruiter. Int J Soc Sci Stud 5:24
Annu Rev Criminol 2:99–122 Feinberg F, Bruch E, Braun M, Falk BH, Fefferman N, Feit EM, et al (2020) Choices in networks: a research framework. Mark Lett 31(4):349–359 Felbab-Brown V (2013) Focused deterrence, selective targeting, drug trafficking and organised crime: concepts and practicalities. Selective targeting, drug trafficking and organised crime: concepts and practicalities Feld SL (1981) The focused organization of social ties. Am J Sociol 86(5):1015–1035 Felson M (2003) The process of co-offending. Crime Prev Stud 16:149–168 Ficara A, Fiumara G, Catanese S, De Meo P, Liu X (2022) The whole is greater than the sum of the parts: a multilayer approach on criminal networks. Future Internet 14(5):123 Ficara A, Fiumara G, Meo PD, Catanese S (2021) Multilayer network analysis: the identification of key actors in a Sicilian Mafia operation. In: International conference on future access enablers of ubiquitous and intelligent infrastructures, pp 120–134 Gambetta D (2011) Codes of the underworld. In: Codes of the underworld. Princeton University Press, Princeton Gehrke J, Korn F, Srivastava D (2001) On computing correlated aggregates over continual data streams. ACM SIGMOD Rec 30(2):13–24 Gouldner AW (1960) The norm of reciprocity: a preliminary statement. Am Soc Rev, pp 161–178 Granovetter MS (1973) The strength of weak ties. Am J Sociol 78(6):1360–1380 Grund T, Morselli C (2017) Overlapping crime: stability and specialization of co-offending relationships. Soc Netw 51:14–22 Grund TU, Densley JA (2015) Ethnic homophily and triad closure: mapping internal gang structure using exponential random graph models. J Contemp Crim Justice 31(3):354–370 Hedström P, Swedberg R (1998) Social mechanisms: an introductory essay. Social mechanisms: an analytical approach to social theory, pp 1–31 Hochstetler A (2014) Co-offending and offender decision-making. In: Bruinsma G, Weisburd D (eds) Encyclopedia of criminology and criminal justice. Springer New York, pp 570–581. https://doi.org/10.1007/978-1-4614-5690-2_111 Holland PW, Leinhardt S (1971) Transitivity in structural models of small groups. Comp Group Stud 2(2):107–124 Hosmer DW Jr, Lemeshow S, Sturdivant RX (2013) Applied logistic regression, vol 398. Wiley, London INPEC (2021) Registro de la población privada de la libertad. Retrieved 17-05-2021, from https://www.inpec.gov.co/registro-de-la-poblacion-privada-de-la-libertad Iwanski N, Frank R (2013) The evolution of a drug co-arrest network. In: Crime and networks. Routledge, pp 64–92 King G, Zeng L (2001) Logistic regression in rare events data. Polit Anal 9(2):137–163 Kivelä M, Arenas A, Barthelemy M, Gleeson JP, Moreno Y, Porter MA (2014) Multilayer networks. J Complex Netw 2(3):203–271. https://doi.org/10.1093/comnet/cnu016 Malm A, Bichler G (2011) Networks of collaborating criminals: assessing the structural vulnerability of drug markets. J Res Crime Delinq 48(2):271–297. https://doi.org/10.1177/0022427810391535 McCarthy B, Hagan J (2001) When crime pays: capital, competence, and criminal success. Soc Forces 79(3):1035–1060 McCarthy B, Hagan J, Cohen LE (1998) Uncertainty, cooperation, and crime: understanding the decision to co-offend. Soc Forces 77(1):155–184. https://doi.org/10.1093/sf/77.1.155 McFadden D (1974) Conditional logit analysis of qualitative choice behavior. In: Zarembka P (ed) Frontiers in econometrics. Academic Press, New York, pp 105–142 McFadden D (1981) Econometric models of probabilistic choice. 
Structural analysis of discrete data with econometric applications, 198272 McGloin JM, Nguyen H (2012) It was my idea: considering the instigation of co-offending. Criminology 50(2):463–494 McGloin JM, Piquero AR (2010) On the relationship between co-offending network redundancy and offending versatility. J Res Crime Delinq 47(1):63–90 McGloin JM, Sullivan CJ, Piquero AR, Bacon S (2008) Investigating the stability of co-offending and co-offenders among a sample of youthful offenders. Criminology 46(1):155–188 McGloin JM, Thomas KJ (2016) Incentives for collective deviance: group size and changes in perceived risk, cost, and reward. Criminology 54(3):459–486 Mitzenmacher M (2004) A brief history of generative models for power law and lognormal distributions. Internet Math 1(2):226–251 Molm LD (1997) Coercive power in social exchange. Cambridge University Press, Cambridge Morselli C (2009) Inside criminal networks, vol 8. Springer, Berlin Morselli C, Giguère C, Petit K (2007) The efficiency/security trade-off in criminal networks. Soc Netw 29(1):143–153. https://doi.org/10.1016/j.socnet.2006.05.001 Morselli C, Petit K (2007) Law-enforcement disruption of a drug importation network. Global Crime 8(2):109–130 Newman M (2018) Networks. Oxford University Press, Oxford Nieto A, Davies T, Borrion H (2022) "Offending with the accomplices of my accomplices'': evidence and implications regarding triadic closure in co-offending networks. Soc Netw 70:325–333 Opsahl T, Hogan B (2011) Modeling the evolution of continuously-observed networks: communication in a facebook-like community. arXiv preprint arXiv:1010.2141 Overgoor J, Benson AR, Ugander J (2019) Choosing to grow a graph: modeling network formation as discrete choice. The World Wide Web Conference Overgoor J, Pakapol Supaniratisai G, Ugander J (2020) Scaling choice models of relational social data. In: Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery and data mining, pp 1990–1998 Papachristos AV (2011) The coming of a networked criminology. Adv Criminol Theory 17:101–140 Plickert G, Côté RR, Wellman B (2007) It's not who you know, it's how you know them: who exchanges what with whom? Soc Netw 29(3):405–429 Reiss AJ (1986) Co-offending influences on criminal careers. Criminal careers and career criminals Reiss AJ (1988) Co-offending and criminal careers. Crime Justice 10:117–170. https://doi.org/10.1086/449145 Robins G (2009) Understanding individual behaviors within covert networks: the interplay of individual qualities, psychological predispositions, and network effects. Trends Organized Crime 12(2):166–187. https://doi.org/10.1007/s12117-008-9059-4 Sarnecki J (2001) Delinquent networks: youth co-offending in Stockholm. Cambridge University Press, Cambridge Simonsohn U, Simmons JP, Nelson LD (2020) Specification curve analysis. Nat Hum Behav 4(11):1208–1214 Simonson I, Tversky A (1992) Choice in context: tradeoff contrast and extremeness aversion. J Mark Res 29(3):281–295 Smith TB (2021) Gang crackdowns and offender centrality in a countywide co-offending network: a networked evaluation of operation triple beam. J Crim Just 73:101755 Snijders TA, Van de Bunt GG, Steglich CE (2010) Introduction to stochastic actor-based models for network dynamics. Soc Netw 32(1):44–60 Stadtfeld C, Block P (2017) Interactions, actors, and time: dynamic network actor models for relational events. Sociol Sci 4(14):318–352. https://doi.org/10.15195/v4.a14 Thrasher FM (1963) The gang: a study of 1313 gangs in Chicago. 
University of Chicago Press Train KE (2009) Discrete choice methods with simulation. Cambridge University Press, Cambridge Tremblay P (1993) Searching for suitable co-offenders. Routine activity and rational choice, vol. 5, pp 17–36 van Mastrigt SB (2017) Co-offending and co-offender selection. The Oxford handbook of offender decision making, 6, 338. https://doi.org/10.1093/oxfordhb/9780199338801.013.21 Van Mastrigt SB, Farrington DP (2011) Prevalence and characteristics of co-offending recruiters. Justice Q 28(2):325–359 Warr M (1996) Organization and instigation in delinquent groups. Criminology 34(1):11–37 Warr M (2002) Companions in crime: the social aspects of criminal conduct. Cambridge University Press, Cambridge Wasserman S, Faust K (1994) Social network analysis: methods and applications, vol 8. Cambridge University Press, Cambridge Weerman FM (2003) Co-offending as social exchange: explaining characteristics of co-offending. Br J Criminol 43(2):398–416. https://doi.org/10.1093/bjc/43.2.398 Weerman FM (2014) Theories of co-offending. Encyclopedia of criminology and criminal justice 5173–5184. https://doi.org/10.1007/978-1-4614-5690-2_110 Colfuturo (Colombia) and the University College London, through the UCL-Overseas Research Scholarship and the Dean's Prize-Faculty of Engineering, supported Alberto Nieto throughout the completion of this research article. Department of Security and Crime Science, University College London (UCL), 35 Tavistock Square, London, WC1H 9EZ, UK Alberto Nieto, Toby Davies & Hervé Borrion Alberto Nieto Toby Davies Hervé Borrion All the authors contributed equally to this article. The final manuscript was read and approved by all of them. Correspondence to Alberto Nieto. The authors declare that they have no competing interests Appendix A: Sensitivity analysis of control cases The regression coefficients of discrete choice models can be biased when the proportion of 1's (i connection with j) to 0's is small - i.e., a large number of potential accomplices exist, but incoming co-offenders select only one. King and Zeng (2001) suggested using some control cases to prevent introducing a bias, such that any additional case included would not significantly increase the model's significance or decrease coefficients' standard errors. We conducted two sensitivity analyses to determine the number of control cases: one for the network with 4286 offenders and one for the other two. Both analyses used in-degree as a proxy of popularity. We used 5, 10, 20, 30, 40, and 60 control cases for the largest network and 2, 4, 6, 8, 10, 12, 14, 16, 18, and 20 for the one with 227 offenders. Since the analytical strategy implied using a simulated version of the original network, we simulated the network 100 times for each number of control cases. The Wald \(\chi ^2\) statistic allowed us to assess the significance of the models. Figure 7 presents the mean value of this statistic in each round of iterations and for each number of control cases. Figure 8 shows the mean value of standard errors for each independent variable (i.e., the four growth mechanisms considered). Considering how the Wald (\(\chi ^2\)) statistic and the standard errors behaved for each number of control cases, we considered that 30 control cases were an appropriate number to strike a balance suggested by King and Zeng (2001). 
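As a concrete illustration of this procedure, the Python sketch below repeats the case-control sampling on simulated placeholder data (not the AGO networks): for each candidate number of control cases it fits a logistic regression several times and tracks the joint Wald statistic and the coefficients' standard errors. All names, sizes and parameter values here are illustrative assumptions.

```python
# Sketch of the Appendix A sensitivity analysis on simulated data.
# Four predictors stand in for in-degree, reinforcement, reciprocity
# and triadic closure; the "true" coefficients are arbitrary.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
TRUE_BETA = np.array([0.8, 0.4, 0.3, 0.5])

def simulate_choices(n_choosers=200, n_candidates=400):
    """Each chooser picks one candidate with probability softmax(X @ beta)."""
    X = rng.normal(size=(n_choosers, n_candidates, 4))
    chosen = np.empty(n_choosers, dtype=int)
    for i in range(n_choosers):
        u = np.exp(X[i] @ TRUE_BETA)
        chosen[i] = rng.choice(n_candidates, p=u / u.sum())
    return X, chosen

def fit_with_controls(X, chosen, n_controls):
    """Keep each chosen candidate (y=1) plus n_controls sampled 0-cases."""
    rows, ys = [], []
    for i, j in enumerate(chosen):
        pool = np.delete(np.arange(X.shape[1]), j)
        for k in rng.choice(pool, size=n_controls, replace=False):
            rows.append(X[i, k]); ys.append(0)
        rows.append(X[i, j]); ys.append(1)
    res = sm.Logit(np.array(ys), sm.add_constant(np.array(rows))).fit(disp=0)
    b, V = res.params[1:], res.cov_params()[1:, 1:]
    wald = float(b @ np.linalg.solve(V, b))  # joint Wald chi2 for the 4 slopes
    return wald, res.bse[1:]

X, chosen = simulate_choices()
for n_controls in (5, 10, 20, 30, 40, 60):
    runs = [fit_with_controls(X, chosen, n_controls) for _ in range(20)]
    mean_wald = np.mean([w for w, _ in runs])
    mean_se = np.mean([se for _, se in runs], axis=0)
    print(f"{n_controls:>2} controls: mean Wald = {mean_wald:8.1f}, "
          f"mean SEs = {np.round(mean_se, 3)}")
```

As in the paper's analysis, the point to watch is where the Wald statistic and the standard errors stop changing appreciably as more controls are added.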
Fig. 7 Mean value of the Wald statistic observed in each iteration, using multiple numbers of control cases, for the network with 4286 nodes

Fig. 8 Mean value of the standard errors yielded after 100 simulations for a in-degree, b reinforcement, c reciprocity, and d triadic closure. Network: 4286 nodes

Figures 9 and 10 present similar results for the network with 227 nodes. Based on these results, we decided to use ten control cases for this network. Since the number of nodes is roughly similar, we also used the same number of control cases when analysing the network with 211 offenders.

Fig. 9 Mean value of the Wald statistic observed in each iteration, using multiple numbers of control cases, for the network with 270 nodes

Fig. 10 Mean value of the standard errors yielded after 100 simulations for a in-degree, b reinforcement, c reciprocity, and d triadic closure. Network: 270 nodes

Appendix B: Graphical results of models
To summarise our results, we use an approach inspired by the 'specification curve analysis' method recently proposed by Simonsohn et al. (2020). This involves plotting the fitted coefficients from the 1000 models on a curve, ordered from lowest to highest, and marking each according to whether it is statistically significant. Viewing the results in this way shows the distribution of estimates across all possible realisations, which in this case correspond to choices of directionality and ordering. These can be summarised by reporting the median coefficient across all models and the proportion that are statistically significant (Figs. 11, 12, 13, 14, 15, 16).

Results of the model for the network with 4286 offenders jointly testing four growth mechanisms: a popularity (in-degree as a proxy), b reinforcement, c reciprocity, and d triadic closure

Results of the model for the network with 4286 offenders jointly testing four growth mechanisms: a popularity (in-strength as a proxy), b reinforcement, c reciprocity, and d triadic closure

Results of the model for the network with 227 offenders jointly testing four growth mechanisms: a popularity (in-degree as a proxy), b reinforcement, c reciprocity, and d triadic closure

Results of the model for the network with 227 offenders jointly testing four growth mechanisms: a popularity (in-strength as a proxy), b reinforcement, c reciprocity, and d triadic closure

Nieto, A., Davies, T. & Borrion, H. Examining the importance of existing relationships for co-offending: a temporal network analysis in Bogotá, Colombia (2005–2018). Appl Netw Sci 8, 4 (2023). https://doi.org/10.1007/s41109-023-00531-0

Keywords: Network growth; Co-offending networks; Discrete choice models; Network evolution
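To illustrate the specification-curve display described in Appendix B, the short Python sketch below sorts simulated coefficients and colours them by statistical significance; the coefficients and standard errors are placeholders, not estimates from the paper.

```python
# Minimal specification-curve display (after Simonsohn et al. 2020):
# sort one fitted coefficient per model run and mark significance.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
coefs = rng.normal(loc=0.6, scale=0.3, size=1000)  # one coefficient per run
ses = rng.uniform(0.1, 0.4, size=1000)             # matching standard errors
significant = np.abs(coefs / ses) > 1.96           # two-sided test, alpha=0.05

order = np.argsort(coefs)
x = np.arange(len(coefs))
plt.scatter(x, coefs[order],
            c=np.where(significant[order], "tab:blue", "lightgray"), s=4)
plt.axhline(0, color="black", linewidth=0.8)
plt.xlabel("Model runs (sorted by coefficient)")
plt.ylabel("Fitted coefficient")
plt.title(f"Median = {np.median(coefs):.2f}; "
          f"{significant.mean():.0%} of runs significant")
plt.show()
```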
February 2022, 42(2): 555-595. doi: 10.3934/dcds.2021128

Aubry-Mather theory for contact Hamiltonian systems II

Kaizhi Wang, Lin Wang and Jun Yan

School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
School of Mathematical Sciences, Fudan University, Shanghai 200433, China
* Corresponding author: Lin Wang

Received December 2020; Revised June 2021; Early access September 2021; Published February 2022

In this paper, we continue to develop Aubry-Mather and weak KAM theories for contact Hamiltonian systems $H(x,u,p)$ with certain dependence on the contact variable $u$. For the Lipschitz dependence case, we obtain some properties of the Mañé set. For the non-decreasing case, we provide some information on the Aubry set, such as the comparison property, the graph property, and a partially ordered relation for the collection of all projected Aubry sets with respect to backward weak KAM solutions. Moreover, we find a new flow-invariant set $\tilde{\mathcal{S}}_s$ consisting of strongly static orbits, which coincides with the Aubry set $\tilde{\mathcal{A}}$ in classical Hamiltonian systems. Nevertheless, a class of examples is constructed to show that $\tilde{\mathcal{S}}_s\subsetneqq\tilde{\mathcal{A}}$ can occur in the contact case. As applications, we find that new phenomena appear even if the strictly increasing dependence of $H$ on $u$ fails at only one point, and we show that, for the vanishing discount problem from the negative direction, there is a difference between the minimal viscosity solution and non-minimal ones.

Keywords: Aubry-Mather theory, weak KAM theory, contact Hamiltonian systems, Hamilton-Jacobi equations.

Mathematics Subject Classification: Primary: 37J50, 35F21; Secondary: 35D40.

Citation: Kaizhi Wang, Lin Wang, Jun Yan. Aubry-Mather theory for contact Hamiltonian systems II. Discrete & Continuous Dynamical Systems, 2022, 42 (2): 555-595. doi: 10.3934/dcds.2021128
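As orientation for readers new to this setting (standard background, not a quotation from the paper): the contact Hamiltonian system generated by $H(x,u,p)$ studied in this series is the system of ODEs

$$\dot{x} = \frac{\partial H}{\partial p}(x,u,p), \qquad \dot{p} = -\frac{\partial H}{\partial x}(x,u,p) - \frac{\partial H}{\partial u}(x,u,p)\,p, \qquad \dot{u} = \frac{\partial H}{\partial p}(x,u,p)\cdot p - H(x,u,p),$$

which reduces to the classical Hamiltonian system when $H$ is independent of the contact variable $u$.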
Units for Speed

When we talk about speed, we need to put it in context. If I told you I ran at a speed of $5$ yesterday, you don't yet know whether to be impressed or not. If I had run $5$ kilometres an hour, that's an OK walking speed, but it's not anywhere near a running speed. If I ran $5$ metres a second, that would be more impressive.

Road signs often don't have a unit given, but they are standardised across a country. In most countries the unit would be km/h, but in some countries, like the United States and Great Britain, it would be miles per hour instead.

Measurements of speed are a rate. A rate is a ratio between two measurements with different units. This means units of speed are actually two units put together: one unit of distance and one of time. Can you see why this would be? Think about the formula for speed. To calculate speed you need to use
$$\text{Speed} = \frac{\text{Distance}}{\text{Time}}$$
Notice how distance comes before time in the formula? Units of speed are written the same way, including the divide sign $/$. For common units of distance and time we often use the shorthand, so kilometres/hour becomes km/h and metres/second becomes m/s.

What measurement is appropriate?

When considering which measurement of speed is appropriate to use, you need to think about the likely distance that would be travelled and how much time that would probably take. You can measure speed in any combination of distance and time, but for numbers to really be useful to us we want them to be easy to compare.

Jet fighter speeds

For example, if I told you that a fighter jet travels $681736$ mm/s it sounds fast, but you would struggle to really understand what that means. You probably know that travelling at $110$ km/h is fast for a car, so if I tell you a fighter jet can travel at $2454.2496$ km/h, you get a much better idea of how fast that really is. Think of useful comparisons to help you choose appropriate units of speed. When choosing an appropriate unit of speed, you should also consider the size of the numbers.

Hot Dog Eating Competition

In July 2015, Matt Stonie ate $62$ hot dogs in $10$ minutes. Sounds like a lot, but $62$ hot dogs/$10$ minutes doesn't work as a unit of speed. You could think of it as $372$ hot dogs/hour, but Matt didn't keep eating for an hour, and this number is quite a large amount to think about. Or you could think of it as about $0.103$ hot dogs/second, but that is a very small number that doesn't mean much. When did you last think about how quickly you can eat $0.1$ of a hot dog? It's difficult to know if you would eat faster or slower than that. Instead you can think of this as $6.2$ hot dogs/minute. Now that is definitely faster than I could eat $6$ hot dogs!

Practice questions

1. What unit is most appropriate for measuring the speed of a person while running? (Options include: cm/min, cm/s.)
2. Sophia captured footage of a hawk diving $200$ metres in $10$ seconds. Using this as a comparison, what units would be most appropriate for measuring the speed of a feather falling to the ground? (Options include: cm/h, mm/h.)

Outcomes: NA6-1 Apply direct and inverse relationships with linear proportions; apply the relationships between units in the metric system, including the units for measuring different attributes and derived measures; apply numeric reasoning in solving problems; apply measurement in solving problems.
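To check that the two jet figures above describe the same speed, convert millimetres per second to kilometres per hour. Since $1$ mm $= 10^{-6}$ km and $1$ s $= \frac{1}{3600}$ h, a speed in mm/s is multiplied by $0.0036$ to give km/h:

$$681736\ \tfrac{\text{mm}}{\text{s}} = 681736 \times \frac{10^{-6}\ \text{km}}{\frac{1}{3600}\ \text{h}} = 681736 \times 0.0036\ \tfrac{\text{km}}{\text{h}} = 2454.2496\ \tfrac{\text{km}}{\text{h}}$$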
Assessment of asymptomatic Plasmodium spp. infection by detection of parasite DNA in residents of an extra-Amazonian region of Brazil

Filomena E. C. de Alencar, Rosely dos Santos Malafronte, Crispim Cerutti Junior, Lícia Natal Fernandes, Julyana Cerqueira Buery, Blima Fux, Helder Ricas Rezende, Ana Maria Ribeiro de Castro Duarte, Antonio Ralph Medeiros-Sousa and Angelica Espinosa Miranda

Received: 26 November 2017; Accepted: 8 March 2018

Background: The hypotheses put forward to explain the malaria transmission cycle in extra-Amazonian Brazil, an area of very low malaria incidence, are based on either a zoonotic scenario involving simian malaria, or a scenario in which asymptomatic carriers play an important role. Our objective was to determine the incidence of asymptomatic infection by detecting Plasmodium spp. DNA and its role in residual malaria transmission in a non-Amazonian region of Brazil.

Methods: Upon the report of the first malaria case in 2010 in the Atlantic Forest region of the state of Espírito Santo, inhabitants within a 2 km radius were invited to participate in a follow-up study. After providing signed informed consent forms, inhabitants filled out a questionnaire and gave blood samples for PCR and for thick and thin smears. Follow-up visits were performed every 3 months over a 21-month period, when new samples were collected and information was updated.

Results: Ninety-two individuals were initially included for follow-up. At the first collection, all of them were clearly asymptomatic. One individual was positive for Plasmodium vivax, one for Plasmodium malariae and one for both P. vivax and P. malariae, corresponding to a prevalence of 3.4% (2.3% for each species). During follow-up, four new PCR-positive cases (two for each species) were recorded, corresponding to an incidence of 2.5 infections per 100 person-years, or 1.25 infections per 100 person-years for each species. A mathematical transmission model was applied, using a low frequency of human carriers and the vector density in the region, calculated on the basis of previous studies in the same locality whose results were subjected to a linear regression. This analysis suggests that the transmission chain is unlikely to be based solely on human carriers, regardless of whether they are symptomatic or not.

Conclusions: The low incidence of cases and the low frequency of asymptomatic malaria carriers investigated make it unlikely that the transmission chain in the region is based solely on human hosts, as cases are isolated from one another by hundreds of kilometers and frequently by long periods of time, reinforcing instead the hypothesis of zoonotic transmission.

Keywords: Plasmodium malariae; asymptomatic carrier

Background

With an estimated 212 million new cases in 2015, leading to 429,000 deaths, malaria is an infectious disease with an enormous global economic and social impact [1]. Efforts to control the disease over the years have resulted in a significant reduction in its incidence and specific mortality. However, these achievements are threatened not only by the development of drug resistance in the protozoans [1, 2] and insecticide resistance in the mosquito vectors [3, 4] involved in the transmission chain, but also by the limited financial resources available to implement the required measures [1, 5]. Malaria control strategies target the various components of the transmission chain: the protozoans of the genus Plasmodium, the anopheline mosquito vectors, and the susceptible or infected humans.
However, this transmission chain has several complexities that can compromise the effectiveness of control measures. With the development of molecular biology techniques, the number of Plasmodium species known to be able to cause infections in humans has increased from four in the oldest references to six in the most recent, with the recent split of Plasmodium ovale into two different species [6–9]. One of these new species, Plasmodium knowlesi, was originally identified as the species that caused simian malaria in Southeast Asia, but is now known to also play an important role in the disease in humans living in the same region [10–12]. Like P. knowlesi, other species may act as parasites in both humans and non-human primates, forming a zoonotic transmission chain. This makes it necessary to develop more complex measures to either eliminate the disease or at least impede its spread, as the elimination of diseases with a zoonotic cycle might only be attainable at an unacceptably high cost to the environment.

Another factor that makes the malaria transmission chain more complex is the role played by asymptomatic carriers. These are known to exist ubiquitously, and their frequency varies according to the degree of endemicity [13–32]. They very often present with subpatent parasitaemia, which can only be detected by molecular methods, making identification of the infection difficult and interfering with efforts to eliminate the disease [15, 21, 23, 29, 30].

While more than 99% of malaria cases in Brazil are restricted to the Amazon region [32], residual cases have been reported in areas of the Atlantic Forest in various states in other regions [15–17, 20, 25, 32], where the characteristics of the transmission cycle are different. For example, outbreaks do not generally occur, the incidence and parasitaemia are very low, the clinical symptoms are mild, and the species responsible for infection are Plasmodium vivax and Plasmodium malariae [17]. However, the most distinctive characteristic of malaria in this particular region is the vector. Namely, the anophelines involved in transmission of the disease belong to the subgenus Kerteszia, which breeds in the axils of bromeliads and therefore has a very close relationship with the Atlantic Forest ecosystem [20, 33–37]. Simian malaria also occurs in these regions and is caused by parasites that are now known to belong to the same species that cause disease in humans, and to be transmitted by the same vectors [33, 35, 38–47]. While these characteristics would lend strong support to the hypothesis of zoonotic transmission, more conclusive evidence is required to corroborate it. Consideration should also be given to the alternative hypothesis, i.e. that the low-incidence malaria is being maintained by a contingent of unidentified asymptomatic carriers.

A previous study in the malaria transmission area in the state of Espírito Santo detected Plasmodium spp. DNA in the blood samples of 48 out of 1527 asymptomatic inhabitants of the region, corresponding to a prevalence of 3.1% [17]. In the present study, a cohort from an area in this endemic region was followed up longitudinally to confirm the previously established prevalence and determine the incidence and persistence of the carrier state.
Previously collected data on vector density, the demographics of the region, and the transmission parameters available from the literature were used to calculate the basic reproductive rate for new infections, and thus to determine whether the transmission chain in the region could be sustained by human hosts and vectors alone. To distinguish between the two hypotheses, the natural human recovery rate without treatment was considered, as well as a scenario in which all human Plasmodium carriers are asymptomatic.

Methods

The state of Espírito Santo is the Brazilian state with the most cases of residual malaria in the Atlantic Forest [17]. The disease occurs in the mountainous region of the state, which lies no more than 50 km on average from the coast. Between 17 and 68 cases were recorded every year between 2007 and 2015 [48] in an area covering approximately 5343 square km. The highest frequency of cases was recorded in 2008. After that point, the incidence decreased, and has since remained stable. During the study period, 32, 31, and 17 cases were recorded in 2010, 2011, and 2012, respectively. After the study, the figures were 35, 30, and 48 in 2013, 2014, and 2015, respectively [48]. A survey carried out in this area between 2001 and 2004 [17] identified P. vivax in 48 out of 51 symptomatic individuals by both microscopic examination of blood films and polymerase chain reaction (PCR). Plasmodium malariae was identified by PCR in one sample, while in two samples blood smear tests revealed parasites morphologically similar to P. vivax, but with a negative PCR result. The cases were detected in nine municipalities located between 19.6° and 20.6° south latitude and 40.6° and 41° west longitude [17] (Fig. 1). In the study area, the landscape is irregular, with a mean altitude of around 800 meters. Despite the tropical climate, lower temperatures of around 15 °C are registered from May to August because of the high altitude. Human dwellings are close to well-preserved tropical forest, with fauna consisting of birds, reptiles, and small mammals, as well as simians from the Cebidae and Atelidae families. In this study, 92 of the approximately 120 individuals living within a 2 km radius of the home of the first person diagnosed with malaria by the health authorities in 2010 were approached.

Fig. 1 Map showing the study area in Espírito Santo, Brazil

Quarterly assessments of inhabitants in the region where the first malaria case detected in 2010 occurred were scheduled, resulting in a total of eight assessments over 21 months. Before the first assessment, the participants signed a voluntary informed-consent form. During each assessment, a questionnaire was filled out covering demographic data and any instances since the previous collection of febrile illnesses, thick-smear tests for malaria, and trips outside the area where the participant lived. A total of 5 mL of blood was then collected in Vacutainer tubes containing EDTA, thick and thin peripheral blood smears were prepared, and abdominal palpation for splenic enlargement was performed. The blood collected was centrifuged at 300 g for 10 min in the Protozoology Laboratory at the Tropical Medicine Unit, Federal University of Espírito Santo, to separate the plasma from the red blood cells. The latter were then aliquoted and stored at −20 °C in Eppendorf tubes, which were sent to the Protozoology Laboratory at the São Paulo Institute of Tropical Medicine, where DNA was extracted and PCR was performed.
Thick and thin smears

Thick and thin blood smears were prepared following the method recommended by the World Health Organization [49]. The smears were examined under an optical microscope with a 100× objective lens. The results were based on examination of at least 200 microscopic fields.

Amplification of Plasmodium DNA

Genomic DNA was extracted from the samples with a NucleoSpin Tissue purification kit (Macherey-Nagel) following the manufacturer's instructions, and amplified following Win et al. [50]. Briefly, nested PCR, which consists of two rounds of amplification, was carried out with primers that target the Plasmodium 18S RNA gene subunit. The first step was carried out in a final volume of 20 μL containing 0.8 μL of each primer, P1UP and P2 (10 μM), 0.25 μL of dNTP mix (10 mM each, Thermo Fisher Scientific), 2 μL of 10× PCR buffer, 1 μL of MgCl2 (50 mM), 0.16 μL of Platinum Taq DNA polymerase (Invitrogen, 5 U/μL) and 5 μL of DNA. The amplification was run in an Applied Biosystems thermocycler with the following programme: 92 °C for 2 min, 35 cycles of 92 °C for 30 s and 60 °C for 90 s, and one cycle of 60 °C for 5 min. The product from the first step was diluted at a ratio of 1:50 in sterile water and used in the second step with the P1 (genus-specific) primer and one of the three reverse species-specific primers (V1 for P. vivax, F2 for Plasmodium falciparum, or M1 for P. malariae). This step was carried out in a final volume of 20 μL, containing 2 μL of each primer (10 μM), 0.5 μL of dNTP (10 mM each, Thermo Fisher Scientific), 2 μL of 10× PCR buffer, 0.16 μL of Taq DNA polymerase (5 U/μL), and 2 μL of the diluted final product from the first step. The second amplification was carried out using the same equipment as the first, and the cycling parameters were 92 °C for 2 min, followed by 18 cycles of 92 °C for 30 s and 60 °C for 1 min, and one cycle of 60 °C for 5 min. The amplified products of the second step correspond to species-specific fragments of about 100 bp. The primers used were:

P1UP (F): 5′ TCC ATT AAT CAA GAA CGA AAG TTA AG 3′
P2 (R): 5′ GAA CCC AAA GAC TTT GAT TTC TCA T 3′
P1 (F): 5′ ACG ATC AGA TAC CGT CGT AAT CTT 3′
V1 (R): 5′ CAA TCT AAG AAT AAA CTC CGA AGA GAA A 3′
F2 (R): 5′ CAA TCT AAA AGT CAC CTC GAA AGA TG 3′
M1 (R): 5′ GGA AGC TAT CTA AAA GAA ACA CTC ATA T 3′

The amplified product was run on 2% agarose gel at 80 V for 40 min. The gel was stained using ethidium bromide, and the bands were visualized under a UV transilluminator. DNA extracted from peripheral blood of P. vivax malaria patients treated at the São Paulo Superintendency for the Control of Endemic Diseases (SUCEN), from P. falciparum cultures, and from blood smears positive for P. malariae provided by the Centers for Disease Control and Prevention (CDC) was used as a control.

Application of the mathematical transmission model

Based on entomological and demographic information for the study area, and transmission parameters available in the literature, the basic reproductive rate (R0) for the mathematical malaria transmission model proposed by Anderson and May [51] was calculated. This is a deterministic model, and R0 is calculated using the following equation:

$$R_0 = \frac{\left(\frac{M}{N}\right) b^{2} T_{mh} T_{hm}}{\gamma \mu}\, e^{-\mu p}\, e^{-\gamma q},$$

where:
$R_0$ = basic reproductive rate,
$N$ = estimated population size,
$M$ = abundance of Anopheles cruzii, the vector in the region,
$\mu$ = mortality rate of An. cruzii,
$T_{mh}$ = probability of transmission of Plasmodium from An. cruzii to humans,
$T_{hm}$ = probability of transmission of Plasmodium from humans to An. cruzii,
$\gamma$ = human recovery rate,
$b$ = average An. cruzii daily biting rate,
$q$ = incubation period for P. vivax in humans,
$p$ = extrinsic incubation period for P. vivax.

The basic reproductive rate represents the number of secondary cases that will be generated from an infected individual. In theory, for deterministic transmission models, the number of infected individuals in the population is increasing when R0 > 1 and decreasing when R0 < 1. For endemic diseases, R0 tends toward an equilibrium value, i.e. close to 1 [51].

Calculation of sample size

The sample size was calculated using a value of 3.1% for the frequency of PCR-positive blood samples, the value found in the survey carried out between 2001 and 2004 [17]. It was assumed a priori that the carrier state lasts only a short amount of time, and that most individuals either develop the disease or recover spontaneously [13]. As a consequence, the annual incidence must be close to the frequency established in the earlier survey [17]. Therefore, considering an incidence of 3.1% and a 95% confidence interval, a sample of 90 patients would yield an interval estimate of 0–7% (Epi Info version 3.4.1). This sample size was calculated taking into consideration P. vivax as the main parasite in terms of public health importance in the region, although it might overestimate the incidence of P. malariae, as this parasite can persist for longer periods in the bloodstream of infected individuals.

Continuous quantitative variables were expressed as medians and interquartile ranges because of the high variability of the data, and categorical variables were expressed as absolute and relative frequencies. The data were analysed by means of SPSS 17.0.

Results

Characteristics of the cohort

In March 2010, after the first case of malaria in the study region was reported that year, 92 individuals living within a 2 km radius of the location where the case occurred were included in the study. They were assessed at 3-month intervals on eight occasions, giving a total of 21 months of follow-up. Because not all individuals attended all the assessments, the total follow-up was expressed in person-years. At the initial assessment, demographic data and information about participants' occupations and leisure activities, as well as any habits potentially connected with malaria transmission, were collected (Table 1). None of the individuals' thick or thin blood smears were positive for protozoans, and none of them presented any symptoms compatible with malaria at any of the assessments. Abdominal palpation for splenic enlargement was performed at every assessment, but failed to reveal any cases of enlarged spleen.

Table 1 Characteristics of the cohort of 92 individuals living in an area of the Atlantic Forest with residual malaria who were assessed at 3-month intervals between March 2010 and December 2011. Rows cover age (median 32 years, IQR 14.2–54.7), place of origin (same municipality; other municipality without malaria; other municipality with malaria), occupations classified in terms of their relationship with rural areas (non-agricultural and unrelated; non-agricultural but related), and incursion into the forest in the previous 2 months. IQR: interquartile range

The most common occupations were farmer (43 individuals, or 46.8%), student (22 individuals, or 23.9%), and housewife (10 individuals, or 10.9%).
The remaining occupations were self-employed (two), cook, maid (two), transportation supervisor, plasterer, timber worker, minor (three), member of the armed forces, driver, machine operator, agricultural producer, teacher, and electronics technician. Thirty-two individuals had lived in the area where the assessments were carried out for less than 5 years. Of these, eleven came from other areas in the same municipality, nine from malaria-free municipalities in the Atlantic Forest, five from other states, and two from other municipalities with malaria in the Atlantic Forest. No information was available for the remaining five. None of those who said they had come from other states were from the Amazon region, and none of the individuals had travelled to the Amazon region during or before the study period.

Three individuals reported having had malaria before, one in 2002 and two in 2004. The diagnosis was based on the result of a thick blood smear test, and the parasites were morphologically consistent with P. vivax. Thirteen individuals reported having had fever episodes in the previous 2 years. Of these, two said that the fever had lasted more than 3 days (15 and 7 days, respectively). The patient who reported a 7-day fever was the only individual to report having had a fever on two separate occasions, the second lasting 3 days. None of these individuals were investigated for malaria by the local health services when they had fever. Ten individuals reported having had fever episodes between the first and second assessments; of these, three said that the fever had lasted more than 4 days (6, 7 and 14 days). A thick blood smear for the individual who had had a fever for 14 days was negative when examined by local health staff. The numbers of individuals who reported having had a fever between subsequent assessments were two, three, four, six, three, and zero, respectively. Malaria was not diagnosed at any of the assessments, and the febrile episodes improved spontaneously. Three individuals reported prolonged febrile episodes: one between the fourth and fifth assessments, lasting 21 days; one between the fifth and sixth assessments, lasting 15 days; and one between the sixth and seventh assessments, lasting 30 days.

Prevalence and incidence of asymptomatic Plasmodium DNA carrier state

At the first assessment, one individual was positive for P. vivax, one for P. malariae and one for both P. vivax and P. malariae, corresponding to a prevalence of 3.4% in the population sample (2.3% for each species). During follow-up, four new PCR-positive cases (two for P. vivax and two for P. malariae) were recorded, corresponding to an incidence of 2.5 infections per 100 person-years, or 1.25 infections per 100 person-years for each species, in the population sample analysed (Table 2) (Additional file 1).

Table 2 Results of the 3-monthly assessments between March 2010 and December 2011 for the PCR-positive individuals among the 92 study participants living in an area of the Atlantic Forest with residual malaria (PM: P. malariae; PV: P. vivax; −: negative result; A: absent)

R0 was calculated using the transmission parameters for P. vivax, because it is the Plasmodium species most frequently associated with symptomatic cases in the study region (Table 3 and Fig. 2). These parameters yielded a value of R0 = 0.337, suggesting that, given the estimated vector density, human hosts alone would not be sufficient to maintain endemic malaria in this region.
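A short calculation makes the scale of this result concrete. In the Anderson and May expression, R0 is directly proportional to the vector abundance M when all other parameters are held fixed, so the published estimate pins down how much the vector population would have to grow for transmission to become self-sustaining (the dashed line in Fig. 2). The Python sketch below performs that calculation; the function name and layout are ours, while the numbers come from the text.

```python
import math

def r0(M, N, b, T_mh, T_hm, gamma, mu, p, q):
    """Anderson-May basic reproductive rate for vector-borne transmission."""
    return ((M / N) * b**2 * T_mh * T_hm) / (gamma * mu) \
        * math.exp(-mu * p) * math.exp(-gamma * q)

# R0 is linear in M with everything else fixed, so the published figures
# (R0 = 0.337 at M = 10,815 An. cruzii specimens/km^2, Table 3) determine
# the vector density at which R0 would reach the threshold of 1.
R0_observed = 0.337
M_observed = 10_815
M_threshold = M_observed / R0_observed
print(f"R0 = 1 would require about {M_threshold:,.0f} specimens/km^2, "
      f"a {1 / R0_observed:.1f}-fold increase over the observed abundance")
```

Running this gives a threshold of roughly 32,000 specimens/km², about three times the abundance estimated for the study area.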
Table 3 Parameters used to calculate R0 for the mathematical model (parameter: value; basis for the estimate)
$N$ (estimated population size): 15 inhabitants/km²; obtained by dividing the rural population in 2010 by the area of the municipality, based on Brazilian Institute of Geography and Statistics (IBGE) figures [52]
$M$ (abundance of An. cruzii): 10,815 specimens/km²; obtained from entomological studies carried out in the area [36, 37] and by applying linear regression following Zippin [53]
$\mu$ (mortality rate of An. cruzii): based on Bona and Navarro-Silva [54]
$T_{mh}$ and $T_{hm}$ (probabilities of transmission of Plasmodium from An. cruzii to humans and from humans to An. cruzii): based on Chitnis et al. [55] and Nedelman [56]
$\gamma$ (human recovery rate): 0.0055 (180 days); obtained from the observed frequency of asymptomatic individuals and the results reported by Chitnis et al. [55]
$b$ (average An. cruzii daily biting rate): based on Santos [57] and Laporta et al. [58]
$q$ (incubation period for P. vivax in humans) and $p$ (extrinsic incubation period for P. vivax): based on Chitnis et al. [55]

Fig. 2 Schematic representation of the basic reproductive rate (R0) for the study region estimated using the mathematical model proposed by Anderson and May [51] (filled diamond). Only factors related to the vector are taken into account. The solid line represents the threshold for R0 = 1. The dashed line indicates how much the vector population would have to increase to reach this threshold

Discussion

In this follow-up of a cohort of individuals living in an extra-Amazonian area of Brazil endemic for malaria, the initial prevalence of asymptomatic Plasmodium carriers was 3.4%. The incidence in the population sample studied was 2.5 infections per 100 person-years, or 1.25 infections per 100 person-years for each species (P. vivax and P. malariae). A prevalence of this magnitude is similar to that reported by Cerutti et al. [17] in 1527 asymptomatic individuals investigated in an earlier study in the same region of the Atlantic Forest in the state of Espírito Santo. Although the data are for a small population sample, the present study is noteworthy because it is the first to document the incidence of asymptomatic carriers in a longitudinal follow-up outside the Amazon region. In contrast, other studies of infected individuals in different areas of Brazil outside the Amazon region are based on cross-sectional or secondary data, and therefore do not allow the incidence to be calculated [15–17, 20, 25, 59–65].

Throughout the follow-up, only three asymptomatic Plasmodium carriers remained positive in more than three blood collections, and all those positive for Plasmodium turned negative spontaneously, with the exception of a single carrier who initially tested positive for P. vivax/P. malariae, but in subsequent collections only tested positive for P. malariae. All four individuals who were positive only after the first collection subsequently tested negative for Plasmodium DNA. Individual no. 33 remained positive for P. malariae DNA between the first collection and the end of the study period, with the exception of two collections in which the PCR was negative. The parasites that cause quartan malaria, such as P. malariae, appear to be the best adapted to their hosts, who can have a chronic infection for decades without symptoms [66]. These infections can go largely unnoticed, as the vast majority of symptomatic cases detected are caused by P. vivax [17].
Three of the individuals enrolled in the study reported having had malaria more than 5 years previously, when it was diagnosed based on the result of a thick blood smear that showed parasites morphologically consistent with P. vivax. One of these, individual no. 84, who reported a previous episode of malaria in 2004 and was treated in accordance with Ministry of Health recommendations, was negative in the first assessment in this study, and positive for P. malariae DNA in subsequent assessments. Although not probable, it is possible that the infection persisted in individual no. 84, producing gametocytes that could help to maintain the transmission cycle. Alves et al. [67] cited unpublished observations that asymptomatic individuals could remain infective, though with less capacity to infect mosquitoes than symptomatic ones. However, their limited capacity to infect would be compensated for by the long periods during which they shed gametocytes. Even considering such a possibility, the mathematical model applied in this study assumed a period of asymptomatic infection of 6 months, greater than the 2-month period observed by Alves et al., after which 40% of the asymptomatic individuals became negative [67]. In other words, had individual no. 84 not cleared his parasitaemia after treatment, remaining infected by P. vivax, the probability of his staying infected would have decreased as time passed, along with his capacity to remain infective. Even if his infection had persisted for 6 months, the mathematical model would still have been applicable and the prediction would have remained valid.

There is also the possibility that individual no. 84 was in fact infected by P. malariae instead of P. vivax when first diagnosed by the local health system. As infections by P. malariae can persist much longer than those by P. vivax [66], the DNA amplified in the present study could have come from the same parasite detected in his previous infection. However, as demonstrated by Cerutti et al. [17], infections by P. malariae are much less frequent than those by P. vivax in the Atlantic Forest system of Espírito Santo, and are therefore less important from an epidemiological point of view. Even such an unlikely scenario would not have interfered with the mathematical prediction of the present study, which, incidentally, was constructed taking P. vivax into consideration.

Of the thirteen individuals who presented with fever during the study period, none had splenomegaly, and only one, who had a prolonged episode of fever, was tested with a blood smear when seen by a local physician, with a negative result. In all the followed-up patients who had fever at some time, the condition resolved spontaneously without the use of antimalarial medication. Since none of the individuals migrated to areas outside the Atlantic Forest, particularly the Amazon region, at any time before or during the study, it is clear, despite the small number of individuals with samples positive for Plasmodium DNA and the small sample size, that the study area is indeed endemic for malaria, albeit at a very low level.
In light of the value of R0 (0.337) and the fact that, in theory, R0 < 1 indicates that the number of infected individuals is decreasing in the study population, it is reasonable to suggest that the persistence of endemic malaria in the study area cannot be explained only by the presence of asymptomatic infected individuals but would require an additional reservoir, such as non-human primates. This explanation would currently appear to be plausible as a monkey found in the wild in the same region was positive for P. vivax DNA [46]. In a study by Rezende et al. [36] using PCR, P. vivax DNA was amplified in samples of thoraces of anopheline specimens of the subgenus Nyssorhynchus (Anopheles parvus and Anopheles galvaoi) from the study area, raising the possibility that this subgenus might be involved in malaria transmission in the Atlantic Forest. However, the authors conclude that the low endemicity of malaria is in itself evidence that these species do not participate in the transmission of the disease, as the low parasitaemia would make infection of anophelines with limited vector capacity and competence improbable. The absence of more competent vectors, such as Anopheles darlingi, in the study area leads to the conclusion that if the subgenus Nyssorhynchus is involved in the transmission cycle, its role is a secondary one, i.e. that of an incidental vector. Since it is possible that monkeys could be acting as reservoirs, a mathematical model that can take this into account is required. Development of an accurate model would in turn require further studies of the monkeys in the region. Plasmodium knowlesi malaria has been shown to be a zoonosis. Initially believed to cause only simian malaria, P. knowlesi has been responsible for continued endemic human malaria in regions where there has been a clear decrease in infections by P. falciparum and P. vivax, and has come to represent an obstacle to malaria control efforts in regions of Southeast Asia, particularly Malaysia [11]. In this scenario, the arrival of humans in simian habitats and areas where plasmodia are transmitted can modify the normal transmission cycle, leading monkeys to congregate in residual patches of forest close to humans and to spend more time on the ground. Furthermore, they may change their behavior in their microhabitat, seeking out human settlements and attacking crops or food supplies near human dwellings [12]. Deane (1992) noted that infection by Plasmodium simium can occur in humans in Brazil, although only incidentally, and that An. cruzii is abundant in the Atlantic Forest region of the state of Espírito Santo [35]. The distribution of anophelines collected in the region in the entomological survey by Rezende et al. allowed the authors to infer that if An. cruzii is considered the probable vector in this region and prefers to remain in the tree canopy feeding on non-human primates, malaria is probably being transmitted as a zoonosis [36, 37]. The possibility that monkeys are acting as Plasmodium reservoirs in areas with residual malaria was also suggested by Duarte et al. in a study in which they investigated the prevalence of antibodies against circumsporozoite protein and asexual forms of P. vivax, P. malariae and P. falciparum in monkeys in the Atlantic Forest in the state of São Paulo [68]. Their findings suggested that monkeys in the region had contact with sporozoites from infected anophelines and developed infection. 
Additional evidence based on the mitochondrial genome of the parasites has recently been presented in the publications of Brasil et al. and Buery et al. [45, 46]. Kirchgatter et al., investigating the feeding preferences of An. cruzii females in Juquitiba in the state of São Paulo, argued against transmission by zoonosis. Specifically, they found only human blood in 26 engorged female mosquitoes and failed to find blood from monkeys, other mammals or animals of any other species in any of the specimens [69]. The mosquitoes were collected from a peridomestic environment because the chosen area consisted of rural human settlements that had undergone anthropogenic changes and were located at various distances from residual forest patches where monkeys are often seen. However, given that the An. cruzii specimens were collected only from the peridomestic environment, rather than also from the forest environment, the habit of mosquitoes of feeding on the closest available host has to be taken into account. Collections at the forest edge, the true habitat of the monkeys, might have shown that An. cruzii also feeds on these non-human primates. In a recent study in a Yanomami community in the Venezuelan Amazon, 12 of 33 samples PCR-positive for P. malariae had 18S gene sequences identical to those in a Plasmodium brasilianum strain isolated from an infected monkey in French Guiana. As P. brasilianum is morphologically similar and, apart from a handful of mutations, almost genetically identical to P. malariae, the authors speculate that they are the same species. They note the ease with which plasmodia can be exchanged between humans and monkeys, and warn that there is a lack of host specificity for quartan malaria caused by P. brasilianum, which they consider a true zoonosis [70]. Similarly, there is also evidence to support the hypothesis that P. simium is a variant of P. vivax, with minimal differences in a few molecular markers [45, 46, 71]. Although P. vivax and P. simium, as well as P. malariae and P. brasilianum, are known to be derived from each other, questions remain as to the direction in which the transfer took place, whether from humans to monkeys or vice versa [42]. The main limitation of the present study is the nature of the sample, which, although large enough to allow new cases to be identified, was too small to allow the detection of possible factors associated with the risk of becoming infected. As the sampling of the participants was a random process, despite being triggered by the occurrence of a symptomatic case, some infections could have gone unnoticed. However, the similarity between the prevalence found in this study and that evidenced by a much larger sample in the study of Cerutti et al. [17] indicates that the frequencies of positive results detected here are indicative of the magnitude of parasitic infections at a population level. On the other hand, parasite densities below the limit of detection of the semi-nested PCR could have been missed. The question of infection detectability, particularly in asymptomatic human patients, who commonly exhibit low parasitaemia, is crucial for the study of the transmission chain. Nevertheless, for P. vivax, the technique used is able to detect as few as 0.12 parasites/µl [72], ensuring a good level of confidence in the conclusions of this study. The application of a more modern molecular tool such as qPCR could have further enhanced the detection of infections, but unfortunately such a resource was not available at the time.
To calculate the incidence, individuals who were negative for Plasmodium DNA at the first collection and positive at one or more subsequent collections were counted only once. Furthermore, the infection was considered to be the same when an individual had a sample that was positive for the same species of Plasmodium in a subsequent assessment. In these circumstances, the fact that genotyping was not performed could be considered a limitation, as the possibility of a new infection cannot be ruled out. In spite of these limitations, the results presented here clearly indicate a need to improve our understanding of the transmission cycle in these low-endemicity areas. Specifically, genotypic studies involving human and simian hosts, as well as the vectors, have to be undertaken in order to fully elucidate the role each plays in the transmission cycle. Of particular note here is the finding based on our mathematical model, namely that the local transmission cycle cannot be maintained by human-mosquito-human interactions alone. The present study detected a low frequency of asymptomatic carriers of Plasmodium spp. in an area endemic for malaria in the Atlantic Forest in Brazil. These results complement previous observations of a low frequency of symptomatic infections [17] and a low density of anopheline vectors [36, 37]. The application of a mathematical model showed that, given these characteristics, it is improbable that the malaria cycle observed in this region could be maintained by human-mosquito-human interactions alone. FECA participated in study design, data collection in the field, data analysis, and wrote the first draft of the manuscript. RSM participated in study design, performed the laboratory tests, organized the data, and contributed to the organization of the manuscript. CCJ participated in the study conception, study design, data collection in the field, data analysis, and helped to prepare the first draft of the manuscript. LNF performed the laboratory tests, organized the data, and contributed to the organization of the manuscript. JCB participated in the data collection in the field, sample storage, organization of the data, and improvement of the manuscript. BF participated in the sample processing and storage, as well as in data analysis. HRR helped to coordinate the field logistics, participated in the organization of the data, and helped in the improvement of the manuscript. AMRCD helped with data analysis and coordination of the team in São Paulo. ARMS performed the calculations of the mathematical model and helped to write the manuscript. ABM coordinated the teamwork, participated in study design, contributed to data analysis, and articulated the final version of the manuscript. All authors read and approved the final manuscript. The authors would like to thank the population of the rural area where the study was carried out, who understood the objectives of the study and whose extensive participation helped to ensure its success. We are also grateful to the Espírito Santo State Department of Health (SESA) for supplying logistical support and material. The datasets used and/or analysed during the current study are available from the corresponding author upon reasonable request. As the data were collected primarily for this project, no consent from third parties was necessary. Signed consent forms were obtained from all the individuals included in the study.
The study was approved by the Committee for Ethics in Research at the Center for Health Sciences, Federal University of Espírito Santo, under ref. no. 079/09. Before the first collection, the participants signed a voluntary informed-consent form. This project was financed by the State of Espírito Santo Research Foundation (FAPES) (Grant No. 45617600/2009) and the State of São Paulo Research Foundation (FAPESP) (Grant No. 10/50707-5).
Additional file 1. Additional material.
Graduate Programme in Infectious Diseases, Federal University of Espírito Santo, Vitória, Brazil
Protozoology Laboratory, Institute of Tropical Medicine, University of São Paulo, São Paulo, Brazil
Entomology and Malacology Unit, Espírito Santo State Department of Health (SESA), Vitória, Brazil
Superintendency for the Control of Endemic Diseases (SUCEN), São Paulo State Department of Health, São Paulo, Brazil
Faculty of Public Health, University of São Paulo, São Paulo, Brazil
Jamrozik E, de la Fuente-Núñez V, Reis A, Ringwald P, Selgelid MJ. Ethical aspects of malaria control and research. Malar J. 2015;14:518.
Soko W, Chimbari MJ, Mukaratirwa S. Insecticide resistance in malaria-transmitting mosquitoes in Zimbabwe: a review. Infect Dis Poverty. 2015;4:46.
Hemingway J, Ranson H, Magill A, Kolaczinski J, Fornadel C, Gimnig J, et al. Averting a malaria disaster: will insecticide resistance derail malaria control? Lancet. 2016;387:1785–8.
Newby G, Bennett A, Larson E, Cotter C, Shretta R, Phillips AA, et al. The path to eradication: a progress report on the malaria-eliminating countries. Lancet. 2016;387:1775–84.
Keeling PJ, Rayner JC. The origins of malaria: there are more things in heaven and earth. Parasitology. 2015;142(Suppl 1):S16–25.
Sutherland CJ, Tanomsing N, Nolder D, Oguike M, Jennison C, Pukrittayakamee S, et al. Two nonrecombining sympatric forms of the human malaria parasite Plasmodium ovale occur globally. J Infect Dis. 2010;201:1544–50.
Fuehrer HP, Noedl H. Recent advances in detection of Plasmodium ovale: implications of separation into the two species Plasmodium ovale wallikeri and Plasmodium ovale curtisi. J Clin Microbiol. 2014;52:387–91.
Sutherland CJ. Persistent parasitism: the adaptive biology of malariae and ovale malaria. Trends Parasitol. 2016;32:808–19.
Cox-Singh J, Davis TM, Lee KS, Shamsul SS, Matusop A, Ratnam S, et al. Plasmodium knowlesi malaria in humans is widely distributed and potentially life threatening. Clin Infect Dis. 2008;46:165–71.
Cox-Singh J, Singh B. Knowlesi malaria: newly emergent and of public health importance? Trends Parasitol. 2008;24:406–10.
Brock PM, Fornace KM, Parmiter M, Cox J, Drakeley CJ, Ferguson HM, et al. Plasmodium knowlesi transmission: integrating quantitative approaches from epidemiology and ecology to understand malaria as a zoonosis. Parasitology. 2016;143:389–400.
Alves FP, Durlacher RR, Menezes MJ, Krieger H, Silva LH, Camargo EP. High prevalence of asymptomatic Plasmodium vivax and Plasmodium falciparum infections in native Amazonian populations. Am J Trop Med Hyg. 2002;66:641–8.
Branch O, Casapia WM, Gamboa DV, Hernandez JN, Alava FF, Roncal N, et al. Clustered local transmission and asymptomatic Plasmodium falciparum and Plasmodium vivax malaria infections in a recently emerged, hypoendemic Peruvian Amazon community. Malar J. 2005;4:27.
Coura JR, Suárez-Mutis M, Ladeia-Andrade S. A new challenge for malaria control in Brazil: asymptomatic Plasmodium infection—a review. Mem Inst Oswaldo Cruz. 2006;101:229–37.
Curado I, Dos Santos Malafronte R, de Castro Duarte AM, Kirchgatter K, Branquinho MS, et al. Malaria epidemiology in low-endemicity areas of the Atlantic Forest in the Vale do Ribeira, São Paulo, Brazil. Acta Trop. 2006;100:54–62.
Cerutti C Jr, Boulos M, Coutinho AF, Hatab MCLD, Falqueto A, Rezende HR, et al. Epidemiologic aspects of the malaria transmission cycle in an area of very low incidence in Brazil. Malar J. 2007;6:33.
Cucunubá ZM, Guerra AP, Rahirant SJ, Rivera JA, Cortés LJ, Nicholls RS. Asymptomatic Plasmodium spp. infection in Tierralta, Colombia. Mem Inst Oswaldo Cruz. 2008;103:668–73.
Harris I, Sharrock WW, Bain LM, Gray KA, Bobogare A, Boaz L, et al. A large proportion of asymptomatic Plasmodium infections with low and sub-microscopic parasite densities in the low transmission setting of Temotu Province, Solomon Islands: challenges for malaria diagnostics in an elimination setting. Malar J. 2010;9:254.
Oliveira-Ferreira J, Lacerda MV, Brasil P, Ladislau JL, Tauil PL, Daniel-Ribeiro CT. Malaria in Brazil: an overview. Malar J. 2010;9:115.
Laishram DD, Sutton PL, Nanda N, Sharma VL, Sobti RC, Carlton JM, et al. The complexities of malaria disease manifestations with a focus on asymptomatic malaria. Malar J. 2012;11:29.
Lindblade KA, Steinhardt L, Samuels A, Kachur SP, Slutsker L. The silent threat: asymptomatic parasitemia and malaria transmission. Expert Rev Anti Infect Ther. 2013;11:623–39.
Starzengruber P, Fuehrer HP, Ley B, Thriemer K, Swoboda P, Habler VE, et al. High prevalence of asymptomatic malaria in south-eastern Bangladesh. Malar J. 2014;13:16.
Lin JT, Saunders DL, Meshnick SR. The role of submicroscopic parasitemia in malaria transmission: what is the evidence? Trends Parasitol. 2014;30:183–90.
Maselli LM, Levy D, Laporta GZ, Monteiro AM, Fukuya LA, Ferreira-da-Cruz MF, et al. Detection of Plasmodium falciparum and Plasmodium vivax subclinical infection in non-endemic region: implications for blood transfusion and malaria epidemiology. Malar J. 2014;13:224.
Stresman GH, Stevenson JC, Ngwu N, Marube E, Owaga C, Drakeley C, et al. High levels of asymptomatic and subpatent Plasmodium falciparum parasite carriage at health facilities in an area of heterogeneous malaria transmission intensity in the Kenyan highlands. Am J Trop Med Hyg. 2014;91:1101–8.
Baum E, Sattabongkot J, Sirichaisinthop J, Kiattibutr K, Davies DH, Jain A, et al. Submicroscopic and asymptomatic Plasmodium falciparum and Plasmodium vivax infections are common in western Thailand—molecular and serological evidence. Malar J. 2015;14:95.
Waltmann A, Darcy AW, Harris I, Koepfli C, Lodo J, Vahi V, et al. High rates of asymptomatic, sub-microscopic Plasmodium vivax infection and disappearing Plasmodium falciparum malaria in an area of low transmission in Solomon Islands. PLoS Negl Trop Dis. 2015;9:e0003758.
Elbadry MA, Al-Khedery B, Tagliamonte MS, Yowell CA, Raccurt CP, Existe A, et al. High prevalence of asymptomatic malaria infections: a cross-sectional study in rural areas in six departments in Haiti. Malar J. 2015;14:510.
Baum E, Sattabongkot J, Sirichaisinthop J, Kiattibutr K, Jain A, Taghavian O, et al. Common asymptomatic and submicroscopic malaria infections in Western Thailand revealed in longitudinal molecular and serological studies: a challenge to malaria elimination. Malar J. 2016;15:333.
Galatas B, Bassat Q, Mayor A. Malaria parasites in the asymptomatic: looking for the hay in the haystack. Trends Parasitol. 2016;32:296–308.
Griffing SM, Tauil PL, Udhayakumar V, Silva-Flannery L. A historical perspective on malaria control in Brazil. Mem Inst Oswaldo Cruz. 2015;110:701–18.
Pinotti M. The biological basis for the campaign against the malaria vectors of Brazil. Trans R Soc Trop Med Hyg. 1951;44:663–82.
Deane LM, Ferreira Neto JA, Lima MM. The vertical dispersion of Anopheles (Kerteszia) cruzi in a forest in southern Brazil suggests that human cases of malaria of simian origin might be expected. Mem Inst Oswaldo Cruz. 1984;79:461–3.
Deane LM. Simian malaria in Brazil. Mem Inst Oswaldo Cruz. 1992;87(Suppl 3):1–20.
Rezende HR, Soares RM, Cerutti C Jr, Alves IC, Natal D, Urbinatti PR, et al. Entomological characterization and natural infection of anophelines in an area of the Atlantic Forest with autochthonous malaria cases in mountainous region of Espírito Santo State, Brazil. Neotrop Entomol. 2009;38:272–80.
Rezende HR, Falqueto A, Urbinatti PR, De Menezes RM, Natal D, Cerutti C Jr. Comparative study of distribution of anopheline vectors (Diptera: Culicidae) in areas with and without malaria transmission in the highlands of an extra-Amazonian region in Brazil. J Med Entomol. 2013;50:598–602.
Goldman IF, Qari SH, Millet PG, Collins WE, Lal AA. Circumsporozoite protein gene of Plasmodium simium, a Plasmodium vivax-like monkey malaria parasite. Mol Biochem Parasit. 1993;57:177–80.
Escalante AA, Freeland DE, Collins WE, Lal AA. The evolution of primate malaria parasites based on the gene encoding cytochrome b from the linear mitochondrial genome. Proc Natl Acad Sci USA. 1998;95:8124–9.
Leclerc MC, Durand P, Gauthier C, Parot S, Billotte N, Menegon M, et al. Meager genetic variability of the human malaria agent Plasmodium vivax. Proc Natl Acad Sci USA. 2004;101:14455–60.
Lim CS, Tazi L, Ayala FJ. Plasmodium vivax: recent world expansion and genetic identity to Plasmodium simium. Proc Natl Acad Sci USA. 2005;102:15523–8.
Tazi L, Ayala FJ. Unresolved direction of host transfer of Plasmodium vivax v. P. simium and P. malariae v. P. brasilianum. Infect Genet Evol. 2011;11:209–21.
Cochrane AH, Nardin EH, de Arruda M, Maracic M, Clavijo P, Collins WE, et al. Widespread reactivity of human sera with a variant repeat of the circumsporozoite protein of Plasmodium vivax. Am J Trop Med Hyg. 1990;43:446–51.
Fandeur T, Volney B, Peneau C, de Thoisy B. Monkeys of the rainforest in French Guiana are natural reservoirs for P. brasilianum/P. malariae malaria. Parasitology. 2000;120(Pt 1):11–21.
Brasil P, Zalis MG, Pina-Costa A, Siqueira AM, Bianco C Jr, Silva S, et al. Plasmodium simium causing human malaria: a zoonosis with outbreak potential in the Rio de Janeiro Brazilian Atlantic forest. Lancet Glob Health. 2017;5:1038–46.
Buery JC, Rodrigues PT, Natal L, Salla LC, Loss AC, Vicente CR, et al. Mitochondrial genome of Plasmodium vivax/simium detected in an endemic region for malaria in the Atlantic Forest of Espírito Santo state, Brazil: do mosquitoes, simians and humans harbour the same parasite? Malar J. 2017;16:437.
Erkenswick GA, Watsa M, Pacheco MA, Escalante AA, Parker PG. Chronic Plasmodium brasilianum infections in wild Peruvian tamarins. PLoS ONE. 2017;12(9):e0184504.
Ministério da Saúde. DATASUS MALÁRIA—Casos confirmados Notificados no Sistema de Informação de Agravos de Notificação—Espírito Santo. http://tabnet.datasus.gov.br/cgi/tabcgi.exe?sinannet/cnv/malaes.def. Accessed 16 Feb 2017.
World Health Organization. Basic malaria microscopy. Geneva: World Health Organization; 1991.
Win TT, Lin K, Mizuno S, Zhou M, Liu Q, Ferreira MU, et al. Wide distribution of Plasmodium ovale in Myanmar. Trop Med Int Health. 2002;7:231–9.
Anderson RM, May RM. Infectious diseases of humans: dynamics and control. Oxford: Oxford University Press; 1991. p. 757.
IBGE. Censo Demográfico 2010: sinopse. http://www.cidades.ibge.gov.br/xtras/temas.php?lang=&codmun=320460&idtema=1&search=espirito-santo|santa-teresa|censo-demografico-2010:-sinopse. Accessed 16 Feb 2017.
Zippin C. An evaluation of the removal method of estimating animal populations. Biometrics. 1956;12:163–89.
Dalla Bona AC, Navarro-Silva MA. Physiological age and longevity of Anopheles (Kerteszia) cruzii Dyar & Knab (Diptera: Culicidae) in the Atlantic Forest of Southern Brazil. Neotrop Entomol. 2010;39:282–8.
Chitnis N, Hyman JM, Cushing JM. Determining important parameters in the spread of malaria through the sensitivity analysis of a mathematical model. Bull Math Biol. 2008;70:1272–96.
Nedelman J. Inoculation and recovery rates in the malaria model of Dietz, Molineaux, and Thomas. Math Biosci. 1984;69:209–33.
Santos RLC. Medida da capacidade vetorial de Anopheles albitarsis e de Anopheles (Kerteszia) no Vale do Ribeira, São Paulo. São Paulo: Universidade de São Paulo; 2001. p. 81.
Laporta GZ, Lopez de Prado P, Kraenkel RA, Coutinho RM, Sallum MA. Biodiversity can help prevent malaria outbreaks in tropical forests. PLoS Negl Trop Dis. 2013;7:e2139.
Wanderley DMV, Silva RA, Andrade JCR. Aspectos epidemiológicos da malária no Estado de São Paulo, Brasil, 1983 a 1992. Rev Saude Publica. 1994;28:192–7.
Bértoli M, Moitinho MLR. Malária no Estado do Paraná, Brasil. Rev Soc Bras Med Trop. 2001;34:43–7.
Chaves KM, Zumpano JF, Resende MC, Pimenta FG Jr, Rocha MOC. Malaria in the State of Minas Gerais, Brazil, 1980–1992 (in Portuguese). Cad Saúde Públ. 1995;11:621–3.
Marques GRAM, Condino MLF, Serpa LLN, Cursino TVM. Aspectos epidemiológicos de malária autóctone na mata atlântica, litoral norte, Estado de São Paulo, 1985–2006. Rev Soc Bras Med Trop. 2008;41:386–9.
Couto RD, Latorre MRDO, Di Santi SM, Natal D. Malária autóctone notificada no Estado de São Paulo: aspectos clínicos e epidemiológicos de 1980 a 2007. Rev Soc Bras Med Trop. 2010;43:52–8.
de Pina-Costa A, Brasil P, Di Santi SM, de Araujo MP, Suárez-Mutis MC, Santelli AC, et al. Malaria in Brazil: what happens outside the Amazonian endemic region. Mem Inst Oswaldo Cruz. 2014;109:618–33.
Miguel RB, Peiter PC, Albuquerque HD, Coura JR, Moza PG, Costa AD, et al. Malaria in the state of Rio de Janeiro, Brazil, an Atlantic Forest area: an assessment using the health surveillance service. Mem Inst Oswaldo Cruz. 2014;109:634–40.
Collins WE, Jeffery GM. Plasmodium malariae: parasite and disease. Clin Microbiol Rev. 2007;20:579–92.
Alves FP, Gil LH, Marrelli MT, Ribolla PE, Camargo EP, Da Silva LH. Asymptomatic carriers of Plasmodium spp. as infection source for malaria vector mosquitoes in the Brazilian Amazon. J Med Entomol. 2005;42:777–9.
Duarte AM, Porto MA, Curado I, Malafronte RS, Hoffmann EH, de Oliveira SG, et al. Widespread occurrence of antibodies against circumsporozoite protein and against blood forms of Plasmodium vivax, P. falciparum and P. malariae in Brazilian wild monkeys. J Med Primatol. 2006;35:87–96.
Kirchgatter K, Tubaki RM, Malafronte Rdos S, Alves IC, Lima GF, Guimarães Lde O, et al. Anopheles (Kerteszia) cruzii (Diptera: Culicidae) in peridomiciliary area during asymptomatic malaria transmission in the Atlantic Forest: molecular identification of blood-meal sources indicates humans as primary intermediate hosts. Rev Inst Med Trop Sao Paulo. 2014;56:403–9.
Lalremruata A, Magris M, Vivas-Martínez S, Koehler M, Esen M, Kempaiah P, et al. Natural infection of Plasmodium brasilianum in humans: man and monkey share quartan malaria parasites in the Venezuelan Amazon. EBioMedicine. 2015;2:1186–92.
Li J, Collins WE, Wirtz RA, Rathore D, Lal A, McCutchan TF. Geographic subdivision of the range of the malaria parasite Plasmodium vivax. Emerg Infect Dis. 2001;7:35–42.
Myjak P, Nahorski W, Pieniazek NJ, Pietkiewicz H. Usefulness of PCR for diagnosis of imported malaria in Poland. Eur J Clin Microbiol Infect Dis. 2002;21:215–8.
Research note
Prevalence and patterns of skin toning practices among female students in Ghana: a cross-sectional university-based survey
Williams Agyemang-Duah1, Charlotte Monica Mensah2, Reindolf Anokye3, Esi Dadzie2, Akwasi Adjei Gyimah3, Francis Arthur-Holmes4, Prince Peprah5, Frimpong Yawson2 and Esther Afriyie Baah2
Received: 1 December 2018
The use of skin toning products has a deep historical background in low- and middle-income countries. Yet there is no empirical evidence on the prevalence and patterns of skin toning practices among university students in Ghana. This study sought to examine the prevalence, patterns and socio-demographic factors associated with skin toning practices among female university students in Ghana, using a sample of 389 undergraduate female students. 40.9% of respondents had practised skin toning within the last 12 months. Also, 51.3% used skin toning products such as creams (38.9%) and soap or gel (35.5%) to treat a skin disorder. Respondents aged 21 years were more likely to use skin toning products (AOR = 0.400, CI 0.121–1.320), as were those who had dark skin (AOR = 3.287, CI 1.503–7.187), those who attended public school (AOR = 1.9, CI 1.1–3.56) and those who attended a girls' school (AOR = 10.764, CI 4.2–27.3). Furthermore, those in level 400 (AOR = 49.327, CI 8.48–286.9) and those receiving more than 500 cedis (AOR = 2.118, CI 0.419–10.703) were also more likely to use skin toning products. Policy interventions that seek to reduce skin toning practices among university students should consider micro and broader socio-demographic factors.
Skin toning practices
Skin toning practice appears to have become a norm among people of various backgrounds, ages and genders [1–3]. Seeking a lighter skin tone has always attracted attention in Western societies, where fair or light skin colour has been a symbol of beauty, purity, sweetness, sex appeal and prominence, as well as of superiority and higher social ranking [4]. In Europe, White women have used bleaching creams to maintain radiant skin devoid of hyperpigmentation resulting from exposure to heat [1] or the often-dreaded process of maturation [2]. Alghamdi [5] reported that the degree of skin toning practice has increased in Saudi Arabia, with an estimated 38.9% reporting actively bleaching their skin [5]. Skin toning practices were reported among women in the Philippines [6], and in East Asia skin toning practices have been reported among 30% of Chinese, 20% of Taiwanese, 18% of Japanese and 8% of Koreans [7]. In Africa, the World Health Organization claims that Nigeria has the highest percentage of women using skin toning products, with a reported 77% of women engaging in the practice [8]. A cross-sectional study in Togo reported that 58.9% of women used skin toning cosmetic products and that 30.9% used products containing mercury. Moreover, it has been reported that 25% of women in Bamako, Mali and 52% to 67% in Dakar, Senegal use skin toning products [9–12]. Skin toning practices have been reported among young women in Cameroon [12], and among 30% of women in Ghana [13]. Although the practice is global, African women are some of the biggest consumers of skin bleaching products, which include potentially harmful local concoctions made from household chemicals (e.g. automotive battery acid, bleach, laundry detergent, toothpaste) and over-the-counter creams, putting them at higher risk for a variety of adverse health outcomes [10].
In Ghana, data on skin toning practices among students remain largely unavailable. The study therefore assesses the prevalence and patterns of skin toning practices and further examines socio-demographic factors associated with the practices among university students. A cross-sectional university-based survey was conducted at the Kwame Nkrumah University of Science and Technology (KNUST) to examine patterns and prevalence of skin toning practices among female university students in Ghana. Being the second largest university in Ghana, KNUST is located in Kumasi and provides educational services for people from Ghana and other neighbouring countries. This study recruited female undergraduate students from levels 100 to 400. Female students from the various colleges of the university, namely Humanities and Social Sciences, Arts and Built Environment, Science, Health Science, and Agriculture and Natural Resources, were selected using two-stage cluster and random sampling techniques. Out of the 13,738 female students at KNUST, a formula by Miller and Brewer [14] was used to select 389 respondents as a representative sample size for the study.
$$n = \frac{N}{1 + N x^{2}}$$
where n = sample size, N = total number of female undergraduate students at KNUST and x = margin of error.
$$n = \frac{13{,}738}{1 + 13{,}738 \left(0.05^{2}\right)}$$
n = 388.682, or approximately 389 respondents. In each college, the number of respondents was calculated proportionately using the population of undergraduate females in the various colleges. The respondents were asked to pick folded pieces of paper marked 'True' or 'False'. Those who chose 'True' were selected until the sample size earmarked for each college was reached. The recruitment of the respondents for the study is shown in Fig. 1 (flow diagram of recruitment of respondents). A closed-ended questionnaire (Additional file 1: Questionnaire) was given to the students during their regular lecture periods. The closed-ended questionnaire was made up of two sections and was written in English. The first section comprised background characteristics of the respondents such as age, religion, ethnicity and income. The second part consisted of information on patterns and prevalence of skin toning practices among the respondents. The questionnaire included items such as whether or not a respondent had used skin toning products in the 12 months preceding the survey, the number of times they had used them, the frequency of usage, the factors that motivated them to use them, the kind of skin toning products they preferred, and the one they used most. The questionnaire was explained to the respondents by three trained research assistants recruited from the Department of Geography and Rural Development, KNUST. The data collection process was monitored by the fourth author, who has a background in medical geography as well as health and development. To avoid call-back problems, the distribution and collection of the questionnaires were done by hand and on the same day. This helped to ensure a 100% response rate in the study. The completion of each questionnaire lasted 40 min on average. Written informed consent was obtained from respondents before they were recruited for the study, and they were assured that the information they provided would be treated with absolute confidentiality.
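As a quick check of the arithmetic above, a minimal sketch of the same formula; the function name and the choice to round up to the next whole respondent are ours, not Miller and Brewer's:

```python
import math

def sample_size(population: int, margin: float) -> int:
    """n = N / (1 + N * x^2), rounded up to a whole respondent."""
    return math.ceil(population / (1 + population * margin**2))

print(sample_size(13738, 0.05))  # -> 389, matching the study's sample
```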
Inferential analytical tools embedded in the Statistical Package for the Social Sciences software, version 16 (SPSS) were employed to establish associations between the socio-demographic characteristics of the respondents and the use of skin toning products, with a significance level of 0.05 or less.
Socio-demographic characteristics of respondents
Data gathered on the demographic characteristics of respondents are presented in Table 1. As observed in Table 1, the mean age was 22 ± 1.5 years, and the majority (91%) were single. A little over half (59.1%) were categorized as dark-skinned, while the majority (86.6%) grew up in an urban setting. Respondents were selected from level 100 (29.6%), level 200 (34.7%), level 300 (11.6%) and level 400 (24.2%), residing on campus (50.6%) and off campus (49.4%). The majority of the respondents were Akans (77.9%) and were pursuing health-related programmes (76.3%).
[Table 1, socio-demographic characteristics of respondents: only the row labels survive extraction — age (mean ± SD 22 ± 1.5 years), area of upbringing, residence during vacation, nature of senior high school attended, category of senior high school attended (mixed school/girls' school), level of study, campus residence and classification of accommodation (e.g. homestel), monthly income (Ghs, mean ± SD 250.5 ± 1.74), ethnicity (e.g. Mole-Dagbani, other) and programme of study (health-related/non-health-related).]
Prevalence and patterns of skin toning practices
Regarding the prevalence and patterns of skin toning practices among female university students (Additional file 2: Table S1), it was revealed that less than half of the study population (40.9%) had practised skin toning within the last 12 months. The highest proportion of the respondents (51.3%) used skin toning products to treat a skin disorder. Moreover, creams (38.9%) and soap or gel (35.5%) were the skin toning products most used by the respondents.
Socio-demographic factors associated with skin toning practices
In the multivariate analysis, the results show that respondents aged 21 years were 0.4 times more likely to use skin toning products (AOR = 0.400, CI 0.121–1.320). Respondents who had dark skin were 3.3 times more likely to use skin toning products (AOR = 3.287, CI 1.503–7.187). Those who attended public school were 1.9 times more likely to use skin toning products (AOR = 1.9, CI 1.1–3.56), and those who attended a girls' school were 10.7 times more likely to use skin toning products (AOR = 10.764, CI 4.2–27.3). Furthermore, those who were in level 400 were 49 times more likely to use skin toning products (AOR = 49.327, CI 8.48–286.9), and those who received more than 500 cedis were 2 times more likely to use skin toning products (AOR = 2.118, CI 0.419–10.703), as shown in Table 2.
[Table 2, socio-demographic factors predicting the practice of skin toning: columns give COR (95% CI) from the bivariate analysis and AOR (95% CI) from the multivariate analysis; the row labels that survive extraction include family physician, where respondents grew up, nature of school attended, category of school attended, level of student and income. CI confidence interval, COR crude odds ratio, AOR adjusted odds ratio, 1.00 reference, * p < 0.05.]
Socio-demographic characteristics and the practice of skin toning can be found in Additional file 3: Table S2.
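For readers unfamiliar with how such crude and adjusted odds ratios are produced, the sketch below fits a multivariate logistic regression and exponentiates the coefficients. The data are synthetic, the covariate names merely echo the questionnaire items, and the library (Python's statsmodels, rather than the SPSS routines actually used in the study) is an assumption for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 389  # the study's sample size

# Hypothetical binary covariates echoing the questionnaire items.
df = pd.DataFrame({
    "dark_skin": rng.integers(0, 2, n),
    "girls_school": rng.integers(0, 2, n),
    "level_400": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to two of the covariates.
p = 1 / (1 + np.exp(-(-1.0 + 1.2 * df["dark_skin"] + 2.0 * df["girls_school"])))
df["toning"] = (rng.random(n) < p).astype(int)

fit = smf.logit("toning ~ dark_skin + girls_school + level_400", data=df).fit(disp=0)
table = pd.concat([np.exp(fit.params).rename("AOR"),
                   np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
                  axis=1)
print(table)  # adjusted odds ratios with 95% confidence intervals
```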
This study examined the prevalence, patterns and factors associated with skin toning practices among female university students in Ghana. To the best of our knowledge, this is one of the first studies in Ghana to provide a detailed understanding of skin toning practices among undergraduate female university students. Fokuo [15] was of the view that Ghanaian society values good skin colour and that it serves as a form of social capital, especially for women. In this way, one's self-worth, esteem and standing increase when one has light skin, thereby making light-skinned women the preferred choice in terms of marriage. Because marriage is regarded as the ultimate accomplishment within the Ghanaian community, women are therefore compelled to improve their skin tone to attract men at all costs. It was therefore expected that the majority of the respondents of this study would have practised skin toning within the last 12 months. However, less than half of the respondents (40.9%) had practised skin toning within the last 12 months, and the highest proportion had done it once (40.9%). Also, about a third of the respondents (34.6%) used skin toning products once in a while, and the highest proportion of the respondents (51.3%) used skin toning products to treat a skin disorder. This suggests that obtaining a smooth and perfect complexion is paramount among women. Similarly, Ajose [16] as well as Blay [13] reported that people were motivated to tone their skin to improve its appearance. Mpengesi and Nzuza [3] reported that skin toning is seen as a practice to beautify the skin by people determined to improve their appearance, and that about 63.3% of people usually tone when they want to eliminate rashes so that they will look beautiful. Also, Ajose [16] reported that people tone when they want a smooth complexion or want to clear their skin of any skin disorder. Because of this, de Souza [17] indicated that smooth skin is one of the benefits of toning, because everyone admires an even-toned skin without any blemish. Hunter [18] reported that light-skinned African Americans and Mexican Americans, as opposed to dark-skinned ones, had more advantages when it comes to educational opportunities and receiving higher income. Hence, being light-skinned is seen as the ultimate goal [19] due to its numerous benefits. The value placed on lightness is entrenched in the social structures of families and societies at large, thus perpetuating colour hierarchies. This study and the existing literature exhibit the value attached to having a radiant, evenly toned and faultless skin complexion, which is seen as attractive and therefore commendable. This could stimulate others to use all conceivable avenues to attain such a revered attribute. The study found that respondents who had dark skin, attended public school, went to a girls' school, were in level 400 and received more than GH 500 cedis were significantly more likely to practise skin toning. The findings related to attending public and girls' schools are relatively new in the existing literature. Our findings contradict the observation of Hamed et al. [20] that people with coloured skin have an increased prevalence of using skin toning products. The difference in the findings may be attributed to the setting and to methodological differences. Further, we found that the use of skin toning products increases as the level of education of an individual increases, similar to what has been reported previously [20].
This study examined the prevalence and patterns of skin toning practices among undergraduate female university students at KNUST in Ghana. Less than half of the respondents (40.9%) had practised skin toning within the 12 months prior to the survey. Age, skin colour, nature of school attended, type of school attended, level of student and monthly income significantly influence the use of skin toning products among university students in Ghana. We therefore argue that policy interventions that seek to reduce skin toning practices among university students should consider micro and broader socio-demographic factors. The study was limited to the views of female university students; the inclusion of male university students could have paved the way for new findings. Further, the use of a single institution and the period within which data were collected limit the extent to which the findings can be generalized. It is therefore recommended that future research be extended to students in other universities and also consider the views of male students on the use of skin toning products. Our gratitude goes out to the management and staff of KNUST, as well as the students who participated. Further thanks to all whose works on skin toning practices helped in putting this work together. No external funding was received for this study; the researchers themselves covered all costs related to this research. The secondary data compilation, data analysis and collection, and interpretation were done by the first author (WA-D). The second and third authors (CMM and RA) drafted and revised the manuscript thoroughly with their expertise. In the analysis of data, all authors played a significant part, as well as in designing and preparing the manuscript. Proofreading and the final approval process were also shared accordingly among all authors, and all authors have agreed to its submission for publication. All authors read and approved the final manuscript. The study was approved by the Institutional Review Board, known as the Committee on Human Research, Publication and Ethics, at Kwame Nkrumah University of Science and Technology, Kumasi, Ghana (Ref: CHRPE/AP/075/18). The respondents were told the objectives of the study and the possible implications and effects of the research. Written consent was obtained from all respondents before they participated in the study. Identifiers such as names and contact details were removed so that the data collected contained no information that could be directly traced to or associated with any individual respondent. Participation was purely voluntary, and any participant who wanted to withdraw was allowed to do so. Confidentiality was guaranteed before the questionnaires were administered.
Consent to publish
Additional file 1. Questionnaire.
Additional file 2: Table S1. Prevalence and patterns of skin toning practices among female university students.
Additional file 3: Table S2. Socio-demographic characteristics influencing the practice of skin toning.
Department of Planning, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
Department of Geography and Rural Development, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
Centre for Disability and Rehabilitation Studies, Department of Health Promotion, Education and Disability, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
Oxford Department of International Development, University of Oxford, Oxford, UK
Sheffield Hallam University, Sheffield, UK
Durosaro AI, Ajiboye SK, Oniye AO. Perception of skin bleaching among female secondary school students in Ibadan metropolis, Nigeria. J Educ Pract. 2012;3(7):40–6.
Glenn EN. Yearning for lightness: transnational circuits in the marketing and consumption of skin lighteners. Gend Soc. 2008;22(3):281–302.
Mpengesi A, Nzuza N. Perceptions of skin bleaching in South Africa: a study of University of KwaZulu-Natal students (unpublished honours project). Durban: University of KwaZulu-Natal; 2014.
Emeriau C. Changer de peau? Les appropriations chromatiques en Asie et en Afrique. http://www.observatoirenivea.com/Admin/AllMedias/CahiersPDF/CAHIEROBSERVATOIRE4.pdf. Accessed 22 Oct 2017.
Alghamdi KM. The use of topical bleaching agents among women: a cross-sectional study of knowledge, attitude, and practices. J Eur Acad Dermatol Venereol. 2010;24(10):1214–9.
Mendoza RL. The skin whitening industry in the Philippines. J Public Health Policy. 2014;35(2):219–38.
Nielsen Company. Health, beauty & personal grooming: a global Nielsen consumer report. 2007. http://www2.acnielsen.com/reports/index_consumer.shtml. Accessed 10 Aug 2007.
Rambaran A. What factors are important in the attitude and consumption concerning skin whitening products that enhance physical appearance of women of Indian and Chinese origin in the Netherlands. Master thesis, Erasmus University, Rotterdam, Holland. 2013.
Diongue M, Ndiaye P, Douzima PM, Seck M, Seck I, Faye A, Diagne MC, Leye MM, Niang K, Tal AD. Economic weight of artificial depigmentation on household income in sub-Saharan Africa: the case of Senegal. Med Trop Health. 2013;23(3):308–12.
Del Giudice P, Yves P. The widespread use of skin lightening creams in Senegal: a persistent public health problem in West Africa. Int J Dermatol. 2002;41(2):69–72.
Mahe A, Ly F, Aymard G, Dangou JM. Skin diseases associated with the cosmetic use of bleaching products in women from Dakar, Senegal. Br J Dermatol. 2003;148(3):493–500.
Kouotou EA, Bissek AC, Nouind CF, Defo D, Sieleunou I, Ndam EN. Dépigmentation volontaire: pratiques et dermatoses associées chez les commerçantes de Yaoundé (Cameroun). In: Annales de Dermatologie et de Vénéréologie, vol. 142, no. 6–7. Elsevier Masson. 2015. p. 443–5.
Blay YA. Ahoofe kasa!: skin bleaching and the function of beauty among Ghanaian women. Jenda J Cult Afr Women Stud. 2010;14. http://www.africaknowledgeproject.org/index.php/jenda/article/viewArticle/528. Accessed 1 July 2013.
Miller RL, Brewer JD, editors. The A-Z of social research: a dictionary of key social science research concepts. London: Sage; 2003.
Fokuo JK. The lighter side of marriage: skin bleaching in post-colonial Ghana. Inst Afr Stud Res Rev. 2009;25(1):47–66.
Ajose FO. Consequences of skin bleaching in Nigerian men and women. Int J Dermatol. 2005;44:41–3.
de Souza MM. The concept of skin bleaching in Africa and its devastating health implications. Clin Dermatol. 2008;26(1):27–9.
Hunter ML. Buying racial capital: skin-bleaching and cosmetic surgery in a globalized world. J Pan Afr Stud. 2011;4(4):142–64.
Gwaravanda ET. Shona proverbial implications on skin bleaching: some philosophical insights. J Pan Afr Stud. 2011;4(4):201.
Hamed SH, Tayyem R, Nimer N, AlKhatib HS. Skin lightening practice among women living in Jordan: prevalence, determinants, and user's awareness. Int J Dermatol. 2010;49(4):414–20.
Methodology article
Identification of differentially expressed genes by means of outlier detection
Itziar Irigoien1 and Concepción Arenas2
© The Author(s) 2018
Received: 5 June 2018
An important issue in microarray data is to select, from thousands of genes, a small number of informative differentially expressed (DE) genes which may be key elements for a disease. If each gene is analyzed individually, there is a large number of hypotheses to test and a multiple comparison correction method must be used. Consequently, the resulting cut-off value may be too small. Moreover, an important issue is the replicability of the DE gene selection. We present a new method, called ORdensity, to obtain a reproducible selection of DE genes. It takes into account the relation between all genes and is not a gene-by-gene approach, unlike the usually applied techniques for DE gene selection. The proposed method returns three measures, related to the concepts of outlier and density of false positives in a neighbourhood, which allow us to identify the DE genes with high classification accuracy. To assess the performance of ORdensity, we used simulated microarray data and four real microarray cancer data sets. The results indicated that the method correctly detects the DE genes; it is competitive with other well accepted methods; the list of DE genes that it obtains is useful for the correct classification or diagnosis of new future samples; and, in general, it is more stable than other procedures. ORdensity is a new method for identifying DE genes that avoids some of the shortcomings of individual gene identification, and it is stable when the original sample is replaced by subsamples.
Differentially expressed gene
Multivariate statistics
Quantile
Analysis of gene expression data arising from microarray or RNA-Seq technologies is a very important task and a major advance in biomedical research. In this kind of experiment, the main goal is to identify a small number of informative genes whose patterns of expression differ according to the experimental conditions. These genes, selected from thousands, are differentially expressed between two possible conditions, such as control and treatment groups or two groups of patients. This gene discovery is challenging because there is a large number of genes and a relatively small number of samples, and it is important to identify which genes, independently of the sample studied for the same disease, are selected as differentially expressed. The selection of relevant genes to differentiate these two conditions has two main objectives for researchers. On the one hand, to select a small number of genes so that the information given by these genes is not redundant and the classification or diagnosis of new samples leads to a lower prediction error. On the other hand, a large number of selected genes are related to others that have the same function and that are highly correlated. It is clear that, in order to obtain a list of genes that allows a good diagnosis for future samples, a combination of these two objectives is desirable. It is also necessary to know the relation and function of the selected genes. As, in general, the expression levels of genes are dependent on each other because genes are involved in complex regulatory pathways and networks [1], it seems convenient to consider the joint distribution of genes.
However, methods to identify DE (differentially expressed) genes that are based on a gene-by-gene approach, ignoring the dependences between genes, are widely used. Perhaps the most popular is the t-test, but it has some restrictions [2, 3]. To solve the problem of unstable variances, the Significance Analysis of Microarrays (SAM) [2] was introduced. It works with a modified t-test, introducing a factor to minimize the effect by which small per-gene variances could make genes with small differences between the expression conditions statistically significant. An integrated solution for analyzing data from gene expression experiments is provided by limma [4, 5], an R package for Bioconductor [6]. limma fits a linear model for each gene and uses an empirical Bayes (eBayes) method for assessing differential expression. The empirical Bayes method (eBayes) [7] also uses moderated t-statistics, where instead of the global or single-gene estimated variances, a weighted average of the global and single-gene variances is used. In gene-by-gene approaches, as a large number of hypothesis tests are carried out, multiple testing procedures must be applied to assess the overall significance, controlling the family-wise error rate (FWER) or the false discovery rate (FDR). The FWER is a very stringent criterion which measures the probability of at least one false positive in the set of significant genes, and most investigators accept a FWER of 5% [8]. A more liberal criterion is the FDR [9], which is the expected proportion of false positives among the significant genes. However, all the p-value adjustment methods lose sensitivity, as they have a reduced chance of detecting true positives. As different statistical selection methods may capture different statistical aspects of expression changes, they may give different lists of selected genes. Nevertheless, these inconsistent gene lists can be rather functionally consistent [10–12]. However, it is clear that one desirable property for a proper statistical method is that it detects truly differentially expressed genes and has the ability to maintain a consistent list of differentially expressed genes within a single data set, that is, with samples based on subsets of the same data. In this direction, an empirical evaluation of consistency and accuracy for different methodologies was presented in [3], concluding that for smaller sample sizes moderated versions of the t-test can generally be recommended, while for large data sets the choice of method may involve a compromise between consistency and power. A different approach was introduced in [13, 14]. In these works, the authors presented a statistic, called OR, which is useful for identifying extreme observations or outliers in high-dimensional data sets. They presented the possibility of using the OR statistic as a tool to detect differentially expressed genes. The idea is that in expression studies there are a large number of genes, and very few are expected to be important for the disease development. Thus, the important genes (which are DE) should show a different behaviour from those that are non-important. For this reason, the important genes could be considered as outliers in a background population of non-important genes. A more fruitful endeavour than searching for lists of differentially expressed genes might be to search for the best way in which the two groups under study can be distinguished. Suppose that a new sample is considered, and we wish to decide which of the two groups it belongs to.
The best set of differentially expressed genes will be the one that leads to the smallest probability of misclassifying this new sample, that is, the set of differentially expressed genes that leads to the smallest error rate over all future allocations of new samples. Thus, it is very interesting to analyze whether a procedure obtains lists of differentially expressed genes useful for the classification of future samples. It is therefore important to recognize the two objectives: to maximize separation between the groups of available samples, and to minimize the misclassification rate over all possible future allocations. In this article, we present a new method which uses the OR statistic and two new measures which, together, are useful to obtain consistent lists of truly differentially expressed genes, and which allows the correct classification of future samples. This novel approach, called ORdensity, takes into account the relation between all genes and is not a gene-by-gene approach. In the "Methods" section, we detail the basic ideas, the new concepts, the description of the new method, a small example to aid the comprehension of the new methodology, as well as the simulation studies and gene expression data from four public cancer studies that were used to evaluate the behaviour of the procedure. In the "Results" and "Discussion" sections we show the usefulness of our approach. We close this paper with some conclusions. The new ORdensity procedure has two main steps: finding potential differentially expressed genes and identifying differentially expressed genes. Next, we detail these two steps of the method; Fig. 1 summarizes the general outline of the approach. General outline of the proposed ORdensity approach: in green, the first step of the method; in red, the second step of the method. Let E={e1,…,eG} be a set of expression level values for G genes such that each eg is a vector eg=(egX,egY)′ giving the expression of the g-th gene in two conditions X and Y (e.g., treatment/control or two patient groups). Then, \(\mathbf {e}_{gX}=\left (e_{gX_{1}}, \ldots, e_{gX_{n_{X}}}\right)^{\prime }\) and \(\mathbf {e}_{gY}=\left (e_{gY_{1}}, \ldots, e_{gY_{n_{Y}}}\right)^{\prime }\), with nX+nY=n, are vectors of values giving the expression of the g-th gene in the nX and nY samples under conditions X and Y, respectively. Each eg can then be considered a point in a continuous n-dimensional gene expression space S⊂Rn. Let Xg and Yg be the random variables representing the expression level of gene g in conditions X and Y, respectively (g=1,…,G). The proposed approach focuses on the differences of quantiles between samples: \(V_{gp} =F_{X_{g}}^{-1}(p)- F_{Y_{g}}^{-1}(p)\), p∈Cp, where Cp is a set of probabilities. For instance, Cp={0.25,0.5,0.75} is an adequate set for small sample sizes. A gene g whose expressions in conditions X and Y are not differentially expressed (see Fig. 2, left) would satisfy \(F_{X_{g}}^{-1}(p) = F_{Y_{g}}^{-1}(p)\), where F is the cumulative distribution function and p∈[0,1]. Otherwise, gene g is differentially expressed (DE), that is, it is important (see Fig. 2, right). Broadly speaking, the matrix V=(vgp) with vgp = \( \hat {F}_{X_{g}}^{-1}(p)- \hat {F}_{Y_{g}}^{-1}(p)\), for g=1,…,G and p∈Cp, must contain small values for the majority of genes, which are not DE.
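To fix ideas, here is a minimal numpy sketch of this first transformation; the probability set and the toy dimensions follow the small artificial example given later in the text, while the interpolation scheme used by np.quantile is an implementation detail of the sketch, not something prescribed by the authors.

```python
import numpy as np

def quantile_differences(expr_x, expr_y, probs=(0.25, 0.5, 0.75)):
    """Build the G x P matrix V with v_gp = F_X^{-1}(p) - F_Y^{-1}(p),
    estimated per gene from the sample quantiles of each condition.
    expr_x: (G, n_X) array; expr_y: (G, n_Y) array."""
    qx = np.quantile(expr_x, probs, axis=1).T  # (G, P) quantiles, condition X
    qy = np.quantile(expr_y, probs, axis=1).T  # (G, P) quantiles, condition Y
    return qx - qy

# Toy usage: 1000 genes, 30 samples per condition, as in the later example.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 30))
Y = rng.normal(size=(1000, 30))
V = quantile_differences(X, Y)
print(V.shape)  # (1000, 3)
```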
The truly differentially expressed genes, however, should show a different behaviour, and for this reason they can be considered outliers in V. Thus, in two main steps, our approach attempts to find outliers in V that can be identified as differentially expressed genes.

[Fig. 2. Visualization of the \(\hat{F}_{X_{g}}^{-1}(p)- \hat{F}_{Y_{g}}^{-1}(p)\) differences for p∈Cp={0.25,0.5,0.75} for two genes. Left: a gene whose expressions in conditions X and Y are not differentially expressed (non-DE gene); right: a gene that is differentially expressed between conditions X and Y (DE gene).]

First step: finding potential differentially expressed genes

Let V=(vgp) be the G×P matrix with \(v_{gp} =\hat{F}_{X_{g}}^{-1}(p)- \hat{F}_{Y_{g}}^{-1}(p)\), p∈Cp, where Cp is a set of probabilities (P=#Cp). As DE genes are expected to be outliers with respect to the \(\mathbf{v}_{g} = \left(v_{gp}\right)'_{p \in C_{p}}\) values, the procedure computes the robust index of outlyingness OR [13, 14] as follows. Let dgh denote the Euclidean distance between vg and vh. For a fixed gene g, the OR statistic is given by:

$$OR(g)=\frac{\operatorname{Median}_{h = 1, \ldots,G}\left\{d^{2}_{gh}\right\}}{\frac{1}{2}\operatorname{Median}_{h, \, j =1, \ldots, G}\left\{d^{2}_{hj}\right\}}$$

In this ratio, the numerator is the median of all (squared) distances from the gene of interest to all the other genes; the denominator is (half) the median of all (squared) distances among all genes. Consequently, given a set of G genes, OR yields a ranking of genes: genes with a large OR value lie far away from the set of all genes and are therefore possible outliers (i.e., important genes). In this way, the original G×n data matrix is first transformed into a G×P matrix of differences vgp between Cp-quantiles, and then into a G-dimensional vector of OR values, OR=(OR(g1),…,OR(gG))'.

As the distribution of OR is unknown, the procedure uses permuted samples to generate values associated with genes that are not differentially expressed. The expression values of each gene are permuted B times; more precisely, each sample carries a condition label, and the permutation is carried out on the labels. Once the labels of the samples have been reassigned by permutation (B times), the expressions of the genes are classified according to those two classes. The procedure then computes the matrix \({\mathbf{V}}_{b}^{*}\) and the vector ORb=(ORb(g1),…,ORb(gG))' for each permutation b (b=1,…,B). Given a fixed α∈(0,1), it calculates the (1−α)100% percentile of all the ORb values obtained from the permuted samples; let us denote this value by cα. The method then excludes those genes g with OR(g)≤cα, so DE genes must lie in the subset Sα={g | OR(g)>cα}. We call these the potential genes. A minimal sketch of this first step is given below.
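The following R sketch computes the quantile-difference matrix V, the OR index and the permutation threshold cα. It assumes an expression matrix `expr` (genes in rows) and a label vector `labels`; the names are illustrative and do not come from the released ORdensity code.

```r
# Sketch of step 1 (illustrative names, not the authors' released code).
# expr: G x n matrix of expression values (genes in rows)
# labels: character vector of length n with entries "X" or "Y"
Cp <- c(0.25, 0.50, 0.75)

quantile_diffs <- function(expr, labels) {
  qX <- t(apply(expr[, labels == "X", drop = FALSE], 1, quantile, probs = Cp))
  qY <- t(apply(expr[, labels == "Y", drop = FALSE], 1, quantile, probs = Cp))
  qX - qY                               # G x P matrix V of quantile differences
}

or_index <- function(V) {
  d2 <- as.matrix(dist(V))^2            # d2[g, h]: squared Euclidean distance
  apply(d2, 1, median) / (0.5 * median(d2))  # Median_h d2[g,h] over half Median_{h,j} d2[h,j]
}

V  <- quantile_diffs(expr, labels)
OR <- or_index(V)

# Null reference: permute the condition labels B times
set.seed(1)
B <- 100; alpha <- 0.05
OR_perm <- replicate(B, or_index(quantile_diffs(expr, sample(labels))))
c_alpha <- quantile(OR_perm, 1 - alpha)   # (1 - alpha)100% percentile of all OR_b values
S_alpha <- which(OR > c_alpha)            # potential DE genes
```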
Second step: identifying differentially expressed genes

By construction, there are α×G×B cases with ORb(h)>cα among the permuted samples, and we call Rα={h | ORb(h)>cα, b=1,…,B} the set of those cases; that is, Rα contains all permuted cases h with ORb(h)>cα for some b. These cases correspond to genes that are not DE but have large OR values, and the corresponding values \({\mathbf{v}}_{b, h}^{*}\) reflect the behaviour of such genes with respect to the differences between quantiles.

Hence, on the one hand we have the differences \(\left(\mathbf{v}_{g}'\right)_{g \in S_{\alpha}}\) for the potential differentially expressed genes, and on the other hand we know that the differences \(\left(\mathbf{v}_{b,\,h}^{*}\right)_{h \in R_{\alpha}}\) represent the behaviour of cases that are false positives with a misleadingly large value of OR. Therefore, analyzing the differences and similarities between vg and \(\mathbf{v}_{b,\,h}^{*}\) provides a way to discriminate the truly DE genes within the set Sα of potential DE genes. To this end, the matrix \(\left(\mathbf{v}_{b,\,h}^{*}\right)_{h \in R_{\alpha}}\) is randomly divided into k folds, and we consider the union of the set Sα with the ith fold Rα,i={h∈Rα | h in the ith fold}, that is, Ui=Sα∪Rα,i. To understand what happens in Ui, consider the following small artificial example.

Small artificial example: We simulated [15] 1000 genes under two conditions X and Y, with 30 samples per condition; 60 of these genes were generated as differentially expressed. We took Cp={0.25,0.5,0.75} to build the (1000×3) matrix V of differences between the quartiles, and these differences were weighted by {0.25,0.5,0.25}, respectively. The procedure obtained the matrix V of differences between the weighted quartiles and calculated the OR statistic for the 1000 original genes, OR=(OR(g1),…,OR(g1000))'. Next, for B=100 permuted samples of X and Y, we computed the matrix \({\mathbf{V}}_{b}^{*}\) of differences between the weighted quartiles and the corresponding OR values ORb=(ORb(g1),…,ORb(g1000)), b=1,…,B. For a fixed α=0.05, the (1−α)100% percentile of ORb over the B=100 permuted samples was c0.05=6.27, and the set S0.05={g | OR(g)>c0.05}={g | OR(g)>6.27} contained 100 genes, the potential DE genes. Furthermore, among the 1000×100 permuted observations, 5000 had an OR value above the threshold c0.05=6.27. Next, we considered a 10-fold partition. For fold 1, Fig. 3a shows the first two principal components of the matrix \(\left[\left(\mathbf{v}_{g}\right)_{g \in S_{\alpha}} \, | \left(\mathbf{v}^{*}_{b,\,h}\right)_{h \in R_{\alpha, i}}\right]'\), with 99.3% of explained variability. As can be observed, a potential gene g that is not really differentially expressed is closely surrounded by cases from Rα,i, that is, by false positive permuted cases. On the contrary, a gene g that is really differentially expressed is not surrounded by cases from Rα,i. This holds for every fold and clearly shows the intuitive idea behind the proposed methodology.

[Fig. 3. Illustrative example. a First two principal components of the data \(\left[(\mathbf{v}_{g})_{g \in S_{\alpha}} \, | (\mathbf{v}_{b,h}^{*})_{h \in R_{\alpha \,, 1}}\right]'\) corresponding to fold 1 (99.3% of explained variability): potential genes (genes in S0.05) are represented by circles, false positive cases (in Rα,i) by "p"s, and the genes generated as truly DE by crosses. b Representation of the potential genes based on OR (vertical axis), FP (horizontal axis) and dFP (circle size inversely proportional to its value); truly DE genes are marked with a cross, and genes belonging to clusters 1 and 2 are shown in red and blue, respectively.]

Following on with the method, let (fi, fi0) denote the proportions of potential genes and of permuted cases in the set Ui (i=1,…,k).
As observed in the small artificial example, if a potential gene is genuinely DE, its behaviour should differ from that of the cases in Rα,i; conversely, if a potential gene behaves similarly to the cases in Rα,i, it should be considered not truly DE. So, for each gene g in Sα, its 10-nearest-neighbourhood NNi(g) in Ui is considered, and two indicators are calculated in this neighbourhood: the number of cases from Rα,i, FPi(g)=#{j∈NNi(g) | j∈Rα,i} (the number of false positive permuted cases), and the density \({dFP}_{i}(g) = {FP}_{i}(g)/\max_{j \in {NN}_{i}(g)}\left\{d_{gj}^{2}\right\}\). The denominator \(\max_{j \in {NN}_{i}(g)}\left\{d_{gj}^{2}\right\}\) will rarely be 0, since that would require 10 tied nearest distances for g in NNi(g); if it does equal 0, then dFPi(g)=0 if FPi(g)=0 and dFPi(g)=NaN if FPi(g)>0. Gathering the k folds, we obtain for each g∈Sα the average number of false positive permuted cases (FP) and the average density (dFP) of false positives in the neighbourhood: \(FP(g)=1/k\sum_{i}{FP}_{i}(g)\) and \(dFP(g)=1/k\sum_{i}{dFP}_{i}(g)\). Thus, to discriminate the DE genes among those in Sα, we look for genes g with low values of FP(g) and dFP(g) together with high values of OR(g). Under this criterion, two types of differentially expressed gene selection can be made:

ORdensity strong selection: take as differentially expressed genes those with a large OR value and with FP and dFP equal to 0.

ORdensity relaxed selection: take as differentially expressed genes those with a large OR value and with small FP and dFP values.

The average proportions of potential genes and of permuted cases across the k folds (\(f=1/k\sum f_{i}\) and \(f_{0}=1/k\sum f_{i0}\), respectively) are a good reference when judging what counts as a small value of FP. Furthermore, beyond the inspection of individual genes, we tackled the selection of DE genes by clustering the genes in Sα based on the input variables OR, FP and dFP. This can be useful, as it reveals different patterns of genes according to their importance.

Small artificial example (cont.): Using the Partition Around Medoids (PAM) clustering method [16] on the variables OR, FP and dFP, and selecting two clusters as indicated by the silhouette analysis [17], the variables presented the basic characteristics shown in Table 1, and the two clusters are represented in Fig. 3b. It is worth mentioning that the average distribution of potential genes and permuted cases in the sets Ui was \(\left(\sum_{i}f_{i}/10, \sum_{i}f_{i0}/10\right) = (0.17, 0.83)\). This means that if the distribution were random, an average of 8.3 permuted cases would be expected in the 10-NN; clearly, in cluster 1 the FP values are below this value. Finally, we checked the distribution of the real DE genes across the two clusters: the 60 genes in cluster 1 are exactly the 60 real DE genes.

[Table 1. Illustrative example. Basic description of the OR, FP and dFP values in the two clusters (cluster 1: n1=60) obtained using the PAM and silhouette procedures.]

A minimal sketch of this second step follows.
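This sketch continues from the step-1 sketch above; `V_pot` and `V_fp` are illustrative names for the rows vg (g in Sα) and the permuted false-positive cases in Rα, and scaling choices are assumptions, not the authors' exact settings.

```r
# Sketch of step 2 (illustrative). V_pot: rows v_g for the genes in S_alpha;
# V_fp: rows v*_{b,h} for the permuted false-positive cases in R_alpha.
k <- 10; nn <- 10
fold <- sample(rep(1:k, length.out = nrow(V_fp)))     # random k-fold split of R_alpha

FPk <- dFPk <- matrix(0, nrow(V_pot), k)
for (i in 1:k) {
  U     <- rbind(V_pot, V_fp[fold == i, , drop = FALSE])
  is_fp <- c(rep(FALSE, nrow(V_pot)), rep(TRUE, sum(fold == i)))
  d2    <- as.matrix(dist(U))^2
  for (g in seq_len(nrow(V_pot))) {
    d <- d2[g, ]; d[g] <- Inf             # exclude the gene itself
    nb <- order(d)[1:nn]                  # its 10 nearest neighbours in U_i
    FPk[g, i]  <- sum(is_fp[nb])          # FP_i(g): permuted cases in the 10-NN
    dFPk[g, i] <- FPk[g, i] / max(d[nb])  # dFP_i(g); see the text for max = 0
  }
}
FP  <- rowMeans(FPk)                      # averages over the k folds
dFP <- rowMeans(dFPk)

strong <- which(FP == 0 & dFP == 0)       # ORdensity strong selection

# Cluster the potential genes on (OR, FP, dFP); choose k by silhouette in practice
library(cluster)
cl <- pam(cbind(OR = OR[S_alpha], FP = FP, dFP = dFP), k = 2)
```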
Note 1: When the variability of genes differs between the types of samples, it is advisable to scale the differences between quantiles, for instance \(v_{gp} =\left(\hat{F}_{X_{g}}^{-1}(p)- \hat{F}_{Y_{g}}^{-1}(p)\right)/\max\{RI(X_{g}), RI(Y_{g})\}\), with RI(Xg) and RI(Yg) the interquartile ranges in samples X and Y, respectively.

Note 2: As the method considers differences between quantiles, it may be interesting to give greater importance to some of them, such as the median. At the same time, robustness against possible fluctuations in the quantile estimates can be gained. Therefore, different weights for the quantiles can be introduced in the procedure.

Note 3: We have considered, for each gene g in Sα, its 10-nearest-neighbourhood. This is a parameter of the method that could be set in different ways; it is used to obtain better estimates of the proportions of potential genes and permuted cases in the set Ui. In this sense, a small number of neighbours, such as 5, does not seem adequate, since the percentage of false positive permuted cases that could be detected would be very unstable. On the other hand, note that since the method works with the mean value of the false positive density, the possible effect of the number of chosen neighbours is minimized.

Experimental setup

To evaluate the usefulness of the ORdensity procedure, we simulated multiple gene expression data sets using different parameter settings. Furthermore, we applied the procedure to four real cancer data sets. All computations were performed using the R Language and Environment for Statistical Computing (R) 3.3.1 [18, 19] in combination with Bioconductor 3.3 [6]. As the proposed method may depend on the quantile estimates of each gene's expression under the different conditions, different sample-size situations, including the case of small samples, were considered in the simulation studies. Moreover, for both simulated and actual data, the method was compared with other well-recognized methods in order to assess whether it could compete with them.

Simulation study 1

We assumed a total of 1000 genes, among which 50, 100 or 200 were differentially expressed (DE). The expression levels of all non-DE genes were generated from \(N\left(0, \sigma_{g}^{2}\right)\) and \(N\left(0, \gamma_{g}^{2}\right)\) distributions in conditions X and Y, respectively, while the DE genes were generated from \(N\left(0, \alpha_{g}^{2}\right)\) and \(N\left(\mu_{g}, \beta_{g}^{2}\right)\) distributions in conditions X and Y, respectively, with |μg|=Δg max{αg,βg}. The parameter Δg sets the importance of gene g, a gene being more important the larger Δg is. Within this general setting, three scenarios were considered:

Scenario 1: All genes had equal variability (σg = γg = 1, αg = βg = 1), and all DE genes were equally important, i.e., Δg=Δ, with Δ in {1.5,2,3}.

Scenario 2: Genes did not necessarily have equal variability (σg≠γg, αg≠βg); the variabilities were randomly selected from {1,1.2,1.5,2}. All DE genes were equally important, i.e., Δg=Δ for all g, with Δ in {1.5,2,3}.

Scenario 3: Genes did not necessarily have equal variability (σg≠γg, αg≠βg), nor was the importance of the DE genes the same (Δg≠Δh for g≠h). The variability parameters were randomly selected from {1,1.2,1.5,2} as in the previous scenario, and the Δg values were randomly selected from {1.5,2,3}.

To better understand the performance of our approach, we simulated data for the three scenarios assuming equal sample sizes for X and Y (nX=nY=30) as well as different sample sizes (nX=30, nY=10). For the most general situation, scenario 3, we also considered the case nX=nY=10 in order to evaluate the procedure for small sample sizes. A sketch of one such simulated data set is shown below.
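This short R sketch generates one Scenario 1 data set; the gene ordering (DE genes first) and all names are illustrative assumptions.

```r
# One Scenario 1 data set (sketch): 1000 genes, the first 100 DE with Delta = 2.
G <- 1000; nDE <- 100; nX <- nY <- 30; Delta <- 2
mu <- c(rep(Delta, nDE), rep(0, G - nDE))       # |mu_g| = Delta since alpha_g = beta_g = 1
X  <- matrix(rnorm(G * nX), G, nX)              # condition X: N(0, 1) for every gene
Y  <- matrix(rnorm(G * nY, mean = mu), G, nY)   # mu recycles over columns, so row g has mean mu[g]
expr    <- cbind(X, Y)
labels  <- rep(c("X", "Y"), c(nX, nY))
true_de <- seq_len(nDE)
```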
Using the area under the ROC curve, for the three scenarios and for nX=nY=30, nX=30, nY=10 and nX=nY=10, the ORdensity results were compared with those obtained by other well-known methods in this field, namely Significance Analysis of Microarrays (SAM [2]) and linear models with the empirical Bayes statistic (limma [4, 5]). For each situation, 100 replicates were performed.

The SAM method is a modification of the t-statistic introduced to avoid the effect of small per-gene variances that can make small fold-changes statistically significant; the modification adds a value calculated from the distribution of gene-specific standard errors. SAM was applied using the samr package for Bioconductor [6] in R [18, 19]. The empirical Bayes statistic (eBayes) is equivalent to shrinking the estimated sample variances towards a pooled estimate, resulting in far more stable inference when the number of arrays is small. The linear model with empirical Bayes statistic (called limma in the following) was applied using the limma package for Bioconductor [6] in R [18, 19].

Simulation study 2

This simulation was set up with blocks of DE genes [20] that are correlated within each block. The data were simulated from a multivariate normal distribution, with all genes having variance 1 and correlation 0.9 between genes within the same block. That is, the variance-covariance matrix was block-diagonal:

$$\Sigma = \left(\begin{array}{cccc} \sigma_{block} & \mathbf{0} & \ldots & \mathbf{0}\\ \mathbf{0} & \sigma_{block} & \ldots & \mathbf{0}\\ \vdots & \vdots & \ddots & \vdots\\ \mathbf{0} & \mathbf{0} & \ldots & \sigma_{block}\\ \end{array}\right)$$

where \(\sigma_{block}=(\sigma_{gh})\) with \(\sigma_{gg}=1\) and \(\sigma_{gh}=0.9\) for g≠h. We considered 1, 2 or 3 blocks, each block containing 5, 20 or 100 DE genes. Following [20], the difference between the mean values of conditions X and Y was set depending on the number of blocks:

One block: μA=−1.65, μB=1.65

Two blocks: μA=(−1.18,−1.18)', μB=(1.18,1.18)'

Three blocks: μA=(−0.98,−0.98,−0.98)', μB=(0.98,0.98,0.98)'

Once the DE genes had been generated, 4000 variables representing non-DE genes were added: 2000 following a N(0,1) distribution and 2000 following a \(\mathcal{U}(-1, 1)\) distribution. For each number of blocks, we considered equal sample sizes for X and Y (nX=nY=30) and different sample sizes (nX=30, nY=10). For each situation, 100 replicates were run, and using the area under the ROC curve the ORdensity results were compared with those obtained by limma and SAM.

In all the above simulations, in order to obtain comparable results throughout the 100 runs, we always considered 3 clusters determined by the PAM [16] clustering procedure. Obviously, to be fully accurate, the number of clusters would have to be determined for each data set. A sketch of the correlated-blocks design follows.
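The following R sketch generates one data set of the correlated-blocks design just described (one block of 20 DE genes); object names are illustrative.

```r
# One correlated block (sketch): 20 DE genes with within-block correlation 0.9,
# plus 4000 independent non-DE genes, following the design above.
library(MASS)
p <- 20; nX <- nY <- 30
Sigma_block <- matrix(0.9, p, p); diag(Sigma_block) <- 1
X <- t(mvrnorm(nX, mu = rep(-1.65, p), Sigma = Sigma_block))   # genes in rows
Y <- t(mvrnorm(nY, mu = rep( 1.65, p), Sigma = Sigma_block))
noise <- rbind(matrix(rnorm(2000 * (nX + nY)), 2000),          # N(0, 1) non-DE genes
               matrix(runif(2000 * (nX + nY), -1, 1), 2000))   # U(-1, 1) non-DE genes
expr <- rbind(cbind(X, Y), noise)
```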
Actual data sets: lymphoma, Golub, colon and prostate cancer

We considered four publicly available cancer data sets: a well-known lymphoma data set, available in the R package spls, and post-processed Golub, colon and prostate data sets that were downloaded from [21]. With these data sets, we compared the ORdensity results with SAM and limma. The standard rule for selecting a gene as DE with limma was an adjusted p-value smaller than 0.05; to compute the adjusted p-values for gene ranking, we used both a very stringent method (Bonferroni) and a more liberal procedure (BH, [9]). For SAM, the rule was a q-value [22] equal to 0. For the ORdensity procedure, the Partition Around Medoids (PAM) clustering [16] and the silhouette analysis [17] were performed in order to establish the number of clusters, and both the strong and the relaxed selections were considered.

We evaluated the results from three different perspectives: the agreement between the gene lists selected by the three methods; the ability to maintain consistent lists on samples based on subsets of 80% of the original data selected at random; and the leave-one-out cross-validation correct classification rate for future classifications obtained when a Weighted Distance Based Discriminant analysis (WDB-discriminant) using the Euclidean distance was performed [23]. The WDB-discriminant is an improvement of the distance-based rule [24, 25] that takes into account the statistical depth of the units; it was applied using the WeDiBaDis package available at https://github.com/ItziarI/WeDiBaDis.

Lymphoma cancer data set: This data set [26] contains the expression of 1095 genes measured on 42 adults with diffuse large B-cell lymphoma (DLBCL), which can present two different molecular forms, denoted DLBCL1 and DLBCL2. Half of the samples presented the form DLBCL1 and the other half the form DLBCL2.

Golub data set: This data set [27] contains the expression of 7129 genes. There are 72 samples with two different types of leukaemia: 47 with acute lymphoblastic leukaemia (ALL) and 25 with acute myeloblastic leukaemia (AML).

Colon cancer data set: This colon cancer study [28] consists of 6000 genes measured on 62 patients, 40 of them diagnosed with colon cancer and 22 of them healthy.

Prostate cancer data set: This study [29] considered 12,626 genes and 102 samples, 50 of which were non-tumour prostate samples and 52 of which were prostate tumours.

In all the experiments, with simulated or actual data sets, we considered the differences between the three quartiles, that is, Cp={0.25,0.5,0.75}. Moreover, we scaled these differences and weighted them by {1/4,1/2,1/4}, respectively, making the difference between the medians the most important one (see Notes 1 and 2 in the "Methods" section). In the case of the Golub data set, the differences were not scaled, because for some genes the interquartile range was null.

Next, we present the results obtained with the simulated data as well as with the actual cancer data sets. With the simulated microarray data, we evaluated the behaviour of the method with respect to the correct selection of DE genes since, in this case, we know which genes are actually differentially expressed. We assessed the general behaviour of the procedure in relation to the proportion of false positives among the genes selected as DE and, using the area under the ROC curve, we compared the ORdensity results with those obtained by the alternative approaches, SAM and limma; a minimal AUC helper is sketched below. Furthermore, we present a detailed evaluation of both the first and the second steps of the method.
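Since the AUC for a gene-ranking score reduces to the probability that a true DE gene scores above a non-DE gene, it can be computed directly in the Mann-Whitney form; this helper is a generic sketch, not taken from any package.

```r
# Mann-Whitney form of the AUC (sketch): the probability that a randomly chosen
# true DE gene receives a higher score than a randomly chosen non-DE gene.
auc <- function(score, is_de) {
  r  <- rank(score)                    # average ranks handle ties
  nP <- sum(is_de); nN <- sum(!is_de)
  (sum(r[is_de]) - nP * (nP + 1) / 2) / (nP * nN)
}
# e.g. auc(OR, seq_along(OR) %in% true_de)   # rank genes by their OR values
```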
With the actual microarray data, we evaluated and compared our procedure with the alternative approaches SAM and limma, measuring the method's ability to preserve predictive accuracy when classifying the samples, and the stability of the lists of selected genes when a fraction of the samples (20%) was eliminated at random.

General behaviour

As a general summary, the results indicated that the three variables built by the method, OR, FP and dFP, are good discriminative variables, correctly separating the DE genes from the non-DE genes. On the one hand, the number of DE genes not detected by the method was very small and always related to the least differentially expressed genes (see the "Detailed evaluation of step 1" subsection). On the other hand, with the strong selection no false positives were detected. With the more relaxed selection, and considering the partition into three clusters for all runs, the results indicated that in all situations the method only considers as DE the genes in clusters 1 and 2. The false positives were mostly in cluster 2, and the worst results were obtained with 50 simulated DE genes and small sample sizes (equal to 10). However, in the majority of situations the average number of false positives was 0 in cluster 1 and very small in cluster 2. It is worth mentioning that a partition into 3 clusters is not necessarily the best partition obtainable in each of the 100 runs; as a consequence, the variability of the percentage of false positives in cluster 2 was very high in some cases. That is, for some runs cluster 2 was formed by DE genes, but not for others, for which a partition with a different number of clusters would probably have been more appropriate (see the "Detailed evaluation of step 2" subsection).

Detailed evaluation of step 1

Regarding this step, it is necessary to evaluate the number of simulated DE genes that the method did not consider potential genes and therefore missed. The results indicated that, as the number of simulated DE genes increases, it becomes more likely that at least one DE gene is missed from the set of potential DE genes. Moreover, when the sample size in one or both conditions was small (equal to 10), the sensitivity to include all the simulated DE genes among the potential ones decreased. Nevertheless, the proportion of DE genes considered potential was always very large, and the DE genes not considered potential were very few and always associated with Δ=1.5.

In more detail, in the simulated data sets with equal sample sizes for the two conditions (nX=nY=30), and in scenarios 1 and 2, where all the simulated DE genes share the same value of the parameter Δg (Δg=Δ for all DE genes), we observe that as the number of simulated DE genes increases it becomes more likely that at least one DE gene is missed from the set of potential DE genes (Table 2, Fig. 4 and Additional file 1: Table S13, Figure S8). However, in any case, the proportion of DE genes considered potential is very large, always higher than 95%, with lower values in only two cases (78.80% and 85.24%, respectively); moreover, these worst results were associated with the lowest value of Δ, namely Δ=1.5. In scenario 3, where the importance of the DE genes differs and is set by Δg values selected at random from {1.5,2,3}, it is interesting to analyze the importance of the genes that are not included among the potential genes in Sα.
For instance, in the case of 50 DE genes, on average 99.32% of them were included in S0.01; moreover, the DE genes not detected as potential had Δg=1.5, the lowest value. Thus, all the DE genes not considered potential in S0.01 are among the least differentially expressed. Similar results are observed in the rest of the table (Additional file 1: Table S14 and Figure S9).

[Fig. 4. Simulation study 1, scenario 1 with nX=nY=30 and 100 runs. Evaluation of the first step of the ORdensity method using different values of α. Top: x-axis, number of DE genes; y-axis, estimated probability \(\hat{p}_{m}\) of failing to consider as potential at least one gene that really is DE. Bottom: x-axis, number of DE genes; y-axis, mean proportion of DE genes that the procedure considered potential DE genes.]

[Table 2. Simulation study 1, scenario 1 with nX=nY=30 and 100 runs. Evaluation of the first step of the ORdensity method using different values of α. For each Δ in {1.5, 2, 3} and each number of DE genes, the table shows the estimated probability \(\hat{p}_{m}\) of failing to consider as potential at least one true DE gene, and the mean proportion (row "%") of DE genes that the procedure considered potential (e.g., 99.84 (0.54) and 97.20 (2.2) for Δ=1.5); standard deviations in brackets.]

In the case where the sample sizes of the two conditions were different (nX=30, nY=10), the results were very similar for the three scenarios. As the sample size in one condition becomes smaller, the sensitivity to include all the DE genes among the potential ones decreases, that is, the probability of failing to consider at least one true DE gene as potential increases (Table 3, Fig. 5 and Additional file 1: Tables S15 and S16, Figures S10 and S11).

[Fig. 5 and Table 3. Simulation study 1, scenario 1 with nX=30, nY=10 and 100 runs. Evaluation of the first step of the ORdensity method using different values of α, in the same format as Fig. 4 and Table 2 (e.g., mean proportions 97.2 (2.8) and 100.0 (0.1)); standard deviations in brackets.]

For scenario 3, when both sample sizes are small (nX=nY=10), similar results were obtained; nevertheless, the mean proportion of DE genes included among the potential ones was notably lower. Again, the missed DE genes were mostly related to Δg=1.5 (Additional file 1: Table S17 and Figure S12).

Detailed evaluation of step 2

In the second step of the procedure, the interest lies in evaluating the number of false positive genes selected by the method. As a general summary, with the strong selection no false positives were selected. With the more relaxed selection, the method retained as DE, in all cases, the genes in clusters 1 and 2. The false positive genes were mostly in cluster 2, and the worst results were obtained with 50 simulated genes and small sample sizes (equal to 10). As mentioned before, using a partition into 3 clusters throughout all runs may lead to high variability in cluster 2.
Focusing on the details, in scenario 1 with nX=nY=30 and Δ=1.5, for any number of simulated DE genes, an average of 8.5 false positive permuted cases would be expected in the 10-NN by chance. For 50 simulated DE genes, the mean FP value in clusters 1 and 2 is clearly below this reference value. The method then considered the genes in clusters 1 and 2 as DE, with on average less than one false positive gene in cluster 1 (0.2% in cluster 1) and an average of 4.5 false positive genes in cluster 2 (20.7% in cluster 2). When the number of simulated DE genes increases, only a small number of false positive genes were obtained: for 100 simulated DE genes, the method again considered the genes in clusters 1 and 2 as DE, with 0 false positives in cluster 1 and on average less than one false positive gene in cluster 2 (1.6%); similar results were obtained for 200 simulated DE genes. For larger values of Δ, and for any number of simulated DE genes, the average number of false positives per cluster is 0 or very small, being equal to 1.6 in the worst case (Table 4, Fig. 6). With equal sample sizes for the two conditions (nX=nY=30), in scenarios 2 and 3, small values of the average number of false positives per cluster were again obtained, being 0 or less than one in the majority of situations; in the worst situation, related to 50 simulated DE genes, the average number of false positives per cluster was 1.47.

[Fig. 6. Simulation study 1, scenario 1 with nX=nY=30 and 100 runs. Evaluation of the second step of the ORdensity method with α=0.05. x-axis: number of DE genes; y-axis: mean percentage of false positive genes per cluster (\(\overline{FPC}\)). Cluster 1 (C1) in red, cluster 2 (C2) in green and cluster 3 (C3) in blue.]

[Table 4. Evaluation of the second step of the ORdensity method with α=0.05. The first two columns give the Δ values and the total number of simulated DE genes. Column 3 gives the 10×f0 values, where f0 is the average proportion of permuted cases in the sets Ui. In column 4, a "*" indicates the clusters considered by the procedure. Columns 5–8 contain, for each cluster, the mean number of genes (\(\bar{n}_{i}\)), the mean OR value (\(\overline{OR}\)), the mean FP value (\(\overline{FP}\)) and the mean dFP value (\(\overline{dFP}\)). The last column gives the mean percentage of false positive genes per cluster (\(\overline{FPC}\)); standard deviations in brackets.]

With different sample sizes (nX=30, nY=10), and even with small sample sizes (nX=nY=10), the average number of false positive genes was again very small. The worst cases were obtained with only 50 simulated DE genes and Δ=1.5; in the majority of the other situations, the average number of false positive genes was 0 or less than one (Table 4, Fig. 6 and Additional file 1: Tables S18–S23, Figures S13–S18). Moreover, for scenario 3, where the DE genes were associated with different values of Δg, the distribution of DE genes within each cluster in relation to their Δg values showed that the most important DE genes, those with Δg=3, lie mainly in cluster 1, while the DE genes included in cluster 3 are the least important, being mostly related to Δg=1.5 and never to Δg=3.
It is worth noting that similar results were obtained even for a small number of samples (nX=nY=10) (Additional file 1: Tables S19, S22 and S23, last column, and Figures S14, S17 and S18).

As in the previous simulation study, in Simulation study 2 the method identified the DE genes correctly. With the strong selection, again, no false positive genes were obtained. With the more relaxed selection, the worst results were obtained with only 5 simulated DE genes; in the majority of situations the method considers as DE only the genes in cluster 1, with 0 or a very small number of false positive genes.

Evaluation of the area under the ROC curve

The mean AUC values were very large and similar to those obtained by SAM and limma, even when the sample sizes were small (Table 5 and Additional file 1: Tables S24 and S25).

[Table 5. Mean AUC values for Simulation study 1, scenario 1 and 100 runs, for nX=nY=30 and nX=30, nY=10. In the first column, n indicates the number of DE genes and Δ the Δ values; A denotes the ORdensity method, B the limma method and C the SAM method.]

Concerning the results of the first step in Simulation study 2, the general behaviour is that the method correctly included the DE genes in the set of potential genes. With an equal number of samples (nX=nY=30), the method included all the DE genes in the set of potential genes, the worst result being 99.8% in the case of 3 blocks (Table 6). When the number of samples in one condition is small (nX=30, nY=10) and there is only one block of correlated genes, the same behaviour is observed; with 2 or 3 blocks of correlated genes, some of the DE genes can be missed, but the proportion of DE genes actually in Sα is very large, 87.5% in the worst case (Additional file 1: Table S26, Figure S19).

[Table 6. Simulation study 2 with nX=nY=30 and 100 runs per block: evaluation of the first step.]

Regarding the second step, as a general comment, one can see a strong correspondence between a cluster having high OR values and low false positiveness in the neighbourhood (FP, dFP), and that cluster containing a small number of false positive genes. This holds for different numbers of blocks and different numbers of genes per block. With nX=nY=30 and one block (Table 7, Fig. 7), the worst results were obtained for 5 simulated DE genes: the method considered only the genes in cluster 1 as DE, but the size of cluster 1 varied between 5 and 58 genes; this high variability in the number of genes in cluster 1 is probably due to always considering 3 clusters. For 20 or 100 simulated DE genes, the procedure again considers only the genes in cluster 1, and not one false positive gene was detected. For two blocks, with 5 or 20 DE genes per block, again only cluster 1 was considered and no false positives were obtained; with 100 simulated DE genes in each block, clusters 1 and 2 were considered by the method, with 0 and 15.9 false positive genes, respectively. With three blocks, no false positives were found, independently of the number of simulated DE genes. Similar results were obtained for nX=30, nY=10 (Additional file 1: Table S27, Figure S20).

[Fig. 7. Simulation study 2 with nX=nY=30 and 100 runs. Evaluation of the second step of the ORdensity method with α=0.05. x-axis: number of DE genes; y-axis: mean percentage of false positive genes per cluster (\(\overline{FPC}\)). Cluster 1 (C1) in red, cluster 2 (C2) in green and cluster 3 (C3) in blue.]

[Table 7. Simulation study 2 with nX=nY=30 and 100 runs: evaluation of the second step of the ORdensity method with α=0.05. The first column gives the number of blocks; the second, the total number of simulated DE genes;
column 3 gives the 10×f0 values, where f0 is the average proportion of permuted cases in the sets Ui; in column 4, a "*" indicates the clusters considered by the procedure; columns 5–8 contain, for each cluster, the mean number of genes (\(\bar{n}_{i}\)), the mean OR value (\(\overline{OR}\)), the mean FP value (\(\overline{FP}\)) and the mean dFP value (\(\overline{dFP}\)); the last column gives the mean percentage of false positive genes per cluster (\(\overline{FPC}\)); standard deviations in brackets.]

For nX=nY=30, the mean AUC values were very large and somewhat better than those obtained with limma and SAM, especially for one and two blocks (Table 8). Similar results were found for nX=30, nY=10.

[Table 8. Mean AUC values for Simulation study 2 with nX=nY=30 and 100 runs, by number of blocks and number of DE genes, for ORdensity, limma and SAM.]

In summary, the results with simulated data showed that ORdensity correctly detects the DE genes and is competitive with other well-known methods.

Actual data: lymphoma, Golub, colon and prostate cancer data sets

For the three procedures, the number of genes selected under the standard criterion varies, being much larger for SAM and limma. For the lymphoma data set (1095 genes), the ORdensity procedure selected 96 potential DE genes (#S0.05=96), distributed in two clusters with 19 and 77 genes, respectively. However, the strong selection could not be used, since no gene had FP and dFP values equal to 0. The limma method produced lists with 24 and 88 genes using the Bonferroni and BH procedures, respectively, and the SAM procedure produced a list containing 64 genes.

For the Golub data set (7129 genes), the procedure selected 556 potential DE genes, distributed in two clusters with 291 and 265 genes, respectively. Among the genes in cluster 1, only 4 had FP and dFP values equal to zero. Thus, with ORdensity, the most restrictive criterion gave a list with only 4 genes, and the relaxed criterion gave a list with 291 genes. The limma method produced lists with 193 and 938 genes using the Bonferroni and BH methods, respectively, and the SAM procedure produced a list with 403 genes.

For the colon data set (6000 genes), the ORdensity procedure selected 186 potential DE genes (#S0.05=186), distributed in three clusters with 59, 88 and 39 genes, respectively. Among the 59 genes of cluster 1, twelve had no false positive permuted cases in their neighbourhood, with FP and dFP values equal to zero. Thus, with ORdensity, the most restrictive criterion gave a list with only 12 genes, and the relaxed criterion gave a list with 59 genes. The limma method produced lists with 49 and 366 genes using Bonferroni and BH, respectively, and the SAM procedure produced a list containing 166 genes.

For the prostate data set (12,626 genes), 1531 potential DE genes (#S0.05=1531) were detected, belonging to two clusters with 990 and 541 genes, respectively. Of the 990 genes in cluster 1, 131 had no false positive permuted cases in their neighbourhood, with FP and dFP values equal to zero. Different numbers of selected genes were therefore considered: the 131 produced by the restrictive ORdensity selection; the 990 of the relaxed ORdensity selection; the 1531 total candidate genes; the 263 and 2684 selected by the limma rule under Bonferroni and BH, respectively; and the 3322 genes selected under the SAM procedure.

The results obtained with these actual data sets are shown in Tables 9, 10, 11 and 12, respectively; the LOOCV evaluation loop used throughout is sketched below.
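The following is a minimal sketch of the LOOCV loop. Since the exact WeDiBaDis API is not shown here, a plain nearest-centroid Euclidean rule stands in for the WDB-discriminant; the function and object names are illustrative.

```r
# LOOCV evaluation loop (sketch). A plain nearest-centroid Euclidean rule stands
# in here for the WDB-discriminant of the WeDiBaDis package used in the paper.
loocv_rate <- function(expr_sel, labels) {     # expr_sel: selected genes x samples
  hits <- sapply(seq_along(labels), function(j) {
    train <- expr_sel[, -j, drop = FALSE]
    cents <- sapply(unique(labels), function(cl)
      rowMeans(train[, labels[-j] == cl, drop = FALSE]))  # one centroid per class
    pred <- colnames(cents)[which.min(colSums((cents - expr_sel[, j])^2))]
    pred == labels[j]
  })
  100 * mean(hits)                             # correct classification rate (%)
}
# e.g. loocv_rate(expr[de_list, ], labels)     # de_list: genes from any selection rule
```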
The leave-one-out cross-validation correct classification rates (rows I in Tables 9, 10, 11 and 12) indicate that ORdensity does not lead to overfitting and can achieve the twin objectives of reducing the set of selected genes and reaching high leave-one-out cross-validation correct classification rates. With the lymphoma data set, ORdensity reached a 100% leave-one-out cross-validation correct classification rate with the 19 genes it selected, a result matched by SAM and limma using 19 or 24 genes, respectively. With the Golub data set, a correct classification rate of 90.28% was reached using only 4 genes, and the maximum value of 97.22% was obtained with the 291 genes of the relaxed selection. Using the first 4 genes selected by limma or SAM improved the classification rate (97.22 versus 90.28), but with limma or SAM there was no objective criterion for selecting exactly 4 genes. A similar situation occurred with the colon and prostate data sets.

[Table 9. Lymphoma cancer data set (comparisons A vs B, A vs C, B vs C). Results for different numbers (Ns) of selected genes: 19 with the ORdensity relaxed selection; 24 with limma and Bonferroni; 64 with SAM; 88 with limma and BH; and 96 total potential DE genes. Rows I present the leave-one-out cross-validation correct classification rate, with the results for the genes selected under the standard criteria of ORdensity, limma and SAM in bold; rows II give the number of selected genes shared by the ORdensity, limma and SAM approaches; rows III give the mean and standard deviation (in brackets) of the number of genes that were always kept selected across 10 subsamples.]

[Table 10. Golub cancer data set (ORdensity (A), limma (B), SAM (C)). Results for different numbers (Ns) of selected genes: 4 with the ORdensity strong selection; 193 with limma and Bonferroni; 291 with the ORdensity relaxed selection; 403 with SAM; 556 total potential DE genes; and 938 with limma and BH. Rows I, II and III are as in Table 9 (rows III entries include 0.5 (0.71), 150.4 (3.89), 12.3 (2.26) and 151.8 (24.18)).]

[Table 11. Colon cancer data set. Results for different numbers (Ns) of selected genes: 12 with the ORdensity strong selection; 49 with limma and Bonferroni; 59 with the ORdensity relaxed selection; 166 with SAM; 186 total potential DE genes; and 366 with limma and BH. Rows I, II and III are as in Table 9.]

[Table 12. Prostate cancer data set. Results for different numbers (Ns) of selected genes: 131 with the ORdensity strong selection; 263 with limma and Bonferroni; 990 with the ORdensity relaxed selection; 1531 total potential DE genes; 2684 with limma and BH; and 3322 with SAM. Rows I, II and III are as in Table 9 (rows III entries include 1215.4 (99.66) and 563 (19.87)).]
As can be observed (rows II in Tables 9, 10, 11 and 12), the three methods share only some of the genes in their respective lists, as is usual with these types of procedures. Furthermore, the method with which our approach shares the most genes varies from data set to data set. These results indicate that ORdensity returns a small number of crucial genes strongly related to the disease, since the leave-one-out cross-validation correct classification rates are large. As a general trend, SAM and limma select a larger number of DE genes, which nevertheless does not guarantee better leave-one-out cross-validation correct classification rates.

It is important to note that ORdensity identifies several biologically relevant genes that are not detected by the other methods. For instance, consider the strong selection with the leukaemia data set. Interestingly, genes selected only by our method could be fundamental to explain some traits of the leukaemia: our method recognizes genes that code for cyclin D2 (a protein involved in the cell cycle), neprilysin (an enzyme common in acute lymphoblastic leukaemia), a protein-tyrosine phosphatase of T-cells, and a protein similar to phorbolin-1 (which can be expressed in leukocytes).

Finally, we evaluated the stability of the procedure, assessing how often a gene selected as DE with the original sample was selected again when a fraction (20%) of the original data was eliminated at random. In order to mitigate any effect of selection bias, the process was repeated 10 times; a sketch of this check is given below. Note that (rows III in Tables 9, 10, 11 and 12) the ORdensity procedure was, in general, the method that best maintained a coherent list of DE genes.
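A minimal sketch of this stability check follows; `select_genes()` is a placeholder for any of the selection pipelines compared above, and the names are assumptions.

```r
# Stability check (sketch): re-run a selection rule on random 80% subsamples and
# count how many of the originally selected genes are kept every time.
# select_genes() is a placeholder for any of the pipelines compared above.
stability <- function(expr, labels, selected, reps = 10, frac = 0.8) {
  kept <- replicate(reps, {
    idx <- sort(sample(ncol(expr), ceiling(frac * ncol(expr))))
    selected %in% select_genes(expr[, idx], labels[idx])
  })
  sum(rowSums(kept) == reps)              # genes re-selected in every subsample
}
```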
The results, obtained with actual cancer microarray data sets, showed that ORdensity is very useful to obtain a small number of DE genes with high correct classification rate by the leave-one-out cross-validation approach and it is, in general, more stable than other well accepted procedures when the original sample is substituted with a subsample. The main advantages of the proposed method are, therefore, that it returns very small sets of genes that retain a high predictive accuracy; the selected gene list is stable; it avoids the classical multiple comparison restrictions; as it takes into consideration the tails of distributions, it can detect outlying genes that only exhibit differential qualities in the tails, and, moreover, as we can cluster the genes using the three discriminative values OR, FP and dFP, different patterns of genes can be obtained. The results generated by our new procedure could be of extreme interest to biomedical research, because they can focus on a short, but crucial number of genes. Thus, these small numbers of genes could act as the corner stones to understand the origin and development of several serious diseases. The idea that lies beneath the proposed methodology seems applicable to RNA-Sequencing data. However, although it is usual to model RNA-Seq data as both negative binomial distribution and as normal distribution by ln-transforming normalized count data, because the properties of RNA-Seq data have not yet been fully established, additional research is needed. Here we present a new method to identify differentially expressed genes that avoids losing sensitivity due to correction by multiple comparisons. This method is able to identify the nucleus of the genes that are candidates to explain a particular disease or pathology. With this information, further biomedical studies can be developed, focusing the attention in these candidate genes. AML: Acute myelooblastic leukemia Area under the ROC curve BH: Benjamini and Hochberg adjustment method Differentially expressed DLBCL: Large B cell lymphoma dFP: Average density of false positive permuted cases in 10-NN FDR: False discovery Rate FP: Average number of false positive permuted cases in 10-NN FPC: Percentage of false positive genes per cluster FWER: Family wise error rate limma: Linear models with empirical Bayes statistic Outlier robust index PAM: Partition around Medoids clustering method The significant analysis of microarrays WDB-discriminant: weighted distance based discriminant We express our sincere thanks to the participants of the Data Science, Statistics and Visualisation (DSSV 2017) a Satellite Conference of the 61th World Statistics Congress (Lisboa, 2017) whose comments and suggestions led to substantial improvements in the content and organization of the paper. We also thank to the geneticist Prof. F. Mestres (University of Barcelona, Spain) his help in the interpretation of the biological function of the genes. The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article. This study was partially supported: II by the Spanish Ministerio de Economia y Competitividad (TIN2015-64395-R) and by the Basque Government Research Team Grant (IT313-10) SAIOTEK ProjectSA-2013/00397 and by the University of the Basque Country UPV/EHU (Grant UFI11/45 (BAILab). 
CA, by the Spanish Ministerio de Economía y Competitividad (SAF2015-68341-R and TIN2015-64395-R) and by Grant 2014 SGR 464 (GRBIO) from the Departament d'Economia i Coneixement de la Generalitat de Catalunya. The funders had no role in the study design, the data collection and interpretation, or the decision to submit the work for publication.

Availability of data: The authors declare that all data have been previously published and that the corresponding articles are cited in the text.

Authors' contributions: II and CA conceived the procedure, conducted and analyzed the results, and wrote and reviewed the manuscript. All authors read and approved the final manuscript.

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Additional file 1: This file contains Tables S13–S27 and Figures S8–S20 cited in the text. (PDF 533 kb)

Concepcion Arenas received a PhD degree in mathematics, specializing in statistics, from the University of Barcelona (Spain), where she is now a research professor in the Statistics section of the Department of Genetics, Microbiology and Statistics. Her research interests include multivariate analysis as applied to bioinformatics, specifically DNA sequence analysis and microarray interpretation. She also works in biomedical statistics. Itziar Irigoien received a PhD degree in informatics from the University of the Basque Country (UPV/EHU), Donostia, Spain, where she is now a research professor in the Department of Computation Science and Artificial Intelligence. Her research interests include the development of new statistical methods and software to solve bioinformatics and biomedical questions.

Author affiliations: Department of Computation Science and Artificial Intelligence, University of the Basque Country UPV/EHU, Donostia, Spain; Department of Genetics, Microbiology and Statistics, University of Barcelona, Barcelona, Spain.

References

1. Quackenbush J. Microarray analysis and tumor classification. N Engl J Med. 2006;354(23):2463–72.
2. Tusher VG, Tibshirani R, Chu G. Significance analysis of microarrays applied to the ionizing radiation response. Proc Natl Acad Sci U S A. 2001;98(9):5116–21.
3. Yang D, Parrish RS, Brock GN. Empirical evaluation of consistency and accuracy of methods to detect differentially expressed genes based on microarray. Comput Biol Med. 2014;46:1–10.
4. Smyth GK. Linear models and empirical Bayes methods for assessing differential expression in microarray experiments. Stat Appl Genet Mol Biol. 2004;3(1):1–25.
5. Ritchie ME, Phipson B, Wu D, Hu Y, Law CW, Shi W, Smyth GK. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015;43(7):e47.
6. Gentleman RC, Carey VJ, Bates DM, Bolstad B, Dettling M, Dudoit S, Ellis B, Gautier L, Ge Y, Gentry J, et al. Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004;5(10):R80.
7. Efron B, Tibshirani R. Empirical Bayes methods and false discovery rates for microarrays. Genet Epidemiol. 2002;23(1):70–86.
8. Allison DB, Cui X, Page GP, Sabripour M. Microarray data analysis: from disarray to consolidation and consensus. Nat Rev Genet. 2006;7(1):55–65.
9. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc B. 1995;57(1):289–300.
10. Guo L, Lobenhofer EK, Wang C, Shippy R, Harris SC, Zhang L, Mei N, Chen T, Herman D, Goodsaid FM, et al. Rat toxicogenomic study reveals analytical consistency across microarray platforms. Nat Biotechnol. 2006;24(9):1162–9.
11. Zhu J, Wang J, Guo Z, Zhang M, Yang D, Li Y, Wang D, Xiao G. GO-2D: identifying 2-dimensional cellular-localized functional modules in Gene Ontology. BMC Genomics. 2007;8(1):30.
12. Zhang M, Zhang L, Zou J, Yao C, Xiao H, Liu Q, Wang J, Wang D, Wang C, Guo Z. Evaluating reproducibility of differential expression discoveries in microarray studies by considering correlated molecular changes. Bioinformatics. 2009;25(13):1662–8.
13. Arenas C, Toma C, Cormand B, Irigoien I. Identifying extreme observations, outliers and noise in clinical and genetic data. Curr Bioinform. 2017;12(2):101–17.
14. Arenas C, Irigoien I, Mestres F, Toma C, Cormand B. Extreme observations in biomedical data. In: Ainsbury EA, Calle ML, Cardis E, et al., editors. Extended Abstracts Fall 2015. Trends in Mathematics, vol 7. Birkhäuser, Cham: Springer; 2017. p. 3–8.
15. Dembélé D. A flexible microarray data simulation model. Microarrays. 2013;2(2):115–30.
16. Kaufman L, Rousseeuw P. Clustering by means of medoids. Amsterdam: North-Holland; 1987.
17. Rousseeuw PJ. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math. 1987;20:53–65.
18. R Development Core Team. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2015. http://www.R-project.org/. Accessed 27 Aug 2018.
19. Ihaka R, Gentleman R. R: a language for data analysis and graphics. J Comput Graph Stat. 1996;5(3):299–314.
20. Díaz-Uriarte R, Alvarez de Andrés S. Gene selection and classification of microarray data using random forest. BMC Bioinformatics. 2006;7(1):3.
21. Jeffery IB, Higgins DG, Culhane AC. Comparison and evaluation of microarray feature selection methods. http://www.bioinf.ucd.ie/people/ian/. Accessed 27 Aug 2018.
22. Storey JD, Tibshirani R. Statistical significance for genomewide studies. Proc Natl Acad Sci U S A. 2003;100(16):9440–5.
23. Irigoien I, Mestres F, Arenas C. Weighted distance based discriminant analysis: the R package WeDiBaDis. R J. 2016;8(2):434–50.
24. Anderson MJ, Robinson J. Generalized discriminant analysis based on distances. Aust N Z J Stat. 2003;45(3):301–18.
25. Cuadras CM, Fortiana J, Oliva F. The proximity of an individual to a population with applications in discriminant analysis. J Classif. 1997;14(1):117–36.
26. Alizadeh AA, Eisen MB, Davis RE, Ma C, Lossos IS, Rosenwald A, Boldrick JC, Sabet H, Tran T, Yu X, et al. Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature. 2000;403(6769):503–11.
27. Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, Coller H, Loh ML, Downing JR, Caligiuri MA, et al. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science. 1999;286(5439):531–7.
28. Alon U, Barkai N, Notterman DA, Gish K, Ybarra S, Mack D, Levine AJ. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc Natl Acad Sci U S A. 1999;96(12):6745–50.
29. Singh D, Febbo PG, Ross K, Jackson DG, Manola J, Ladd C, Tamayo P, Renshaw AA, D'Amico AV, Richie JP, et al. Gene expression correlates of clinical prostate cancer behavior. Cancer Cell. 2002;1(2):203–9.
C Program To Evaluate Arithmetic Expression Using BODMAS

BODMAS (Brackets, Orders, Division, Multiplication, Addition, Subtraction) is the convention that fixes the order in which the parts of an arithmetic expression are evaluated: first whatever is inside brackets, then orders (powers and square roots), then division and multiplication, and finally addition and subtraction. Division and multiplication rank equally and are applied in order from left to right, and the same holds for addition and subtraction. For example, 9 - 6 + 1 = 3 + 1 = 4, because + and - have the same priority and are taken left to right. Without such a convention, the intended meaning of an expression like 6 / 2 * (1 + 2) would be ambiguous.

C follows the same idea through operator precedence and associativity. An operator placed between its two operands, as in A + B, is called a binary operator, and of the arithmetic operators the unary minus has the highest precedence level. In an expression such as a + b * c, multiplication binds more tightly than addition, so b * c is evaluated first; when an expression contains several multiplication, division, and remainder operations, evaluation proceeds from left to right. If you wish, you can use parentheses in expressions to clarify the evaluation order or to override precedence: the parentheses tell the compiler which operands go with which operators.

A classic way to evaluate an infix expression directly is Dijkstra's two-stack algorithm. Scan the expression from left to right, pushing values onto a value stack and operators onto an operator stack; each closing bracket pops one operator and two values, applies the operator, and pushes the result back onto the value stack. For a fully parenthesized expression this needs no precedence table at all, because the brackets themselves encode the evaluation order, as the sketch below shows.
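A minimal sketch of this two-stack evaluation for fully parenthesized expressions follows. It is written in Python so the whole algorithm fits in a few lines (the article's target is C, and the same structure ports directly to C with an explicit array-based stack); tokens are assumed to be space-separated, and only +, -, *, / are handled.

```python
# Two-stack evaluation of a fully parenthesized infix expression
# (Dijkstra's algorithm). Tokens must be space-separated, e.g.
# "( ( 1 + 2 ) * ( 3 - 4 ) )". Only +, -, *, / are supported.

OPS = {
    '+': lambda a, b: a + b,
    '-': lambda a, b: a - b,
    '*': lambda a, b: a * b,
    '/': lambda a, b: a / b,
}

def evaluate_parenthesized(expr):
    values, operators = [], []          # the two stacks
    for token in expr.split():
        if token == '(':
            continue                    # opening brackets carry no work
        elif token in OPS:
            operators.append(token)     # remember the operator
        elif token == ')':
            # a closing bracket completes one sub-expression:
            # pop one operator and its two operands, push the result
            op = operators.pop()
            b = values.pop()
            a = values.pop()
            values.append(OPS[op](a, b))
        else:
            values.append(float(token)) # operand
    return values.pop()

if __name__ == '__main__':
    print(evaluate_parenthesized("( ( 1 + 2 ) * ( 3 - 4 ) )"))  # -3.0
```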
Converting infix to postfix is one of the most important applications of a stack. In infix notation the operator sits between its operands, as in 2*3-4/5; in prefix notation it precedes them; and in postfix notation (Reverse Polish Notation, RPN) every operator follows all of its operands, so 2+3*4 becomes 2 3 4 * +. Postfix is the form a machine prefers: it needs no parentheses and no precedence table, and it can be evaluated in a single left-to-right scan with one stack. This is also why an infix expression entered into a computer is typically converted to postfix first, whether to evaluate it immediately or to build an expression tree from it.

Before any conversion, the input string must be tokenized, i.e., split into numbers and operators. In Java this can be done with the String class's split method or with regular expressions; in C you would walk the string character by character. The sketches here simply assume space-separated tokens.

The standard conversion is the shunting-yard algorithm. Scan the tokens from left to right: a number goes straight to the output; an operator first pops to the output any stacked operators of higher or equal precedence (for left-associative operators), then is pushed onto the operator stack; an opening bracket is pushed; a closing bracket pops operators to the output until the matching opening bracket is found and discarded. When the input is exhausted, any remaining operators are popped to the output. A sketch follows.
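The following sketch implements that conversion under the same assumptions as the previous snippet (space-separated tokens; binary +, -, *, / only, all left-associative); it is illustrative, not a full parser.

```python
# Shunting-yard: convert a space-tokenized infix expression to postfix
# (Reverse Polish Notation). Handles +, -, *, / and parentheses.

PRECEDENCE = {'+': 1, '-': 1, '*': 2, '/': 2}

def infix_to_postfix(expr):
    output, stack = [], []
    for token in expr.split():
        if token in PRECEDENCE:
            # pop operators with higher or equal precedence (all our
            # operators are left-associative), then push this one
            while (stack and stack[-1] in PRECEDENCE
                   and PRECEDENCE[stack[-1]] >= PRECEDENCE[token]):
                output.append(stack.pop())
            stack.append(token)
        elif token == '(':
            stack.append(token)
        elif token == ')':
            while stack[-1] != '(':     # pop back to the opening bracket
                output.append(stack.pop())
            stack.pop()                 # discard the '(' itself
        else:
            output.append(token)        # operand goes straight out
    while stack:
        output.append(stack.pop())
    return ' '.join(output)

if __name__ == '__main__':
    print(infix_to_postfix("2 + 3 * 4"))        # 2 3 4 * +
    print(infix_to_postfix("( 1 + 2 ) * 3"))    # 1 2 + 3 *
```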
Evaluating the postfix form is then mechanical: scan the postfix expression from left to right; when a value is encountered, push it onto the stack; when an operator is encountered, pop the right number of arguments (two, for a binary operator), apply the operator, and push the result back. When the scan ends, the single value left on the stack is the value of the expression. A sketch of this evaluator closes the note, after a few practical points about doing this in C.

First, mixed-mode expressions: an expression in which the two operands are not of the same type is called a mixed mode expression, and when an int and a double meet in one operation the int operand is converted to double before the calculation proceeds. A pure integer division, however, truncates, so any decimal places resulting from the evaluation are lost; the remainder operator %, in turn, can quickly tell you whether one number is a factor of another. Second, the logical operators evaluate left to right and stop as soon as the result is decided; this short-circuit evaluation is what lets a guard such as y != 0 && x / y > 1 save your program from a divide-by-zero crash. Third, outside such guarantees the order of evaluation of any part of an expression, including the order of evaluation of function arguments, is unspecified in C, so never let correctness depend on it; for the same reason, never invoke an unsafe function-like macro (one whose expansion evaluates a parameter more than once or not at all) with arguments containing assignments, increments, decrements, or other expressions with side effects. Finally, in a comma-separated arithmetic expression the final value is that of the last comma-delimited expression, and outside C proper, shells provide their own integer and floating point arithmetic through the builtin let or a substitution of the form $(( )).
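A matching sketch of the one-stack postfix evaluation, under the same assumptions as the earlier snippets:

```python
# Evaluate a space-tokenized postfix (RPN) expression with one stack.

OPS = {
    '+': lambda a, b: a + b,
    '-': lambda a, b: a - b,
    '*': lambda a, b: a * b,
    '/': lambda a, b: a / b,
}

def evaluate_postfix(expr):
    stack = []
    for token in expr.split():
        if token in OPS:
            b = stack.pop()             # right operand is on top
            a = stack.pop()
            stack.append(OPS[token](a, b))
        else:
            stack.append(float(token))  # operands are pushed as-is
    return stack.pop()

if __name__ == '__main__':
    print(evaluate_postfix("2 3 4 * +"))   # 14.0
```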
Recognition of printed small texture modules based on dictionary learning

Lifang Yu1, Gang Cao2, Huawei Tian ORCID: orcid.org/0000-0002-4079-59743, Peng Cao1, Zhenzhen Zhang1 & Yun Q. Shi4

Quick Response (QR) codes are designed for information storage and high-speed reading applications. To store additional information, Two-Level QR (2LQR) codes replace black modules in standard QR codes with specific texture patterns. When the 2LQR code is printed, the texture patterns are blurred and their sizes are smaller than \(0.5{\mathrm{cm}}^{2}\). Recognizing small-sized blurred texture patterns is challenging. In the original 2LQR literature, recognition of texture patterns is based on maximizing the correlation between print-and-scanned texture patterns and the original digital ones. When desktop printers with large pixel extensions and low-resolution capture devices are employed, the recognition accuracy of texture patterns drops greatly. To improve the recognition accuracy in this situation, our work presents a dictionary learning based scheme to recognize printed texture patterns. To the best of our knowledge, it is the first attempt to use dictionary learning to improve the recognition accuracy of printed texture patterns. In our scheme, dictionaries for all kinds of texture patterns are learned from print-and-scanned texture modules in the training stage. These learned dictionaries are then employed to represent each texture module in the testing stage (the extraction process) and thereby recognize its texture pattern. Experimental results show that our proposed algorithm significantly reduces the recognition error of small-sized printed texture patterns.

Two-dimensional codes [1] break through the constraints of the traditional industrial marketing model and push industry toward the "Internet +" model by docking online and offline information. Nowadays, QR codes are widely used in logistics, transportation, product sales, and many other areas that need automated information management. A QR code is a trademark barcode with a machine-readable optical label; by scanning the label, the information stored in the barcode can be extracted quickly. In addition, the error correction function of QR codes allows barcode readers to recover the QR code data without any loss when the code becomes dirty or damaged [2]. With barcode readers, QR data can be accessed easily and efficiently. However, because QR codes are machine-readable symbols with a public encoding, the information they store is insecure. To improve the security of QR code applications, researchers have proposed a series of security solutions. Early methods of carrying secret information with QR codes adopted traditional data hiding and watermarking technology [3,4,5], embedding the secret into the pixels/coefficients and spatial/frequency domains of the carrier image. Such embedding algorithms, unfortunately, are not suitable for QR tags [6,7,8,9,10], because they treat the QR tag as an ordinary image and conceal the secret in its pixels or coefficients without considering the characteristics of the QR modules, so decoding requires further image processing such as pixel and frequency transformation. Other works study QR codes more deeply and embed information in combination with their structural characteristics.
In such methods, the information on the public level of the original QR code can be read without further image processing. Chuang et al. [11] used the Lagrange interpolation algorithm to divide secret information into several shared parts and then coded them into two-level QR codes. Huang et al. [12] proposed a bidirectional authentication scheme based on the Needham-Schroeder protocol and analyzed the anti-attack capability of the authentication protocol using Gong-Needham-Yahalom logic. Krishna et al. [13] put forward a product anti-counterfeiting scheme using the DES (Data Encryption Standard) algorithm, which encodes the encrypted ciphertext and the plaintext information into a two-layer two-dimensional code. Because references [11,12,13] store both plaintext and ciphertext in a given carrier QR code, the secret payload capacity is limited. Chiang et al. [2] concealed the secret in the QR modules directly by exploiting the error correction capability. Lin et al. [14] embedded secret data into a cover QR code without distorting the readability of the QR content: general QR readers can read the QR content from the marked QR code, so it attracts little attention, while only the authorized receiver can decrypt and retrieve the secret from the marked QR code. Lin et al. [15] used the error correction ability of the QR code together with LSB matching revisited as the embedding method, reversing the color of some modules to embed information. Chow et al. [16] investigated a method of distributing secret shares by embedding them into cover QR codes in a secure manner using cryptographic keys. Zhao et al. [17] proposed a scheme with low computational complexity, suitable for low-power devices in Internet-of-Things systems, that utilizes the error correction property of QR codes to hide secret information. Liu et al. [18] introduced a rich QR code with three-layer information that utilizes the characteristics of the Hamming code and the error correction mechanism of the QR code to protect the secret information. References [2, 14,15,16,17,18] all make use of the error correction capability of QR codes and flip black and white modules to hide secret information. On the one hand, the embedding capacity of such methods is bounded by the error correction level of the carrier QR code: a high error correction level permits a large embedding capacity, and vice versa. On the other hand, embedding the secret reduces the robustness of the public-level data in the QR code. Going beyond such flipping-based schemes, Tkachenko et al. [19] selected black modules of the QR code according to a secret scrambling sequence and replaced them with texture patterns, realizing two-level storage of information; the scheme is called the Two-Level QR (2LQR) code. When the public-level QR message is read, the texture patterns are recognized as black modules, so the scheme does not affect the error correction capacity of the cover QR code. The advantages of this approach are twofold. First, embedding the secret does not reduce the robustness of the public-level information. Second, the capacity of the secret message is not limited by the error correction level of the cover QR code: all black modules in the data and error correction areas can be used to embed secret information. Moreover, due to the masking operation in standard QR codes, the numbers of black and white modules tend to be the same, so for a given QR version the embedding capacity is almost the same regardless of the error correction level.
In reference [19], the extraction of texture patterns relies on the Pearson correlation, and to resist errors in texture pattern recognition (i.e., in secret message extraction), the secret message is encoded with the ternary Golay code before embedding. Tkachenko et al. [20] also studied the performance of other correlations for recognizing texture pattern patches, namely the Spearman, Kendall, and Kendall weighted correlations; the Kendall weighted correlation, which computes probability values in a preprocessing step, was demonstrated to achieve the best results. Although the 2LQR code has the advantage of not affecting the error correction capacity of the cover QR code, it expresses each secret message digit through the variation of a texture module, which amounts to hiding information at the pixel level. After the P&S process, distortions occur in both the pixel values and the geometric boundary of the P&S image. These distortions lower the recognition accuracy of texture patterns and reduce the effective embedding capacity of the secret message. In Sect. 2, the impact of the P&S process on pattern recognition of texture modules is analyzed in detail; the discussion implies that a recognition scheme able to extract the global structure of the texture module is needed. The Kendall weighted correlation also suggests that training can boost the pattern recognition accuracy of texture modules. As shown in reference [20], the Kendall weighted method outperforms the plain Kendall method, and the only difference between them is that the weighted variant computes probability values from a representative set of texture pattern patches during a preprocessing step and uses them to weight the Kendall measure. This preprocessing step resembles training in that it extracts information about the sample distribution from a representative set. Inspired by this, this work also adopts a training-based method and further mines the sample distribution of the training set. Based on the analysis in the above two paragraphs, a training-based method that can extract structural information of texture modules is expected to work well for pattern recognition of P&S texture modules. We therefore adopt dictionary learning, which has shown excellent performance in many fields such as image denoising [21, 22], inpainting [23,24,25], and classification [26,27,28]. To our knowledge, dictionary learning techniques have not previously been used for this type of application. In this paper, we propose a dictionary learning based pattern recognition method for P&S texture modules in 2LQR codes. In the dictionary generation stage, dictionaries are learned from a training set to optimally represent P&S texture modules. In the pattern recognition stage, each P&S texture module is represented by the learned dictionaries, and the dictionary that provides the smallest reconstruction error indicates the pattern. The rest of this paper is organized as follows. The following section states the problem and motivates the choice of dictionary learning. Related work, including the QR code, the 2LQR code, and dictionary learning, is reviewed in Sect. 3. The proposed dictionary learning based pattern recognition method is presented in Sect. 4, which covers dictionary generation for texture patterns, pattern recognition via learned dictionaries, and the framework of the whole recognition system. In Sect. 5, we describe the experiments and evaluate the results. Finally, conclusions are drawn in Sect. 6.
In this section, we discuss the application scenarios we focus on and the impact of the P&S process on pattern recognition of P&S texture modules, and we explain the choice of dictionary learning. For the 2LQR code to be widely usable, we wish to extract its secret information under the following conditions. Printer: a common desktop laser printer with 600 dpi (dots per inch) resolution. Scanning devices: a scanner of 600 dpi or lower resolution, a hand-held QR code scanner, or a smartphone. Common hand-held QR code scanners have a resolution of 400-600 dpi or less. Smartphones span a wide range of resolutions, but even with a high-resolution camera, the optical resolution actually available for the QR code declines when it is captured from a distance. Accordingly, this work focuses on the condition that the scanner resolution is equal to or less than the printer resolution.

The P&S process that a texture module goes through is shown in Fig. 1: texture modules in the 2LQR code are first printed to a hardcopy version and then scanned to a P&S version. After the P&S process, distortions occur in both the pixel values and the geometric boundary of the P&S image, and these distortions make recognizing the pattern of the texture module difficult.

Fig. 1. The P&S process that a texture module goes through

The image quality of the P&S texture module is mainly influenced by the printer and scanner resolutions. When the scanner resolution is lower than the printer resolution, the original black-and-white texture module is blurred by down-sampling and interpolation; the lower the scanner resolution, the more the texture module is blurred. Ignoring other distortions in the P&S process, scanning at a lower resolution is similar to resizing an image to smaller than its original size. Figure 2 shows this effect vividly: when an original texture pattern image is resized to \(2/3\), \(1/2\), and \(1/3\) of its original size, it becomes more and more blurred. To extract secret information in this situation, the recognition scheme must extract the inherent structural information from the blurred image.

Fig. 2. (a) An original texture pattern image resized to (b) \(2/3\), (c) \(1/2\), and (d) \(1/3\) of its original size becomes increasingly blurred. The size of the texture pattern is \(12\times 12\); each dot in the figure represents a pixel

Other factors also distort the P&S image, such as the printer's and scanner's inherent system noise, random noise, and the rotation, scaling, and cropping (RSC) introduced by the equipment and the operator. These distortions make the correction and positioning of texture modules inaccurate, causing, e.g., a misplacement of one row or column. Misplacement turns a texture module into a shifted version of the ideally corrected one, which breaks reading methods that depend on exact pixel values and positions (e.g., correlation-based methods). Figure 3 shows the texture pattern \({P}_{1}\) [19], its version \({P}_{1}^{^{\prime}}\) shifted upward by only one row, and the texture pattern \({P}_{3}\) [19]. A digital texture module such as \({P}_{1}^{^{\prime}}\) should be recognized as pattern \({P}_{1}\). However, \(corr\left({P}_{1}^{^{\prime}},{P}_{3}\right)\) is bigger than \(corr\left({P}_{1}^{^{\prime}},{P}_{1}\right)\), where \(corr\left(X,Y\right)\) denotes the Pearson correlation between \(X\) and \(Y\). So under the Pearson correlation based method of reference [19], \({P}_{1}^{^{\prime}}\) would be classified as pattern \({P}_{3}\). To handle this misplacement problem, a strategy that captures the structure of the texture module is urgently needed.

Fig. 3. Two texture patterns, one shifted version, and their correlation values: (a) texture pattern \({P}_{1}\), (b) upper-shifted version of pattern \({P}_{1}\), (c) texture pattern \({P}_{3}\), (d) correlation values
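The shift sensitivity of correlation matching is easy to reproduce. The sketch below (a minimal illustration, not the paper's code, which is in Matlab) builds hypothetical 12x12 binary patterns, shifts one of them up by a row, and computes the Pearson correlations; with the actual patterns \({P}_{1}\) and \({P}_{3}\) of [19], this is exactly the computation that yields \(corr({P}_{1}^{^{\prime}},{P}_{3})>corr({P}_{1}^{^{\prime}},{P}_{1})\).

```python
import numpy as np

def pearson(a, b):
    # Pearson correlation between two equal-sized image patches
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

rng = np.random.default_rng(0)

# Hypothetical stand-ins for two 12x12 binary texture patterns;
# the real P1 and P3 are given in reference [19].
p1 = (rng.random((12, 12)) < 0.42).astype(float)  # ~42% black pixels
p3 = (rng.random((12, 12)) < 0.42).astype(float)

# Simulate a one-row misplacement of the corrected module. np.roll
# wraps the last row around; a real scan would instead pull in
# background pixels, but the mismatch effect is the same.
p1_shifted = np.roll(p1, -1, axis=0)

print("corr(P1', P1) =", pearson(p1_shifted, p1))
print("corr(P1', P3) =", pearson(p1_shifted, p3))
```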
A dictionary captures the global structure of data [29] and has achieved good results in face recognition [30,31,32]. In face recognition it is likewise common that the acquisition resolution is low and the acquired image is a rotated, scaled, and/or shifted version of the standard face image. Inspired by this, this work takes the dictionary learning approach and expects it to perform well for pattern recognition of P&S texture modules.

This section is split into two parts (Sects. 3.1 and 3.2). We start with a description of the standard QR code features in Sect. 3.1; the two-level QR code proposed by Tkachenko et al. [19] is described in Sect. 3.2.

QR codes, developed by Denso Wave in 1994, are among the most popular two-dimensional (2D) barcodes. A QR code consists of black and white square modules [1, 33, 34]. Compared with the traditional one-dimensional (1D) barcode, the QR code's matrix of modules can carry a much larger amount of data. There are 40 versions in the QR code standard [33], and higher versions carry larger data capacities; for example, the data capacity of QR version 1 is 208 modules and that of QR version 40 is 29,648 modules. QR codes can be printed on paper or displayed on screens, and decoders are no longer confined to dedicated barcode scanners but can be mobile devices equipped with camera lenses. Because cameras have become standard equipment on smartphones, the popularity of smartphones creates a very favorable environment for using QR codes. Figure 4 shows the basic structure of the QR code: the quiet zone, position detection patterns, separators for position detection patterns, timing patterns, alignment patterns, format information, version information, and the data and error correction codewords. Further details of the QR code can be found in [1]. Because the functional regions are used to locate and geometrically correct the QR code, they are usually not utilized for secret embedding; the data and error correction codewords are the parts commonly employed to conceal secret information.

Fig. 4. The structure of QR codes [11]

Two-level QR code

The Two-Level QR (2LQR) code [19], proposed by Tkachenko et al. in 2016, is a rich QR code with two storage levels. It enriches the encoding capacity of the standard QR code by replacing its black modules with specific textured patterns. These patterns are still perceived as black modules by QR code readers, so they do not disrupt the standard QR reading process. The generation process of the 2LQR code is depicted in Fig. 5 and can be divided into four steps. Step 1: Generate a standard QR code from the public message \({M}_{1}\); the size of the QR code is \(N\times N\) pixels. Step 2: Encode the private message \({M}_{2}\) with an error correction code and scramble the codewords. Step 3: Select texture patterns from the pattern database; if the codewords in Step 2 are q-ary, then q texture patterns are selected. Step 4: Replace the black modules of the standard QR code with texture patterns according to the scrambled codewords.

Fig. 5. The generation process of the 2LQR code
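To make the embedding step concrete, here is a minimal sketch of Step 4 under simplifying assumptions (a binary module map instead of a full QR encoder, hypothetical 12x12 patterns, and no special handling of function patterns, whose black modules are left solid in the real scheme). It only illustrates how q-ary secret digits select which texture pattern replaces each black module.

```python
import numpy as np

def embed_2lqr(qr_modules, patterns, digits, p=12):
    """Replace each black module with the texture pattern selected by
    the next q-ary secret digit (a sketch of Step 4, not a QR encoder).

    qr_modules : (R, C) array, 1 = black module, 0 = white module
    patterns   : list of q arrays of shape (p, p), 1 = black pixel
    digits     : iterable of secret digits in {0, ..., q-1}, one per
                 black module
    """
    R, C = qr_modules.shape
    img = np.ones((R * p, C * p))            # start all white
    it = iter(digits)
    for r in range(R):
        for c in range(C):
            if qr_modules[r, c] == 1:        # black module -> texture
                pat = patterns[next(it)]
                img[r*p:(r+1)*p, c*p:(c+1)*p] = 1.0 - pat
    return img  # 1.0 = white, 0.0 = black
```

For instance, with q = 3 patterns and random ternary digits, `embed_2lqr(mods, [P1, P2, P3], digits)` would yield the textured code image; in the real scheme the function-pattern modules (e.g., the position tags) keep their solid black modules.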
The first level of the 2LQR code is the same as the storage level of the standard QR code and is accessible to any classical QR code reader. The second level is constructed by substituting black modules with specific texture patterns; it carries information encoded with a q-ary code with error correction capacity. Since 42% of each texture pattern is covered by black pixels, the patterns are treated as black modules by QR code readers, which allows the 2LQR code to increase the storage capacity of the QR code without affecting the error correction level of the cover QR code.

The decoding process of the 2LQR code consists of two parts, the decoding of the public message and of the private message; an overview is shown in Fig. 6. First, the geometrical distortion of the scanned 2LQR code is corrected in a pre-processing step: the position tags are localized by the standard process [1] to determine the position coordinates, and linear interpolation is applied to re-sample the scanned 2LQR code, so that at the end of this step the 2LQR code has the correct orientation and its original size of \(N\times N\) pixels. Second, module classification is performed with a global threshold, computed as the mean value of the whole scanned 2LQR code: if the mean value of a \(p\times p\)-pixel block is smaller than the global threshold, the block is assigned to the black class (BC); otherwise it is assigned to the white class. The result of this step is two classes of modules. Third, two parallel procedures are carried out. On one side, the public message \({M}_{1}^{^{\prime}}\) is decoded with the standard QR code decoding algorithm [1]. On the other side, the BC class is used for pattern recognition of the textured patterns in the scanned 2LQR code; the pattern detection method compares the scanned patterns with characterization patterns using the Pearson correlation. After descrambling and error correction decoding, the private message \({M}_{2}^{^{\prime}}\) is obtained.

Fig. 6. The decoding process of the 2LQR code
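The module classification step is simple enough to sketch directly. The snippet below is an illustrative reading of the description above (not the authors' Matlab code): it thresholds each \(p\times p\) block of a re-sampled 2LQR image against the global mean.

```python
import numpy as np

def classify_modules(img, p=12):
    """Split a geometrically corrected 2LQR image into black-class and
    white-class modules using the global-mean threshold described above.

    img : (N, N) grayscale array with N a multiple of p
    Returns an (N//p, N//p) boolean map, True = black class (BC).
    """
    threshold = img.mean()                   # global threshold
    n = img.shape[0] // p
    black = np.zeros((n, n), dtype=bool)
    for r in range(n):
        for c in range(n):
            block = img[r*p:(r+1)*p, c*p:(c+1)*p]
            black[r, c] = block.mean() < threshold
    return black
```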
The problem of dictionary learning is also solved from the perspective of analysis [43,44,45]. In addition to denoising, dictionary-based techniques have also been applied in inpainting [23,24,25] and classification [26,27,28]. The goal of dictionary learning is to learn an over-complete dictionary matrix \(D\in {\mathbb{R}}^{n\times K}\) that contains \(K\) signal-atoms (in this notation, columns of \(D\)). A signal vector \(y\in {\mathbb{R}}^{n\times K}\) can be represented, sparsely, as a linear combination of these atoms; to represent \(y\), the representation vector \(x\) should satisfy the condition \(y\approx Dx\), made precise by requiring that \({\Vert y-Dx\Vert }_{l}\le \epsilon\) for some small value \(\epsilon\) and some \({L}_{l}\) norm. The vector \(x\in {\mathbb{R}}^{K}\) contains the representation coefficients of the signal \(y\). Typically, the norm \(l\) is selected as \({L}_{1}\), \({L}_{2}\), or \({L}_{\infty }\). If \(n<K\) and \(D\) is a full-rank matrix, an infinite number of solutions are available for the representation problem. Hence, constraints should be set on the solution. Also, to ensure sparsity, the solution with the fewest nonzero coefficients is preferred. However, to achieve a linear combination of atoms in \(D\), the sparsity term of the constraint is relaxed so that the number of nonzero entries of each column \({x}_{i}\) can be more than 1, but less than a number \({T}_{0}\). So, the objective function is $$\underset{D,X}{\mathrm{min}}\sum_{i}{\Vert {x}_{i}\Vert }_{0},\mathrm{ subject to }\forall i, {\Vert Y-DX\Vert }_{F}^{2}\le\upepsilon ,$$ where the letter \(F\) denotes the Frobenius norm. In the K-SVD algorithm, the \(D\) is first fixed and the best coefficient matrix \(X\) is found. Then search for a better dictionary \(D\). Only one column of the dictionary \(D\) is updated each time while fixing \(X\). Better \(D\) and \(X\) are searched alternatively. Methods—proposed dictionary learning based scheme The private message capacity of 2LQR is decided by the accuracy of texture pattern recognition. The higher the accuracy of pattern recognition is, the lower the redundancy of error-correcting encoding is, and then the higher the storage capacity of the private message is. As discussed in Sect. 2, we adopt the dictionary learning method which can capture the global structure of data to do pattern recognition of the P&S texture module. Dictionary learning (DL) aims to learn a set of atoms, or called visual words in the computer vision community, in which a few atoms can be linearly combined to well approximate a given signal. In this paper, dictionaries are used for texture pattern recognition. The basic idea behind this approach is that the reconstruction errors of the texture modules are different according to the dictionaries used. The concept of the proposed method is illustrated in Fig. 7, wherein the learned dictionaries correspond to the basis vectors, which can be linearly combined with the coefficients \({\alpha }^{k}\) to represent the input texture module. It is assumed based on the statistics of natural images [37] that most coefficients \({\alpha }^{k}\) are zero, i.e., \({l}_{0}\)-norm \({\Vert \alpha \Vert }_{0}\) is less than the constant value, \(TH\). Given an input P&S texture module \({S}_{m,i}\), dictionarie \({D}_{m}\) for texture module sets \({S}_{m}\) can more accurately reconstruct \({S}_{m,i}\) than dictionarie \({D}_{n}\) for sets \({S}_{n}\), \(\left(n\ne m\right)\). 
Methods: proposed dictionary learning based scheme

The private message capacity of the 2LQR code is determined by the accuracy of texture pattern recognition: the higher the recognition accuracy, the lower the redundancy needed for error correction encoding, and hence the higher the storage capacity of the private message. As discussed in Sect. 2, we adopt dictionary learning, which can capture the global structure of data, for pattern recognition of P&S texture modules. Dictionary learning (DL) aims to learn a set of atoms, also called visual words in the computer vision community, of which a few can be linearly combined to approximate a given signal well. In this paper, dictionaries are used for texture pattern recognition. The basic idea is that the reconstruction error of a texture module differs according to the dictionary used. The concept of the proposed method is illustrated in Fig. 7: the learned dictionaries provide basis vectors that are linearly combined with coefficients \({\alpha }^{k}\) to represent the input texture module. Based on the statistics of natural images [37], it is assumed that most coefficients \({\alpha }^{k}\) are zero, i.e., the \({l}_{0}\)-norm \({\Vert \alpha \Vert }_{0}\) is less than a constant value \(TH\). Given an input P&S texture module \({S}_{m,i}\), the dictionary \({D}_{m}\) learned for the texture module set \({S}_{m}\) reconstructs \({S}_{m,i}\) more accurately than a dictionary \({D}_{n}\) learned for a set \({S}_{n}\), \(\left(n\ne m\right)\). Therefore, \(q\) learned dictionaries that optimally represent the \(q\) types of P&S texture modules can infer the texture pattern.

Fig. 7. Concept of the proposed pattern recognition method: an input texture module is recognized as the type of texture pattern whose dictionary represents the P&S module with the smallest error

Dictionary generation for texture patterns

Dictionary generation is based on training sets, obtained in the following way. First, P&S 2LQR codes are image-preprocessed and module-classified to get black and white modules as stated in Sect. 3.2; the modules classified as black are texture modules carrying secret information. Let \(q\) denote the number of pattern types. The texture modules are assigned to \(q\) sets, \({S}_{1},{S}_{2},\cdots ,{S}_{q}\), according to their patterns (in the training sets, the pattern of each P&S texture module is known). Then, selected or all modules in set \({S}_{j}\) are reshaped into column vectors, which form the columns of the matrix \({X}_{j},\left\{j=1,\cdots ,q\right\}\). In the rest of this paper, \({X}_{j}\) is denoted as \(X\), with the same subscript convention as \({D}_{j}\). In the dictionary generation process, we generate \(q\) dictionaries that optimally represent the modules in the \(q\) training sets, respectively. The \(q\) dictionaries \({D}_{j},\left\{j=1,\cdots ,q\right\}\) are obtained by minimizing the following cost function:

$$\underset{{D}_{j},A}{\mathrm{min}}\sum_{k}{\Vert A\left(k\right)\Vert }_{0},\mathrm{\ subject\ to\ } {\Vert X-{D}_{j}A\Vert }_{F}^{2}\le \epsilon ,$$

where \(A\left(k\right)\) is the \(k\)th column of the matrix \(A\), i.e., the representation vector corresponding to \(X\left(k\right)\). The K-SVD algorithm [38] is used to minimize Eq. (2), and the optimization is executed \(q\) times to obtain the \(q\) dictionaries. The size of dictionary \({D}_{j}\) is \({p}^{2}\times m\), where \({p}^{2}\) is the size of the texture pattern and \(m\) is the number of atoms in the dictionary. How to select the value of \(m\) is discussed in the experimental section.

Pattern recognition via learned dictionaries

To decode the private message in a P&S 2LQR code, the code is first image-preprocessed and module-classified to get black and white modules as stated in Sect. 3.2; the modules classified as black are texture modules, and pattern recognition is performed on each of them. In our dictionary learning based scheme, a P&S texture module whose pattern is to be determined is first reshaped into a column vector \({x}^{i}\), and then the following optimization problem is solved:

$$\underset{j\in \left\{1,\cdots ,q\right\}}{\mathrm{min}}{\Vert {x}^{i}-{D}_{j}{\alpha }^{i}\Vert }_{F}^{2},\mathrm{\ subject\ to\ } {\Vert {\alpha }^{i}\Vert }_{0}\le TH,$$

where \({\alpha }^{i}\) is the predicted representation vector of \({x}^{i}\), whose sparsity is controlled by the constant \(TH\). Equation (3) says that one of the dictionaries \({D}_{j},\left\{j=1,\cdots ,q\right\}\) represents \({x}^{i}\) with the smallest error, so the pattern can be estimated by evaluating the reconstruction errors. The gradient pursuit algorithm [46], a fast version of orthogonal matching pursuit [38], is utilized to solve Eq. (3). Solving Eq. (3) yields a value \(j\in \left\{1,\cdots ,q\right\}\), which indicates the pattern of the P&S texture module.
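Putting the two stages together, the following sketch shows the per-class training of Eq. (2) and the minimum-residual classification of Eq. (3). It again substitutes scikit-learn's dictionary learner and OMP coder for the paper's K-SVD and gradient pursuit, so it is a functional analogue rather than a reimplementation; `train_sets` is a hypothetical list holding the vectorized P&S modules of each known pattern.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_dictionaries(train_sets, m=256, TH=5):
    """Learn one dictionary per texture pattern class (analogue of Eq. 2).
    train_sets: list of q arrays, each of shape (n_samples, p*p)."""
    dicts = []
    for X in train_sets:
        dl = MiniBatchDictionaryLearning(
            n_components=m,
            transform_algorithm='omp',
            transform_n_nonzero_coefs=TH,
            random_state=0,
        ).fit(X)
        dicts.append(dl)
    return dicts

def recognize(module_vec, dicts):
    """Classify one vectorized P&S module by the dictionary with the
    smallest reconstruction error (analogue of Eq. 3)."""
    x = module_vec.reshape(1, -1)
    errors = []
    for dl in dicts:
        code = dl.transform(x)                    # sparse coding (OMP)
        x_hat = code @ dl.components_             # reconstruction
        errors.append(np.sum((x - x_hat) ** 2))
    return int(np.argmin(errors))                 # pattern index j
```

Running `recognize` over every black-class module of a scanned code, followed by descrambling and error correction decoding, would recover the private message.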
Framework of the whole recognition system In this subsection, we describe the framework of the complete recognition system based on the dictionary learning method in detail. Figure 8 depicts the whole process of learning dictionaries (training) and pattern recognition (testing). In the training process, dictionaries for each type of texture pattern are learned. First, numerical 2LQR codes are printed and scanned to obtain P&S 2LQR codes. Then, the same image pre-processing step as in Fig. 6 is applied to correct the geometrical distortion of the P&S 2LQR codes. After that, black module patches are assigned to \(3\) sets according to their original texture patterns, and the training dataset is generated. Dictionaries for each type of texture pattern are then learned on the training dataset. In the testing process, a scanned 2LQR code first goes through the image pre-processing step to correct geometrical distortion. Then module classification is performed with the same threshold method as in [1], yielding black-class and white-class modules. After that, with the dictionaries generated in the training process, pattern recognition is performed for every black-class module, except those belonging to position tags. The whole process of learning dictionaries and pattern recognition Experimental setups All algorithms in our experiments are implemented in Matlab. The Matlab function corr with the options 'Pearson', 'Spearman', and 'Kendall' is used to reproduce the Pearson correlation used in reference [19] and the Spearman and Kendall correlations used in reference [20], respectively. We implemented the Kendall weighted correlation according to its description in reference [20]; since it outperforms the Pearson, Spearman, and Kendall correlations in our experiments, consistently with the findings reported in [20], we have reason to believe that our implementation is correct. In our experiments, HP LaserJet 1022 and HP LaserJet P1008 printers are used, which are common desktop office printers. A RICOH MP 3053sp is used as the scanner. The printer resolution is \(600\) dpi (dots per inch), and the scanner resolution varies in the set \(\left\{200, 300, 400, 600\right\}\) dpi. The same texture patterns as in reference [19] are used; they are shown in Fig. 9 and are denoted \({P}_{1}\), \({P}_{2}\), and \({P}_{3}\). The size of the texture patterns is \(p\times p\) pixels, with \(p=12\) as in [19]. A QR code of version \(2\), as in reference [19], is utilized. Public and private messages are generated randomly. Public messages are used to generate standard QR codes; private messages are embedded into a QR code by substituting black modules with the corresponding texture patterns. The resulting numerical 2LQR codes are used in the following experiments. An example of a numerical QR code and its corresponding 2LQR code is shown in Fig. 10. The three texture patterns used in [19]. Each black or white dot is related to a pixel The numerical QR and 2LQR codes We generate \(700\) numerical 2LQR codes and print them with the HP LaserJet 1022 and P1008 printers at \(600\) dpi. The printed 2LQR codes are then scanned at different resolutions to obtain their P&S versions. Datasets printed by the HP LaserJet 1022 are denoted P&S 1022, and those printed by the HP LaserJet P1008 are denoted P&S 1008. To obtain the training datasets, the P&S 2LQR codes are first image-preprocessed and module-classified to obtain black and white modules, as shown in Fig. 6.
Then the black module patches are divided into \(3\) groups according to their corresponding original texture patterns. For the DL method, the number of training samples in each set is \(3708\), which is the number of columns of the matrix \(X\) in Equation (2) and represents the total number of patches used in dictionary learning. For the Kendall weighted method, the size of the training dataset is not stated in the original work [20]. Figure 11 depicts the relationship between the number of training samples in each set and the error probability of pattern recognition. For both the P&S 1022 and P&S 1008 datasets, the general trend of error probability versus the number of training samples is as follows: when the number of training samples is smaller than \(600\), the error probability decreases quickly as the number of training samples grows; when it is larger than \(600\), the error probability is almost constant. Therefore, the number of training samples is set to \(600\) for the Kendall weighted method. Error probability versus the number of training samples for the Kendall weighted method Parameter selection in DL The number of atoms in the dictionaries affects the performance of our proposed DL method. Figure 12 depicts the relationship between error probability and the number of atoms. When the number of atoms is smaller than \(256\), the error probability decreases as the number of atoms grows; when it is greater than \(256\), the error probability increases as the number of atoms grows. That is, the best pattern recognition performance of DL is obtained with \(256\) atoms. Therefore, the number of atoms, \(m\), is set to 256 in our experiments. Error probability versus the number of atoms in each dictionary when using the DL (proposed) method Pattern recognition performance In this subsection, we show the pattern recognition results of our proposed DL method and compare it with the correlation-based methods of references [19] and [20]. Before pattern recognition, the training process (shown in Fig. 8) is performed and the dictionaries for each type of texture pattern are obtained; they are visualized in Fig. 13. Many atoms in the dictionaries have shapes similar to their corresponding texture patterns, which indicates the expressiveness of the dictionary learning approach for printed texture modules. Visualized learned dictionaries \({D}_{1}\), \({D}_{2}\), \({D}_{3}\) Same resolution of scanner and printer In this subsubsection, we explore the performance of our proposed DL method in pattern recognition of the P&S texture modules and compare it with the other techniques. The scanner resolution is the same as that of the printer, i.e., \(600\) dpi. Table 1 shows the error probabilities of pattern recognition of the DL method, the Pearson correlation-based method [19], and the other three correlation-based methods [20] in its second to sixth rows. The lowest error probability among these methods is set in bold, and Figure 14 also depicts these results. Taking P&S 1022 as an example, the error probability of the proposed DL method is only \(0.04\mathrm{\%}\), while that of the Pearson, Kendall, Spearman, and Kendall weighted correlations is \(16.84\mathrm{\%}\), \(14.90\mathrm{\%}\), \(14.90\mathrm{\%}\), and \(4.01\mathrm{\%}\), respectively. The error probability of the DL method is thus between \(1/421\) and \(1/133\) of that of the correlation-based methods. Moreover, among the methods other than DL, the Kendall weighted method performs best.
This implies that exploiting information from the training stage benefits pattern recognition. Table 1 Error probability of pattern recognition Error probability of pattern recognition In practice, 2LQR codes may be printed by more than one printer. In this case, we would like to decode 2LQR codes using only one set of trained dictionaries; hence, we need to read P&S 2LQR codes generated by a printer different from the one used to produce the training datasets. The DL and Kendall weighted methods in these cases are referred to as DL-2 and Kendall weighted-2, respectively. The last two rows of Table 2 show the performance of DL-2 and Kendall weighted-2. For P&S 1022 testing images, we use dictionaries learned (or probabilities pre-computed) from the P&S 1008 training dataset, and for P&S 1008 testing images, we use dictionaries learned (or probabilities pre-computed) from the P&S 1022 training dataset. Even when using training information from a different printer, the DL and Kendall weighted methods still outperform the other methods, and the DL method still achieves the lowest error probability. Table 2 Error probability of pattern recognition, when the printer models used for the training and testing databases differ Pattern recognition performance at low scanner resolution QR codes are usually captured and decoded by low-power barcode readers and mobile devices, which have low resolutions. If the 2LQR code can also be decoded by these low-resolution devices, its real-world applicability will expand. In this subsection, we investigate the performance of 2LQR when the resolution of the scanner is lower than that of the printer. Table 3 shows the error probability of pattern recognition at low scanner resolutions, namely \(400\), \(300\), and \(200\) dpi. These experiments are conducted on a 2LQR database printed by the HP LaserJet 1022 printer at \(600\) dpi. The error probabilities at these low scan resolutions are much higher than those at a scan resolution of \(600\) dpi: the lower the scan resolution, the more blurred the texture patterns, and the harder they are to recognize. The DL-based method is significantly better than the correlation-based methods [19, 20] when 2LQR codes are scanned at low resolution, a situation that is closer to practical application. Table 3 Error probability of pattern recognition at low scan resolution Computation time The computation time of a pattern recognition scheme is very important, especially in the testing period: a short response time improves the user experience and thus the practicability of the scheme. To test the response time of each pattern recognition method fairly, we run them on a ThinkPad X1 Carbon laptop with an Intel Core i7-8650U CPU and 16 GB of memory. The time spent is averaged over the test database of more than 2000 scanned images. Figure 15 shows the computation time each method needs to recognize the texture patterns in a 2LQR code. DL has the shortest response time, which can improve the user experience. Computation time for each method to recognize texture patterns in a 2LQR code Since dictionaries need to be trained only once and can be reused in all subsequent testing, they can be trained offline and then embedded into 2LQR reading devices. Therefore, it is the storage space occupied by the dictionaries, not the training time, that may affect the user experience and product availability. Each dictionary contains \(144\times 256\) floating-point numbers.
When each floating-point number is stored in 4 bytes, about 0.14 megabytes are needed to store one dictionary. For reference, training one dictionary takes about 11 s on our ThinkPad X1 Carbon laptop. The print-and-scan (P&S) process blurs the texture modules in a 2LQR code, and pattern recognition of P&S texture modules is therefore a challenge. This phenomenon decreases the accuracy of private message reading in the 2LQR code: to ensure exact private message extraction, larger redundancy is needed, and the effective embedding capacity is reduced. In the previous literature, correlation measures are employed to recognize the texture modules. To boost the private message embedding capacity, a more powerful pattern recognition algorithm for P&S texture modules is needed. This paper proposes a pattern recognition scheme for P&S texture modules based on Dictionary Learning (DL); to our knowledge, this is the first time that the DL technique is used for this type of application. Our method is suitable for ordinary use, e.g., desktop laser printers with high pixel expansion and scanner resolutions equal to or lower than the printer resolution (namely 2/3, 1/2, and 1/3 of it). The experimental results show that the dictionary learning-based method performs significantly better than the correlation-based methods. Moreover, the dictionary learning-based method takes the least time in the detection stage. These advantages will enhance the practicability of the 2LQR code. The print-and-scanned 2LQR codes used in the experiments can be downloaded from https://pan.baidu.com/s/1QGLZxIqXN768kihSiBDqTw (extraction code: 2o39). QR: Quick response code; 2LQR: Two-level QR code; P&S: Print-and-scan; DL: Dictionary learning; dpi: Dots per inch. ISO, Information technology—Automatic identification and data capture techniques—QR Code 2005 bar code symbology specification. Standard ISO/IEC 18004 (2006). Y.-J. Chiang, P.-Y. Lin, R.-Z. Wang, Y.-H. Chen, Blind QR code steganographic approach based upon error correction capability. KSII Trans. Internet Inf. Syst. 7, 2527–2543 (2013). D. Buczynski, MSB/LSB tutorial. http://www.buczynski.com/Proteus/msblsb.html. Accessed 9 Mar 2021. S. Katzenbeisser, F. Petitcolas, Information hiding techniques for steganography and digital watermarking. EDPACS: EDP Audit Control Secur. Newsl. 28, 1–2 (1999). X. Zhang, S. Wang, Efficient steganographic embedding by exploiting modification direction. IEEE Commun. Lett. 10, 783 (2006). C.-H. Chung, W.-Y. Chen, C.-M. Tu, Image hidden technique using QR-barcode. Proc. Intell. Inform. Hiding Multimedia Signal Process. 522–525 (2009). S. Dey, K. Mondal, J. Nath, A. Nath, Advanced steganography algorithm using randomized intermediate QR host embedded with any encrypted secret message: ASA_QR algorithm. Int. J. Modern Educ. Comput. Sci. 4, 59–67 (2012). https://doi.org/10.5815/IJMECS.2012.06.08. P.-Y. Lin, Distributed secret sharing approach with cheater prevention based on QR code. IEEE Trans. Industr. Inf. 12, 384–392 (2016). https://doi.org/10.1109/TII.2015.2514097. W.-Y. Chen, J.-W. Wang, Nested image steganography scheme using QR-barcode technique. Opt. Eng. (2009). https://doi.org/10.1117/1.3126646. H.-C. Huang, F.-C. Chang, W.-C. Fang, Reversible data hiding with histogram-based difference expansion for QR code applications. IEEE Trans. Consum. Electron. 57, 779–787 (2011). https://doi.org/10.1109/TCE.2011.5955222. C. Jun-Chou, H. Yu-Chen, K. Hsien-Ju, A novel secret sharing technique using QR code. Int. J.
Image Process. 4(5), 468–75 (2010). C.-T. Huang, Y.-H. Zhang, L.-C. Lin, W.-J. Wang, S.-J. Wang, Mutual authentications to parties with QR-code applications in mobile systems. Int. J. Inf. Secur. 16, 525–540 (2017). https://doi.org/10.1007/S10207-016-0349-6. M.B. Krishna, A. Dugar, Product authentication using QR codes: a mobile application to combat counterfeiting. Wireless Pers. Commun. 90, 381–398 (2016). https://doi.org/10.1007/S11277-016-3374-X. P.-Y. Lin, Y.-H. Chen, E.J.-L. Lu, P.-J. Chen, Secret hiding mechanism using QR barcode. Proc. Signal-Image Technol. Internet-Based Syst. 22–25 (2013). P.-Y. Lin, Y.-H. Chen, High payload secret hiding technology for QR codes. EURASIP J. Image Video Process. (2017). https://doi.org/10.1186/S13640-016-0155-0. Y.-W. Chow, W. Susilo, J. Tonien, E. Vlahu-Gjorgievska, G. Yang, Cooperative secret sharing using QR codes and symmetric keys. Symmetry (2018). https://doi.org/10.3390/SYM10040095. Q. Zhao, S. Yang, D. Zheng, B. Qin, A QR code secret hiding scheme against contrast analysis attack for the internet of things. Secur. Commun. Netw. 2019, 1–8 (2019). https://doi.org/10.1155/2019/8105787. S. Liu, Z. Fu, B. Yu, Rich QR codes with three-layer information using Hamming code. IEEE Access 7, 78640–78651 (2019). https://doi.org/10.1109/ACCESS.2019.2922259. I. Tkachenko, W. Puech, C. Destruel, O. Strauss, J.-M. Gaudin, C. Guichard, Two-level QR code for private message sharing and document authentication. IEEE Trans. Inf. Forensics Secur. 11, 571–583 (2016). https://doi.org/10.1109/TIFS.2015.2506546. I. Tkachenko, C. Destruel, O. Strauss, W. Puech, Sensitivity of different correlation measures to print-and-scan process. Electr. Imaging 2017, 121–127 (2017). https://doi.org/10.2352/ISSN.2470-1173.2017.7.MWSF-335. Y. Liu, S. Canu, P. Honeine, S. Ruan, Mixed integer programming for sparse coding: application to image denoising. IEEE Trans. Comput. Imaging 5, 354–365 (2019). https://doi.org/10.1109/TCI.2019.2896790. M.H. Alkinani, M.R. El-Sakka, Patch-based models and algorithms for image denoising: a comparative review between patch-based image denoising methods for additive noise reduction. EURASIP J. Image Video Process. (2017). https://doi.org/10.1186/S13640-017-0203-4. S. Li, Q. Cao, Y. Chen, Y. Hu, L. Luo, C. Toumoulin, Dictionary learning based sinogram inpainting for CT sparse reconstruction. Optik 125, 2862–2867 (2014). https://doi.org/10.1016/J.IJLEO.2014.01.003. P. Trampert, S. Schlabach, T. Dahmen, P. Slusallek, Exemplar-based inpainting based on dictionary learning for sparse scanning electron microscopy. Microsc. Microanal. 24, 700–701 (2018). https://doi.org/10.1017/S1431927618003999. F. Meng, X. Yang, C. Zhou, Z. Li, A sparse dictionary learning-based adaptive patch inpainting method for thick clouds removal from high-spatial resolution remote sensing imagery. Sensors (2017). https://doi.org/10.3390/S17092130. A. Fawzi, M. Davies, P. Frossard, Dictionary learning for fast classification based on soft-thresholding. Int. J. Comput. Vision 114, 306–321 (2015). https://doi.org/10.1007/S11263-014-0784-7. M. Yang, D. Dai, L. Shen, L.V. Gool, Latent dictionary learning for sparse representation based classification. Proc. Comput. Vision Pattern Recogn. 4138–4145 (2014). S. Kim, R. Cai, K. Park, S. Kim, K. Sohn, Modality-invariant image classification based on modality uniqueness and dictionary learning. IEEE Trans. Image Process.
26, 884–899 (2017). https://doi.org/10.1109/TIP.2016.2635444. P. Zhou, C. Fang, Z.C. Lin, C. Zhang, E.Y. Chang, Dictionary learning with structured noise. Neurocomputing 273, 414–423 (2018). https://doi.org/10.1016/j.neucom.2017.07.041. M.M. Liao, X.D. Gu, Face recognition based on dictionary learning and subspace learning. Digital Signal Process. 90, 110–124 (2019). https://doi.org/10.1016/j.dsp.2019.04.006. X.L. Luo, Y. Xu, J. Yang, Multi-resolution dictionary learning for face recognition. Pattern Recogn. 93, 283–292 (2019). https://doi.org/10.1016/j.patcog.2019.04.027. Y. Xu, Z.M. Li, J. Yang, D. Zhang, A survey of dictionary learning algorithms for face recognition. IEEE Access 5, 8502–8514 (2017). https://doi.org/10.1109/access.2017.2695239. Psytec QR code editor software. http://www.psytec.co.jp/docomo.html. Accessed 9 Mar 2021. Denso-wave. http://www.qrcode.com/en/index.html. Accessed 9 Mar 2021. R. Rubinstein, A.M. Bruckstein, M. Elad, Dictionaries for sparse representation modeling. Proc. IEEE 98, 1045–1057 (2010). https://doi.org/10.1109/JPROC.2010.2040551. I. Tošić, P. Frossard, Dictionary learning. IEEE Signal Process. Mag. (2011). M. Elad, M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15, 3736–3745 (2006). https://doi.org/10.1109/TIP.2006.881969. M. Aharon, M. Elad, A. Bruckstein, K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54, 4311–4322 (2006). https://doi.org/10.1109/TSP.2006.881199. S. Arora, R. Ge, A. Moitra, New algorithms for learning incoherent and overcomplete dictionaries. Proc. Conf. Learn. Theory, 779–806 (2014). K. Schnass, On the identifiability of overcomplete dictionaries via the minimisation principle underlying K-SVD. Appl. Comput. Harmon. Anal. 37, 464–491 (2014). https://doi.org/10.1016/J.ACHA.2014.01.005. W. Dai, T. Xu, W. Wang, Simultaneous codeword optimization (SimCO) for dictionary update and learning. IEEE Trans. Signal Process. 60, 6340–6353 (2012). https://doi.org/10.1109/TSP.2012.2215026. K. Engan, S.O. Aase, J.H. Husoy, Method of optimal directions for frame design, in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (1999), pp. 2443–2446. R. Rubinstein, T. Peleg, M. Elad, Analysis K-SVD: a dictionary-learning algorithm for the analysis sparse model. IEEE Trans. Signal Process. 61, 661–677 (2013). https://doi.org/10.1109/TSP.2012.2226445. E.M. Eksioglu, O. Bayir, K-SVD meets transform learning: transform K-SVD. IEEE Signal Process. Lett. 21, 347–351 (2014). https://doi.org/10.1109/LSP.2014.2303076. J. Dong, W. Wang, W. Dai, Analysis SimCO: a new algorithm for analysis dictionary learning, in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (2014), pp. 7193–7197. T. Blumensath, M.E. Davies, Gradient pursuits. IEEE Trans. Signal Process. 56, 2370–2382 (2008). https://doi.org/10.1109/TSP.2007.916124. We would like to thank the editor and the anonymous reviewers for their helpful comments and valuable suggestions. The work is funded by the National Natural Science Foundation of China (Nos. 61972405, 62071434, 61972042) and Beijing Municipal Education Commission projects (Nos. KM202010015001, KM202110015004).
Department of Information Engineering, Beijing Institute of Graphic Communication, Beijing, 100026, China: Lifang Yu, Peng Cao & Zhenzhen Zhang. School of Computer Science, Communication University of China, Beijing, 100024, China: Gang Cao. School of National Security and Counter Terrorism, People's Public Security University of China, Beijing, 100038, China: Huawei Tian. Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ, 07102, USA: Yun Q. Shi. All authors took part in the discussion of the work described in this paper. LY, PC, and HT conceived and designed the experiments; LY, GC, and ZZ performed the experiments; LY, GC, and HT analyzed the data; LY, HT, GC, and ZZ wrote the paper. All authors read and approved the final manuscript. Correspondence to Huawei Tian. The authors declare that they have no competing interests. Yu, L., Cao, G., Tian, H. et al. Recognition of printed small texture modules based on dictionary learning. J Image Video Proc. 2021, 31 (2021). https://doi.org/10.1186/s13640-021-00573-3. Received: 11 March 2020.
DENCAST: distributed density-based clustering for multi-target regression Roberto Corizzo, Gianvito Pio, Michelangelo Ceci & Donato Malerba Recent developments in sensor networks and mobile computing have led to a huge increase in generated data that need to be processed and analyzed efficiently. In this context, many distributed data mining algorithms have recently been proposed. Following this line of research, we propose the DENCAST system, a novel distributed algorithm implemented in Apache Spark, which performs density-based clustering and exploits the identified clusters to solve both single- and multi-target regression tasks (and thus solves complex tasks such as time series prediction). Contrary to existing distributed methods, DENCAST does not require a final merging step (usually performed on a single machine) and is able to handle large-scale, high-dimensional data by taking advantage of locality sensitive hashing. Experiments show that DENCAST performs clustering more efficiently than a state-of-the-art distributed clustering algorithm, especially when the number of objects increases significantly. The quality of the extracted clusters is confirmed by the predictive capabilities of DENCAST on several datasets: it is able to significantly outperform (p-value \(<0.05\)) state-of-the-art distributed regression methods, in both single- and multi-target settings. The generation of massive amounts of data in different forms (such as activity logs and sensor measurements) has increased the need for novel data mining algorithms, which are capable of building accurate models efficiently and in a distributed fashion. In recent years, several researchers have proposed novel approaches to distribute the workload among several machines for classical clustering, classification and regression tasks [1]. However, only a few of them tackle the specific problem of density-based clustering. This problem has received much attention in the last decades, because of many desirable properties of the extracted clusters (arbitrary shape, absence of noise, robustness to outliers) which turn out to be useful in many application domains (e.g., spatial data analysis). Starting from the seminal work of DBSCAN [2], many algorithms have been proposed, but only a few of them are distributed. Unfortunately, existing distributed methods for density-based clustering suffer from several limitations. In particular, they are limited to data organized in a specific structure (e.g., they can analyze only low-dimensional feature spaces), or they suffer from overhead and scalability issues when the number of instances and attributes increases considerably [3,4,5]. These limitations stem from the inherent difficulty of upgrading existing non-distributed density-based clustering algorithms to an equivalent (or, at least, approximated) distributed counterpart. Finally, most of the existing methods are strictly tailored for pure clustering and do not exploit clusters to support predictive tasks, as in predictive clustering trees [6]. Therefore, our research focused on the following questions: can we perform density-based clustering on large-scale and high-dimensional data, without running into computational bottlenecks? Can we profitably exploit these clusters for predictive purposes? To answer these questions, we propose DENCAST, which simultaneously solves all the issues mentioned above.
Specifically, it is a novel density-based clustering algorithm, implemented in the Apache Spark framework, which is able to handle large-scale, high-dimensional data. The proposed approach exploits the identified clusters, built on labeled data, to predict the value assumed by one or more target variables of unlabeled objects, in an inductive, supervised learning setting. This characteristic allows the proposed method to solve any single- or multi-target predictive task. In this paper, we focus on single- and multi-target regression tasks, which are central in several real-world applications (see Fig. 1 for a graphical overview of the environment in which DENCAST works). For example, solving a multi-target regression task can be useful in energy planning and trading from renewable sources, such as photovoltaic or wind plants [7]. In this context, multi-step ahead forecasting (usually 24 h) is necessary to predict the energy produced by renewable sources, in order to minimize the production from polluting sources and possible monetary losses [8]. Other domains where multi-target regression finds application include traffic flow forecasting [9], air quality forecasting [10], bike demand forecasting [11, 12], life sciences (e.g., predicting the toxicity of molecules) and ecology (e.g., analysis of remotely sensed data, habitat modelling) [13]. The peculiarities of data in such application domains further motivate the adoption of the predictive clustering framework in this paper. Indeed, several studies in the literature have proved the effectiveness of predictive clustering frameworks [6, 14,15,16], which are particularly appropriate when data exhibit different forms of autocorrelation [17], i.e., when objects which are close to each other (spatially, temporally, or in a network) appear more related than distant objects. Such phenomena are commonly present in data from the cited domains, and clustering-based approaches can naturally detect them. Overview of the environment in which DENCAST works. In the figure, data coming from multiple sources are fed into DENCAST, which produces a clustering model that is exploited for predictive purposes on new data (single-target or multi-target regression). DENCAST runs on multiple computational nodes, in a distributed fashion Therefore, the main contribution of this paper consists of a method for distributed density-based clustering which, contrary to existing works (see "Distributed methods for density-based clustering" and "Distributed methods for multi-target regression" sections), simultaneously shows all the following key features: It works on the neighborhood graph. In this way, the algorithm needs only object IDs and their neighborhood relationships (instead of their initial, possibly high-dimensional, representation) and thus requires limited space resources. We build such a neighborhood graph efficiently from high-dimensional data through the locality-sensitive hashing (LSH) method [18]. It is implemented in the Apache Spark framework and is fully distributed. Therefore, it does not require pre-processing or post-processing steps, usually performed on a single machine (see "Distributed methods for density-based clustering" section for details about this aspect in other methods). This allows our method to analyze large-scale datasets without running into computational bottlenecks.
The identified density-based clusters can be exploited to predict the value of one or more target variables, by means of a density- and distance-based approach. As a result, the proposed method can be adopted to solve single-target and multi-target regression tasks in a distributed setting. Overall, we propose a distributed density-based clustering algorithm that is able to (i) handle large-scale data; (ii) deal with the high dimensionality of data; (iii) exploit the identified clusters to perform predictions in both single-target and multi-target settings. To the best of our knowledge, existing methods are limited in one or more of these aspects, or are not able to address all of them simultaneously. In "Background" section, we introduce some background notions and briefly review existing methods that are related to this paper. In "Method" section, we propose our distributed density-based (predictive) clustering method, while in "Time complexity analysis" section we analyze its time complexity. In "Results and discussion" section, we describe the experimental evaluation, showing that our method obtains accurate predictions and is efficient in dealing with massive amounts of high-dimensional data. Finally, in "Conclusion" section we draw some conclusions and outline future work. Background The pioneering density-based clustering approach in the literature is DBSCAN [2]. This approach is able to identify arbitrarily shaped clusters (i.e., not only spherical) without requiring the number of clusters to be extracted as an input parameter. However, it requires two other parameters, i.e., eps and minPts. Since several concepts that characterize density-based algorithms are in common with those adopted in this paper, we recall some useful notions: The neighborhood N(p) of an object p is defined as the set of objects whose distance from p, according to a given measure, is within the threshold eps. Formally, \(N(p) = \{ q \ | \ dist(p,q) < eps \}\). An object p is a core object w.r.t. eps and minPts if it has at least minPts objects in its neighborhood N(p). Formally, p is a core object if \(|N(p)| \ge minPts\). An object p is directly density-reachable from an object q if \(p \in N(q)\) and q is a core object. An object \(p_w\) is density-reachable from an object \(p_1\) if there exists a chain of objects \(p_1, p_2, \dots , p_w\), such that for each pair of objects \(\langle p_i\), \(p_{i+1} \rangle\), \(p_{i+1}\) is directly density-reachable from \(p_i\) w.r.t. eps and minPts. An object p is density-connected to an object q if there exists an object o, such that both p and q are density-reachable from o w.r.t. eps and minPts. A cluster is a non-empty subset of objects, where each pair of objects \(\langle p,q \rangle\) is density-connected. Non-core objects belonging to at least one cluster are called border objects, whereas objects not belonging to any cluster are considered noise objects. Specifically, DBSCAN starts with an arbitrary object o and, if this is a core object, retrieves all the objects which are density-reachable from it w.r.t. eps and minPts, returning a cluster. The algorithm then proceeds with the next unclustered object.
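To keep the notation concrete, the following minimal Python sketch (ours, for illustration only; function names and the choice of Euclidean distance are assumptions) expresses the eps-neighborhood and core-object notions recalled above.

```python
import numpy as np

def neighborhood(data, p, eps):
    """N(p) = { q | dist(p, q) < eps }, using Euclidean distance.
    data: (n, m) array of objects; p: index of the query object."""
    dists = np.linalg.norm(data - data[p], axis=1)
    return np.flatnonzero(dists < eps)      # includes p itself (dist 0)

def is_core(data, p, eps, min_pts):
    """p is a core object w.r.t. eps and minPts iff |N(p)| >= minPts."""
    return len(neighborhood(data, p, eps)) >= min_pts
```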
Other density-based methods follow a slightly different approach. For example, Density Peaks Clustering (DPC) [19] follows a hybrid workflow which takes inspiration from both centroid-based and density-based methods. In particular, it selects some objects as cluster centroids, and subsequently assigns the other objects to centroids, exploiting the fact that centroids locally show density peaks. The algorithm identifies density peaks according to two indicators: a local density indicator, which corresponds to the concept of eps-neighborhood in DBSCAN, and the maximum similarity (or minimum distance) indicator, computed between the current object and any object with a higher local density. Similarly, Mean Shift [20] selects an object, identifies a circle of a pre-defined radius, computes the centroid of the objects which fall within the radius, and moves the center of the circle towards it. This process, which works iteratively, allows the algorithm to find a local maximum (in terms of density) for each object, and to group objects that appear to be tied to the same local maximum. Mean Shift does not require the number of clusters to be specified, coherently with other density-based clustering algorithms. However, it requires the radius to be specified, and its time complexity is \(O(n^2 \cdot I)\), where n is the number of objects and I is the number of iterations, which can be considered high for high-dimensional and large-scale data. These density-based methods, even if they follow different approaches, are able to identify accurate and arbitrarily shaped clusters, and are almost independent of the order in which objects are analyzed. Moreover, many variants available in the literature aim to adapt density-based clustering algorithms (in particular DBSCAN) to specific contexts or to overcome limitations on time and space complexity. Regarding this aspect, in "Distributed methods for density-based clustering" section we briefly review existing works focusing on novel strategies to make density-based approaches applicable to large datasets. Moreover, since this paper has its roots also in methods for multi-target regression, in "Distributed methods for multi-target regression" section we briefly describe some related works in this field. Distributed methods for density-based clustering Although in the literature we can find some existing clustering algorithms which are able to handle large-scale and/or high-dimensional data [21,22,23], only a few of them are density-based. Early attempts focused on extensions or variations of the well-known DBSCAN algorithm: the first extensions of DBSCAN concern the estimation of the optimal value of the input parameters eps and minPts [24] and its applicability to different contexts, such as data streams [25] and spatio-temporal data [26]. Due to the necessity to process large, high-dimensional datasets, more recent works have focused on the optimization of the time complexity, which is originally \(O(n^2 \cdot m)\), where n is the number of objects and m is the number of features. Since the dominating phase is the identification of the neighborhood of all nodes (whose time complexity is, thus, \(O(n^2 \cdot m)\)), in [27] the authors proposed to adopt locality-sensitive hashing (LSH) [18] to perform this phase in \(O(n \cdot m)\). Although this method is able to process high-dimensional data, it cannot distribute the workload in time and space to multiple machines and, consequently, cannot scale in the presence of distributed architectures, thus limiting the possibility of handling large datasets. In [3], the authors proposed a distributed variant of DBSCAN for MapReduce, which exploits an R-Tree-based index to compute the distance among objects.
However, R-Tree-based indexes are not efficient with high-dimensional data [28], due to a high overlap among bounding boxes. For this reason, the experiments in [3] are limited to 2D datasets. In [4] and [5], the authors proposed a variant of DBSCAN, named RDD-DBSCAN, implemented in Apache Spark. RDD-DBSCAN consists of three phases: data partitioning, local clustering and global labeling. The algorithm takes as input the same parameters as DBSCAN, and defines a bounding rectangle for the whole dataset. Subsequently, the algorithm splits this rectangle into two parts, containing approximately the same number of data points. The resulting partitions are clustered locally on executor nodes. In the global labeling phase, RDD-DBSCAN examines all the points that are within a specified distance (eps) of the borders of the bounding rectangle of each partition. If two clusters contain some common objects, the algorithm assumes they are the same cluster. Given these characteristics, these two works show the same limitations as the method proposed in [3], i.e., experiments are limited to 2D data. Moreover, one common limitation of [3,4,5], also present in the density-based approach proposed in [29], is the necessity of a merging phase which aggregates the partial results obtained by the worker machines. This phase usually takes place on a single (driver) machine and can, in principle, require a complexity of \({\mathcal {O}}(n^2)\), possibly leading to a significant increase of the overall running time. Our method faces all the issues raised by large-scale, high-dimensional datasets. In particular, we propose an approach which is computationally efficient and distributed in all its steps, leading to the easy handling of large-scale datasets, and which, inspired by the work in [27], adopts locality-sensitive hashing to handle high-dimensional data. Distributed methods for multi-target regression In the literature, and specifically in the field of Structured Output Prediction, researchers have paid much attention to the multi-target regression task [13], i.e., to the learning of regression models for multiple target attributes. The easiest way to solve this task consists in the application of methods for single-target regression to each target attribute independently (local models). In this way, almost all the existing approaches allow multi-target regression to be performed, even if they require an adaptation step. Focusing on methods for processing large-scale datasets, in the literature we can find some distributed approaches for single-target regression (elastic net regularized linear regression [30] and isotonic regression [31]) that can be adapted to multi-target regression tasks. A recent survey [32] highlighted the advantages and disadvantages of prediction algorithms in parallel multicore systems. Some simple algorithms, such as AutoRegressive Integrated Moving Average (ARIMA) [33], k-nearest neighbors and linear regression, show a moderate computational cost and good prediction performance in many scenarios when the task is that of prediction or forecasting with a limited time horizon. Other algorithms, such as neural networks and deep neural networks [34], show a higher predictive accuracy and the ability to capture nonlinearity in the data, at the cost of a higher computational complexity. More complex approaches for multi-target regression learn a global model which is able to predict the values of all the target attributes as a whole.
Since these approaches specifically address multi-target regression, i.e., they can exploit possible dependencies among the target attributes, they usually lead to better predictive performance (see [35] for an example showing the superiority of global methods in the case of predictive clustering trees). Statistical approaches can be considered the first attempt to deal with the simultaneous prediction of multiple real-valued target attributes [36]. Subsequent attempts focused on extending support vector regression (SVR) models to allow them to manage multiple target variables. For example, in [37] the authors developed a vector-valued SVR (i.e., able to predict a vector of numeric values) by adapting the concepts of estimator, loss function and regularization function from the scalar-valued case to the vector-valued case. Another recent approach [38] proposed to extend least squares SVR to the multi-target setting. Alternative approaches (see [35] and [39]) proposed a multi-target variant of regression trees, which exploits possible correlations among the different target attributes. Moreover, it is noteworthy that neural networks and deep learning algorithms can naturally be applied to the multi-target setting, by defining an output layer with multiple neurons. In this class of methods, it is worth mentioning long short-term memory neural networks [40], which are particularly powerful when data describe seasonal and recurrent phenomena characterized by temporal correlations. However, to the best of our knowledge, global methods for multi-target regression that are distributed, and therefore able to process large-scale, high-dimensional datasets, are still scarcely available in the literature. An exception is the implementation of the ARIMA models [33], available in the Spark-TS library,Footnote 1 which, however, is tailored for the analysis of time series: the different target variables regard the same feature predicted at different time instants in the future. Moreover, researchers have recently put significant effort into adapting deep learning algorithms to distributed frameworks, such as Apache Spark. Important examples are DeepLearning4J,Footnote 2 ElephasFootnote 3 and TensorFlowOnSpark,Footnote 4 which provide straightforward approaches to distribute: (i) the data during the training phase, (ii) the workload in the hyper-parameter optimization, or (iii) the learning of ensembles of models. Workflow of the proposed method. Orange blocks represent inputs (labeled and unlabeled objects) and outputs (predictions); grey blocks represent intermediate results; blue blocks represent the different phases of the proposed method Method On the basis of the notions introduced in "Background" section, in this section we describe our method, whose general workflow is depicted in Fig. 2 and formalized in Algorithm 1. Note that, since our method is implemented in the Apache Spark framework, we adopt the Resilient Distributed Dataset (RDD) data structure and its operations (see [41] for details). Given the dataset \(A_L\), consisting of n labeled objects represented by \(m+k\) attributes (m descriptive attributes and k target attributes), we first apply a distributed variant of locality-sensitive hashing (LSH) [42] (line 2) to identify an approximate neighborhood graph.
The obtained graph consists of a node for each labeled object and an undirected edge for each pair of nodes \(\langle u, v \rangle\) which appear similar enough, according to the representation obtained after the application of the LSH algorithm and a threshold minSim. This step, and the specific details about the distributed variant adopted, are described in "Identification of the neighborhood graph" section. From this point on, the algorithm only uses the neighborhood graph, which can be considered an approximate representation of the objects and their distances, instead of the objects represented in the original feature space. This design choice, which has been conveniently adopted by several clustering algorithms (see, for example, [43]), significantly reduces the space and time necessary for the next steps: the identification of the neighbors of each node (line 3) and the identification of core objects (line 4), i.e., those having at least minPts nodes in their neighborhood. Our method for density-based clustering then maps each labeled node to a cluster (line 5), by propagating cluster IDs from core objects through their neighbors. As we will describe in "Density-based clustering" section, our approach is iterative and requires a stopping criterion, based on a threshold (labelChangeRate), aiming to avoid unnecessary iterations which would only lead to slight changes in cluster assignments. It is noteworthy that not all the objects will necessarily be assigned to a cluster: similarly to existing density-based clustering algorithms, our algorithm is able to discard objects that can be considered noise or outliers, since they are too far in the feature space from the identified clusters. Finally, we re-associate all the nodes with the original features of the corresponding objects (line 6) and exploit the identified clusters to predict the value of the target attributes for all the nodes of a set of unlabeled objects \(A_U\) (line 7). The prediction step is described in detail in "Exploiting clusters for multi-target regression" section. Identification of the neighborhood graph As mentioned in "Method" section, we adopt a distributed variant of the locality-sensitive hashing (LSH) method to efficiently identify an approximate neighborhood graph, which will then be exploited by our clustering algorithm. LSH hashes objects so that similar objects map to the same buckets with high probability (where the number of buckets is much smaller than the number of analyzed objects). Contrary to conventional hash functions, LSH maximizes the probability of collision for similar objects [44]. LSH exploits some properties of the cosine similarity: given two objects represented as \((m+k)\)-dimensional vectors, the probability that a random hyperplane correctly separates them increases as the angle between them increases [42]. Accordingly, the computation of the neighborhood of each node in \(A_L\) through LSH is reformulated as follows: Generate r random \((m + k)\)-dimensional hyperplanes, where \(r \ll (m+k)\); Represent each object \(p \in A_L\) as an r-dimensional bit stream \(p_r\), where the i-th bit is 0 or 1 according to the side of the i-th hyperplane on which p falls; Generate numPerm random permutations of r elements. For each permutation, permute the bit stream of all the objects in \(A_L\) (so that each object is represented by numPerm bitstreams). The bitstreams, for each permutation, are then sorted lexicographically.
Find the set \({\tilde{N}}(p)\) of the B nearest neighbors of each object p in every sorted list and compute the Hamming distance between the bitstream of p and the bitstreams of the objects in \({\tilde{N}}(p)\). Every object \(q \in {\tilde{N}}(p)\) having a Hamming distance from p smaller than a given threshold minSim is included in N(p). The implementation we adoptFootnote 5 is the distributed variant proposed in [42]. Such a variant identifies N(p) by replacing the Hamming distance with the exact cosine similarity, which avoids the presence of false positives (objects detected as neighbors that actually are not). Formally: \(N(p) = \{ q | q \in {\tilde{N}}(p) \wedge cosine(p,q) \ge minSim \}\). Although other variants of LSH, based on different similarity/distance measures, are available in the literature [45], they mainly exploit the Euclidean distance on the unit sphere, which actually corresponds to the cosine similarity. Moreover, their adoption would require an additional step to normalize the values in [0,1], possibly introducing approximation errors.
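The following sketch is our own in-memory NumPy reconstruction of the four steps above, not the distributed Spark implementation the method adopts; the function name, the candidate-window strategy and all defaults are assumptions made for illustration.

```python
import numpy as np

def lsh_neighborhood(A, r=5, num_perm=20, B=10, min_sim=0.9, seed=0):
    """Approximate cosine-LSH neighborhood construction.
    A: (n, d) array of labeled objects. Returns {i: set of neighbor ids}."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    H = rng.standard_normal((d, r))               # r random hyperplanes
    bits = (A @ H > 0).astype(np.uint8)           # r-bit signature per object
    norms = np.linalg.norm(A, axis=1) + 1e-12
    neigh = {i: set() for i in range(n)}
    for _ in range(num_perm):
        perm = rng.permutation(r)
        # Sort objects lexicographically by their permuted bit streams
        keys = ["".join(map(str, row)) for row in bits[:, perm]]
        order = np.argsort(keys)
        for pos, i in enumerate(order):           # B candidates around pos
            lo, hi = max(0, pos - B // 2), min(n, pos + B // 2 + 1)
            for j in order[lo:hi]:
                if i == j:
                    continue
                cos = A[i] @ A[j] / (norms[i] * norms[j])
                if cos >= min_sim:                # exact-cosine filter
                    neigh[i].add(int(j)); neigh[int(j)].add(int(i))
    return neigh
```

The exact-cosine filter in the last step mirrors the adopted variant of [42], which discards false positives among the candidates produced by the sorted bitstream lists.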
Density-based clustering In this section, supported by the pseudo-code of Algorithm 2 and by Fig. 3, we describe our distributed density-based clustering method. Our implementation exploits the GraphX APIsFootnote 6 of Apache Spark to analyze the neighborhood graph identified by LSH. GraphX [47] internally represents graphs through a collection of vertices and a collection of edges, built on top of the Spark RDD. The vertex collection is hash-partitioned by vertex IDs and supported by a local hash index in each partition, which facilitates frequent joins across vertex collections. The edge collection is horizontally partitioned and supported by a routing table that enables the efficient lookup of edges according to their source and target vertices. GraphX also adopts specific strategies to reduce network costs and to avoid unnecessary movements of unchanged data in subsequent iterations; additional details can be found in [47]. Interestingly, GraphX provides the counterparts of the most common Spark RDD primitives for graph processing, such as map, filter and aggregate, which are exploited by our method to manipulate distributed graphs. Graphic representation of the clustering approach. a Initialization of cluster IDs. b Core objects propagate their cluster ID to their neighbors (map phase). Nodes highlighted in red receive multiple messages. c Multiple cluster IDs received by the same node are aggregated (reduce phase). Dotted lines represent the final clustering result, after some iterations The novelty of the proposed method lies in the formulation of a density-based clustering method that explores the neighborhood graph, through the GraphX programming primitives, in a fully distributed way. We stress this last aspect since, contrary to existing methods [3,4,5], it does not require a final merging phase, usually performed on a single driver machine. We recall that the original DBSCAN implementation [2] identifies a cluster starting from an arbitrary core object and retrieving all objects which are density-reachable from it w.r.t. eps and minPts. Our approach aims at identifying all the reachable nodes of all the core objects simultaneously. This is performed by propagating the cluster assignment of all the core objects to their neighbors, until the cluster assignment appears stable enough. The first step consists in assigning a different cluster ID to each core object, while non-core objects are associated with 0 (lines 2–4). Then we start a process which, as mentioned in "Method" section, iterates until a stopping criterion based on the number of propagations is met. In particular, we stop the iterative process when the number of propagated IDs is below a given percentage (labelChangeRate) of the number of edges of the neighborhood graph (line 5). This strategy avoids the execution of additional iterations that would only lead to slight changes in cluster assignments. Each iteration consists in the propagation of the cluster ID from all the core objects towards their neighbors. To this aim, we perform a map phase which works on the set of edges of the neighborhood graph (lines 8–14). In particular, for each edge \(\langle src, dst \rangle\),Footnote 7 we propagate the cluster ID of the node src towards the node dst if src is a core object and if its current cluster ID is higher than the cluster ID of the object dst.Footnote 8 This choice guarantees a deterministic behaviour of our approach, as well as its convergence. Moreover, similarly to [46], it reduces the number of messages possibly exchanged among different machines in the cluster. After propagation, each node may receive multiple cluster IDs from its neighboring core objects. Therefore, the final step of each iteration consists of a reduce phase (line 15), which aggregates the set of received cluster IDs into a single cluster ID. Coherently with the approach adopted during the map phase, each node is assigned the highest cluster ID received (see footnote 8). After some iterations, this causes neighboring clusters to collapse into a single cluster, namely the one with the highest ID, which makes the final merging phase typically performed by existing distributed clustering methods on a single driver node unnecessary. In Algorithm 3, we report a higher-level (non-parallel) pseudo-code description of Algorithm 2, from which it is possible to follow the performed steps without going into the details of the Apache Spark primitives. An example of an iteration performed by our density-based clustering method can be observed in Fig. 3.
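A sequential, in-memory analogue of this map/reduce iteration (ours, purely illustrative; the actual implementation uses Spark GraphX) can be sketched as follows.

```python
import numpy as np

def propagate_clusters(edges, core, n_nodes, label_change_rate=0.05):
    """Sketch of the DENCAST propagation loop on a neighborhood graph.
    edges: list of undirected (u, v) pairs; core: boolean array marking
    core objects. Each core object starts with a unique cluster ID and
    repeatedly pushes it to its neighbors; the highest ID always wins."""
    # Initialization: unique positive IDs for core objects, 0 otherwise
    cluster = np.where(core, np.arange(1, n_nodes + 1), 0)
    # Treat each undirected edge as two directed ones
    directed = edges + [(v, u) for u, v in edges]
    while True:
        changes = 0
        for src, dst in directed:                 # "map" phase
            if core[src] and cluster[src] > cluster[dst]:
                cluster[dst] = cluster[src]       # "reduce": keep the max ID
                changes += 1
        # Stop when propagations fall below labelChangeRate of the edges
        if changes < label_change_rate * len(directed):
            break
    return cluster   # 0 = noise/unclustered; equal IDs = same cluster
```

Keeping the maximum received ID is what makes neighboring clusters collapse deterministically into the one with the highest ID, so no driver-side merging is needed.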
Exploiting clusters for multi-target regression In this section, we describe the strategy we adopt to exploit the identified clusters to solve both single-target and multi-target regression tasks (see Fig. 4). Formally, given the set of unlabeled objects \(A_U\), we aim at predicting the value of the target attributes for each object in \(A_U\). Inspired by other solutions which exploit clusters for predictive purposes [15], we estimate, for each unlabeled object \(u \in A_U\), the cluster c(u) to which u ideally belongs: that is, the cluster to which u would have been assigned, if it had been known during the clustering process. We identify the most similar labeled object l and assign u to its cluster, i.e. \(c(u) = c(l)\). Formally: $$\begin{aligned} c(u) = c\left( \displaystyle argmax_{l \in A_L}\ \ cosineSim\left( u,l_{[1:m]}\right) \right) , \end{aligned}$$ where \(cosineSim(\cdot ,\cdot )\) is the cosine similarity between two vectors and \(l_{[1:m]}\) is the sub-vector of the object l, consisting of only the descriptive attributes. Finally, we predict the value of the target attributes of each unlabeled object u by computing the average of the values associated with the target attributes of all the objects falling in the cluster c(u), weighted according to their similarity with u. Formally: $$\begin{aligned} u_{[m+1:m+k]} = \frac{\sum _{l \in A_L(c(u))}cosineSim(u, l_{[1:m]}) \cdot l_{[m+1:m+k]}}{\sum _{l \in A_L(c(u))} cosineSim(u, l_{[1:m]})}, \end{aligned}$$ where \(l_{[m+1:m+k]}\) is the sub-vector of the object l, consisting of only the target attributes, \(A_L(c(u))\) is the subset of objects of \(A_L\) falling in c(u), and \(u_{[m+1:m+k]}\) is the part of the vector u reserved for the target values. Note that this approach is coherent with the philosophy of density-based clustering, because it extends the concept of density-connection to unlabeled examples. More details about the implementation of the prediction phase are formalized in Algorithm 4. In particular, we first compute the cosine similarity between all the unlabeled objects in \(A_U\) and all the labeled objects in \(A_L\) (lines 2–8). Then we associate each unlabeled object with the cluster in which its most similar labeled object falls (lines 9–17). Finally, we predict the value of the target attributes of each unlabeled object according to Eq. 2 (lines 18–33). Graphical representation of the prediction phase of our approach. Each unlabeled object is assigned to the cluster containing the most similar labeled object (dotted lines) and the values of its target attributes are predicted according to a weighted average of the values assumed by the labeled objects in such a cluster (crossed circle) As for the clustering phase, this phase is also fully distributed. Here, the idea is to perform an incremental computation of the weighted average, which mainly consists of a map phase (lines 25–27) and a reduce phase (lines 28–30). Such a computation also exploits the operators: \(+\) between two vectors, which distributedly computes their element-wise sum, and \(*\) (resp. /) between a vector and a scalar, which distributedly computes the multiplication (resp. division) of each element of the vector by the scalar.
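As a non-distributed illustration of Eqs. (1)–(2), the following sketch (ours; all names are assumptions, and noise handling is ignored for brevity) assigns an unlabeled object to the cluster of its most similar labeled object and returns the similarity-weighted average of that cluster's target vectors.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def predict(u, X_lab, Y_lab, cluster_of):
    """u: (m,) descriptive vector of an unlabeled object.
    X_lab: (n, m) descriptive parts of the labeled objects;
    Y_lab: (n, k) target parts; cluster_of: (n,) cluster IDs
    produced by the clustering phase."""
    sims = np.array([cosine(u, x) for x in X_lab])
    c = cluster_of[int(np.argmax(sims))]      # Eq. 1: most similar object
    members = np.flatnonzero(cluster_of == c) # objects in cluster c(u)
    w = sims[members]
    # Eq. 2: similarity-weighted average of the members' target vectors
    return (w[:, None] * Y_lab[members]).sum(axis=0) / w.sum()
```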
Time complexity analysis In this section, we analyze the time complexity of the proposed method. First, we consider the time complexity of the training phase, by following the steps of the main algorithm (Algorithm 1) and of the clustering algorithm (Algorithm 2). In particular, the first step performed by our method is the identification of the neighborhood graph through LSH (Algorithm 1, line 2), which has a time complexity of \(O(|V| \cdot m)\) [18], where |V| is the number of nodes (i.e., objects) and m is the number of features. Next, the identification of the neighborhood of each object (Algorithm 1, line 3) requires a scan of the whole neighborhood graph, i.e., \(O(|V| \cdot B)\) operations, where B is the maximum number of neighboring objects identified by LSH for each node. Given the neighborhood of each object, the identification of core objects (Algorithm 1, line 4) only requires a single scan of all the objects, leading to a complexity of O(|V|). Finally, after executing the density-based clustering algorithm, a final join operation (Algorithm 1, line 6) is performed. This join operation, since the used data structures (i.e., paired RDDs) are indexed on the node identifier, has a time complexity of \(O(2 \cdot |V|)\). Therefore, the pre-processing and post-processing steps of our main clustering algorithm require an overall time complexity of: $$\begin{aligned} O(|V| \cdot m) + O(|V| \cdot B) + O(|V|) + O(2 \cdot |V|), \end{aligned}$$ which, since B is a constant, can be approximated to $$\begin{aligned} O(|V| \cdot m). \end{aligned}$$ Focusing on the main clustering algorithm (Algorithm 2), we can observe that the initialization (Algorithm 2, lines 2–4) requires O(|V|) operations, since it performs an assignment for each object. Then, the algorithm performs u iterations of the main loop (Algorithm 2, lines 6–15), each consisting of a map and a reduceByKey performed on all the links of the neighborhood graph. This means that, since the neighborhood graph has \(O(|V| \cdot B)\) links, the main clustering algorithm has an overall time complexity of: $$\begin{aligned} O(u \cdot 2 \cdot |V| \cdot B) = O(|V| \cdot u). \end{aligned}$$ The number of performed iterations u can be (pessimistically) estimated as the average number of steps required to propagate the cluster ID of a core object to all the other objects. Since objects in the neighborhood graph have at most B neighbors, we can observe that, starting from a given core object, in u iterations we are able to propagate its cluster ID to \(B + O(B^2) + O(B^3) + \cdots + O(B^u)\) objects,Footnote 9 which can be approximated to \(O(B^u)\). This means that, in order to reach all the objects (we have this guarantee when \(B^u \ge |V|\)), we need at least \(u=\log _B|V|\) iterations. In fact: $$\begin{aligned} B^u \ge |V| \Rightarrow \log _BB^u \ge \log _B|V| \Rightarrow u \ge \log _B{|V|}. \end{aligned}$$ Therefore, by assuming \(u = O(\log _B{|V|})\) and by combining Eqs. 4 and 5, we can conclude that the time complexity of the training phase is: $$\begin{aligned} O(|V| \cdot m) + O(|V| \cdot \log _B{|V|}). \end{aligned}$$ As regards the prediction phase (Algorithm 1, line 7 and Algorithm 4), we compute the cosine similarity between the unlabeled object and all the labeled objects (Algorithm 4, lines 2–8), leading to a time complexity of \(O(|V| \cdot m)\). Then, the identification of the cluster containing the most similar labeled object (Algorithm 4, lines 9–17) requires a scan of all the labeled objects, whose time complexity is O(|V|). Finally, the weighted average computed to make the predictions (Algorithm 4, lines 18–33) requires a scan of the objects belonging to the selected cluster which, in the worst-case scenario (i.e., all the objects belong to one single cluster), requires O(|V|) operations. Therefore, the overall complexity of the prediction phase is: $$\begin{aligned} O(|V| \cdot m) + O(|V|) + O(|V|) = O(|V| \cdot m). \end{aligned}$$ Results and discussion In the following, we first describe the experimental setting, the competitor systems, and the adopted datasets. Finally, we show and discuss the obtained results. Experimental setting and competitor systems Our experiments focused on two aspects: scalability and regression performance. It is noteworthy that the accuracy achieved in solving the regression task is a clear indicator of the quality of the identified clusters. Regarding scalability, we compared DENCAST with a highly optimized clustering method available in Apache Spark, i.e., K-means, on a large-scale dataset. The adoption of the K-means implementation available in Apache Spark is motivated by its high popularity, as well as by its native presence as an official, stable implementation since the early versions of the Spark MLlib machine learning library. For a fair comparison, we let K-means extract the same number of clusters identified by DENCAST. We run the experiments on a cluster of five machines, each equipped with a 4-core (8-thread) CPU at 3.40 GHz, 32 GB of RAM and a 750 GB SSD hard drive.
Results and discussion In the following, we first describe the experimental setting, the competitor systems, and the adopted datasets. Then we show and discuss the obtained results. Experimental setting and competitor systems Our experiments focused on two aspects: scalability and regression performance. It is noteworthy that the accuracy achieved in solving the regression task is a clear indicator of the quality of the identified clusters. Regarding scalability, we compared DENCAST with a highly optimized clustering method available in Apache Spark, i.e., K-means, on a large-scale dataset. The adoption of the K-means implementation available in Apache Spark is motivated by its high popularity, as well as by its native presence as an official, stable implementation since the early versions of the Spark MLlib machine learning library. For a fair comparison, we let K-means extract the same number of clusters identified by DENCAST. We ran the experiments on a cluster of five machines, each equipped with a 4-core (8-thread) CPU at 3.40 GHz, 32 GB of RAM, and a 750 GB SSD drive. Moreover, we measured DENCAST running times on a single machine, with an increasing number of instances, and compared them with those obtained on the cluster of machines, in order to directly evaluate the performance gain due to the distributed environment. By exploiting these results, we also evaluated the speedup factor, i.e., the ratio between the running time on a single machine and the running time on the cluster. Finally, we measured the scaleup factor, which shows the ability of DENCAST to exploit the computational power of multiple CPUs to process an increasing number of instances. Regression performance was evaluated in a forecasting setting, since all the considered datasets contain measurements of a target variable at different time stamps (see "Datasets" section). Moreover, we performed the experiments in both the single-target (ST) setting, to predict a single target value in the future, and the multi-target (MT) setting, to predict a time series in the future. We performed the evaluation in terms of Root Mean Square Error (RMSE) and, for the time series, in terms of the average RMSE over the time series. We compared DENCAST with: (i) a baseline strategy, which predicts the value of the target attributes using the average value in the training set (AVG); (ii) four distributed regression algorithms, i.e., linear regression (LR), isotonic regression (ISO), ARIMA, and a distributed implementation, based on DeepLearning4J, of long short-term memory neural networks (LSTM) for regression; (iii) the K-means clustering algorithm, extended to solve regression tasks.Footnote 10 Methodologically, LR trains an elastic-net regularized linear regression model [30], which overcomes the limitations of the LASSO method by combining the L1 and L2 penalties of the LASSO and ridge methods. ISO is capable of fitting a non-decreasing free-form line to a set of points, without assuming the linearity of the target function [31]. LSTM learns neural networks that effectively model temporal dependencies in sequential data, by introducing loops in the structure of the network that allow information to persist [40]. For K-means, we adopted a prediction strategy similar to that described in the "Exploiting clusters for multi-target regression" section, except for the cluster assignment, which was performed according to the closest cluster centroid (coherently with the way it performs clustering). We clarify that, although several additional methods for single- and multi-target regression exist in the literature, we focus on those for which a distributed implementation is available. Non-distributed algorithms have been widely investigated before, and a comparison with them is out of the scope of our analysis. We ran LR and ISO only in the single-target setting, since they do not perform multi-target regression. On the other hand, we adapted K-means and AVG to perform multi-target regression, following the same principle adopted by DENCAST to solve regression tasks. We optimized the input parameters of all the methods on an independent split of each dataset as follows (a sketch of this grid search is given after the list): Table 1 Experimental results on LSH showing RMSE and execution time obtained with different configurations of r and numPerm DENCAST We set \(labelChangeRate = 5\%\) and performed a grid search to identify the best values of the parameters \(minPts \in \{3,5,10\}\) and \(minSim \in \{0.8, 0.9, 0.95, 0.97, 0.98, 0.99 \}\). The other LSH parameters were optimized in a separate preliminary experiment (see Table 1 for some results) and were accordingly set as follows: \(r = 5\) (number of hyperplanes), \(B = minPts \cdot 2\) (number of nearest neighbors to consider), \(numPerm = 20\) (number of random permutations). These values provided a good trade-off between accuracy and running times. AVG This approach does not need any tuning. K-means We performed a grid search to choose the best value of k from the set \(\{ \sqrt{n}/8, \sqrt{n}/4, \sqrt{n}/2, \sqrt{n}, \sqrt{n} \cdot 2, \sqrt{n} \cdot 4, \sqrt{n} \cdot 8\}\). ARIMA The best values of its parameters (i.e., the parameters p, d, q [33]) were automatically optimized by a tuning procedure available in the Spark-TS library (based on the AUTO-ARIMA algorithm [48]). Linear regression (LR) We performed a grid search to optimize the regularization parameter in \(\{0.15,0.3,0.45\}\). Isotonic regression (ISO) The specific implementation in Spark does not require any optimization. Long short-term memory neural networks (LSTM) We performed a grid search to optimize the values of different hyperparameters: learning rate \(lr \in \{10^{-1}, 10^{-2}, 10^{-3}\}\), dropout \(d \in \{0.1, 0.3, 0.5\}\), and batch size \(bs \in \{64,128,256,512,1024\}\). We set the remaining (secondary) parameters to their default values.
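As an illustration of the grid searches listed above, the following sketch enumerates the DENCAST parameter combinations and keeps the configuration with the lowest validation RMSE; train_and_score is a hypothetical stand-in, not a function of our system.

```python
from itertools import product

def train_and_score(min_pts, min_sim, label_change_rate, B):
    """Hypothetical stand-in: train DENCAST with the given parameters and
    return the RMSE on an independent validation split."""
    return 0.0  # placeholder value for the sketch

min_pts_grid = [3, 5, 10]
min_sim_grid = [0.8, 0.9, 0.95, 0.97, 0.98, 0.99]

best = None
for min_pts, min_sim in product(min_pts_grid, min_sim_grid):
    rmse = train_and_score(min_pts, min_sim,
                           label_change_rate=0.05, B=2 * min_pts)
    if best is None or rmse < best[0]:
        best = (rmse, min_pts, min_sim)   # keep the lowest-RMSE configuration
```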
In our experiments, we considered the following datasets (see also Table 2): PVItaly Data on energy production, aggregated hourly, collected from Jan 1\(^{\mathrm {st}}\), 2012, to May 4\(^{\mathrm {th}}\), 2014, by sensors located on 17 photovoltaic plants in Italy (see details in [7]). PVNREL Simulated photovoltaic data from 6000 plants, aggregated hourly, for the year 2006. In the scalability test, we considered a dataset of 20 M instances, which allowed us to thoroughly assess the efficiency of the proposed approach compared to K-means. In the evaluation of the regression performance, we also used a reduced version, consisting of the 48 plants obtained by stratified sampling, which selected 3 plants for each of the 16 states with the highest global horizontal irradiation (GHI). The reduced version allowed us to perform an extensive comparison with the approaches mentioned above, since most of them were not able to process the full dataset in a reasonable time. LightSource:Footnote 11 Solar energy production data from 7 plants based in the United Kingdom. We enriched the production data with irradiance data from PVGIS and weather data from Forecast.io, and aggregated spot values (1-min data) hourly. WindNREL Measurements of wind power plants from more than 30,000 sites. We selected the five plants with the highest production, obtaining the time series of wind speed, production, and climatic data (extracted from Forecast.io), aggregated hourly, from Jan 1\(^{\mathrm {st}}\), 2005, to Dec 31\(^{\mathrm {st}}\), 2006. Bike sharing Data from the Capital Bikeshare system on the rental of bikes, aggregated hourly and daily, from/to different positions, collected in 2011 and 2012 [49]. Data include the count of rented bikes and weather information. Burlington Data from 51 kW DC rooftop photovoltaic installations owned by Dealer.com with 216 modules. The data span the period between Nov 2\(^{\mathrm {nd}}\), 2012 and Sep 18\(^{\mathrm {th}}\), 2014, aggregated hourly, and include the average measurements of temperature, irradiance, energy production, and weather conditions extracted from Forecast.io. Table 2 Quantitative information on the datasets used for the scalability analysis and for the single-target (ST) and multi-target (MT) regression tasks Given the nature and temporal granularity of all the considered datasets, for each dataset we select five random splits containing \(10\%\) of the days and adopt them as five different test sets. For each day of the test set, we consider the measurements of each hour of the previous 30, 60, and 90 days as the training set, predict the value assumed by the target variable for each hour of the day, and collect the average RMSE computed over the splits and the days (a sketch of this evaluation protocol is given below). In the MT setting, each object represents one day; therefore: (i) the descriptive attributes correspond to the time series of the measurements and (ii) we predict the time series of the target attribute. The time series represent the hours of the day.
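The sketch below illustrates this evaluation protocol, assuming the data are grouped by day; evaluate_day is a hypothetical stand-in for training on the preceding window and computing the RMSE of the predictions for one test day.

```python
import random

def evaluate_day(train_days, test_day):
    """Hypothetical stand-in: train on train_days, predict the hourly values
    of test_day, and return the resulting RMSE."""
    return 0.0  # placeholder value for the sketch

def sliding_window_eval(days, window_sizes=(30, 60, 90), n_splits=5, frac=0.10):
    """days: chronologically ordered day identifiers. Each of the n_splits
    random test sets contains frac of the days; every test day is predicted
    from the measurements of the previous w days."""
    results = {w: [] for w in window_sizes}
    for _ in range(n_splits):
        test = random.sample(range(len(days)), int(frac * len(days)))
        for w in window_sizes:
            for i in test:
                train = days[max(0, i - w):i]        # previous w days
                results[w].append(evaluate_day(train, days[i]))
    return {w: sum(r) / len(r) for w, r in results.items()}  # avg RMSE per w
```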
In Table 3 we show the average RMSE obtained by DENCAST and by all the considered competitor systems. As anticipated, the MT results are not available for LR and ISO. Additionally, ISO was not able to provide results within 20 days of execution for some configurations of the PVItaly and PVNREL datasets. The results obtained by ISO were also poor in terms of RMSE and almost in line with the results obtained by the baseline (AVG). From the results obtained with different sizes of the training set, we can observe that, in general, there is no significant variation. An important exception is ARIMA, which makes more errors as the size of the training set increases. This behavior is reasonable, since ARIMA only observes the time series described by the target variables. Therefore, its predictions are negatively affected by objects that are too distant, in terms of timestamp, from the unlabeled objects in the test set. On the contrary, in some configurations, other methods took advantage of larger training sets. An example can be seen in the Bike sharing dataset (single-target setting), for which DENCAST obtains the best results only with the largest training set (90 days). Table 3 Forecasting results in terms of average RMSE Focusing on the two different settings (MT vs. ST), we observe that the MT setting provides advantages in most cases. In some datasets, the difference is significant. It is noteworthy that DENCAST is the system that benefits most from the MT setting (Table 3, last column). This result confirms that, in many domains, it is reasonable to combine the MT setting (which takes into account the dependency between the values of the same time series) with a density-based predictive clustering solution. Comparing DENCAST with the other methods in terms of RMSE, we can see (Table 3) that it shows the best results in almost all the configurations, in both the ST and MT settings. We can observe some exceptions in the LightSource and Bike sharing datasets, where K-means shows the best performance in the ST setting, and in the Burlington dataset, where LSTM outperforms the other competitors in the MT setting. The only dataset in which DENCAST obtains worse results than K-means in the MT setting (but not in ST) is WindNREL. This behavior is possibly explained by the highly variable nature of wind, which makes long-term prediction challenging for density-based clustering methods: here, density-based clustering tends to extract bigger clusters compared to the highly fragmented centroid-based ones.
This hypothesis is confirmed by the fact that, in this dataset, the optimal value of k for K-means is the highest (\(\sqrt{n}\cdot 8\)). In Table 4, where we report the number of clusters extracted by K-means and DENCAST in their best-performing configurations, we can observe that this behavior is confirmed for all the datasets: DENCAST generally extracts a lower number of clusters than K-means in the MT setting, while it extracts a significantly higher number of clusters in the ST setting. This result confirms that, although in the MT setting the number of clusters is generally lower for both approaches (due to the lower number of instances), the density-based approach leads to a number of clusters that is not strictly dependent on the number of instances and that better adapts to the data distribution. Table 4 Number of clusters extracted by K-means and DENCAST in their best-performing configuration for all the datasets and window sizes However, in most of the cases in which DENCAST obtains worse results than the competitors, the difference is marginal. In order to statistically confirm this conclusion, we used the Friedman test with the Nemenyi post-hoc test at \(\alpha = 0.05\) (a sketch of this procedure is given below). In Fig. 5, we depict the result of the test, which shows that in the ST setting ISO, ARIMA, and LSTM do not appear statistically better than the baseline AVG, while LR, K-means, and DENCAST significantly outperform it. Moreover, DENCAST significantly outperforms all the competitors. Looking at the results in the MT setting, the difference is even more evident, showing a clear dominance of DENCAST. Nemenyi test for a single-target and b multi-target settings. If the distance between methods is less than the critical distance (at p-value = 0.05), there is no statistically significant difference between them Moreover, we performed the Wilcoxon test on the average standard deviations, which showed a p-value < 0.05 for all the single-target configurations and for the 30-day configuration of the multi-target setting. Another aspect we want to emphasize is that DENCAST generally provides more stable predictions than our version of K-means (which, as stated above, we extended with the same prediction step proposed for DENCAST). To show this aspect, in Table 5 we report the results in terms of the standard deviation of the predictions, which clearly show that DENCAST always leads to lower standard deviations than K-means. Table 5 Standard deviations of the predictions measured for K-means and DENCAST and the result of the Wilcoxon signed-rank tests
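A sketch of this statistical validation in Python is given below: the Friedman test is available in SciPy, and the Nemenyi critical distance is computed from the average ranks following Demšar's formulation. The RMSE matrix and the table of studentized-range values are illustrative; the Wilcoxon signed-rank test used for Table 5 is likewise available as scipy.stats.wilcoxon.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# rows = datasets/configurations, columns = methods (illustrative values)
rmse = np.array([[0.21, 0.19, 0.25], [0.33, 0.30, 0.37], [0.18, 0.17, 0.22],
                 [0.40, 0.36, 0.45], [0.27, 0.24, 0.31]])
stat, p = friedmanchisquare(*rmse.T)       # null: all methods perform alike

# Nemenyi critical distance CD = q_alpha * sqrt(k(k+1)/(6N)) (Demšar, 2006)
q_alpha = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728, 6: 2.850, 7: 2.949}
n, k = rmse.shape
ranks = np.array([rankdata(row) for row in rmse])  # lower RMSE = better rank
avg_ranks = ranks.mean(axis=0)
cd = q_alpha[k] * np.sqrt(k * (k + 1) / (6.0 * n))
# methods whose average ranks differ by less than cd are not
# significantly different at alpha = 0.05
```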
In order to evaluate the efficiency of the proposed approach, we compared the performance of DENCAST on the full version of the PVNREL dataset (20 M objects, 400 M edges) with the predictive K-means, which is highly optimized in terms of efficiency in Apache Spark. In Fig. 6, we show the running times observed when the number of objects (and, thus, the number of edges in the neighborhood graph) increases. DENCAST appears much more efficient than the predictive K-means and scales better when the number of edges increases significantly. Indeed, K-means was not able to analyze the full version of the dataset within 20 days. DENCAST vs K-means. Running times measured with different sizes of the PVNREL dataset (100 K, 1 M, 5 M, 10 M, 20 M objects and 0.7 M, 7 M, 30 M, 130 M, 400 M edges in the neighborhood graph) Moreover, as introduced in the "Results and discussion" section, we evaluated the DENCAST speedup factor. In particular, we first measured the running times on a single machine and on the cluster of machines for the analysis of the PVNREL dataset with different sizes (in terms of the number of objects and, accordingly, of edges in the neighborhood graph). In Fig. 7a, it is possible to observe a comparison in terms of running times, which shows that the distributed approach scales much more efficiently on larger datasets than the local variant. This improvement is confirmed by the speedup factor plotted in Fig. 7b, which shows that, with the largest version of the PVNREL dataset, DENCAST boosts the performance by up to a 5\(\times\) factor, which is the ideal speedup factor for our cluster consisting of five machines. DENCAST-local vs DENCAST-distributed. a Running times and b speedup factor, with different sizes of PVNREL (100 K, 1 M, 5 M, 10 M, 20 M objects and 0.7 M, 7 M, 30 M, 130 M, 400 M edges in the graph) Finally, we measured the scaleup factor, in order to evaluate the ability of DENCAST to exploit the computational power of multiple CPUs when dealing with datasets of linearly increasing size. In particular, we considered four different sizes of the PVNREL dataset with a proportional increase in the number of cores used by DENCAST. The results plotted in Fig. 8 show that the scaleup factor is very close to 1 for almost all the configurations, which means that the overhead introduced by DENCAST in the distribution of the workload to multiple machines is very low. Moreover, processing even larger datasets with DENCAST would only require a linear increase in the number of available CPUs in the cluster of machines. DENCAST-local vs DENCAST-distributed. Scaleup factor measured with different sizes of the PVNREL dataset (1.25 M, 3.75 M, 5 M, 6.25 M edges) and increasing number of cores (8, 16, 24, 32)
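For clarity, the two factors can be expressed as in the short sketch below; the timings are hypothetical placeholders, not the measured values behind Figs. 7 and 8.

```python
# speedup: same workload, single machine vs. the 5-machine cluster
def speedup(t_single, t_cluster):
    return t_single / t_cluster          # ideal value: number of machines

# scaleup: workload and cores grow proportionally
def scaleup(t_reference, t_scaled):
    return t_reference / t_scaled        # ideal value: close to 1

# hypothetical timings (seconds) for two dataset sizes
t_local = {"1M": 120.0, "20M": 4800.0}
t_dist = {"1M": 60.0, "20M": 960.0}
print(speedup(t_local["20M"], t_dist["20M"]))  # -> 5.0 (ideal for 5 machines)
```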
In this paper, we proposed DENCAST, a density-based clustering algorithm implemented in Apache Spark, which is able to handle large-scale and high-dimensional data. We also exploited the clusters identified by DENCAST to predict the value assumed by one or more target variables of unlabeled objects, i.e., for regression purposes in both the single- and multi-target settings. Our experimental evaluation, performed on several datasets, demonstrated the ability of DENCAST to obtain predictions with a higher accuracy than existing distributed regression approaches. Such competitive regression results also confirm the quality of the extracted clusters. A further analysis showed that DENCAST clearly benefits from the multi-target setting. In particular, the combination of the density-based predictive clustering solution with the multi-target setting led DENCAST to dominate over all the considered competitors. This is an important result, since it confirms that catching possible dependencies among the target attributes provides a great margin of improvement in the regression task. Moreover, an analysis focused on efficiency emphasized the ability of DENCAST to significantly outperform the distributed version of K-means in Apache Spark in terms of running times. Finally, a scalability analysis showed that DENCAST exhibits optimal speedup and scaleup performance. In particular, it reached a 5\(\times\) speedup factor with five machines, corresponding to the ideal speedup factor, and a scaleup factor close to 1, which emphasizes the very low overhead due to the distribution of the workload. This relevant result is due to the advantage of performing all the stages in a fully distributed manner, without incurring any computational bottleneck. For future work, we aim to introduce the possibility of handling mixed-type attributes (i.e., not only numerical attributes). Moreover, we plan to extend the proposed approach in order to make it able to solve classification tasks as well, and to measure and explicitly model spatio-temporal autocorrelation phenomena during the clustering and prediction phases. The system DENCAST and all the datasets are available at http://www.di.uniba.it/~gianvitopio/systems/dencast/. github.com/sryza/spark-timeseries. deeplearning4j.org/docs/latest/deeplearning4j-scaleout-intro. github.com/maxpumperla/elephas. github.com/yahoo/TensorFlowOnSpark. https://github.com/soundcloud/cosine-lsh-join-spark. Although other APIs for graph analysis have been recently proposed [46], they can only improve the efficiency when there is a unidirectional value propagation. Since, in our case, propagation can happen in both directions, GraphX appeared to be the most appropriate approach, since it is directly integrated within Apache Spark. We remind the reader that the neighborhood graph is undirected. The identifiers src and dst are used only to distinguish between the two nodes involved in the link. This means that, in the algorithms, \(\forall p, q \in V\) the edge \(\langle p, q \rangle\) is interchangeable with (equivalent to) the edge \(\langle q, p \rangle\). It is noteworthy that this is only an implementation choice. Indeed, we could propagate the lowest cluster IDs without any change in the final clustering result, since the density-connection property that we catch from the neighborhood graph is symmetric and transitive. This assumes that, during the exploration of the graph, the propagation always happens towards new (not already visited) objects. However, if an object receives the same cluster ID multiple times during the execution of different iterations, the propagation does not happen and, therefore, it is not counted (Algorithm 2, lines 10–12). The adopted implementation of the ARIMA algorithm is available at https://github.com/sryza/spark-timeseries. LR, ISO, and K-means are available in the Apache Spark MLlib library. Not publicly available, even if anonymized, due to legal reasons. ARIMA: AutoRegressive Integrated Moving Average; DPC: density peaks clustering; GHI: global horizontal irradiation; ISO: isotonic regression; LR: linear regression; LSH: locality-sensitive hashing; LSTM: long short-term memory neural networks; MT: multi-target; RDD: resilient distributed dataset; RMSE: root mean square error; ST: single-target; SVR: support vector regression. Cannataro M, Congiusta A, Pugliese A, Talia D, Trunfio P. Distributed data mining on grids: services, tools, and applications. IEEE Trans Syst Man Cybern B. 2004;34(6):2451–65. Ester M, Kriegel H-P, Sander J, Xu X, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. KDD. 1996;96:226–31. He Y, Tan H, Luo W, Mao H, Ma D, Feng S, Fan J. MR-DBSCAN: an efficient parallel density-based clustering algorithm using MapReduce. In: Proceedings of ICPADS. 2011. p. 473–80. Cordova I, Moh T-S. DBSCAN on resilient distributed datasets. In: High performance computing & simulation. 2015. p. 531–40. Han D, Agrawal A, Liao WK, Choudhary A. A novel scalable DBSCAN algorithm with Spark. In: International parallel and distributed processing symposium workshops. 2016. p. 1393–402. Blockeel H, Raedt LD, Ramon J. Top-down induction of clustering trees.
In: Shavlik JW, editor. Proceedings of ICML. Madison: Morgan Kaufmann; 1998. p. 55–63. Ceci M, Corizzo R, Fumarola F, Malerba D, Rashkovska A. Predictive modeling of PV energy production: how to set up the learning task for a better prediction? IEEE Trans Ind Inform. 2017;13(3):956–66. Ceci M, Corizzo R, Malerba D, Rashkovska A. Spatial autocorrelation and entropy for renewable energy forecasting. Data Mining Knowl Discov. 2019;33:698–729. Chen X, Cai X, Liang J, Liu Q. Ensemble learning multiple LSSVR with improved harmony search algorithm for short-term traffic flow forecasting. IEEE Access. 2018;6:9347–57. Liu B-C, Binaykia A, Chang P-C, Tiwari MK, Tsao C-C. Urban air quality forecasting based on multi-dimensional collaborative support vector regression (SVR): a case study of Beijing-Tianjin-Shijiazhuang. PLoS ONE. 2017;12(7):e0179763. Liu J, Sun L, Li Q, Ming J, Liu Y, Xiong H. Functional zone based hierarchical demand prediction for bike system expansion. In: Proceedings of ACM SIGKDD 2017. New York: ACM; 2017. p. 957–66. Li Y, Zheng Y, Zhang H, Chen L. Traffic prediction in a bike-sharing system. In: SIGSPATIAL. New York: ACM; 2015. p. 33. Xioufis ES, Tsoumakas G, Groves W, Vlahavas IP. Multi-target regression via input space expansion: treating targets as inputs. Mach Learn. 2016;104(1):55–98. Dincer NG, Akkuş Ö. A new fuzzy time series model based on robust clustering for forecasting of air pollution. Ecol Inform. 2018;43:157–64. Stojanova D, Ceci M, Appice A, Dzeroski S. Network regression with predictive clustering trees. Data Mining Knowl Discov. 2012;25(2):378–413. Pio G, Serafino F, Malerba D, Ceci M. Multi-type clustering and classification from heterogeneous networks. Inform Sci. 2018;425:107–26. Stojanova D, Ceci M, Appice A, Malerba D, Džeroski S. Dealing with spatial autocorrelation when learning predictive clustering trees. Ecol Inform. 2013;13:22–39. Charikar MS. Similarity estimation techniques from rounding algorithms. In: Proceedings of the 34th annual ACM symposium on theory of computing. New York: ACM; 2002. p. 380–8. Rodriguez A, Laio A. Clustering by fast search and find of density peaks. Science. 2014;344(6191):1492–6. Comaniciu D, Meer P. Mean shift: a robust approach toward feature space analysis. IEEE Trans Pattern Anal Mach Intell. 2002;24(5):603–19. Sreedhar C, Kasiviswanath N, Reddy PC. Clustering large datasets using K-means modified inter and intra clustering (KM-I2C) in Hadoop. J Big Data. 2017;4(1):27. Zhang H, Raitoharju J, Kiranyaz S, Gabbouj M. Limited random walk algorithm for big graph data clustering. J Big Data. 2016;3(1):26. Kaur A, Datta A. A novel algorithm for fast and scalable subspace clustering of high-dimensional data. J Big Data. 2015;2(1):17. Ankerst M, Breunig MM, Kriegel H-P, Sander J. OPTICS: ordering points to identify the clustering structure. SIGMOD Rec. 1999;28(2):49–60. Aggarwal CC, Han J, Wang J, Yu PS. A framework for clustering evolving data streams. In: VLDB. 2003. p. 81–92. Birant D, Kut A. ST-DBSCAN: an algorithm for clustering spatial-temporal data. Data Knowl Eng. 2007;60(1):208–21. Wu Y-P, Guo J-J, Zhang X-J. A linear DBSCAN algorithm based on LSH. In: International conference on machine learning and cybernetics, vol. 5. IEEE. 2007. p. 2608–14. Berchtold S, Keim DA, Kriegel H-P. The X-tree: an index structure for high-dimensional data. In: Proceedings of VLDB '96, San Francisco, CA, USA. 1996. p. 28–39. Huang F, Zhu Q, Zhou J, Tao J, Zhou X, Jin D, Tan X, Wang L.
Research on the parallelization of the DBSCAN clustering algorithm for spatial data mining based on the Spark platform. Remote Sens. 2017;9(12). Zou H, Hastie T. Regularization and variable selection via the elastic net. J R Stat Soc. 2005;67(2):301–20. Barlow R, Brunk H. The isotonic regression problem and its dual. J Am Stat Assoc. 1972;67(337):140–7. Ababei C, Moghaddam MG. A survey of prediction and classification techniques in multicore processor systems. IEEE Trans Parallel Distrib Syst. 2018;30:5. Box GE, Jenkins GM, Reinsel GC, Ljung GM. Time series analysis: forecasting and control. 5th ed. Oxford: Wiley; 2015. Corizzo R, Ceci M, Japkowicz N. Anomaly detection and repair for accurate predictions in geo-distributed Big Data. Big Data Res. 2019;16:18–35. Kocev D, Vens C, Struyf J, Džeroski S. Tree ensembles for predicting structured outputs. Pattern Recogn. 2013;46(3):817–33. Borchani H, Varando G, Bielza C, Larrañaga P. A survey on multi-output regression. Wiley Interdiscip Rev. 2015;5(5):216–33. Brudnak M. Vector-valued support vector regression. In: IJCNN'06. IEEE international joint conference on neural networks. 2006. p. 1562–9. Xu S, An X, Qiao X, Zhu L, Li L. Multi-output least-squares support vector regression machines. Pattern Recogn Lett. 2013;34(9):1078–84. Appice A, Džeroski S. Stepwise induction of multi-target model trees. In: European conference on machine learning. Berlin: Springer; 2007. p. 502–9. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9:1735–80. Zaharia M, Chowdhury M, Franklin MJ, Shenker S, Stoica I. Spark: cluster computing with working sets. In: Proceedings of HotCloud'10. 2010. p. 10. Ravichandran D, Pantel P, Hovy E. Randomized algorithms and NLP: using locality sensitive hash functions for high speed noun clustering. In: Meeting of the Association for Computational Linguistics. ACL '05. 2005. p. 622–9. Ferreira LN, Zhao L. Time series clustering via community detection in networks. Inform Sci. 2016;326:227–42. Leskovec J, Rajaraman A, Ullman JD. Mining of massive datasets. 2nd ed. New York: Cambridge University Press; 2014. Andoni A, Indyk P, Laarhoven T, Razenshteyn I, Schmidt L. Practical and optimal LSH for angular distance. In: Proceedings of the 28th international conference on neural information processing systems, volume 1. NIPS'15. Cambridge: MIT Press. 2015. p. 1225–33. Tian X, Guo Y, Zhan J, Wang L. Towards memory and computation efficient graph processing on Spark. In: International conference on Big Data (Big Data). 2017. p. 375–82. Gonzalez JE, Xin RS, Dave A, Crankshaw D, Franklin MJ, Stoica I. GraphX: graph processing in a distributed dataflow framework. OSDI. 2014;14:599–613. Hyndman RJ, Khandakar Y, et al. Automatic time series for forecasting: the forecast package for R. Technical report. Monash University, Department of Econometrics and Business Statistics. 2007. Fanaee TH, Gama J. Event labeling combining ensemble detectors and background knowledge. Progr Artif Intell. 2013;2:1–15. We thank Lynn Rudd for her help in reading the manuscript. We acknowledge the support of the European Commission through the projects MAESTRA—Learning from Massive, Incompletely annotated, and Structured Data (Grant No. ICT-2013-612944) and TOREADOR—Trustworthy Model-aware Analytics Data Platform (Grant No. H2020-688797).
We also acknowledge the support of the Ministry of Education, Universities and Research (MIUR) through the PON project ComESto—Community Energy Storage: Gestione Aggregata di Sistemi d'Accumulo dell'Energia in Power Cloud (Grant No. ARS01 01259). Roberto Corizzo, Gianvito Pio and Michelangelo Ceci contributed equally to this work. Department of Computer Science, University of Bari Aldo Moro, Via Orabona, 4, Bari, Italy: Roberto Corizzo, Gianvito Pio, Michelangelo Ceci & Donato Malerba. Big Data Laboratory, National Interuniversity Consortium for Informatics (CINI), Rome, Italy. RC, GP and MC collaborated in the design of the method. RC and MC took care of the study of the related work. RC and GP implemented the system and ran the experiments. MC, RC and GP performed the analysis of the results. RC, GP and MC contributed to the manuscript drafting. GP, MC, RC and DM contributed to the manuscript finalization. MC and DM supervised the study. All authors read and approved the final manuscript. Correspondence to Roberto Corizzo. Corizzo, R., Pio, G., Ceci, M. et al. DENCAST: distributed density-based clustering for multi-target regression. J Big Data 6, 43 (2019). doi:10.1186/s40537-019-0207-2. Accepted: 24 May 2019. Keywords: Distributed clustering; Multi-target regression.
EURASIP Journal on Wireless Communications and Networking Separation characteristics between time domain and frequency domain of wireless power communication signal in wind farm Jie Wan1,2,3, Kun Yao4, E. Peng1, Yong Cao4, Yuguang NIU5 & Jilai Yu2,6 EURASIP Journal on Wireless Communications and Networking volume 2020, Article number: 5 (2020) Understanding the intrinsic characteristics of wind power is important for the safe and efficient parallel operation of wind turbines in large-scale wind farms. Current research on the spectrum characteristics of wind power focuses on the estimation of power spectral density, particularly the structural characteristics of Kolmogorov's scaling law. In this study, the wavelet Mallat algorithm, which differs from the conventional Fourier transform in its compactly supported characteristics, is used to extract the envelope of the signal and analyze the instantaneous spectral characteristics of wind power signals. Then, the theory for the change in the center frequency of the wind power is obtained. Using a sufficient amount of wind farm data, the results showed that, within a certain range, the center frequency decreases as the wind power increases. In addition, the center frequency remains unchanged when the wind power is sufficiently large. Together with the time domain characteristics of wind power fluctuation, we put forward the time-frequency separation characteristics of wind power and the corresponding physical parameter expressions, which correspond to the amplitude and frequency modulation characteristics of the wind speed. Lastly, the physical connotation of the time-frequency separation characteristics of wind power is established from the perspective of the atmospheric turbulent energy transport mechanism and the wind turbine energy transfer mechanism. Currently, utilizing wind energy to its full potential has been the goal of energy development in all countries worldwide, especially in China. Although this initiative had a slow start, it has rapidly developed [1,2,3]. However, the variable output brings new challenges to the safe and efficient operation of the power system [4,5,6], especially in China, where the structure of the energy mix has forced an increasing number of high-power coal-fired units to perform deep and variable load operation, consequently increasing the probability of faults [7, 8]. Therefore, understanding wind power characteristics is extremely important, not only for the siting of wind farms but also for the real-time scheduling and optimization control of new energy power systems [9, 10]. In general, the uncertain characteristics of wind energy include randomness, volatility, and intermittency. The study of the time domain characteristics is an important aspect in exploring the variable wind power fluctuations. In [11,12,13], a model was developed for the fluctuations in wind power, which was based on measured data, and the model accuracy was verified. Because wind turbines channel wind to generate wind power, studying the characteristics of wind speed is fundamental. In [14], a large amount of measured wind speed data was used to correct, develop, and improve the quantitative characterization model of the range of wind speed fluctuations in wind farms given in the IEC standard.
On this basis, [15] further evaluated the frequency regulation capability and explored the application of the quantitative characterization model for the range of wind speed fluctuations. In [16], a modeling strategy that can quantitatively describe the rate of wind speed fluctuation, based on a variogram model, was proposed. The understanding of quantitative models describing intermittency is therefore also a basic concern of such studies [17, 18]. In [19], a quantitative characterization model of the intermittency of wind speed, based on abrupt changes in the duty cycle and derived from the intermittent nature of atmospheric turbulence, was developed. Moreover, this method can also be applied to the quantitative characterization of wind power intermittency [20]. In addition, although wind power prediction models are constantly improving, actual prediction errors are inevitable because of the uncertainty of wind power, even when some intrinsic characteristics of wind power are incorporated; thus, model accuracy can still be improved [21]. Therefore, many scholars are also analyzing the prediction error [22,23,24]. The power spectral density of a random signal is used to describe the relationship between the energy characteristics of a signal and the frequency. Therefore, the power spectrum analysis of wind power is also an important aspect of research on wind power characteristics. Such studies began early, especially in the analysis of the spectral characteristics of wind. In 1941, the Soviet mathematician Andrei N. Kolmogorov proposed the Kolmogorov hypothesis [25]: when the Reynolds number is sufficiently large, there is a region of high wave numbers in which the turbulence satisfies local uniformity and isotropy, and the turbulence characteristics are determined only by the energy dissipation rate and the molecular viscosity coefficient. There is also an inertial sub-region within the locally isotropic region, where the turbulence characteristics are determined only by the energy dissipation rate. Based on this assumption, Kolmogorov derived the famous "− 5/3 law." Since then, many domestic and overseas scholars have verified and improved Kolmogorov's "− 5/3 law" [26, 27]. The results show that the energy spectrum of near-surface turbulence satisfies the Kolmogorov spectral distribution theory under certain conditions, although the exponent deviates around − 5/3 because of different geographical and climatic factors. Nevertheless, the energy spectrum decays with a power-law exponent in a certain frequency range. The energy spectrum characteristics of the wind turbine output power have been studied further on this basis [28, 29]. In [30], wind farm power fluctuations and spatial sampling of turbulent boundary layers are presented. The experimental results show that the frequency spectrum of the total wind farm power follows a power law with a slope between − 5/3 and − 2, extending to frequencies lower than those observed for any individual turbine. However, the above studies analyze conventional power spectrum characteristics, i.e., the variation law of the wind power signal with frequency, by comparison with the structural characteristics of the Kolmogorov scaling law. Moreover, the above studies consider the time domain and the frequency domain in isolation.
In addition, the IEC standard improved the linear turbulence model for the integrated fatigue loading of wind farm turbines by combining ambient turbulence and wakes [31]. In fact, this is an amplitude modulation (AM) effect. In [32], experiments in the wind tunnel revealed the AM effect in all three velocity components. The center frequency is another vital parameter for describing wind speed uncertainty; it is usually estimated from the classical wind speed power spectrum [33]. In the signal processing field, there are three kinds of modulation: AM (amplitude modulation), FM (frequency modulation), and PM (phase modulation) [34]. Current research, however, focuses on the scaling rate of the wind speed frequency spectrum without combining the time domain with the frequency domain. In this paper, the difference in characteristics between the time domain and the frequency domain of the wind power signal is studied. This paper is organized as follows. In Section 2, we introduce methods for obtaining and analyzing transient spectral features and analyze the principles and advantages of the Mallat algorithm with compact support. In Section 3, we use the wavelet Mallat algorithm to extract the envelope of the signal and analyze the instantaneous spectral characteristics of the actual wind power data; combining this with the time-domain uncertainty of wind power fluctuations, we give the time-frequency separation characteristics and the corresponding physical parameter expressions, including the corresponding wind speed characteristics. Section 4 establishes the physical connotation of the time-frequency separation characteristics of wind power from the perspective of the atmospheric turbulence energy transport mechanism and the wind turbine energy transfer mechanism. Power spectrum estimation based on Fourier transform [28, 30, 35] The wind signal is an indeterminate random signal. According to stochastic process theory, statistics such as the mean, mean square, correlation function, and power spectral density function can be used to describe the characteristics of a random process or a random signal. Moreover, the spectrum characteristics of an actual signal can be analyzed by using the Fourier transform method. An arbitrary function x(t) can be decomposed into the sum of an infinite number of sinusoidal signals of different frequencies, similar to the way white light is refracted by a prism and dispersed into light of different colors. In this analogy, the Fourier transform method plays the role of the triangular prism in spectral analysis, the signal x(t) corresponds to a beam of white light, and the "spectrum" of the signal obtained by Fourier analysis of x(t) corresponds to the optical spectrum, as shown in Fig. 1. Schematic of the spectrum obtained using the spectral characteristics analysis method Power spectral density (PSD) estimation is an important research subject in digital signal processing. PSD estimation can be divided into classical (non-parametric) estimation and modern (parametric) estimation. Classical methods include the correlation function, periodogram, Bartlett, and Welch periodogram methods. Modern methods include maximum entropy spectral analysis (the AR model), Pisarenko harmonic decomposition, the Prony pole extraction method, Prony spectral line decomposition, and Capon maximum likelihood.
When applying PSD estimation to discrete sampling, the general solutions for the two types of errors and for the trend term problem are as follows: After sampling, the spectral function \(S(n)\) is changed to \(S_T(n)\): $$ {S}_T(n)={\int}_{-\infty}^{\infty }S(f)\frac{\sin \pi T\left(n-f\right)}{\pi \left(n-f\right)} df $$ The modified spectrum \(S_T(n)\) has a false high-frequency component. In general, the larger T is, the closer \(S_T(n)\) is to the true spectrum \(S(n)\); the smaller T is, the larger the difference between \(S_T(n)\) and \(S(n)\), and the greater the impact of leakage. Therefore, the spectrum can be smoothed by a suitable energy window W(n), a weighted-average method that reduces leakage. Two kinds of distortion occur when sampling at equal intervals \(\varDelta t\): On the one hand, the spectrum is reduced from (−∞, ∞) to \( \left(-\frac{n_f}{2},\frac{n_f}{2}\right) \), \( \mid n\mid \ge \frac{n_f}{2} \), where \( {n}_f=\frac{1}{\varDelta t} \). On the other hand, the spectrum becomes a folded spectrum in the reduced range, and the aliasing frequency is \( \frac{n_f}{2}=\frac{1}{2\varDelta t} \). Therefore, we generally choose a suitable \(\varDelta t\) (\( \varDelta t<\frac{1}{2{n}_c} \), where \(n_c\) is the highest frequency and \(\varDelta t\) is the sampling interval) to avoid aliasing. The trend term is a problem that must be carefully considered in turbulence analysis. Large-scale influences usually need to be eliminated through low-pass mathematical filtering, because the correlation function and spectrum obtained with and without detrending differ significantly in the low-frequency part. The Welch periodogram method, an improvement of the periodogram method, solves the above problems well, and it is also the most commonly used PSD estimation method. Its steps are as follows: First, segment the random sequence so that the data segments partially overlap. Let X(j), j = 0, ..., N − 1 be a sample of a second-order random sequence. We take segments, possibly overlapping, of length L whose starting points are D units apart. Let \(X_1(j)\), j = 0, ..., L − 1 be the first segment; then, \(X_1(j) = X(j)\), j = 0, ..., L − 1; likewise, \(X_2(j) = X(j + D)\), j = 0, ..., L − 1, and finally \(X_K(j) = X(j + (K − 1)D)\), j = 0, ..., L − 1. Suppose there are K segments \(X_1(j), ..., X_K(j)\) that cover the entire record, i.e., (K − 1)D + L = N, as shown in Fig. 2. Then, smooth each data segment with a suitable window function and finally average over the segments to obtain the desired power spectrum. Description of record segmentation (RS) We calculate the modified periodogram of each segment of length L. We select a data window W(j), j = 0, ..., L − 1 to form the sequences \(X_1(j)W(j), ..., X_K(j)W(j)\), and take their finite Fourier transforms \(A_1(n), ..., A_K(n)\): $$ {A}_k(n)=\frac{1}{L}\sum \limits_{j=0}^{L-1}{X}_k(j)W(j){e}^{-\frac{2\pi ijn}{L}} $$ where \(i^2 = −1\). Finally, the K modified periodograms are obtained: $$ {I}_k\left({f}_n\right)=\frac{L}{U}{\left|{A}_k(n)\right|}^2,k=1,2,...,K $$ Here, \( {f}_n=\frac{n}{L},n=0,...,\frac{L}{2} \), and \( U=\frac{1}{L}\sum \limits_{j=0}^{L-1}{W}^2(j) \). The spectral estimate is the average of these periodograms, i.e., $$ \hat{P}\left({f}_n\right)=\frac{1}{K}\sum \limits_{k=1}^K{I}_k\left({f}_n\right) $$ Now, we can prove that $$ E\left\{\hat{P}\left({f}_n\right)\right\}={\int}_{-\frac{1}{2}}^{\frac{1}{2}}h(f)P\left(f-{f}_n\right) df $$ Here, \( h(f)=\frac{1}{LU}{\left|\sum \limits_{j=0}^{L-1}W(j){e}^{2\pi ifj}\right|}^2 \) and \( {\int}_{-\frac{1}{2}}^{\frac{1}{2}}h(f) df=1 \). Thus, we obtain a spectral estimate \( \hat{P}(f) \) for which the area of the composite spectral window is unity and whose width is 1/L.
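As a concrete illustration, the Welch estimate described above can be computed with SciPy as in the following sketch; the file name, segment length, overlap, and window are illustrative assumptions rather than the settings used for the results reported later.

```python
import numpy as np
from scipy.signal import welch

fs = 1.0 / 5.0                           # 5-s sampling period, as in our data
x = np.loadtxt("wind_power.txt")         # hypothetical file of power samples

# overlapping Hann-windowed segments, averaged periodograms
f, pxx = welch(x, fs=fs, window="hann", nperseg=4096, noverlap=2048)

# slope check against Kolmogorov's -5/3 law in an illustrative band
band = (f > 1e-3) & (f < 1e-2)
slope = np.polyfit(np.log(f[band]), np.log(pxx[band]), 1)[0]
```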
When using Fourier analysis on a random signal, the local features of the time domain signal cannot be characterized, and the Gibbs effect is produced. At the same time, it does not work well for either abrupt or non-stationary signals. Wavelet Mallat algorithm with compact support [36, 37] The wavelet transform (WT) can analyze a signal with a set of basis functions whose analysis width is constantly changing. This change adapts to the basic requirement of signal analysis that different resolutions are required in different frequency ranges. Moreover, wavelet analysis has the characteristics of time-domain and frequency-domain localization, multiresolution, and an adjustable time-frequency window, which makes it an effective time-frequency multi-scale analysis tool for signal processing. Therefore, the envelope of the signal in the desired frequency range can be extracted by the wavelet envelope analysis method without the need for a filter. At the same time, the Mallat wavelet algorithm has more advantages than other wavelet algorithms. Let x(t) ∈ L2(R), ψ(t) ∈ L2(R), and let ψ(t) satisfy the admissibility condition: $$ {C}_{\psi }={\int}_{-\infty}^{+\infty}\frac{{\left|\hat{\psi}\left(\omega \right)\right|}^2}{\mid \omega \mid } d\omega <+\infty $$ Then, the continuous WT is defined as $$ {\mathrm{WT}}_x\left(a,b\right)={\int}_{-\infty}^{+\infty }x(t)\overline{\psi}\left(\frac{t-b}{a}\right) dt,a\ne 0 $$ or, denoted as an inner product, $$ {\mathrm{WT}}_x\left(a,b\right)=\left\langle x,{\psi}_{a,b}\right\rangle $$ where a is the scale parameter, which controls the dilation and contraction of the wavelet function, and b is the translation parameter, which controls the translation of the wavelet function. The scale a corresponds to frequency (in inverse proportion) and the shift b corresponds to time. \(\psi_{a,b}(t)\) satisfies \( {\psi}_{a,b}(t)=\frac{1}{\sqrt{a}}\psi \left(\frac{t-b}{a}\right) \), and \(\psi_{a,b}(t)\) is finitely supported in the time domain. Then, the dot product of \(\psi_{a,b}(t)\) and x(t) is calculated. The WT is thus also finitely supported in the time domain, which achieves the time-domain positioning function. \( {\hat{\psi}}_{a,b}\left(\omega \right) \) has a band-pass characteristic, i.e., it is finitely supported around the center frequency in the frequency domain. Moreover, after calculating the dot product of \( {\hat{\psi}}_{a,b}\left(\omega \right) \) and X(ω), X(ω) will also reflect the local features at the center frequency of the window, thus achieving the desired frequency positioning function. In this way, we can obtain the instantaneous power spectrum characteristics of the signal. When applying the WT, we usually use a discretization process. Simultaneously, in order to obtain variable time and frequency resolution with the WT, it is necessary to change the values of a and b. In practical applications, this is usually achieved by using binary dynamic sampling: \(a_0 = 2\), \(b_0 = 1\). The discrete wavelet is then $$ {\psi}_{j,k}(t)={2}^{-j/2}\psi \left({2}^{-j}t-k\right) $$ This is the binary (dyadic) wavelet. However, the discrete wavelet still has a certain degree of redundancy. We require that the wavelet family {ψj, k} be an orthogonal basis, thus achieving non-redundant expansion and reconstruction. Equal resolution in the frequency domain is an inherent characteristic of the short-time Fourier transform, while multiresolution is an inherent characteristic of the WT.
However, the multi-scale analysis theory provides the most effective way to solve this problem. In simple terms, multi-scale analysis represents a function x(t) as the limit of a sequence of approximation functions in the space L2(R). A series of gradually refined approximation functions is obtained by smoothly approximating the function x(t) under different scale conditions; hence the name multi-scale analysis. Multi-scale analysis in the space L2(R) can be understood as the construction of a sequence of subspaces {Vj, j ∈ Z} of L2(R) with monotonicity (\(V_j \subset V_{j-1}\), ∀ j ∈ Z), approximation (\( \mathrm{close}\left\{\underset{j=-\infty }{\overset{+\infty }{\cup }}{V}_j\right\}={L}^2(R) \), \( \underset{j=-\infty }{\overset{+\infty }{\cap }}{V}_j=\left\{0\right\} \)), scaling (ϕ(t) ∈ Vj ⇔ ϕ(2t) ∈ Vj − 1), translation invariance (ϕ(t) ∈ Vj ⇔ ϕ(t − \(2^j\)k) ∈ Vj, k ∈ Z), and the existence of a Riesz basis (there exists ϕ(t) ∈ V0 such that {ϕ(t − k), k ∈ Z} constitutes a Riesz basis of V0). Theorem: If Vj(j ∈ Z) represents a multi-scale approximation in the space L2(R), there must be a unique function ϕ(t) ∈ L2(R) such that $$ {\phi}_{j,k}={2}^{-j/2}\phi \left({2}^{-j}t-k\right),\kern0.84em k\in Z $$ represents an orthonormal basis of Vj. ϕ(t) is called the scaling function. Simultaneously, define the wavelet subspace formed by the wavelet function ψ(2−jt) as $$ {W}_j=\mathrm{close}\left\{{\psi}_{j,k}:k\in Z\right\},j\in Z $$ In the process of constructing an orthogonal wavelet basis, we should ensure that $$ {V}_{j-1}={V}_j\oplus {W}_j,\kern0.6em \forall j\in Z $$ $$ {V}_j\perp {W}_j $$ holds for all j ∈ Z. The ⊕ in the formula represents the "orthogonal sum." Here, the subspaces Vj and Wj can be regarded as complementary subspaces of Vj − 1. Wj, the orthogonal complement of Vj in Vj − 1, is called the wavelet space of scale j. It is clear that V0 = V1 ⊕ W1 = V2 ⊕ W2 ⊕ W1 = … = VN ⊕ WN ⊕ WN − 1 ⊕ ⋯ ⊕ W2 ⊕ W1. If xj ∈ Vj is an approximation of the function x ∈ L2(R) with a resolution of \(2^{-j}\) and dj ∈ Wj is the approximation error, then the above equation can be expressed as $$ {x}_0={x}_1+{d}_1={x}_2+{d}_2+{d}_1=\dots ={x}_N+{d}_N+{d}_{N-1}+\dots +{d}_2+{d}_1 $$ If x ≈ x0, the above formula can be written as $$ x\approx {x}_0={x}_N+\sum \limits_{i=1}^N{d}_i $$ This implies that any signal x ∈ L2(R) can be completely reconstructed from the approximation of the signal at the coarsest scale and the approximation errors at the different scales. The Mallat algorithm is based on the above ideas of multi-scale analysis. Assuming a function f(t) ∈ Vj − 1, the function can be expanded in the space Vj − 1 as $$ f(t)=\sum \limits_k{c}_{j-1,k}{2}^{\left(-j+1\right)/2}\phi \left({2}^{-j+1}t-k\right) $$ Decomposing f(t) and projecting it onto the Vj and Wj spaces gives $$ f(t)=\sum \limits_k{c}_{j,k}{2}^{-j/2}\phi \left({2}^{-j}t-k\right)+\sum \limits_k{d}_{j,k}{2}^{-j/2}\psi \left({2}^{-j}t-k\right) $$ where cj, k represents the scale (approximation) coefficient at scale j and dj, k represents the detail coefficient at scale j.
In addition, the following conditions need to be satisfied: $$ {c}_{j,k}=\left\langle f(t),{\phi}_{j,k}(t)\right\rangle =\underset{R}{\int }f(t){2}^{-j/2}\overline{\phi \left({2}^{-j}t-k\right)} dt $$ $$ {d}_{j,k}=\left\langle f(t),{\psi}_{j,k}(t)\right\rangle =\underset{R}{\int }f(t){2}^{-j/2}\overline{\psi \left({2}^{-j}t-k\right)} dt $$ After a series of derivations, $$ {\displaystyle \begin{array}{l}{c}_{j,k}=\sum \limits_nh\left(n-2k\right){c}_{j-1,n}\\ {}{d}_{j,k}=\sum \limits_ng\left(n-2k\right){c}_{j-1,n}\end{array}} $$ This is the pyramid algorithm of the Mallat decomposition. Similarly, the reconstruction algorithm of Mallat is $$ {c}_{j-1,m}=\sum \limits_k{c}_{j,k}h\left(m-2k\right)+\sum \limits_k{d}_{j,k}g\left(m-2k\right) $$ According to the above theory, the center frequency of the wind power signal can be obtained by using the compactly supported wavelet to extract the envelope. Compared with the Fourier transform, the wavelet transform provides localized time-frequency analysis: through dilation and translation of the analyzing function, the signal is progressively refined at multiple scales, achieving fine time resolution at high frequencies and fine frequency resolution at low frequencies. It thus automatically adapts to the requirements of time-frequency signal analysis and can focus on arbitrary details of the signal, overcoming the main limitation of the Fourier transform. For this reason, the wavelet transform is regarded as an important methodological breakthrough following the Fourier transform. Discussion and results Instantaneous spectral characteristic extraction of wind power We use one year of measured data from a wind farm in Inner Mongolia, China. The wind farm has 100 Vestas V80-2000 wind turbines with a rated power of 2000 kW, and the sampling period is 5 s. First, based on the output power signal of single wind turbines, we performed conventional power spectral density estimation and extracted the power spectrum with the instantaneous characteristics of the signal. As shown in Fig. 3, the result is similar to that obtained using the Welch periodogram algorithm (based on the Fourier transform), and it is in agreement with the classic Kolmogorov "− 5/3 law" [28]. This demonstrates the soundness of extracting the instantaneous spectral characteristics of wind power signals with the compactly supported wavelet Mallat algorithm. We analyzed the data of several other wind turbines in this wind farm and obtained consistent results. Energy spectrum of wind power On this basis, we obtained the law for the change in center frequency with average power, as shown in Fig. 4. It can be seen that the center frequency tends to a constant as the power of the wind turbine increases. Furthermore, power spectral density estimation was performed on the measured wind power of all wind turbines of the wind farm, and the obtained results are consistent, showing that this characteristic also applies to the power signal of the entire farm. Relationship between spectral characteristic and average power Time and frequency separation characteristics of wind power Similarly, we explore the joint characteristics of wind power in the time and frequency domains, because few studies associate the time domain and the frequency domain of wind power. We used the multiresolution wavelet Mallat algorithm to analyze the measured wind power data in the time domain.
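A minimal sketch of this Mallat-based analysis, using the third-party PyWavelets package, is given below; the wavelet family, the decomposition level, and the file name are illustrative choices rather than the exact settings of our experiments.

```python
import numpy as np
import pywt

x = np.loadtxt("wind_power.txt")                  # hypothetical power samples
coeffs = pywt.wavedec(x, "db4", level=6)          # Mallat pyramid decomposition
c_a, details = coeffs[0], coeffs[1:]              # c_{j,k} and the d_{j,k}

# reconstruct the slowly varying mean power from the approximation only;
# the detail coefficients then carry the fluctuating (turbulence-driven) part
mean_part = pywt.waverec([c_a] + [np.zeros_like(d) for d in details], "db4")
fluct_part = x - mean_part[: len(x)]
```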
We could obtain the dependence between the uncertainty part and the average power of the wind power signal, which can be described by the quantitative characterization model [14, 15]: $$ {I}_w=\frac{\sigma_w}{\overline{W}} $$ Here, σw represents the standard deviation of the wind power and \( \overline{W} \) represents the average wind power. Iw represents the range within which real-time wind power fluctuates around the average wind power output. As shown in Fig. 5, from the power generation signals of multiple turbines we obtain a universal three-parameter power-law model for a single machine (and for the full-farm turbines). Moreover, as the average power of a single (full-farm) wind turbine increases, the wind power fluctuation intensity tends to remain substantially unchanged. The detailed data processing, calculation, and fitting process can be found in [14, 15]. Relationship between wind power uncertainty and the average value It can be seen from Figs. 3 and 4 that the wind power has a time-frequency separation characteristic, which can be described by the following formula: $$ S\left(f,t\right)={\sigma}^2\left[\overline{P}(t)\right]{S}_0\left(\raisebox{1ex}{$f$}\!\left/ \!\raisebox{-1ex}{${f}_0$}\right.\right) $$ where f is the real-time frequency of the wind speed, t is the time, and f0 is the center frequency. The wind power spectrum characteristics have many practical applications, one of which is shown in [15]. However, in [15], the conventional statistical power spectrum is used; the application of the instantaneous power spectrum remains to be studied. Energy conversion mechanism of wind turbines As shown in Fig. 6, the wind turbine power signal results from an energy conversion of the wind speed, which is driven by complex turbulence. Therefore, we must first understand the spectral characteristics of wind speed in order to understand the physical mechanism. Schematic of the wind turbine energy conversion process [29] According to the above method, we analyzed the wind speed data of this wind farm for the four quarters of the year and obtained the dependency between the hourly average wind speed and the turbulent wind speed fluctuations. Furthermore, we analyzed the instantaneous spectrum characteristics of the wind speed and obtained the relationship between the center frequency and the hourly average wind speed, as shown in Fig. 7. Within a certain range, the center frequency of the wind speed increases as the wind speed increases. However, the center frequency f0 remains unchanged when the wind speed is sufficiently large. That is to say, f0 also exhibits a modulation effect, which we define as the frequency modulation (FM) effect. Relationship between spectral characteristic and average wind speed In fact, building on the research of W.E. Leithead, Welfonder used one month of wind speed data from each of two wind farms to further improve the spectrum characteristic model and noted the influence of the average wind speed on the frequency, i.e., as the wind speed increases, the frequency also increases [38].
$$ {G}_F\left( i\omega \right)=\frac{V_F}{{\left(1+ i\omega {\hat{T}}_F\right)}^{5/6}} $$ Here, \( {V}_F\approx \sqrt{\frac{2\pi }{B\left(\frac{1}{2},\frac{1}{3}\right)}\frac{{\hat{T}}_F}{T}} \) and \( {\hat{T}}_F=\frac{L}{{\overline{\nu}}_{\omega }} \), where \( {\overline{\nu}}_{\omega } \) represents the average wind speed, L represents the turbulence length scale, \( \mathrm{B}\left(\frac{1}{2},\frac{1}{3}\right) \) is the beta function, T is the finite sampling time, and \( {f}_0=\raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{${\hat{T}}_F$}\right. \) is the center frequency. Therefore, the wind speed has local characteristics, and its spectral characteristics are shaped by the influence of the complex underlying surface. This is closely related to the energy transport mechanism of atmospheric turbulence. Because the wind turbine has cut-in and cut-out wind speeds, the relationship between the center frequency and the average wind speed changes, as shown in Fig. 7, and the actual complex conditions modify the turbulence structure. As a result, through the energy conversion of the wind turbine shown in Fig. 6, the wind turbine outputs a power signal with the characteristics shown in Fig. 4. However, the relationship between the average value of the wind speed and its corresponding fluctuating part, shown in Fig. 8, is not changed by the energy conversion of the wind turbine shown in Fig. 6, and there is an amplitude modulation (AM) relationship between the average value of the wind power and its corresponding uncertainty, as shown in Fig. 5. Therefore, the separation characteristics between the time domain and the frequency domain of the wind power signal correspond to the amplitude and frequency modulation characteristics of the wind speed in the wind farm. Relationship between wind speed's average value and its corresponding fluctuation part In addition, calculations on the whole wind farm data show that the wind farm power output also has a similar AM character. In the future, more cases will be further studied from the perspective of full-farm power. Atmospheric turbulent energy transport mechanism In general, the spectrum of the turbulent motion of the atmospheric boundary layer includes the energetic region of large-scale turbulence, the inertial sub-region of small-scale turbulence, and the dissipative region. The uncertainty of wind speed fluctuations is caused by turbulence. Turbulence consists of eddies of widely differing scales. The energy of the largest-scale turbulence eddies comes directly from the work of the Reynolds stress in the mean flow field and from the work of buoyancy in the atmospheric boundary layer. The energy obtained from outside by the large-scale turbulence eddies is transferred to the smaller-scale eddies in stages and is finally dissipated at the smallest-scale eddies. In the process of this cascade transmission, the small-scale vortices reach a statistical equilibrium state and no longer depend on the external conditions that generate the turbulence, thus forming the so-called locally uniform isotropic turbulence. Figures 9 and 10 show the cascade transport mechanism of the turbulence energy flow; the downward arrows indicate the dissipation of the turbulent energy. Cascade transport mechanism of turbulence flow energy [39, 40] Fig. 10 The full development of wind turbulence [19] Therefore, the time-frequency separation characteristics of the wind power signal derive from the transport and conservation of turbulence energy.
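The AM and FM effects discussed in this section can be summarized numerically as in the following hedged sketch, which bins a measured series by hour and computes, per bin, the mean, the fluctuation intensity \(I_w\) (the AM side), and a spectral-centroid estimate of the center frequency \(f_0\) (the FM side); the binning and the centroid definition are illustrative assumptions, not the exact procedure behind Figs. 4, 5, and 7.

```python
import numpy as np
from scipy.signal import welch

def am_fm_stats(x, fs, block):
    """x: wind speed or power samples; fs: sampling rate (Hz);
    block: samples per bin (e.g., one hour of 5-s samples = 720)."""
    rows = []
    for i in range(0, len(x) - block + 1, block):
        seg = x[i:i + block]
        mean = seg.mean()
        i_w = seg.std() / mean                    # fluctuation intensity (AM)
        f, p = welch(seg - mean, fs=fs, nperseg=min(256, block))
        f0 = (f * p).sum() / p.sum()              # spectral centroid (FM proxy)
        rows.append((mean, i_w, f0))
    return np.array(rows)   # columns: average, I_w, center-frequency estimate
```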
Conclusions

The power spectrum of the active wind power is an important manifestation of the wind power fluctuation characteristics. In addition, the instantaneous power spectrum characteristics are important for the real-time scheduling and optimal control of power systems with new energy sources. We studied the relationship between the time domain and the frequency domain of wind power by calculating and analyzing the measured data of the wind farm. The following conclusions were drawn:

(1) The Mallat algorithm based on compactly supported wavelets yields the instantaneous spectral characteristics of large volumes of wind power data. Moreover, the obtained power spectrum agrees with Kolmogorov's "−5/3 law" (a toy numerical illustration of this slope check is given after the reference list below).

(2) We observed variations in the center frequency as the wind power changes; the center frequency tends to a constant as the wind turbine power increases. Combining this with the dependence of the wind power variance on its mean, we obtained the time-frequency separation characteristics of wind power and gave an explanatory expression. The separation characteristics of wind power between the time domain and the frequency domain correspond to the amplitude and frequency modulation characteristics of the wind speed.

(3) Combining the dual mechanisms of wind turbine energy conversion and atmospheric turbulence, we obtained the physical meaning of the time-frequency separation of wind power. Within a certain wind speed range, as the atmospheric turbulence grows, the center frequency increases with the wind speed. However, when the wind speed is sufficiently large, i.e., when the atmospheric turbulence has increased to a certain extent, the center frequency remains basically unchanged.

In the future, we can study the spectral characteristics of wind farms in different regions with varying latitudes and longitudes, because the energy transfer mechanism of atmospheric turbulence is significantly affected by the latitude and longitude, the underlying surface, and the topography. This lays a foundation for the safe and efficient grid-connected operation of large-scale wind power.

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations — AM: amplitude modulation; AR: autoregressive; FFT: fast Fourier transform; FM: frequency modulation; PM: phase modulation; PSD: power spectral density; RS: record segmentation; WT: wavelet transform.

References

M. Kenisarin, V.M. Karslı, M. Çağlar, Wind power engineering in the world and perspectives of its development in Turkey. Renew. Sust. Energ. Rev. 10(4), 341–369 (2006)

A.A. Juárez, A.M. Araújo, J.S. Rohatgi, Development of the wind power in Brazil: political, social and technical issues. Renew. Sust. Energ. Rev. 39(6), 828–834 (2014)

Z. He, S. Xu, W. Shen, et al., Overview of the development of the Chinese Jiangsu coastal wind-power industry cluster. Renew. Sust. Energ. Rev. 57, 59–71 (2016)

G. Callegari, P. Capurso, F. Lanzi, et al., Wind power generation impact on the frequency regulation: study on a national scale power system. Wind Energy, 1–6 (2010)

A.S. Brouwer, M. van den Broek, A. Seebregts, et al., Impacts of large-scale intermittent renewable energy sources on electricity systems, and how these can be modeled. Renew. Sust. Energ. Rev. 33, 443–466 (2014)

G. Ren, J. Liu, J. Wan, et al., Overview of wind power intermittency: impacts, measurements, and mitigation solutions. Appl. Energy 204, 47–65 (2017)

W. Wang, S. Jing, Y.
Sun, et al., Combined heat and power control considering thermal inertia of district heating network for flexible electric power regulation. Energy 169, 988–999 (2019)

X. Li, Y. Zou, J. Wan, et al., Study on failure characteristics and solutions for large turbine load oscillation, in 2016 IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC) (IEEE, 2016), pp. 1321–1325

L. Lu, H. Yang, J. Burnett, Investigation on wind power potential on Hong Kong islands—an analysis of wind power and wind turbine characteristics. Renewable Energy 27(1), 1–12 (2014)

J.A. Carta, S. Velázquez, A new probabilistic method to estimate the long-term wind speed characteristics at a potential wind energy conversion site. Energy 36(5), 2671–2685 (2011)

P. Sørensen, A.D. Hansen, P.A.C. Rosas, Wind models for simulation of power fluctuations from wind farms. J. Wind Eng. Ind. Aerodyn. 90(12-15), 1381–1402 (2002)

P. Sørensen, N.A. Cutululis, A. Vigueras-Rodríguez, et al., Power fluctuations from large wind farms. IEEE Trans. Power Syst. 22(3), 958–965 (2007)

P. Sørensen, N.A. Cutululis, A. Vigueras-Rodríguez, et al., Modelling of power fluctuations from large offshore wind farms. Wind Energy 11(1), 29–43 (2008)

G. Ren, J. Liu, J. Wan, et al., The analysis of turbulence intensity based on wind speed data in onshore wind farms. Renewable Energy 123, 756–766 (2018)

Y. Guo, Q. Wang, D. Zhang, et al., A stochastic-process-based method for assessing frequency regulation ability of power systems with wind power fluctuations. J. Environ. Inf. (2018)

J. Liu, G. Ren, J. Wan, et al., Variogram time-series analysis of wind speed. Renewable Energy 99, 483–491 (2016)

X. Yi, X. Zha, Q. Liang, et al., Research on wind power ramp events prediction based on strongly convective weather classification. IET Renew. Power Gener. 11(8), 1278–1285 (2017)

C. Gallego, A. Costa, Á. Cuerva, et al., A wavelet-based approach for large wind power ramp characterisation. Wind Energy 16(2), 257–278 (2013)

G. Ren, J. Liu, J. Wan, et al., Measurement and statistical analysis of wind speed intermittency. Energy 118, 632–643 (2017)

G. Ren, J. Wan, J. Liu, et al., Analysis of wind power intermittency based on historical wind power data. Energy 150, 482–492 (2018)

Q. Hu, S. Zhang, M. Yu, et al., Short-term wind speed or power forecasting with heteroscedastic support vector regression. IEEE Trans. Sustain. Energy 7(1), 241–249 (2017)

R. Girard, D. Allard, Spatio-temporal propagation of wind power prediction errors. Wind Energy 16(7), 999–1012 (2013)

Y. Jie, Y. Liu, H. Shuang, et al., Reviews on uncertainty analysis of wind power forecasting. Renew. Sust. Energ. Rev. 52, 1322–1330 (2015)

M.G. De Giorgi, A. Ficarella, M. Tarantino, Error analysis of short term wind power prediction models. Appl. Energy 88(4), 1298–1311 (2011)

A.N. Kolmogorov, Dissipation of energy in locally isotropic turbulence. Dokl. Akad. Nauk SSSR 32, 15–17 (1941)

D. Ruelle, F. Takens, On the nature of turbulence. Commun. Math. Phys. 20(3), 167–192 (1971)

J.C. Kaimal, L. Kristensen, Time series tapering for short data samples. Boundary-Layer Meteorol. 57(1-2), 187–194 (1991)

J. Apt, The spectrum of power from wind turbines. J. Power Sources 169(2), 369–374 (2007)

N. Tobin, H. Zhu, L.P. Chamorro, Spectral behaviour of the turbulence-driven power fluctuations of wind turbines. J. Turbul. 16(9), 832–846 (2015)

J. Bossuyt, C. Meneveau, J.
Meyers, Wind farm power fluctuations and spatial sampling of turbulent boundary layers. J. Fluid Mech. 823, 329–344 (2017)

S. Frandsen, M.L. Thøgersen, Integrated fatigue loading for wind turbines in wind farms by combining ambient turbulence and wakes. Wind Eng., 327–339 (1999)

K.M. Talluru, R. Baidya, N. Hutchins, I. Marusic, Amplitude modulation of all three velocity components in turbulent boundary layers. J. Fluid Mech. 746 (2014)

S.T. Frandsen, Generated structural loading in wind turbine clusters (2007)

D. Mencarelli, A. Di Donato, T. Rozzi, Analytical study of the optical spectrum shift in a modulating channel. J. Lightwave Technol. 24(2), 1035 (2006)

P. Welch, The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. IEEE Trans. Audio Electroacoust. 15(2), 70–73 (1967)

R.S. Pathak, The Wavelet Transform (Atlantis Press, 2009)

I. Daubechies, The wavelet transform, time-frequency localization and signal analysis. IEEE Trans. Inf. Theory 36(5), 961–1005 (1990)

E. Welfonder, R. Neifer, M. Spanner, Development and experimental identification of dynamic models for wind turbines. Control Eng. Pract. 5(1), 63–73 (1997)

S.B. Pope, Turbulent Flows (Cambridge University Press, 2000)

J.C. Kaimal, J.J. Finnigan, Atmospheric Boundary Layer Flows: Their Structure and Measurement (Oxford University Press, 1994)

Acknowledgements. We thank the editor and the anonymous reviewers for their helpful comments and valuable suggestions. We would also like to acknowledge all our team members.

Funding. This work was financially supported through grants from the Key R&D Project of China under grant number 2016YFB0901900, the State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources (grant no. LAPS19012), the CERNET Innovation Project (no. NGIICS20190801), and the Natural Scientific Research Innovation Foundation of Harbin Institute of Technology (grant no. HIT.NSRIF.2020044).

Author affiliations. Fundamental Space Science Research Center, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China (Jie Wan & E. Peng); Postdoctoral Research Station of Electrical Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China (Jie Wan & Jilai Yu); Shenzhen Research Institute, The University of Hong Kong, Shenzhen, 518057, Guangdong, China (Jie Wan); School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, 518055, China (Kun Yao & Yong Cao); The State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources (North China Electric Power University), Beijing, 102206, China (Yuguang Niu); School of Electrical Engineering & Automation, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China (Jilai Yu)

Author contributions. JW formulated the idea. All authors took part in the discussion of the work described in this paper. KY and PE contributed equally to this work and should be considered co-corresponding authors. All authors read and approved the final manuscript. Correspondence to Kun Yao or E. Peng.

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Wan, J., Yao, K., Peng, E. et al.
Separation characteristics between time domain and frequency domain of wind power signal in wind farm. J Wireless Com Network 2020, 5 (2020). https://doi.org/10.1186/s13638-019-1573-3

Keywords: Power of wind turbine; Mallat; Instantaneous power spectrum; Time domain fluctuation; Time-frequency separation; Turbulent energy transport; Turbine energy transform
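As a postscript to conclusion (1) above: the paper extracts the instantaneous spectrum with a Mallat wavelet decomposition, which is not reproduced here. The following sketch (ours) instead uses the conventional Welch periodogram (cf. the Welch reference above) on synthetic f^(−5/3) noise, merely to illustrate how a "−5/3 law" slope claim can be checked numerically; all parameters are made up.

```python
import numpy as np
from scipy.signal import welch

# Synthesize a signal whose PSD follows f^{-5/3} by shaping white noise in
# the frequency domain, then check the recovered spectral slope.
rng = np.random.default_rng(1)
n, fs = 2**16, 1.0
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-5.0 / 6.0)       # PSD ~ |amplitude|^2 ~ f^{-5/3}
spec = amp * np.exp(2j * np.pi * rng.random(freqs.size))
x = np.fft.irfft(spec, n)

f, Pxx = welch(x, fs=fs, nperseg=4096)
band = (f > 1e-3) & (f < 1e-1)            # fit only inside the shaped band
slope = np.polyfit(np.log(f[band]), np.log(Pxx[band]), 1)[0]
print(f"fitted PSD slope = {slope:.2f} (Kolmogorov -5/3 = {-5/3:.2f})")
```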
Derivative of tan x formula — proof

The derivative of the tangent function with respect to a variable equals the square of the secant. If $x$ is a variable, the tangent function is written as $\tan{x}$, and the differentiation of $\tan{x}$ with respect to $x$ equals $\sec^2{x}$; this can be proved mathematically from first principles.

Express the differentiation of the function in limit form

According to the definition of the derivative, the derivative of a function of $x$ can be written in the following limit form. $\dfrac{d}{dx}{\, f(x)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{f(x+h)-f(x)}{h}}$

If $f{(x)} = \tan{x}$, then $f{(x+h)} = \tan{(x+h)}$. Now, the proof of the differentiation of the $\tan{x}$ function with respect to $x$ can be started from first principles.

$\implies$ $\dfrac{d}{dx}{\, (\tan{x})}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\tan{(x+h)}-\tan{x}}{h}}$

Simplify the function

The difference of the tangent functions in the numerator can be rewritten using the quotient identity relating tangent to sine and cosine.

$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\dfrac{\sin{(x+h)}}{\cos{(x+h)}} -\dfrac{\sin{x}}{\cos{x}}}{h}}$

$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\dfrac{\sin{(x+h)}\cos{x}-\cos{(x+h)}\sin{x}}{\cos{(x+h)}\cos{x}}}{h}}$

$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\sin{(x+h)}\cos{x}-\cos{(x+h)}\sin{x}}{h\cos{(x+h)}\cos{x}}}$

The expression in the numerator is the expansion of the angle-difference identity for the sine function, so it collapses to $\sin{(x+h-x)} = \sin{h}$.

$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\sin{h}}{h\cos{(x+h)}\cos{x}}}$

Now, split the limit of the function into the limit of a product of two functions.

$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \Bigg[\dfrac{\sin{h}}{h}}$ $\times$ $\dfrac{1}{\cos{(x+h)}\cos{x}} \Bigg]$

Apply the product rule of limits to evaluate the limit of the product of the two functions.

$=\,\,\,$ $\Bigg(\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\sin{h}}{h}\Bigg)}$ $\times$ $\Bigg(\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{1}{\cos{(x+h)}\cos{x}}\Bigg)}$

Evaluate the limits of the trigonometric functions

By the standard limit of $\sin{x}/x$ as $x$ approaches $0$, the limit of the first factor equals one as $h$ tends to zero.

$=\,\,\,$ $1 \times \Bigg(\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{1}{\cos{(x+h)}\cos{x}}\Bigg)}$

$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{1}{\cos{(x+h)}\cos{x}}}$

Now, evaluate the limit of the remaining trigonometric function by direct substitution.

$=\,\,\, \dfrac{1}{\cos{(x+0)}\cos{x}}$

$=\,\,\, \dfrac{1}{\cos{x}\cos{x}}$

$\implies$ $\dfrac{d}{dx}{\, (\tan{x})}$ $\,=\,$ $\dfrac{1}{\cos^2{x}}$

The reciprocal of the cosine function equals the secant, by the reciprocal identity of the cosine function.

$\,\,\, \therefore \,\,\,\,\,\,$ $\dfrac{d}{dx}{\, (\tan{x})}$ $\,=\,$ $\sec^2{x}$

Thus, the derivative of the tangent function with respect to a variable equals the square of the secant function, as derived mathematically from first principles.
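As a quick numerical sanity check of the result (not part of the derivation above), the following Python snippet compares a central finite-difference approximation of d/dx tan x with sec²x at a few sample points; the step size h = 1e-6 is an arbitrary choice.

```python
import math

def dtan_numeric(x: float, h: float = 1e-6) -> float:
    # Central difference approximation of d/dx tan(x).
    return (math.tan(x + h) - math.tan(x - h)) / (2 * h)

for x in (0.1, 0.5, 1.0, -0.7):
    exact = 1.0 / math.cos(x) ** 2          # sec^2(x)
    approx = dtan_numeric(x)
    print(f"x={x:+.2f}  sec^2(x)={exact:.8f}  finite diff={approx:.8f}")
```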
Discrete & Continuous Dynamical Systems - A, March 2019, 39(3): 1533-1543. doi: 10.3934/dcds.2018121

Symmetry for an integral system with general nonlinearity

Yingshu Lü and Chunqin Zhou, School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China (corresponding author: Yingshu Lü)

Received July 2017; Revised December 2017; Published April 2018

Abstract: In this paper, we study the radial symmetry of the solution to the following system of integral form:

$$ u_i(x)=\int_{\mathbf{R}^n}\frac{1}{|x-y|^{n-\alpha}}\,f_i(u(y))\,dy,\quad x\in\mathbf{R}^n,\ i=1,\dots,m, \qquad (1) $$

where $0<\alpha<n$ and $u(x)=(u_1(x),u_2(x),\dots,u_m(x))$. Here $f_i(s)\in C^1(\mathbf{R}^m_+)\cap C^0(\overline{\mathbf{R}^m_+})$ $(i=1,2,\dots,m)$ are real-valued functions, nonnegative and monotone nondecreasing with respect to the variables $s_1,\dots,s_m$. We show that the nonnegative solution $u=(u_1,u_2,\dots,u_m)$ is radially symmetric under a general monotonicity condition on $f_i$ which contains the critical and subcritical homogeneous degrees as special cases. The main technique we use is the method of moving planes in an integral form. Since our condition is more general, a more subtle argument is needed to handle the resulting difficulties.

Keywords: Integral system, general nonlinearity, method of moving planes in an integral form, radial symmetry.

Mathematics Subject Classification: Primary: 45G05, 45G15, 45M99.

Citation: Yingshu Lü, Chunqin Zhou. Symmetry for an integral system with general nonlinearity. Discrete & Continuous Dynamical Systems - A, 2019, 39 (3) : 1533-1543. doi: 10.3934/dcds.2018121
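For readers who want a hands-on feel for the integral operator in (1), here is a crude toy sketch of ours (not from the paper): a one-dimensional, single-equation analogue with n = 1, α = 1/2, and f(u) = u², discretized on a truncated grid with an ad hoc regularization of the singular diagonal and a normalization to keep the Picard iteration bounded. It only illustrates that symmetric (even) data remain symmetric under the operator, which is the qualitative content of the radial symmetry result (in 1D, "radial" means even).

```python
import numpy as np

# Toy 1D analogue of system (1): one equation, n = 1, alpha = 0.5, f(u) = u^2.
n, alpha = 1, 0.5
x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]

K = np.abs(x[:, None] - x[None, :]) ** (n - alpha)   # |x - y|^{n - alpha}
np.fill_diagonal(K, dx ** (n - alpha))               # crude regularization of the diagonal
u = np.exp(-x**2)                                    # even (symmetric) initial guess

for _ in range(5):                                   # a few Picard iterations
    u = (1.0 / K) @ (u**2) * dx
    u /= u.max()                                     # normalize to keep the toy bounded

print(f"max |u(x) - u(-x)| after iterations: {np.max(np.abs(u - u[::-1])):.2e}")
```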
Synthesis and crystal structure of $M(hmt)_2(H_2O)_6(NO_3)_2\cdot 4H_2O$ complexes, where $M=Mn^{2+},Co^{2+}$

Chopra, Deepak and Dagur, Pritesh and Prakash, AS and Row, Guru TN and Hegde, MS (2005) Synthesis and crystal structure of $M(hmt)_2(H_2O)_6(NO_3)_2\cdot 4H_2O$ complexes, where $M=Mn^{2+},Co^{2+}$. In: Journal of Crystal Growth, 275 (1-2). e2049-e2053.

A series of M-hmt complexes, where $M=Mn^{2+},Co^{2+}$, were synthesized and studied by single-crystal X-ray diffraction. The cobalt complex crystallizes in the triclinic space group P-1 with a = 9.098(2) \AA, b = 9.390(2) \AA, c = 9.649(2) \AA, $\alpha = 88.3(1)^\circ$, $\beta = 75.6(2)^\circ$, $\gamma = 61.64(3)^\circ$, and Z = 1. The Mn complex crystallizes in the monoclinic space group $P2_1/n$ with a = 9.511(3) \AA, b = 16.232(4) \AA, c = 19.426(5) \AA, $\beta = 90.6(4)^\circ$, and Z = 4. The structure consists of hexacoordinated metal cations with water as the ligand in a slightly distorted octahedral geometry. The organic ligand, hexamethylenetetramine, is not directly coordinated to the metal ion, but its presence stabilizes the molecular assembly because of a rich variety of intermolecular interactions.

Copyright of this article belongs to Elsevier.

A1. Crystal structure; A1. Single-crystal growth; A1. X-ray diffraction; B1. Inorganic compounds; B1. Metals; B1. Organic compounds

Division of Chemical Sciences > Materials Research Centre; Division of Chemical Sciences > Solid State & Structural Chemistry Unit
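For illustration (not part of the abstract), the unit-cell volumes implied by the reported lattice parameters can be computed with the standard triclinic volume formula, which reduces to V = abc·sin β for the monoclinic cell:

```python
import math

def cell_volume(a, b, c, alpha=90.0, beta=90.0, gamma=90.0):
    """Unit-cell volume (in Angstrom^3) from lattice parameters; angles in degrees.

    General triclinic formula:
    V = a*b*c*sqrt(1 - cos^2(alpha) - cos^2(beta) - cos^2(gamma)
                   + 2*cos(alpha)*cos(beta)*cos(gamma))
    """
    ca, cb, cg = (math.cos(math.radians(t)) for t in (alpha, beta, gamma))
    return a * b * c * math.sqrt(1 - ca*ca - cb*cb - cg*cg + 2*ca*cb*cg)

# Co complex, triclinic P-1 (values from the abstract):
print(f"V(Co) = {cell_volume(9.098, 9.390, 9.649, 88.3, 75.6, 61.64):.1f} A^3 (Z = 1)")
# Mn complex, monoclinic P2_1/n (alpha = gamma = 90 degrees):
print(f"V(Mn) = {cell_volume(9.511, 16.232, 19.426, 90.0, 90.6, 90.0):.1f} A^3 (Z = 4)")
```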
Ohm's law for a circuit that has both a voltage source and a current source

A voltage source maintains a specific voltage across its terminals regardless of the circuit's current and resistance, while a current source drives a specific current through itself regardless of the voltage across it and the resistance. When a circuit has both a voltage source and a current source with a load, say a resistor R, how can Ohm's law be applied? For example, consider a circuit with a current source and a voltage source connected in series with a single resistor, in which:

the voltage source supplies 10 V,
the current source supplies 3 A,
the resistance of the resistor is 4 ohms.

How can Ohm's law be applied in this case? If we take the 10 V, the current is 2.5 A, which is 0.5 A short of the 3 A that the current source supplies; if instead we take the current through the resistor to be 3 A, the voltage across it is 12 V, which exceeds what the voltage source supplies. I have seen many circuits with both of these sources without knowing how to apply Ohm's law to them.

current-source ohms-law voltage-source

edited Jan 26 at 9:43 by JRE; asked by aukxn

Can you add a schematic for such a circuit? – Amit Hasan Sep 16 '14 at 13:51
12 volts across the resistor and -2 V across the current source. – Andy aka Sep 16 '14 at 14:30

This only answers one part of your question, but I think it might be the main place where you are misunderstanding:

"How can Ohm's law be applied in this case?"

"Ohm's law" isn't a law for all circuits. It's a description of one particular type of component: the ideal linear resistor. It doesn't matter how your power supplies are arranged in your circuit; Ohm's law has nothing to do with power supplies. The general laws that describe how currents and voltages relate in any circuit are Kirchhoff's current law and Kirchhoff's voltage law. These laws, along with a specific current-voltage relationship for each type of device (one of which is Ohm's law), are the main tools for analyzing circuits. – The Photon

Well, in practice a current source is a voltage source that adjusts itself to deliver the specified current. If you build one of those in real life, it will try to push 3 A through the resistor; for that to happen, the voltage across the resistor must be V = 3·4 = 12 V. Since 10 V already come from the voltage source, the current source must supply the remaining 2 V. This way, the 10 V of the voltage source plus the 2 V of the current source make up the 12 V, and 12/4 = 3 A. – Locked

I understand the circuit you propose is as in the figure. It can be solved by applying the principle of superposition. First, solve for the voltage source with the current source passivated. In this particular case, passivating the current source, the voltage applied to the load is zero, because the load is left in an open circuit. Second, solving for the current source, the voltage source should be passivated (replaced by a short circuit). This allows us to calculate the voltage across the resistor as: $$ V_R = I\cdot R = 3\cdot 4 = 12\mathrm{V} $$ This is consistent with the physical characteristics of a voltage source (low output impedance) and a current source (high output impedance). In practice, the voltage source should be capable of supporting the current set by the current source. – Martin Petrei

it looks more like this: i.imgur.com/F6FwwQF.png .
Still, I don't understand why in the first case it is an open circuit. – aukxn Sep 16 '14 at 15:15
@aukxn I do not see the difference between the schematics... However, passivation of the current source applies in the first case. Passivating a current source corresponds to replacing it with an open circuit. Analyzing the circuit in this condition, there is no voltage present on the resistor. – Martin Petrei Sep 16 '14 at 15:26
Sorry, I'm not a native English speaker; the word "passivation" is too hard for me to understand. Can you explain it in other words? – aukxn Sep 16 '14 at 15:27
@aukxn "Passivation" means nullifying the effect of that source. This is necessary to apply the principle of superposition, where the effect of each source in the circuit is analyzed separately. – Martin Petrei Sep 16 '14 at 15:31

Note that a voltage source's impedance (internal resistance) is zero ohms (ideal) and a current source's impedance is infinite (ideal). So in a circuit, replacing a voltage source with a zero-ohm resistor (a short) is one step toward solving for the voltage across a resistor when a current is known. In your case it is known, since there is a current source. It reduces to one step: apply Ohm's law and calculate the voltage drop across the resistor using $$ V = I * R $$ From "current source" on Wikipedia: "An ideal voltage source [...]. Such a theoretical device would have a zero ohm output impedance in series with the source. A real-world voltage source has a very low, but non-zero output impedance: often much less than 1 ohm. Conversely, a current source [...]. An ideal current source has an infinite output impedance in parallel with the source. A real-world current source has a very high, but finite output impedance. In the case of transistor current sources, impedances of a few megohms (at DC) are typical." (Emphasis mine.) Technically, a current source is a device that guarantees the configured current and does whatever it needs to provide it. – try-catch-finally

Speaking with respect to ideal components: voltage sources supply whatever current is necessary to get the desired voltage; the current can be positive, zero, or even negative. Similarly, current sources supply whatever voltage is necessary to get the desired current, so there is a voltage across the current source. We know from Ohm's law that 3 A through a 4-ohm resistor produces 12 V across the resistor. We also know the voltage across the voltage source and the current through it (3 A). Therefore there must be 2 V across the current source. I did a simple simulation to show this. – Joe Mac, answered Jan 26 at 0:43
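To tie the answers together, here is a small Python sketch (mine, not from any of the answers) that does the series-loop bookkeeping: the ideal current source fixes the loop current, Ohm's law applies only to the resistor, and Kirchhoff's voltage law determines the terminal voltage of the current source.

```python
# Series loop: 10 V source, ideal 3 A current source, 4-ohm resistor.
V_SRC = 10.0   # volts, value from the question
I_SRC = 3.0    # amperes, forced by the ideal current source
R = 4.0        # ohms

I_loop = I_SRC                 # a series current source fixes the loop current
V_R = I_loop * R               # Ohm's law applies only to the resistor: 12 V
V_cs = V_R - V_SRC             # KVL: the current source supplies the remainder (2 V)

assert abs(V_SRC + V_cs - V_R) < 1e-12   # loop voltages balance
print(f"I = {I_loop} A, V_R = {V_R} V, V across current source = {V_cs} V")
```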
Long-time behavior of solutions for a fractional diffusion problem

Ailing Qi, Die Hu & Mingqi Xiang (ORCID: orcid.org/0000-0002-0712-7149)

This paper deals with the asymptotic behavior of solutions to the initial-boundary value problem of the following fractional p-Kirchhoff equation: $$ u_{t}+M\bigl([u]_{s,p}^{p}\bigr) (-\Delta )_{p}^{s}u+f(x,u)=g(x)\quad \text{in } \Omega \times (0, \infty ), $$ where \(\Omega \subset \mathbb{R}^{N}\) is a bounded domain with Lipschitz boundary, \(N>ps\), \(0< s<1<p\), \(M:[0,\infty )\rightarrow [0,\infty )\) is a nondecreasing continuous function, \([u]_{s,p}\) is the Gagliardo seminorm of u, \(f:\Omega \times \mathbb{R}\rightarrow \mathbb{R}\) and \(g\in L^{2}(\Omega )\). Under general assumptions on f and g, we prove the existence of global attractors in proper spaces. Then, we show that the fractal dimension of the global attractors is infinite provided certain conditions are satisfied.

Introduction and main results

Let \(s\in (0,1)\) and let Ω be a bounded domain in \(\mathbb{R}^{N}\) (\(N>2s\)) with smooth boundary ∂Ω. We consider the asymptotic behavior of solutions to the following fractional p-Kirchhoff problem: $$\begin{aligned} \textstyle\begin{cases} u_{t}+M([u]_{s,p}^{p})(-\Delta )_{p}^{s}u+f(x,u)=g(x) &\text{in } \Omega \times \mathbb{R}^{+}, \\ u=0 &\text{on } (\mathbb{R}^{N}\setminus \Omega ) \times \mathbb{R}^{+}, \\ u(x,0)=u_{0}(x) &\text{in } \Omega , \end{cases}\displaystyle \end{aligned}$$ (1.1) where \(u_{0}\in L^{2}(\Omega )\), \(g\in L^{2}(\Omega )\), \([u]_{s,p}^{p}=\iint _{\mathcal{Q}} \frac{|u(x,t)-u(y,t)|^{p}}{|x-y|^{N+ps}}\,dx \,dy\) with \(\mathcal{Q}=\mathbb{R}^{2N}\setminus (\mathcal{C}\Omega \times \mathcal{C}\Omega )\) and \(\mathcal{C}\Omega =\mathbb{R}^{N}\setminus \Omega \), \(M:\mathbb{R}^{+}_{0}\rightarrow \mathbb{R}^{+}_{0}\) is a continuous function, and \((-\Delta )_{p}^{s}\) is the fractional p-Laplacian, which (up to normalization factors) may be defined by $$\begin{aligned} (-\Delta )_{p}^{s}\varphi (x)=2\lim_{R\rightarrow 0^{+}} \int _{ \mathbb{R}^{N}\setminus B_{R}(x)} \frac{ \vert \varphi (x)-\varphi (y) \vert ^{p-2}(\varphi (x)-\varphi (y))}{ \vert x-y \vert ^{N+ps}} \,dy \end{aligned}$$ for all \(\varphi \in C_{0}^{\infty }(\mathbb{R}^{N})\), where \(1< p<\frac{N}{s}\) and \(s\in (0,1)\). Henceforward \(B_{R}(x)\) denotes the ball of \(\mathbb{R}^{N}\) centered at \(x\in \mathbb{R}^{N}\) with radius \(R>0\). We assume that \(f:\Omega \times \mathbb{R}\rightarrow \mathbb{R}\) is a Carathéodory function satisfying:

\((f_{1})\): there exists a positive constant λ such that $$\begin{aligned} \bigl(f(x,\xi _{1})-f(x,\xi _{2})\bigr) (\xi _{1}-\xi _{2})\geq -\lambda \vert \xi _{1}- \xi _{2} \vert ^{2} \end{aligned}$$ (1.2) for any \((x,t)\in \Omega \times \mathbb{R}^{+}\) and \(\xi _{1},\xi _{2}\in \mathbb{R}\);

\((f_{2})\): there exist positive constants c, \(c_{1}\), \(c_{2}\) such that $$\begin{aligned} c_{1} \vert \xi \vert ^{q}-c\leq f(x,\xi )\xi \leq c_{2} \vert \xi \vert ^{q}+c\quad \text{for any } (x,t)\in \Omega \times \mathbb{R}^{+} \text{ and } \xi \in \mathbb{R}, \end{aligned}$$ (1.3) where q satisfies \(2\leq q<\infty \).

A typical example of f is given by \(f(x,\xi )=a(x)|\xi |^{q-2}\xi -b(x)|\xi |^{r-2}\xi \) for all \(x\in \Omega \) and \(\xi \in \mathbb{R}\), with \(2< r< q<\infty \), where a, b are two bounded continuous functions on Ω. Throughout this paper, we always assume that \(s\in (0,1)\), \(N>2s\), and that \(u_{0}\in L^{2}(\Omega )\cap W_{0}\) is a nonzero function.
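To make the singular-integral definition of \((-\Delta )_{p}^{s}\) concrete, the following toy Python sketch (ours, purely illustrative and not part of the analysis below) evaluates the operator pointwise in dimension N = 1 for a smooth test function; p = 2 (the usual fractional Laplacian) and s = 0.4 are chosen so that the truncated principal-value integral converges comfortably, and all numerical parameters are arbitrary.

```python
import numpy as np

# Toy pointwise evaluation of (-Delta)_p^s phi in dimension N = 1, using the
# truncated singular integral from the definition above.
p, s = 2.0, 0.4
phi = lambda x: np.exp(-x**2)                       # smooth, rapidly decaying test function

def frac_p_laplacian(x0, eps=1e-3, R=50.0, n=1_000_001):
    y = np.linspace(x0 - R, x0 + R, n)
    dy = y[1] - y[0]
    y = y[np.abs(y - x0) > eps]                     # principal-value truncation |y - x0| > eps
    d = phi(x0) - phi(y)
    integrand = np.abs(d) ** (p - 2.0) * d / np.abs(x0 - y) ** (1.0 + p * s)
    return 2.0 * np.sum(integrand) * dy             # exponent N + ps = 1 + ps in 1D

for x0 in (0.0, 0.5, 1.5):
    print(f"(-Delta)_p^s phi({x0}) ~ {frac_p_laplacian(x0):+.4f}")
```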
For the coefficient function M, we assume that:

\((M_{1})\): \(M:[0,\infty )\rightarrow (0,\infty )\) is a nondecreasing and continuous function and there exists \(m_{0}>0\) such that \(M(\tau )\geq m_{0}\) for all \(\tau \geq 0\).

In particular, if \(p=2\), then \((-\Delta )_{p}^{s}\) reduces to the fractional Laplace operator \((-\Delta )^{s}\); see [5] and the references cited there. For the basic properties of the fractional p-Laplacian, we refer the readers to [13, 16, 17]. Problem (1.1) belongs to a class of nonlocal fractional diffusion problems and is relevant in anomalous diffusion theory. More precisely, if \(p=2\) and \(M\equiv 1\), then, as stated in [7], the function \(u(x,t)\) is thought of as the density of a population at the point x and time t, the term \((-\Delta )^{s}u(x,t)\) represents the diffusion of the density, \(f(x,u)\) is the source term, g is the perturbation term, and the coefficient \(M:[0,\infty )\rightarrow (0,\infty )\) describes possible changes of the total population in the whole space. This means that the behavior of individuals is subject to the total population, as in the diffusion process of bacteria; see [21, 27]. It is worth mentioning that the interest in fractional problems goes beyond mathematical curiosity. Actually, the study of problems like (1.1) has many significant applications, as explained by Caffarelli in [2, 3], Laskin in [15], and Vázquez in [31]. Very recently, Fiscella and Valdinoci [9] first proposed a stationary Kirchhoff variational equation which models the nonlocal aspect of the tension of a string. Indeed, the stationary counterpart of problem (1.1) is a fractional version of the stationary Kirchhoff equation introduced by Kirchhoff in [14]. The body of literature on elliptic problems involving nonlocal operators is quite large; here we just collect a few works, see [1, 8, 9, 19–22, 26, 28, 33, 36] and the references cited there.

To the best of our knowledge, only a few papers deal with the asymptotic behavior of solutions for problems like (1.1). Pucci, Xiang, and Zhang [27] discussed the initial-boundary value problem of the following equation: $$\begin{aligned} u_{t}+M \bigl([u]_{s,p}^{p} \bigr) (-\Delta )_{p}^{s}u=f(x,t) \quad \text{in } \Omega \times (0, \infty ), \end{aligned}$$ (1.4) where \(M:[0,\infty )\rightarrow (0,\infty )\) is a continuous and nondecreasing function. Using the sub-differential approach, the authors obtained the well-posedness of solutions for (1.4). Moreover, the large-time behavior and extinction properties of solutions were given. In [25], Pan, Zhang, and Cao studied the initial-boundary value problem of the following fractional p-Kirchhoff equation: $$\begin{aligned} u_{t}+[u]_{s,p}^{(\lambda -1)p}(-\Delta )_{p}^{s}u= \vert u \vert ^{q-2}u \quad \text{in } \Omega \times (0,\infty ), \end{aligned}$$ where \(1\leq \lambda < N/(N-sp)\) and \(p< q< Np/(N-sp)\) with \(1< p< N/s\). The existence of a global solution was obtained by employing the Galerkin method and the potential well theory.
Xiang, Rădulescu, and Zhang [21] studied the initial-boundary value problem of the following fractional Kirchhoff equation: $$\begin{aligned} u_{t}+M\bigl([u]_{s,2}^{2}\bigr) (-\Delta )^{s}u= \vert u \vert ^{p-2}u \quad \text{in } \Omega \times (0,\infty ), \end{aligned}$$ where \(M:[0,\infty )\rightarrow (0,\infty )\) is continuous, there exist \(m_{0}>0\) and \(\theta >1\) such that \(M(\sigma )\geq m_{0}\sigma ^{\theta -1}\) for all \(\sigma \geq 0\), and M also satisfies the following:

\((M)\): There exists \(\theta \in [1,p_{s}^{*}/p)\) such that \(tM(t)\leq \theta \mathscr{M}(t)\) for all \(t\geq 0\), where \(\mathscr{M}(t)=\int _{0}^{t} M(\tau )\,d\tau \).

Under suitable assumptions, the authors obtained the local existence of nonnegative solutions by applying the Galerkin method and proved that the local nonnegative solutions blow up in finite time. In [34], Xiang and Yang studied the initial-boundary value problem of the following equation: $$\begin{aligned} u_{t}+M\bigl([u]_{s,p}^{p}\bigr) (-\Delta )_{p}^{s}u=\lambda \vert u \vert ^{q-2}u-\mu \vert u \vert ^{r-2}u. \end{aligned}$$ The authors gave sufficient conditions under which the solutions of the above equation vanish in finite time. Moreover, the non-extinction of solutions was also investigated. Very recently, Wang and Huang [32] considered a weakly damped fractional nonlinear Schrödinger equation on the real line \(\mathbb{R}\): $$\begin{aligned} u_{t}-i(-\Delta )^{s}u+i \vert u \vert ^{2}u+\gamma u=f(x),\quad u(x,0)=u_{0}(x), \end{aligned}$$ (1.5) where \(s\in (1/2,1)\), \(\gamma >0\), and \(f\in L^{2}(\mathbb{R})\). The authors proved that (1.5) possesses a global attractor of finite fractal dimension. In [12], Hurtado studied the following initial-boundary value problem for a fractional Laplacian equation with variable exponents: $$\begin{aligned} u_{t}+(-\Delta )_{p(\cdot )}^{s}u=f(x,u), \end{aligned}$$ (1.6) where \((-\Delta )_{p(\cdot )}^{s}\) is the fractional \(p(\cdot )\)-Laplacian defined as $$\begin{aligned} (-\Delta )_{p(\cdot )}^{s}u(x)=2\lim_{R\rightarrow 0^{+}} \int _{\mathbb{R}^{N} \setminus B_{R}(x)} \frac{ \vert u(x)-u(y) \vert ^{p(x,y)-2}(u(x)-u(y))}{ \vert x-y \vert ^{N+p(x,y)s}} \,dy \end{aligned}$$ and \(p\in C(\overline{\Omega }\times \overline{\Omega })\). The author established the well-posedness of solutions by using the techniques of monotone operators. Moreover, the author obtained the existence of global attractors under suitable assumptions. When \(s=1\) and \(M\equiv 1\), the equation in (1.1) reduces to the following equation: $$\begin{aligned} u_{t}-\Delta _{p}u+f(x,u)=g(x)\quad \text{in } \Omega , \end{aligned}$$ (1.7) where \(\Delta _{p}u={\mathrm{div}}(|\nabla u|^{p-2}\nabla u)\) is the p-Laplacian. In [37], Yang et al. obtained the existence of global attractors for (1.7). Indeed, the existence of global attractors is an important asymptotic property of solutions of parabolic equations, which has been studied extensively by many researchers; see for example [6, 10, 23, 24, 38].

In this paper, inspired by the above-mentioned works, we discuss the existence of global attractors and the fractal dimension of solutions for problem (1.1) involving the fractional p-Laplacian and a nonlocal diffusion coefficient. Motivated by [23, 37], we first study the existence and uniqueness of solutions by using the Galerkin method. Then we establish the existence of global attractors in proper spaces and study the fractal dimension of the global attractors.
Since problem (1.1) contains a nonlocal coefficient M, in order to get the uniqueness of solutions, we impose a monotonicity assumption on M. However, to the best of our knowledge, there are no results on the existence of global attractors for problem (1.1) in the literature. To introduce our main results, we first give the definition of (weak) solutions; see [25, 27].

Definition 1.1. For any fixed \(T>0\), a function \(u\in L^{p}(0,T;W_{0})\cap C(0,T;L^{2}(\Omega ))\cap L^{q}( \Omega \times (0,T))\) is said to be a (weak) solution of problem (1.1) if \(u_{t} \in L^{2}(0,T;L^{2} (\Omega ))\) and, for a.e. \(t\in (0,T)\), $$\begin{aligned} \int _{\Omega }u_{t}\varphi \,dx+M\bigl([u]_{s,p}^{p} \bigr)\langle u,\varphi \rangle _{W _{0}}+ \int _{\Omega }f(x,u)\varphi \,dx= \int _{\Omega }g(x) \varphi \,dx \end{aligned}$$ (1.8) for all \(\varphi \in W_{0}\), where $$ \langle u,\varphi \rangle _{W _{0}}= \iint _{\mathcal{Q}} \frac{ \vert u(x,t)-u(y,t) \vert ^{p-2}[u(x,t)-u(y,t)][\varphi (x)-\varphi (y)]}{ \vert x-y \vert ^{N+sp}}\,dx \,dy, $$ and \(W_{0}\) is the fractional Sobolev space which will be introduced in Sect. 2.

Theorem 1.1. Assume that \(2\leq p< N/s\), M satisfies \((M_{1})\), f fulfils \((f_{1})\)–\((f_{2})\) with \(2\leq q< Np/(N-sp)\), and \(g\in L^{2}(\Omega )\). Then the semigroup associated with problem (1.1) admits a global attractor \(\mathscr{A}_{q}\) in \(L^{q}(\Omega )\), i.e., \(\mathscr{A}_{q}\) is compact and invariant in \(L^{q}(\Omega )\) and attracts every bounded subset of \(L^{2}(\Omega )\) in the norm topology of \(L^{q}(\Omega )\).

Remark. The monotonicity assumption on M is needed only to obtain the uniqueness of solutions.

Finally, we consider the following problem: $$\begin{aligned} \textstyle\begin{cases} u_{t}+M([u]_{s,p}^{p})(-\Delta )_{p}^{s}u+ \vert u \vert ^{q-2}u= \vert u \vert ^{r-2}u &\text{in } \Omega \times \mathbb{R}^{+}, \\ u=0 &\text{on } (\mathbb{R}^{N}\setminus \Omega ) \times \mathbb{R}^{+}, \\ u(x,0)=u_{0}(x) &\text{in } \Omega . \end{cases}\displaystyle \end{aligned}$$ (1.9)

Theorem 1.2. Assume that \(0< s<1\), \(2< r< p< N/s\), \(r< q< Np/(N-sp)\), and M satisfies \((M_{1})\). Then the semigroup associated with problem (1.9) admits a symmetric global attractor in \(L^{q}(\Omega )\) and its fractal dimension is infinite.

The rest of the paper is organized as follows. In Sect. 2, we give some preliminary lemmas. The existence of global attractors is proved in Sect. 3. The existence of infinite dimensional attractors is obtained in Sect. 4.

Preliminaries

In this section, we first recall some necessary definitions and properties of the fractional Sobolev spaces; see [5, 8, 9, 13, 35] for further details. From now on, we denote by \(\|\cdot \|_{\nu }\) the norm of \(L^{\nu }(\Omega )\) (\(\nu \geq 1\)). Let \(0 < s < 1 < p < \infty \) be real numbers and let the fractional critical exponent \(p_{s}^{*}\) be defined as $$\begin{aligned} p_{s}^{*}= \textstyle\begin{cases} \frac{Np}{N-sp}, &sp< N, \\ \infty ,&\text{otherwise.} \end{cases}\displaystyle \end{aligned}$$ In the following, we set \(\mathcal{Q}=\mathbb{R}^{2N}\setminus (\mathcal{C}\Omega \times \mathcal{C}\Omega )\), where \(\mathcal{C}\Omega =\mathbb{R}^{N}\setminus \Omega \). W is the linear space of Lebesgue measurable functions from \(\mathbb{R}^{N}\) to \(\mathbb{R}\) such that the restriction to Ω of any function u in W belongs to \(L^{p}(\Omega )\) and $$\begin{aligned} \iint _{\mathcal{Q}}\frac{ \vert u(x)-u(y) \vert ^{p}}{ \vert x-y \vert ^{N+sp}}\,dx \,dy< \infty . \end{aligned}$$ The space W is equipped with the norm $$\begin{aligned} \Vert u \Vert _{W}= \Vert u \Vert _{L^{p}(\Omega )}+ \biggl( \iint _{\mathcal{Q}} \frac{ \vert u(x)-u(y) \vert ^{p}}{ \vert x-y \vert ^{N+sp}}\,dx \,dy \biggr)^{\frac{1}{p}}. \end{aligned}$$ It is easy to check that \(\Vert \cdot\Vert _{W}\) is a norm on W. We shall work in the closed linear subspace $$\begin{aligned} W_{0}=\bigl\{ u\in W:u(x)=0\text{ a.e. in } \mathbb{R}^{N} \setminus \Omega \bigr\} . \end{aligned}$$ By the result in [35], one can deduce that $$ [u]_{s,p}= \biggl( \iint _{\mathcal{Q}} \frac{ \vert u(x)-u(y) \vert ^{p}}{ \vert x-y \vert ^{N+sp}}\,dx \,dy \biggr)^{\frac{1}{p}} $$ is an equivalent norm on \(W_{0}\).

Definition 2.1 (see [23, 24]). Let \(\{S(t)\}_{t\geq 0}\) be a semigroup on a Banach space X. A subset \(\mathscr{A}\subset X\) is called a global attractor for the semigroup if \(\mathscr{A}\) is compact in X and satisfies the following properties: (1) \(\mathscr{A}\) is invariant, i.e., \(S(t)\mathscr{A}=\mathscr{A}\) for all \(t\geq 0\); (2) \(\mathscr{A}\) attracts all bounded subsets of X, that is, for any bounded subset \(B\subset X\), $$\begin{aligned} {\mathrm{dist}}\bigl(S(t)B,\mathscr{A}\bigr)\rightarrow 0\quad \text{as } t \rightarrow \infty , \end{aligned}$$ where \({\mathrm{dist}}(A,B)\) is the Hausdorff semidistance of two sets A and B, given by $$\begin{aligned} {\mathrm{dist}}(A,B)=\sup_{x\in A}\inf_{y\in B} \Vert x-y \Vert _{X}. \end{aligned}$$

Definition 2.2. Let \(\{S(t)\}_{t\geq 0}\) be a semigroup on a Banach space X. A subset \(B_{0}\subset X\) is called an absorbing set for the semigroup \(\{S(t)\}_{t\geq 0}\) if, for any bounded subset \(B\subset X\), there exists a positive constant \(T=T(B)\) such that $$\begin{aligned} S(t)B\subset B_{0}\quad \text{for any } t\geq T. \end{aligned}$$

Definition 2.3 (see [4]). Let X be a metric space and let M be a compact subset of X. The fractal dimension \(\dim _{f}M\) of M is defined by $$\begin{aligned} \dim _{f}M=\limsup_{\varepsilon \rightarrow 0} \frac{\ln n(M,\varepsilon )}{\ln (1/\varepsilon )}, \end{aligned}$$ where \(n(M,\varepsilon )\) denotes the minimal number of closed balls of radius ε needed to cover the set M.

Global attractors in \(L^{q}(\Omega )\)

In this section, we provide the existence results for problem (1.1), and then we show the existence of a global attractor in \(L^{q}(\Omega )\).

Theorem 3.1. Assume that \(2\leq p< N/s\), M satisfies \((M_{1})\), \(g\in L^{2}(\Omega )\), and f fulfils \((f_{1})\)–\((f_{2})\) with \(2\leq q< Np/(N-sp)\). Then problem (1.1) admits a unique solution $$ u\in C\bigl([0,T];L^{2}(\Omega )\bigr)\cap L^{p}(0,T;W_{0}) \cap L^{q}\bigl(0,T;L^{q}( \Omega )\bigr). $$ Moreover, the mapping \(u_{0}\rightarrow u(t)\) is continuous in \(L^{2}(\Omega )\).

Proof. The existence of solutions for problem (1.1) can be obtained by using the Galerkin method; see for example [21, 25]. For completeness, we give a sketch of the proof. Choose a sequence of functions \(\{e_{j}\}_{j=1}^{\infty }\subset C_{0}^{\infty }(\Omega )\) which is an orthonormal basis in \(L^{2}(\Omega )\). We shall find the approximate solutions in the form $$\begin{aligned} u_{n}(x,t)=\sum_{j=1}^{n}\bigl( \eta _{n}(t)\bigr)_{j}e_{j}(x)\quad \text{for all } n\in \mathbb{N}, \end{aligned}$$ where the unknown functions \((\eta _{n}(t))_{j}\) are determined by the following system of ODEs: $$\begin{aligned} \textstyle\begin{cases} \eta ^{\prime }_{n}(t)=-I_{n}(t,\eta _{n}(t)),\quad t\in \mathbb{R}^{+}, \\ \eta _{n}(0)=U_{0n}.
\end{cases}\displaystyle \end{aligned}$$ Here \(U_{0n}= (\int _{\Omega }u_{0n}(x)e_{1}(x)\,dx,\ldots ,\int _{\Omega }u_{0n}(x)e_{n}(x)\,dx )\), \(u_{0n}\rightarrow u_{0} \) in \(W_{0}\), $$\begin{aligned} \bigl(I_{n}(t,\eta _{n})\bigr)_{j}&=M \bigl([u_{n}]_{s,p}^{p}\bigr)\langle u_{n}, e_{j}\rangle _{W_{0}} + \int _{\Omega }f\bigl(x,u_{n}(x,t)\bigr)e_{j}(x) \,dx \\ &\quad {}- \int _{\Omega }g(x)e_{j}(x)\,dx, \quad j=1,2,\ldots ,n. \end{aligned}$$ The definition of \(\langle u_{n},e_{j}\rangle _{W_{0}}\) is given by $$ \langle u_{n},e_{j}\rangle _{W _{0}} = \iint _{ \mathcal{Q}} \frac{ \vert u_{n}(x,t)-u_{n}(y,t) \vert ^{p-2} [u_{n}(x,t)-u_{n}(y,t)][e_{j}(x)-e_{j}(y)]}{ \vert x-y \vert ^{N+sp}}\,dx \,dy. $$ By the continuity of M and the definition of \(I_{n}\), we know that \(I_{n}\) is continuous on \(\mathbb{R}^{+}_{0}\times \mathbb{R}^{n}\). Then the Peano theorem (see [11]) yields that there exists a local solution of problem (3.1) on \((0, T_{n})\) (\(0< T_{n}<\infty \)). The following a priori estimate implies that the local solution can be extended to \((0,\infty )\). Multiplying (3.1) by \(\eta _{n}(t)\), we obtain $$\begin{aligned} &\frac{1}{2}\frac{d}{dt} \int _{\Omega } \bigl\vert u_{n}(x,t) \bigr\vert ^{2}\,dx+M\bigl([u_{n}]_{s,p}^{p}\bigr) \iint _{\mathcal{Q}}\frac{ \vert u_{n}(x,t)-u_{n}(y,t) \vert ^{p}}{ \vert x-y \vert ^{N+sp}} \,dx \,dy \\ &\quad {}+ \int _{\Omega }f(x,u_{n})u_{n}\,dx= \int _{\Omega }g(x)u_{n}\,dx. \end{aligned}$$ Let \(u_{0}\in W_{0}\cap L^{q}(\Omega )\). Then, multiplying (3.1) by \(\eta _{n}^{\prime }(t)\), we get $$\begin{aligned} & \int _{\Omega } \biggl\vert \frac{\partial u_{n}(x,t)}{\partial t} \biggr\vert ^{2}\,dx+ \frac{d}{dt} \biggl[\mathscr{M}\bigl([u_{n}]_{s,p}^{p} \bigr) + \int _{\Omega }F(x,u_{n})\,dx \biggr]= \int _{\Omega }g(x)\frac{\partial u_{n}(x,t)}{\partial t}\,dx. \end{aligned}$$ It follows from \((M_{1})\), (1.3), (3.2), and the Hölder inequality that $$\begin{aligned} &\frac{1}{2}\frac{d}{dt} \int _{\Omega } \bigl\vert u_{n}(x,t) \bigr\vert ^{2}\,dx+C_{1} \int _{\Omega } \bigl\vert u_{n}(x,t) \bigr\vert ^{q}\,dx\leq C \vert \Omega \vert + \bigl\Vert g(x) \bigr\Vert _{2} \bigl\Vert u_{n}(x,t) \bigr\Vert _{2}. \end{aligned}$$ Further, by \(q\geq 2\) and the Young inequality, we obtain $$\begin{aligned} \frac{1}{2} \frac{d}{dt} \int _{\Omega } \bigl\vert u_{n}(x,t) \bigr\vert ^{2}\,dx + \frac{C_{1}}{2} \int _{\Omega } \bigl\vert u_{n}(x,t) \bigr\vert ^{2}\,dx \leq \biggl[(C+C_{1}) \vert \Omega \vert + \frac{2}{C_{1}} \bigl\Vert g(x) \bigr\Vert _{2}^{2} \biggr], \end{aligned}$$ $$\begin{aligned} & \int _{\Omega } \bigl\vert u_{n}(x,t) \bigr\vert ^{2}\,dx \\ &\quad \leq \int _{\Omega } \bigl\vert u_{0n}(x,0) \bigr\vert ^{2}dxe^{-C_{1}t} + \frac{2(C+C_{1}) \vert \Omega \vert }{c_{1}}\bigl(1-e^{-C_{1}t} \bigr) +\frac{4}{C_{1}} \int _{0}^{t} \Vert g \Vert _{2}^{2}e^{C_{1}(\tau -t)}\,d\tau \\ &\quad \leq C, \end{aligned}$$ where \(C>0\) denotes various constants independent of n and t. This together with (3.2) deduces that the local solution \(u_{n}\) can be extended to \((0,\infty )\). Then, using a similar discussion as that in [21], we can obtain that the limit of \(\{u_{n}\}\) is a solution of problem (1.1). Next we prove that problem (1.1) only has one solution. Assume that u and v are two solutions of problem (1.1). 
Taking \(\varphi =u-v\) as a test function in Definition 1.1, we have $$\begin{aligned} &\frac{1}{2}\frac{d}{dt} \int _{\Omega } \vert u-v \vert ^{2}\,dx +M \bigl([u]_{s,p}^{p}\bigr) \langle u,u-v\rangle _{W_{0}}-M\bigl([v]_{s,p}^{p}\bigr)\langle u,u-v\rangle _{W_{0}} \\ &\quad {}+ \int _{\Omega }\bigl(f(x,u)-f(x,v)\bigr) (u-v)\,dx=0. \end{aligned}$$ Note that $$\begin{aligned} \langle u,u-v\rangle _{W_{0}}=[u]_{s,p}^{p}-\langle u,v\rangle _{W_{0}}. \end{aligned}$$ By the Young inequality, we have $$\begin{aligned} \langle u,v\rangle _{W_{0}} &= \iint _{\mathcal{Q}} \frac{ \vert u(x)-u(y) \vert ^{p-2}(u(x)-u(y))(v(x)-v(y))}{ \vert x-y \vert ^{N+sp}}\,dx \,dy \\ &\leq \biggl(1-\frac{1}{p} \biggr)[u]_{s,p}^{p} + \frac{1}{p}[v]_{s,p}^{p}. \end{aligned}$$ $$\begin{aligned} \langle u,u-v\rangle _{W_{0}}\geq \frac{1}{p}\bigl([u]_{s,p}^{p}-[v]_{s,p}^{p} \bigr). \end{aligned}$$ Similarly, $$\begin{aligned} \langle v,u-v\rangle _{W_{0}}\leq \frac{1}{p}\bigl([u]_{s,p}^{p}-[v]_{s,p}^{p} \bigr). \end{aligned}$$ Using the above inequalities and assumption (1.2), we arrive at the inequality $$\begin{aligned} \frac{1}{2}\frac{d}{dt} \int _{\Omega } \vert u-v \vert ^{2}\,dx + \frac{1}{p} \bigl[M\bigl([u]_{s,p}^{p}\bigr)-M \bigl([v]_{s,p}^{p}\bigr) \bigr]\bigl([u]_{s,p}^{p}-[v]_{s,p}^{p} \bigr) \leq \lambda \int _{\Omega } \vert u-v \vert ^{2}\,dx. \end{aligned}$$ Since M is a nondecreasing function, we deduce that $$ \frac{d}{dt} \int _{\Omega } \vert u-v \vert ^{2}\,dx\leq 2\lambda \int _{\Omega } \vert u-v \vert ^{2}\,dx, $$ which implies that \(u-v=0\) a.e. in \(\Omega \times (0,\infty )\). Hence the solution is unique. With a similar discussion as the uniqueness of solution, we can obtain the continuity of the mapping \(u_{0}\rightarrow u (t)\) in \(L^{2}(\Omega )\). □ Now we define a functional \(E:W_{0}\rightarrow \mathbb{R}\) by $$\begin{aligned} E(u)=\frac{1}{p}\mathscr{M}\bigl([u]_{s,p}^{p}\bigr)+ \int _{\Omega } F(x,u)\,dx- \int _{\Omega }g(x)u\,dx \end{aligned}$$ for all \(u\in W_{0}\), where \(F(x,u)=\int _{0}^{u} f(x,\xi )\,d\xi \). Then we have the following. Assume that \(u_{0}\in W_{0}\cap L^{q}(\Omega )\). Let u be a solution of problem (1.1), then $$\begin{aligned} E\bigl(u(x,t)\bigr)=E(u_{0})- \int _{0}^{t} \int _{\Omega } \bigl\vert u_{\tau }(x,\tau ) \bigr\vert ^{2}dxd \tau ,\quad t>0. \end{aligned}$$ Let us recall that the solution of problem (1.1) can be obtained as the limit of the following sequence of Galerkin's approximation (see [25]): $$\begin{aligned} u_{n}(x,t)=\sum_{j=1}^{n} \bigl(g_{n}(t)\bigr)_{j}e_{j}(x),\quad n=1,2, \ldots , \end{aligned}$$ where \(g_{n}(t)\in C^{1}[0,T]\) and \(\{e_{j}\}\subset C_{0}^{\infty }(\Omega )\) is an orthonormal basis in \(L^{2}(\Omega )\). Let u be a sufficiently smooth solution of problem (1.1)(or the approximate solution \(u_{n}\)). Choosing \(\varphi =u_{t}\) in Definition 1.1 and using the fact that $$\begin{aligned} \langle u,u_{t}\rangle _{W_{0}}=\frac{1}{p} \frac{d}{dt}\mathscr{M}\bigl([u]_{s,p}^{p}\bigr), \end{aligned}$$ $$\begin{aligned} \int _{\Omega } \vert u_{t} \vert ^{2}\,dx+ \frac{d}{dt}E\bigl(u(x,t)\bigr)=0, \end{aligned}$$ which implies that the function \(E(u(x,t))\) is nonincreasing with respect to t. Moreover, integrating the above equality with respect to t from 0 to t, we arrive at the equality $$\begin{aligned} \int _{0}^{t} \int _{\Omega } \vert u_{\tau } \vert ^{2}\,dx \,d\tau +E\bigl(u(x,t)\bigr)-E(u_{0})=0. \end{aligned}$$ This ends the proof. 
□ By Theorem 3.1, the solution of problem (1.1) generates a semigroup \(\{S(t)\}_{t\geq 0}\) in \(L^{2}(\Omega )\). Next, we show that the semigroup possesses a global attractor in \(L^{q}(\Omega )\). Under the assumptions of Theorem 3.1, the semigroup \(\{S(t)\}_{t\geq 0}\) associated with problem (1.1) possesses an absorbing set in \(L^{2}(\Omega )\) and \(W_{0}\cap L^{q}(\Omega )\), respectively. Taking \(\varphi =u\) in Definition 1.1, we obtain $$ \frac{1}{2}\frac{d}{dt} \int _{\Omega } \vert u \vert ^{2}\,dx+M \bigl([u]_{s,p}^{p}\bigr)[u]_{s,p}^{p}+ \int _{\Omega }f(x,u)u\,dx= \int _{\Omega }gudx. $$ Note that assumption (1.3) implies that $$ \int _{\Omega }f(x,u)u\,dx\geq c_{1} \int _{\Omega } \vert u \vert ^{q}\,dx-c \vert \Omega \vert . $$ This together with the Young inequality and assumption \((M_{1})\) yields that $$ \frac{1}{2}\frac{d}{dt} \int _{\Omega } \vert u \vert ^{2} \,dx+m_{0}[u]_{s,p}^{p}+c_{1} \int _{\Omega } \vert u \vert ^{q}\,dx\leq C_{\varepsilon } \int _{\Omega } \vert g \vert ^{2}\,dx+ \varepsilon \int _{\Omega } \vert u \vert ^{2}\,dx+c \vert \Omega \vert . $$ Using the Young inequality, one can deduce that $$\begin{aligned} \int _{\Omega } \vert u \vert ^{2}\,dx&\leq \int _{\Omega }\frac{2}{q} \vert u \vert ^{q} \,dx + \int _{\Omega }\frac{q-2}{q}\,dx \\ &=\frac{2}{q} \int _{\Omega } \vert u \vert ^{q}\,dx + \frac{q-2}{q} \vert \Omega \vert . \end{aligned}$$ $$ \frac{q}{2}c_{1} \int _{\Omega } \vert u \vert ^{2}\,dx \leq c_{1} \int _{\Omega } \vert u \vert ^{q}\,dx+ \frac{q-1}{2}c_{1} \vert \Omega \vert . $$ Inserting this inequality into (3.6), we get $$\begin{aligned} &\frac{1}{2}\frac{d}{dt} \int _{\Omega } \vert u \vert ^{2} \,dx+m_{0}[u]_{s,p}^{p}+ \frac{q}{2}c_{1} \int _{\Omega } \vert u \vert ^{2}\,dx \\ &\quad \leq \varepsilon ^{-1} \int _{\Omega } \vert g \vert ^{2}\,dx+\varepsilon \int _{ \Omega } \vert u \vert ^{2}\,dx+ \biggl(c+ \frac{q-1}{2}c_{1} \biggr) \vert \Omega \vert . \end{aligned}$$ Choose \(\varepsilon =\frac{qc_{1}}{4}\). Then $$\begin{aligned} &\frac{1}{2}\frac{d}{dt} \int _{\Omega } \vert u \vert ^{2}\,dx+ \frac{q c_{1}}{4} \int _{\Omega } \vert u \vert ^{2}\,dx \\ &\quad \leq \frac{4}{qc_{1}} \int _{\Omega } \vert g \vert ^{2}\,dx + \biggl(c+ \frac{q-1}{2}c_{1} \biggr) \vert \Omega \vert . \end{aligned}$$ Then, using a similar discussion as (3.4), we get that there exists \(t_{0}>0\) such that $$ \bigl\Vert u(x,t) \bigr\Vert _{2}\leq C \quad \text{for any } t \geq t_{0}. $$ Thus, the semigroup has an absorbing set in \(L^{2}(\Omega )\). Integrating (3.6) with respect to t over \([t, t+1]\), \(t \geq t_{0}\), we obtain $$\begin{aligned} & \int ^{t+1}_{t} \bigl(m_{0}\bigl[u(x,\tau )\bigr]_{s,p}^{p}+c_{1} \bigl\Vert u(x,\tau ) \bigr\Vert _{q}^{q} \bigr)\,d\tau \\ &\quad \leq C_{\varepsilon } \Vert g \Vert ^{2}_{2}+ \bigl\Vert u(x,t) \bigr\Vert ^{2}_{2}+C \vert \Omega \vert \leq C \quad \text{for } t\geq t_{0}, \end{aligned}$$ $$\begin{aligned} \int ^{t+1}_{t} \bigl(\bigl[u(x,\tau ) \bigr]_{s,p}^{p}+ \bigl\Vert u(x,\tau ) \bigr\Vert _{q}^{q} \bigr)\,d\tau \leq C\quad \text{for } t\geq t_{0}. \end{aligned}$$ On the other hand, using Lemma 3.1 and the Young inequality, we deduce $$ \frac{1}{2} \int _{\Omega } \vert u_{t} \vert ^{2}\,dx+ \frac{d}{dt} \mathscr{M}\bigl([u]_{s,p}^{p}\bigr) + \frac{d}{dt} \int _{\Omega }F(x,u)\,dx\leq \frac{1}{2} \Vert g \Vert ^{2}_{2}. $$ By assumption (1.3), we have $$ c_{1} \vert u \vert ^{q}-c\leq F(x,u) \leq c_{2} \vert u \vert ^{q}+c. 
$$ Integrating (3.9) over \([\tau ,t + 1] \), \(t_{0} \leq t < \tau < t + 1\), one can deduce $$\begin{aligned} &\frac{1}{p}\mathscr{M}\bigl(\bigl[u(x,t+1)\bigr]_{s,p}^{p}\bigr) + \int _{\Omega }F\bigl(x,u(x,t+1)\bigr)\,dx \\ &\quad \leq C \Vert g \Vert ^{2}_{2}+ \biggl(\frac{1}{p}\mathscr{M}\bigl( \bigl[u(x,\tau )\bigr]_{s,p}^{p}\bigr) + \int _{\Omega }F\bigl(x,u(x,\tau )\bigr)\,dx \biggr). \end{aligned}$$ Integrating the above inequality with respect to τ between t and \(t + 1\), we obtain $$\begin{aligned} &\frac{1}{p}\mathscr{M}\bigl(\bigl[u(x,t+1)\bigr]_{s,p}^{p}\bigr) + \int _{\Omega }F\bigl(x,u(x,t+1)\bigr)\,dx \\ &\quad \leq C \Vert g \Vert ^{2}_{2} + \int _{t}^{t+1} \biggl(\frac{1}{p}\mathscr{M}\bigl(\bigl[u(x, \tau )\bigr]_{s,p}^{p}\bigr) + \int _{\Omega }F\bigl(x,u(x,\tau )\bigr)\,dx \biggr)\,d\tau . \end{aligned}$$ Gathering (3.8) and (3.10), we get $$ \bigl[u(x,t)\bigr]_{s,p}^{p}+ \bigl\Vert u(x,t) \bigr\Vert _{q}^{q}\leq C \quad \text{for all } t \geq t_{0}+1. $$ The proof is complete. □

By the compact imbedding results in [35] and [5, Theorem 6.7], we are now in a position to obtain the global attractor in \(L^{q}(\Omega )\). The proof is inspired by [30].

Let \(B_{0}\) be an absorbing set in \(L^{q}(\Omega )\). We define the ω-limit set of \(B_{0}\) as $$\begin{aligned} \omega (B_{0}):=\bigcap_{\tau \geq 0} \overline{ \bigcup_{t\geq \tau }S(t)B_{0}}^{L^{q}(\Omega )}. \end{aligned}$$ Here \(\overline{A}^{L^{q}(\Omega )}\) denotes the closure of A in the topology of \(L^{q}(\Omega )\). Note that \(\varphi \in \omega (B_{0})\) if and only if there exist sequences \(\{\varphi _{n}\}\subset B_{0}\) and \(t_{n}\rightarrow \infty \) such that $$\begin{aligned} S(t_{n})\varphi _{n}\rightarrow \varphi \quad \text{as } n \rightarrow \infty . \end{aligned}$$ Set \(\mathscr{A}=\omega (B_{0})\). Next we verify that \(\mathscr{A}\) is a global attractor of the semigroup \(S(t)\) in \(L^{q}(\Omega )\).

(1) \(\mathscr{A}\) is compact. Clearly, by the compact imbedding results in [35] and [5, Theorem 6.7], one can obtain that \(\mathscr{A}\) is compact in \(L^{q}(\Omega )\) since \(q\in (2,Np/(N-sp))\).

(2) \(\mathscr{A}\) is invariant. If \(v\in S(t)\mathscr{A}\), then \(v=S(t)\varphi \) for some \(\varphi \in \mathscr{A}\). Thus, there exist \(\varphi _{n}\) and \(t_{n}\) such that $$ S(t)S(t_{n})\varphi _{n}=S(t+t_{n})\varphi _{n}\rightarrow S(t) \varphi =v, $$ which implies that \(v\in \mathscr{A}\). Conversely, if \(v\in \mathscr{A}\), then there exist \(\varphi _{n}\in B_{0}\) and \(t_{n}\rightarrow \infty \) such that \(S(t_{n})\varphi _{n}\rightarrow v\). Observe that, for \(t_{n}\geq t\), the sequence \(S(t_{n}-t)\varphi _{n}\) is precompact in \(L^{q}(\Omega )\). Thus, there exist a subsequence \(t_{n_{k}}\rightarrow \infty \) and \(\varphi \in L^{q}(\Omega )\) such that \(S(t_{n_{k}}-t)\varphi _{n_{k}}\rightarrow \varphi \). It follows that \(\varphi \in \mathscr{A}\). By the continuity of \(S(t)\), we deduce $$\begin{aligned} S(t_{n_{k}})\varphi _{n_{k}}= S(t)S(t_{n_{k}}-t)\varphi _{n_{k}} \rightarrow S(t)\varphi =v. \end{aligned}$$ It yields that \(v\in S(t)\mathscr{A}\). Consequently, we obtain that \(S(t)\mathscr{A}=\mathscr{A}\).

(3) \(\mathscr{A}\) attracts any bounded set in \(L^{q}(\Omega )\). Arguing by contradiction, we assume that for some bounded set \(B_{1}\) of \(L^{2}(\Omega )\), \({\mathrm{dist}}(S(t)B_{1},\mathscr{A})\) does not tend to 0 as \(t\rightarrow \infty \). Hence there exist \(\delta >0\) and a sequence \(t_{n}\rightarrow \infty \) such that, for all n, $$ {\mathrm{dist}}\bigl(S(t_{n})B_{1},\mathscr{A}\bigr)\geq \delta >0.
$$ For each \(n\geq 1\), there exists \(\varphi _{n}\in B_{1}\) such that $$\begin{aligned} {\mathrm{dist}}\bigl(S(t_{n})\varphi _{n}, \mathscr{A}\bigr)\geq \frac{\delta }{2}>0. \end{aligned}$$ Recall that \(B_{0}\) is an absorbing set. Then \(S(t_{n})\varphi _{n}\in B_{0}\) for \(t_{n}\geq t_{0}:=t_{0}(B_{1})\). Since the sequence \(S(t_{n})\varphi _{n}\) is precompact, there exist \(\varphi \in L^{q}(\Omega )\) and a subsequence of \(t_{n}\), denoted by \(t_{n_{k}}\), such that $$\begin{aligned} \varphi =\lim_{n_{k}\rightarrow \infty } S(t_{n_{k}})\varphi _{n_{k}}= \lim_{n_{k}\rightarrow \infty } S(t_{n_{k}}-t_{0})S(t_{0}) \varphi _{n_{k}}. \end{aligned}$$ It follows from \(S(t_{0})\varphi _{n}\in B_{0}\) that \(\varphi \in \mathscr{A}\), which contradicts (3.11). □

Infinite dimensional global attractors

In this section, we study the fractal dimension of the global attractor. First, we prove that the \(Z_{2}\) index of the global attractor is infinite. Then, by the Mané projection theorem [18], we obtain the infinite dimensionality of the global attractor.

Let X be a Banach space. Denote by \(\Sigma =\{A\subset X: A\text{ is closed}, A=-A\}\) the class of closed symmetric subsets of X. For any \(A\in \Sigma \), the \(Z_{2}\) index of A is defined as follows: $$ \gamma (A)= \textstyle\begin{cases} \inf \{m:\exists h\in C^{0}(A;\mathbb{R}^{m}\setminus \{0\}), h(-u)=-h(u) \}; \\ \infty \quad \text{if }\{\cdots \}=\emptyset , \text{ in particular if } 0\in A; \\ 0 \quad \text{if } A=\emptyset . \end{cases} $$ Now we list some properties of the \(Z_{2}\) index which will be used later; for more details see [29].

The \(Z_{2}\) index defined on Σ satisfies the following properties:
(1) \(\gamma (A)=0\Leftrightarrow A=\emptyset \).
(2) If \(A\subset B\) with \(A,B\in \Sigma \), then \(\gamma (A)\leq \gamma (B)\).
(3) For any \(A,B \in \Sigma \), \(\gamma (A\cup B)\leq \gamma (A)+\gamma (B)\).
(4) If \(A\in \Sigma \) is a compact set, then \(\exists \delta >0\) such that \(\gamma (\overline{U_{\delta }(A)})=\gamma (A)\), where \(U_{\delta }(A)\) is a symmetric δ-neighborhood of A.
(5) \(\gamma (A)\leq \gamma (\overline{h(A)})\) for all \(A\in \Sigma \), where \(h:X\rightarrow X\) is an odd and continuous function.

To prove Theorem 1.2, we need the following lemma.

Let \(\{S(t)\}_{t\geq 0}\) be an odd semigroup on a complete metric space X which possesses a symmetric global attractor \(\mathscr{A}\). Then, for any \(m\in \mathbb{N}\), there exists a neighborhood \(U(0)\) of 0 such that the \(Z_{2}\) index of the set \(\mathscr{A}\setminus U(0)\) satisfies \(\gamma (\mathscr{A}\setminus U(0))\geq m\).

We first show that, for any integer \(m > 0\), there exists a symmetric set \(B_{m}\) such that $$\begin{aligned} \gamma (B_{m})\geq m\quad \text{and}\quad \omega (B_{m})=\bigcap_{\tau \geq 0}\overline{\bigcup _{t\geq \tau }S(t)B_{m}}\subset \mathscr{A} \setminus \{0\}. \end{aligned}$$ For any \(m\in \mathbb{N}^{+}\), let \(V_{m}\) be an m-dimensional subspace of \(W_{0}\cap L^{q}(\Omega )\). Set \(A_{m}=\{u\in V_{m}:[u]_{s,p}=1\}\); then \(A_{m}\) is compact in \(W_{0}\cap L^{q}(\Omega )\) and \(L^{r}(\Omega )\). Since all norms are equivalent on a finite dimensional Banach space, there exists \(C>0\) such that $$ \Vert u \Vert _{r}\geq C[u]_{s,p}=C\quad \text{for all } u\in A_{m}. $$ Set \(\epsilon A_{m}= \{\epsilon u : u\in A _{m} \}\), \(0 < \epsilon < 1\). Then \(\gamma (\epsilon A _{m})=\gamma (A _{m})=m\).
For \(\nu =\epsilon u\in \epsilon A_{m}\), we have $$\begin{aligned} E(\nu ) &=\frac{1}{p}\mathscr{M}\bigl(\epsilon ^{p}[u]_{s,p}^{p} \bigr)+ \epsilon ^{q}\frac{1}{q} \int _{\Omega } \vert u \vert ^{q}\,dx-\epsilon ^{r} \int _{ \Omega } \vert u \vert ^{r}\,dx \\ &\leq \frac{1}{p} \epsilon ^{p}\max_{\tau \in [0,1]}M( \tau )+C \epsilon ^{q} -C\epsilon ^{r}. \end{aligned}$$ Since \(2\leq r< p\) and \(r< q\), for ϵ small enough such that \(E(\nu )<0\) for all \(\nu \in \epsilon A_{m}\). Now fix \(\epsilon >0\) such that \(E(\nu )<0\). Since \(E(0)=0\) and the function \(t\rightarrow E(u(t))\) is nonincreasing, we have \(\omega (\epsilon A_{m})\subset \mathscr{A}\setminus \{0\}\). Thus, (4.1) holds true by taking \(B_{m}=\varepsilon A_{m}\). Since \(B_{m}\subset \mathscr{A}\setminus \{0\}\) and \(B_{m}\) is closed and compact, there exist open neighborhoods of 0 and \(\omega (B)\), denoted respectively by \(U(0)\) and \(\mathscr{N}(B_{m})\), such that $$ U( 0 )\cap \mathscr{N}(B_{m})=\emptyset . $$ Since \(S(t)B_{m}\subset \mathscr{N}(B_{m})\) for t large enough, we have \(S(t)B_{m}\subset \mathscr{N}(\mathscr{A})\) for t large enough. Therefore, there exists T such that, for \(t>T\), $$ S(t)B_{m}\subset \mathscr{N}(B_{m})\subset \mathscr{N}( \mathscr{A}) \setminus U(0) \subset \mathscr{N}\bigl(\mathscr{A}\setminus U(0) \bigr). $$ Note that \(\mathscr{A}\setminus U(0)\) is compact. Choosing a proper neighborhood \(\mathscr{N}(\mathscr{A}\setminus U(0))\), by (4) in Lemma 4.1, we have $$ \gamma \bigl(\mathscr{A}\setminus U(0)\bigr)=\gamma \bigl( \overline{\mathscr{N} \bigl(\omega (A)\setminus U(0)\bigr)}\bigr) \geq \gamma \bigl( \overline{S(t)B_{m}}\bigr)\geq \gamma (B_{m})\geq m,\quad t \text{ large enough}. $$ By our assumptions, one can obtain that the semigroup of problem (1.9) is odd. Indeed, for any \(u_{0} \in L^{2}(\Omega )\), clearly, \(-u_{0} \in L^{2}(\Omega )\). Let u be the unique solution of problem (1.9) with initial data \(u_{0}\). Since f is odd, −u is the unique solution of problem (1.9) corresponding to initial data \(-u_{0}\). Thus, \(S(t)(-u_{0})=-u=-S(t)u_{0}\), i.e., \(S(t)\) is odd. Let B be a symmetric absorbing set. Then the symmetry of the global attractor follows from the fact that $$ \mathscr{A}=\omega (B)=\bigcap_{s\geq 0} \overline{ \bigcup_{t\geq s}S(t)B}. $$ Finally, we may take a linear (and thus odd) projection in the Mané projection theorem (see [18]). If there exists \(m\in \mathbb{N}\) such that the fractal dimension of \(\mathscr{A}\) is less than m, then every symmetric closed subset of the attractor (not containing zero) has a \(Z_{2}\) index less than \(2m+1\), which contradicts Lemma 4.2. Thus the fractal dimension of the global attractor is infinite. □ Data sharing not applicable to this paper as no datasets were generated or analyzed during the current study. Barrios, B., Colorado, E., de Pablo, A., Sánchez, U.: On some critical problems for the fractional Laplacian operator. J. Differ. Equ. 252, 6133–6162 (2012) Caffarelli, L.: Some nonlinear problems involving non-local diffusions. In: ICIAM 07-6th International Congress on Industrial and Applied Mathematics, pp. 43–56. Eur. Math. Soc., Zürich (2009) Caffarelli, L.: Non-local diffusions, drifts and games. In: Nonlinear Partial Differential Equations, Abel Symposia, vol. 7, pp. 37–52 (2012) Chueshov, I., Lasiecka, I.: Long-Time Behavior of Second Order Evolution Equations with Nonlinear Damping. Memoirs of AMS, vol. 912. Am. Math. 
Soc., Providence (2008)
Di Nezza, E., Palatucci, G., Valdinoci, E.: Hitchhiker's guide to the fractional Sobolev spaces. Bull. Sci. Math. 136, 521–573 (2012)
Efendiev, M.A., Ôtani, M.: Infinite-dimensional attractors for evolution equations with p-Laplacian and their Kolmogorov entropy. Differ. Integral Equ. 20, 1201–1209 (2007)
Fife, P.: Some nonclassical trends in parabolic and parabolic-like evolutions. In: Trends in Nonlinear Analysis, pp. 153–191. Springer, Berlin (2003)
Fiscella, A., Servadei, R., Valdinoci, E.: Density properties for fractional Sobolev spaces. Ann. Acad. Sci. Fenn., Math. 40, 235–253 (2015)
Fiscella, A., Valdinoci, E.: A critical Kirchhoff type problem involving a nonlocal operator. Nonlinear Anal. 94, 156–170 (2014)
Geredeli, P.G.: On the existence of regular global attractor for p-Laplacian evolution equation. Appl. Math. Optim. 71, 517–532 (2015)
Hartman, P.: Ordinary Differential Equations, 2nd edn. Birkhäuser, Boston (1982)
Hurtado, E.J.: Non-local diffusion equations involving the fractional \(p(\cdot )\)-Laplacian. J. Dyn. Differ. Equ. 32, 557–587 (2020)
Iannizzotto, A., Liu, S., Perera, K., Squassina, M.: Existence results for fractional p-Laplacian problems via Morse theory. Adv. Calc. Var. 9, 101–126 (2016)
Kirchhoff, G.: Vorlesungen über Mathematische Physik. Mechanik, Teubner, Leipzig (1883)
Laskin, N.: Fractional quantum mechanics and Lévy path integrals. Phys. Lett. A 268, 298–305 (2000)
Lindgren, E., Lindqvist, P.: Fractional eigenvalues. Calc. Var. Partial Differ. Equ. 49, 795–826 (2014)
Lorenzo, B., Enea, P.: The second eigenvalue of the fractional p-Laplacian. Adv. Calc. Var. 9, 323–356 (2016)
Mané, R.: Lecture Notes in Math., vol. 898, pp. 230–242. Springer, New York (1981)
Mingqi, X., Rădulescu, V., Zhang, B.: Fractional Kirchhoff problems with critical Trudinger–Moser nonlinearity. Calc. Var. Partial Differ. Equ. 58, 57 (2019)
Mingqi, X., Rădulescu, V.D., Zhang, B.: Nonlocal Kirchhoff problems with singular exponential nonlinearity. Appl. Math. Optim. (2020). https://doi.org/10.1007/s00245-020-09666-3
Mingqi, X., Rădulescu, V.D., Zhang, B.L.: Nonlocal Kirchhoff diffusion problems: local existence and blow-up of solutions. Nonlinearity 31, 3228–3250 (2018)
Molica Bisci, G., Rădulescu, V.: Ground state solutions of scalar field fractional Schrödinger equations. Calc. Var. Partial Differ. Equ. 54, 2985–3008 (2015)
Niu, W.: Long-time behavior for a nonlinear parabolic problem with variable exponents. J. Math. Anal. Appl. 393, 56–65 (2012)
Niu, W., Zhong, C.: Global attractors for the p-Laplacian equations with nonregular data. J. Math. Anal. Appl. 392, 123–135 (2012)
Pan, N., Zhang, B., Cao, J.: Degenerate Kirchhoff-type diffusion problems involving the fractional p-Laplacian. Nonlinear Anal., Real World Appl. 37, 56–70 (2017)
Pucci, P., Xiang, M.Q., Zhang, B.L.: Multiple solutions for nonhomogeneous Schrödinger–Kirchhoff type equations involving the fractional p-Laplacian in \(\mathbb{R}^{N}\). Calc. Var. Partial Differ. Equ. 54, 2785–2806 (2015)
Pucci, P., Xiang, M.Q., Zhang, B.L.: A diffusion problem of Kirchhoff type involving the nonlocal fractional p-Laplacian. Discrete Contin. Dyn. Syst. 37, 4035–4051 (2017)
Servadei, R., Valdinoci, E.: The Brezis–Nirenberg result for the fractional Laplacian. Trans. Am. Math. Soc. 367, 67–102 (2015)
Struwe, M.: Variational Methods, Applications to Nonlinear Partial Differential Equations and Hamiltonian Systems, 2nd edn.
Springer, Berlin (1996)
Temam, R.: Infinite-Dimensional Dynamical Systems in Mechanics and Physics. Springer, New York (1997)
Vázquez, J.L.: Nonlinear diffusion with fractional Laplacian operators. In: Nonlinear Partial Differential Equations, Abel Symp., vol. 7, pp. 271–298. Springer, Heidelberg (2012)
Wang, M., Huang, J.H.: Finite dimensionality of the global attractor for a fractional Schrödinger equation on \(\mathbb{R}\). Appl. Math. Lett. 98, 432–437 (2019)
Xiang, M.Q., Pucci, P., Squassina, M., Zhang, B.L.: Nonlocal Schrödinger–Kirchhoff equations with external magnetic field. Discrete Contin. Dyn. Syst. 37, 1631–1649 (2017)
Xiang, M.Q., Yang, D.: Nonlocal Kirchhoff problems: extinction and non-extinction of solutions. J. Math. Anal. Appl. 477, 133–152 (2019)
Xiang, M.Q., Zhang, B.L., Ferrara, M.: Existence of solutions for Kirchhoff type problem involving the non-local fractional p-Laplacian. J. Math. Anal. Appl. 424, 1021–1041 (2015)
Xiang, M.Q., Zhang, B.L., Rădulescu, V.D.: Existence of solutions for a bi-nonlocal fractional p-Kirchhoff type problem. Comput. Math. Appl. 71, 255–266 (2016)
Yang, M., Sun, C., Zhong, C.: Global attractors for p-Laplacian equation. J. Math. Anal. Appl. 327, 1130–1142 (2007)
Zhong, C., Niu, W.: On the \(Z_{2}\) index of the global attractor for a class of p-Laplacian equations. Nonlinear Anal. 73, 3698–3704 (2010)

The authors thank the anonymous reviewers for their valuable suggestions on improving this paper. Ailing Qi was supported by the National Natural Science Foundation of China (No. 11871368) and the Fundamental Research Funds for the Central Universities (No. 3122019145). Mingqi Xiang was supported by the Natural Science Foundation of China (No. 11601515) and the Tianjin Youth Talent Special Support Program.

College of Science, Civil Aviation University of China, Tianjin, 300300, P.R. China: Ailing Qi, Die Hu & Mingqi Xiang

All authors read and approved the final manuscript. Correspondence to Mingqi Xiang.

Qi, A., Hu, D. & Xiang, M. Long-time behavior of solutions for a fractional diffusion problem. Bound Value Probl 2021, 10 (2021). https://doi.org/10.1186/s13661-021-01483-z

35R11 Fractional p-Laplacian Global attractors Fractal dimension
Recent questions tagged jee

Find the polynomial whose roots are the solutions of the equation $x^2 + 2x + 3 = 0$.
Prove that the polynomial $x^4 + 1$ is not factorable over the real numbers.
Find the polynomial whose roots are \(2 + 3i\) and \(2 - 3i\).
Find the sum of the roots of the polynomial $x^4 + 3x^3 + 3x^2 + x + 1$.
Using the Factor Theorem, factor the polynomial $x^3 + 6x^2 + 11x + 6$.
What is the inverse function of \(f(x)=2 x-4\)?
Solve for \(r\) in \(r^3=-27\).
Solve \(r^3=27\).
Find the roots of the polynomial $x^2 - 3x + 2$.
Find the inverse of the function \(f(x) = 4x - 5\).
Given the function \(f(x) = 3x + 5\), find the inverse function.
Find the inverse of the function \(f(x) = x^2 - 3\).
Factor the polynomial $x^2 + 5x + 6$.
If \(x\) is a real number such that \((x-3)(x-1)(x+1)(x+3)+16=116^2\), what is the largest possible value of \(x\)?
Let \(H\) be a regular hexagon with area 360. Three distinct vertices \(X, Y\), and \(Z\) are picked randomly, with all possible triples of distinct vertices equally likely. Let \(A, B\), and \(C\) be the unpicked vertices.
The value of \(\left(\frac{1+\sin \frac{2 \pi}{9}+i \cos \frac{2 \pi}{9}}{1+\sin \frac{2 \pi}{9}-i \cos \frac{2 \pi}{9}}\right)^3\) is:
There are two positive numbers \(\mathrm{p}\) and \(\mathrm{q}\) such that \(\mathrm{p}+\mathrm{q}=2\) and \(\mathrm{p}^4+\mathrm{q}^4=272\). Find the quadratic equation whose roots are \(\mathrm{p}\) and \(\mathrm{q}\).
Complete the following number sequence. \[ 3 ; 9 ; 27 ; 81 ; \ldots \]
Complete the following number sequence. \[ 1 ; -3 ; 9 ; -27 ; \ldots \]
Find the values of \(x\) which satisfy the equation \(4x+3=33\).
We have a chessboard of size \(\left(k^2-k+1\right) \times\left(k^2-k+1\right)\), where \(k-1=p\) is a prime number.
The positive integers \(a, b, c, d, p\), and \(q\) satisfy \(a d-b c=1\) and \(\frac{a}{b}>\frac{p}{q}>\frac{c}{d}\). Prove that:
Suppose that the real numbers \(x, y, z\) are different from 1 and satisfy
Find all triples of integers \((a, b, c)\) such that
Let \(x \leq y \leq z\) be real numbers such that \(x y+y z+z x=1\). Prove that \(x z<\frac{1}{2}\). Is it possible to improve the value of the constant \(\frac{1}{2}\)?
If \(a, b, c\) are nonnegative real numbers with \(a^2+b^2+c^2=1\), prove that
Suppose \(x, y, a\) are real numbers such that \(x+y=x^3+y^3=x^5+y^5=a\). Find all possible values of \(a\).
Real numbers \(x, y, z\) are not all equal and satisfy the condition \[ x+\frac{1}{y}=y+\frac{1}{z}=z+\frac{1}{x}=k . \] Find all possible values of \(k\).
Find all triples \((x, y, z)\) of integers satisfying the equation \[ x^3+y^3+z^3-3 x y z=2003 . \]
For any set \(A\) of positive integers, let \(n_A\) be the number of triples \((x, y, z)\) of elements of \(A\) with \(x<y\) and \(x+y=z\). If \(A\) is a seven-element set, find the maximum possible value of \(n_A\).
Let \(T(n)\) be the sum of the decimal digits of a natural number \(n\). Let \(s(t)\) denote the number of digits of a natural number \(t\). Find all solutions to the system
Find all real numbers \(x, y, z>1\) that satisfy the equality
Let \(a, b\) and \(c\) be positive real numbers. Prove that \[ \frac{a}{b}+\frac{b}{c}+\frac{c}{a} \leq \frac{a^2}{b^2}+\frac{b^2}{c^2}+\frac{c^2}{a^2} . \]
Real numbers \(a, b, c\) satisfy \(a^2+b^2=c^2\). Find all real numbers \(x, y, z\) such that \(x^2+y^2=z^2\) and \((x+a)^2+(y+b)^2=(z+c)^2\).
If \(a, b, c\) are positive real numbers that satisfy \[ \frac{a^2}{1+a^2}+\frac{b^2}{1+b^2}+\frac{c^2}{1+c^2}=1 , \] prove that \(|a b c| \leq \frac{1}{2 \sqrt{2}}\).
If \(a, b, c\) are positive real numbers that satisfy \(a^2+b^2+c^2=1\), find the minimal value of
Find the roots of the equation \[ z^2+2 z+17=0 . \]
The sum of the coefficients of the first, second, and third terms in \(\left(x^{2}+\frac{1}{x}\right)^{n}\) is 46; find the coefficient of the term that does not contain \(x\).
For real numbers \(a, b\), and \(c\) such that \(a \neq \frac{c-b}{2}\), how many real values of \(x\) satisfy the equation \( (x-a)(x-c)=(x+b)(a-x) ? \)
Let \(x\) be a nonzero real number. If \(x+\frac{1}{x}<2\), then \(x<0\).
Prove that it is not possible to find different natural numbers \(x, y, z, t\) which are solutions of \[ x^{x}+y^{y}=z^{z}+t^{t} . \]
In the excellent Monoface, we could change the head, left and right eyes, nose and mouth of some zany guys who work at momo-1.com. We are told that there are 759,375 possible faces. Where does this number come from?
If \(a\) is one of the roots of \[ x^{2}+2 x+3=0 , \] then \[ \frac{a^{5}+3 a^{4}+3 a^{3}-a^{2}}{a^{2}+3}=? \]
Find the common factors of 136, 64, 24, 16.
Let \(z\) and \(w\) be complex numbers such that \(|z|=2\) and \(|w|=5\). Find the smallest possible value of \(|w-z|\).
Let \(a, b\), and \(c\) be positive real numbers such that \(a^{2}=b c\) and \(a+b+c=a b c\). Find the smallest possible value of \(a^{2}\).
What is the greatest possible quotient of any two distinct members of the set \(\left\{\frac{2}{5}, \frac{1}{2}, 5,10\right\}\)? Specifically, we wish to maximize \(\frac{x}{y}\), where \(x\) and \(y\) are chosen from the previous set.
Let \(a, b, c, d\) be distinct complex numbers such that \(|a|=|b|=|c|=|d|=1\) and \(a+b+c+d=0\). Find the maximum value of \[ |(a+b)(a+c)(a+d)(b+c)(b+d)(c+d)| . \]
If we express \(3 x^{2}+x-4\) in the form \(a(x-h)^{2}+k\), then what is \(k\)?
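A quick worked sketch for the last question (ours, not part of the original listing): completing the square gives
\[ 3 x^{2}+x-4=3\left(x+\frac{1}{6}\right)^{2}-\frac{1}{12}-4=3\left(x+\frac{1}{6}\right)^{2}-\frac{49}{12}, \]
so \(a=3\), \(h=-\frac{1}{6}\) and \(k=-\frac{49}{12}\).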
High Energy Physics - Experiment

Title: Impact of jet-production data on the next-to-next-to-leading-order determination of HERAPDF2.0 parton distributions

Authors: H1, ZEUS Collaborations: I. Abt, R. Aggarwal, V. Andreev, M. Arratia, V. Aushev, A. Baghdasaryan, A. Baty, K. Begzsuren, O. Behnke, A. Belousov, A. Bertolin, I. Bloch, V. Boudry, G. Brandt, I. Brock, N.H. Brook, R. Brugnera, A. Bruni, A. Buniatyan, P.J. Bussey, L. Bystritskaya, A. Caldwell, A.J. Campbell, K.B. Cantun Avila, C.D. Catterall, K. Cerny, V. Chekelian, Z. Chen, J. Chwastowski, J. Ciborowski, R. Ciesielski, J.G. Contreras, A.M. Cooper-Sarkar, M. Corradi, L. Cunqueiro Mendez, J. Currie, J. Cvach, J.B. Dainton, K. Daum, R.K. Dementiev, A. Deshpande, C. Diaconu, S. Dusini, G. Eckerlin, S. Egli, E. Elsen, L. Favart, A. Fedotov, J. Feltesse, J. Ferrando, M. Fleischer, A. Fomenko, B. Foster, C. Gal, E. Gallo, D. Gangadharan, A. Garfagnini, J. Gayler, A. Gehrmann-De Ridder, T. Gehrmann, A. Geiser, L.K. Gladilin, E.W.N. Glover, L. Goerlich, N. Gogitidze, Yu.A. Golubkov, M. Gouzevitch, C. Grab, T. Greenshaw, G. Grindhammer, G. Grzelak, C. Gwenlan, D. Haidt, R.C.W. Henderson, J. Hladký, D. Hochman, D. Hoffmann, R. Horisberger, T. Hreus, F. Huber, A. Huss, P.M. Jacobs, M. Jacquet, T. Janssen, N.Z. Jomhari, A.W. Jung, H. Jung, I. Kadenko, M. Kapichine, U. Karshon, J. Katzy, P. Kaur, C. Kiesling, R. Klanner, M. Klein, U. Klein, C. Kleinwort, H.T. Klest, R. Kogler, I.A. Korzhavina, P. Kostka, N. Kovalchuk, J. Kretzschmar, D. Krücker, K. Krüger, M. Kuze, M.P.J. Landon, W. Lange, P. Laycock, S.H. Lee, B.B. Levchenko, S. Levonian, A. Levy, W. Li, J. Lin, K. Lipka, B. List, J. List, B. Lobodzinski, B. Löhr, E. Lohrmann, O.R. Long, A. Longhin, F. Lorkowski, O.Yu. Lukina, I. Makarenko, E. Malinovski, J. Malka, H.-U. Martyn, S. Masciocchi, S.J. Maxfield, A. Mehta, A.B. Meyer, J. Meyer, S. Mikocki, V.M. Mikuni, M.M. Mondal, T. Morgan, A. Morozov, K. Mueller, B. Nachman, K. Nagano, J.D. Nam, Th. Naumann, P.R. Newman, C. Niebuhr, J. Niehues, G. Nowak, J.E. Olsson, Yu. Onishchuk, D. Ozerov, S. Park, C. Pascaud, G.D. Patel, E. Paul, E. Perez, A. Petrukhin, I. Picuric, I. Pidhurskyi, J. Pires, D. Pitzl, R. Polifka, A. Polini, S. Preins, M. Przybycień, A. Quintero, K. Rabbertz, V. Radescu, N. Raicevic, T. Ravdandorj, P. Reimer, E. Rizvi, P. Robmann, R. Roosen, A. Rostovtsev, M. Rotaru, M. Ruspa, D.P.C. Sankey, M. Sauter, E. Sauvan, S. Schmitt, B.A. Schmookler, U. Schneekloth, L. Schoeffel, A. Schöning, T. Schörner-Sadenius, F. Sefkow, I. Selyuzhenkov, M. Shchedrolosiev, L.M. Shcheglova, S. Shushkevich, I.O. Skillicorn, W. Słomiński, A. Solano, Y. Soloviev, P. Sopicki, D. South, V. Spaskov, A. Specka, L. Stanco, M. Steder, N. Stefaniuk, B. Stella, U. Straumann, C. Sun, B. Surrow, M.R. Sutton, T. Sykora, P.D. Thompson, K. Tokushuku, D. Traynor, B. Tseepeldorj, Z. Tu, O. Turkot, T. Tymieniecka, A. Valkárová, C. Vallée, P. Van Mechelen, A. Verbytskyi, W.A.T. Wan Abdullah, D. Wegener, K. Wichmann, M. Wing, E. Wünsch, S. Yamada, Y. Yamazaki, J. Žáček, A.F. Żarnecki, O. Zenaiev, J. Zhang, Z. Zhang, R. Žlebčík, H. Zohrabyan, F. Zomer et al. (174 additional authors not shown)

Abstract: The HERAPDF2.0 ensemble of parton distribution functions (PDFs) was introduced in 2015. The final stage is presented, a next-to-next-to-leading-order (NNLO) analysis of the HERA data on inclusive deep inelastic $ep$ scattering together with jet data as published by the H1 and ZEUS collaborations.
A perturbative QCD fit, simultaneously determining $\alpha_s(M_Z^2)$ and the PDFs, was performed with the result $\alpha_s(M_Z^2) = 0.1156 \pm 0.0011~{\rm (exp)}~ ^{+0.0001}_{-0.0002}~ {\rm (model+parameterisation)}~ \pm 0.0029~{\rm (scale)}$. The PDF sets of HERAPDF2.0Jets NNLO were determined with separate fits using two fixed values of $\alpha_s(M_Z^2)$, $\alpha_s(M_Z^2)=0.1155$ and $0.118$, since the latter value was already chosen for the published HERAPDF2.0 NNLO analysis based on HERA inclusive DIS data only. The different sets of PDFs are presented, evaluated and compared. The consistency of the PDFs determined with and without the jet data demonstrates the consistency of HERA inclusive and jet-production cross-section data. The inclusion of the jet data reduced the uncertainty on the gluon PDF. Predictions based on the PDFs of HERAPDF2.0Jets NNLO give an excellent description of the jet-production data used as input.

Comments: 43 pages, 24 figures, to be submitted to Eur. Phys. J. C
Subjects: High Energy Physics - Experiment (hep-ex); High Energy Physics - Phenomenology (hep-ph)
Report number: DESY-21-206
Cite as: arXiv:2112.01120 [hep-ex] (or arXiv:2112.01120v1 [hep-ex] for this version)
From: Matthew Wing
[v1] Thu, 2 Dec 2021 10:53:16 GMT (3930kb,D)
Finding all maximal perfect haplotype blocks in linear time

Jarno Alanko (1), Hideo Bannai (2), Bastien Cazaux (1), Pierre Peterlongo (3) & Jens Stoye (4) (ORCID: orcid.org/0000-0002-4656-7155)

Algorithms for Molecular Biology volume 15, Article number: 2 (2020)

Recent large-scale community sequencing efforts allow the identification, at an unprecedented level of detail, of genomic regions that show signatures of natural selection. Traditional methods for identifying such regions from individuals' haplotype data, however, require excessive computing times and therefore are not applicable to current datasets. Cunha et al. (Advances in bioinformatics and computational biology: 11th Brazilian symposium on bioinformatics, BSB 2018, Niterói, Brazil, October 30 - November 1, 2018, Proceedings, 2018. https://doi.org/10.1007/978-3-030-01722-4_3) suggested the maximal perfect haplotype block as a very simple combinatorial pattern, forming the basis of a new method to perform rapid genome-wide selection scans. The algorithm they presented for identifying these blocks, however, had a worst-case running time quadratic in the genome length. It was posed as an open problem whether an optimal, linear-time algorithm exists. In this paper we give two algorithms that achieve this time bound: one conceptually very simple one using suffix trees, and a second one using the positional Burrows–Wheeler Transform that is also very efficient in practice.

As a result of the technological advances that went hand in hand with the genomics efforts of the last decades, today it is possible to experimentally obtain and study the genomes of large numbers of individuals, or even multiple samples from an individual. For instance, the National Human Genome Research Institute and the European Bioinformatics Institute have collected more than 3500 genome-wide association study publications in their GWAS Catalog [1]. Probably the most prominent example of large-scale sequencing projects is the 1000 Genomes Project (now International Genome Sample Resource, IGSR), initiated with the goal of sequencing the genomes of more than one thousand human individuals to identify 95% of all genomic variants in the population with allele frequency of at least 1% (down toward 0.1% in coding regions). The final publications from phase 3 of the project report on the genetic variation of more than 2500 genomes [2, 3].

Recently, several countries announced large-scale national research programs to capture the diversity of their populations, while some of these efforts started already more than 20 years ago. Since 1996, Iceland's deCODE company has been mining Icelanders' genetic and medical data for disease genes. In 2015, deCODE published insights gained from sequencing the whole genomes of 2636 Icelanders [4]. Genome of the Netherlands (GoNL) is a whole genome sequencing project aiming to characterize DNA sequence variation in the Dutch population, using a representative sample consisting of 250 trio families from all provinces in the Netherlands. In 2016, GoNL analysed whole genome sequencing data of 769 individuals and published a haplotype-resolved map of 1.9 million genome variants [5]. Similar projects have been established on a larger scale in the UK: following the UK10K project for identifying rare genetic variants in health and disease (2010–2013), Genomics England was set up in late 2012 to deliver the 100,000 Genomes Project [6].
This flagship project has by now sequenced 100,000 whole genomes from patients and their families, focusing on rare diseases, some common types of cancer, and infectious diseases. The scale of these projects is culminating in the US federal Precision Medicine Initiative, where the NIH is funding the All of Us research program (see Footnote 1) to analyze genetic information from more than 1 million American volunteers. Even more extreme suggestions go as far as to propose "to sequence the DNA of all life on Earth" (see Footnote 2).

The main motivation for the collection of these large and comprehensive data sets is the hope for a better understanding of genomic variation and of how variants relate to health and disease, but basic research in evolution, population genetics, functional genomics and studies on demographic history can also profit enormously. One important approach connecting evolution and functional genomics is the search for genomic regions under natural selection based on population data. The selection coefficient [7] is an established parameter quantifying the relative fitness of two genetic variants. Unfortunately, haplotype-based methods for estimating selection coefficients have not been designed with the massive genome data sets available today in mind, and may therefore take prohibitively long when applied to large-scale population data. In view of the large population sequencing efforts described above, methods are needed that—at similar sensitivity—scale to much higher dimensions.

Only recently, a method for the fast computation of a genome-wide selection scan has been proposed that remains fast even for large datasets [8]. The method is based on a very simple combinatorial string pattern, the maximal perfect haplotype block. Although considerably faster than previous methods, the running time of the algorithm presented in that paper is not optimal, as it takes \(O(kn^2)\) time to find all maximal perfect haplotype blocks in k genomes of length n each. This is sufficient to analyse individual human chromosomes on a laptop computer for datasets of the size of the 1000 Genomes Project (thousands of genomes and millions of variants). However, with the larger datasets currently underway and with higher resolution, it will not scale favourably. More efficient methods are therefore necessary, and it was posed as an open question whether there exists a linear-time algorithm to find all maximal perfect haplotype blocks.

In this paper we settle this open problem affirmatively. More specifically, after some basic definitions in "Basic definitions" section, we present in "Linear-time method I: based on suffix trees" and "Linear-time method II: based on the positional BWT" sections two new algorithms for finding all maximal perfect haplotype blocks in optimal time. The latter of these two algorithms is then experimentally compared to the one from [8] in "Empirical evaluation" section, proving its superiority in running time by a factor of about 5 and in memory usage by up to two orders of magnitude for larger data sets. "Conclusion" section concludes the paper. This paper is an extended version of the preliminary work presented in [9]. Source code and test data are available from https://gitlab.com/bacazaux/haploblocks.

Basic definitions

The typical input to genome-wide selection studies is a set of haplotype-resolved genomes, or haplotypes for short. Clearly, for a given set of haplotypes only those sites are of interest where there is variation in the genomes.
Therefore, formally, we consider as input to our methods a k × n haplotype matrix where each of the k rows corresponds to one haplotype and each of the n columns corresponds to one variable genetic site. Most methods distinguish only between ancestral and derived allele, reflecting the fact that most sites are biallelic. Therefore the entries in a haplotype matrix are often considered binary, where the ancestral allele is encoded by 0 and the derived allele is encoded by 1. However, the computational problem and its solutions considered in this paper do not depend on this restriction and instead are applicable to any type of sequence over a constant-size alphabet \(\Sigma\).

The concept of a maximal perfect haplotype block as defined in [8] is the following, where s[i, j] denotes the substring of a string s from position i to position j and \(S|_K\) denotes the elements of an ordered set S restricted to index set K:

Definition 1
Given k sequences \(S = (s_1,\ldots ,s_k)\) of the same length n (representing the rows of a haplotype matrix), a maximal perfect haplotype block is a triple (K, i, j) with \(K \subseteq \{1,\ldots ,k\}\), \(\vert K \vert \ge 2\) and \(1 \le i \le j \le n\) such that
1. \(s[i,j] = t[i,j]\) for all \(s,t \in S|_K\) (equality),
2. \(i = 1\) or \(s[i-1] \ne t[i-1]\) for some \(s,t \in S|_K\) (left-maximality),
3. \(j = n\) or \(s[j+1] \ne t[j+1]\) for some \(s,t \in S|_K\) (right-maximality), and
4. \(\not \exists K' \subseteq \{1,\ldots ,k\}\) with \(K \subset K'\) such that \(s[i,j] = t[i,j]\) for all \(s,t \in S|_{K'}\) (row-maximality).

Definition 1 is illustrated in Fig. 1.

Fig. 1: Illustration of Definition 1. A binary \(3 \times 8\) haplotype matrix with three maximal perfect haplotype blocks \((\{1,3\},1,4)\), \((\{2,3\},4,7)\) and \((\{1,2,3\},6,7)\) highlighted. (The example contains additional maximal perfect haplotype blocks that are not shown.)

In Cunha et al. [8] it was shown that the number of maximal perfect haplotype blocks is O(kn), while the algorithm presented there takes \(O(kn^2)\) time to find all blocks. It is based on the observation that branching vertices in the trie \(T_p\) of the suffixes of the input sequences starting at position p correspond to right-maximal and row-maximal blocks, while left-maximality can be tested by comparing \(T_p\) and \(T_{p-1}\). In the next two sections we show how this running time can be improved.

Linear-time method I: based on suffix trees

In this section, we present our first algorithm to find all maximal perfect haplotype blocks in linear time. This solution is purely theoretical: it would likely require large amounts of memory while being slow in practice. However, it demonstrates the connection to the concept of maximal repeats in strings. We recall from [10, Section 7.12] that a maximal repeat is a substring occurring at least twice in a string or a set of strings and such that it cannot be extended to the left or to the right without losing occurrences.

Let \(\mathbb {S} = s_1\$_1s_2\$_2\ldots s_k\$_k\), with the \(\$_i\) being k different characters absent from the original alphabet \(\Sigma\). The key point is that any maximal perfect haplotype block in S is a maximal repeat in \(\mathbb {S}\). The opposite is not true: In a maximal perfect haplotype block, all occurrences of the repeat are located at the same position of each sequence of S (equality condition in Definition 1), while this constraint does not exist for maximal repeats in \(\mathbb {S}\).
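To make Definition 1 concrete, the following small Python sketch (ours, for illustration only; it is far slower than the linear-time methods of this paper) enumerates all maximal perfect haplotype blocks of a toy matrix directly from the four conditions, using 0-based column indices:

def maximal_perfect_blocks(S):
    # Brute force over all column intervals [i, j]; rows are grouped by
    # their substring on these columns, so each group with >= 2 rows is
    # automatically row-maximal (it contains every row with that substring).
    k, n = len(S), len(S[0])
    blocks = []
    for i in range(n):
        for j in range(i, n):
            groups = {}
            for r in range(k):
                groups.setdefault(S[r][i:j + 1], []).append(r)
            for rows in groups.values():
                if len(rows) < 2:
                    continue  # Definition 1 requires |K| >= 2
                left = i == 0 or len({S[r][i - 1] for r in rows}) > 1
                right = j == n - 1 or len({S[r][j + 1] for r in rows}) > 1
                if left and right:
                    blocks.append((tuple(rows), i, j))
    return blocks

print(maximal_perfect_blocks(["0011", "0010", "1011"]))
# -> [((0, 1), 0, 2), ((0, 1, 2), 1, 2), ((0, 2), 1, 3)]

Note again that all occurrences within a block start at the same column; this is exactly the property that general maximal repeats lack and that the decoration described next enforces.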
Nevertheless, finding all maximal perfect haplotype blocks in S can be performed by computing all maximal repeats in \(\mathbb {S}\), while keeping only those whose occurrences are located at the same positions over all \(s_i\) in which they occur. This can be done by performing the following procedure (see Footnote 3):

1. "Decorate" each sequence \(s_i \in S\) to create \(s_i^+=\alpha _0s_i[1]\alpha _1s_i[2]\alpha _2\ldots s_i[n]\alpha _n\), where the index characters \(\alpha _0, \alpha _1, \ldots , \alpha _n\) are \(n+1\) symbols from an alphabet \(\Sigma '\), disjoint from the original alphabet \(\Sigma\).
2. Find in \(\mathbb {S}^+ = s_1^+\$_1s_2^+\$_2\ldots s_k^+\$_k\) all maximal repeats.
3. Any maximal repeat \(r = \alpha _pr_1\alpha _{p+1}r_2\alpha _{p+2}\ldots r_\ell \alpha _{p+\ell }\) in \(\mathbb {S}^+\) with \(\ell \ge 1\) corresponds to a maximal perfect haplotype block of length \(\ell\), starting at position \(p+1\) in the input sequences from S.

The key idea here is that the index characters impose that each maximal repeat occurrence starts at the same position in all sequences and, as a consequence, ensure that all occurrences occur in distinct sequences from S. Hence any maximal repeat \(r = \alpha _pr_1\alpha _{p+1}\ldots r_\ell \alpha _{p+\ell }\) defines a unique maximal perfect haplotype block \((K,p+1,p+\ell )\). The value |K| is the number of occurrences of r. Also the set K can be derived from the occurrence positions of r in \(\mathbb {S}^+\), as any position in \(\mathbb {S}^+\) corresponds to a unique position in \(\mathbb {S}\). We omit further technical details here.

The maximal repeat occurrences in \(\mathbb {S}^+\) may be found using a suffix tree, which can be constructed in time linear in the size of the input data, O(kn), even for the large integer alphabets we have here [12]. The maximal repeat detection is also linear in the size of the input data [10, Section 7.12.1]. Therefore the overall time complexity is O(kn).

Linear-time method II: based on the positional BWT

Here we present our second algorithm to find all maximal perfect haplotype blocks in linear time. It works by scanning the haplotype matrix column by column while maintaining the positional Burrows–Wheeler Transform (pBWT) [13] of the current column. For simplicity of presentation we assume that all rows of the haplotype matrix S are distinct. Recall that the pBWT of S consists of a pair of arrays for each column of S: for each l, \(1\le l\le n\), we have arrays \(a_l\) and \(d_l\) of length k such that the array \(a_l\) is a permutation of the elements in the set \(\{1,2,\ldots ,k\}\) with the prefixes \(S\left[ a_l[1]\right] [1,l] \le \cdots \le S\left[ a_l[k]\right] [1,l]\) colexicographically (i.e. right-to-left lexicographically) sorted, and the array \(d_l\) indicates the index from which the current and the previous rows coincide. Formally, \(d_l[1] = l+1\) and for all r, \(1 < r \le k\), we have \(d_l[r] = 1 + \max \{j \in [1,l] : S\left[ a_l[r]\right] [j] \ne S\left[ a_l[r-1]\right] [j]\}.\) Further let us denote by \(a_l^{-1}\) the inverse permutation of \(a_l\). For readers familiar with string processing terminology, the arrays \(a_l\) and \(a_l^{-1}\) are analogous to the suffix array and the inverse suffix array, respectively, while the arrays \(d_l\) are analogous to the LCP array.

Conditions 1, 2 and 4 (equality, left-maximality and row-maximality) of Definition 1 can be stated in terms of the arrays \(a_l\) and \(d_l\) as follows.
Definition 2
A quadruple (i, j; x, y) with \(1\le i\le j\le n\) and \(1\le x<y\le k\) is called an available block if the following holds:
1. \(d_j[r] \le i\) for all \(r \in [x+1,y]\) (equality),
2. there exists at least one \(r \in [x+1,y]\) such that \(d_j[r] = i\) (left-maximality), and
3. (\(x = 1\) or \(d_j[x] > i\)) and (\(y = k\) or \(d_j[y+1] > i\)) (row-maximality).

The interval [x, y] of an available block (i, j; x, y) is called the colexicographic range of the block.

Lemma 1
Suppose we have a maximal perfect haplotype block (K, i, j). Then the set \(\{a_j^{-1}[r] \mid r \in K\}\) must be a contiguous range [x, y] of indices such that (i, j; x, y) is an available block.

This necessary condition follows immediately from Definitions 1 and 2 and the definition of the pBWT (arrays \(a_l\) and \(d_l\)). \(\square\)

Let us consider the set \(B_l\) of available blocks ending at column l. We have that \(|B_l| \le k\) because each available block corresponds to a distinct branching node in the trie of the reverses of \(\{S[1][1,l], \ldots , S[k][1,l]\}\), and the number of branching nodes in the trie is bounded from above by the number of leaves k. The branching nodes of the trie can be enumerated in O(k) time by using a standard algorithm [14] for enumerating LCP intervals of the LCP array of the trie, \(LCP_l[r] = l - d_l[r] + 1\). This gives us the colexicographic ranges [x, y] of all available blocks in \(B_l\). An example is shown in Fig. 2.

Fig. 2: Available blocks. Left: an example of a haplotype matrix up to column 6 with the two arrays \(a_6\) and \(a_6^{-1}\) on the right. Center: the colexicographically sorted rows and the array \(d_6\) listed on the right. Right: the trie of the reverses of the rows of the matrix. For example, the block \((\{1,2,4,5\},5,6)\) is available because \(a_6^{-1}(1) = 3\), \(a_6^{-1}(2) = 1\), \(a_6^{-1}(4) = 2\), \(a_6^{-1}(5) = 4\) is the consecutive range \([x,y] = [1,4]\), we have \(d_6[r] \le 5\) for all \(r \in [1+1,4]\) with \(d_6[3] = 5\), and we have \(x=1\) and \(d_6[4+1] = 6 > 5\). The repeat in the block is 00, and we see it is a branching node in the trie on the right.

The only thing left is to show how to check the right-maximality property of an available block. The following lemma gives the sufficient condition for this.

Lemma 2
An available block (i, j; x, y) corresponds to a maximal haplotype block (K, i, j) if and only if \(j = n\) or \(|\{S[a_j[r]][j+1] : r \in [x,y]\}| > 1\).

If \(j=n\), right-maximality according to Definition 1 holds trivially. If \(j<n\), right-maximality requires that there are two rows \(s,t \in S|_K\) for which \(s[j+1] \not = t[j+1]\). Since all rows s, t qualifying for this condition are within the colexicographic range [x, y] of our available block, the statement follows immediately. \(\square\)

To check the condition of Lemma 2 in constant time for \(j \ne n\), we build a bit vector \(V_j\) such that \(V_j[1] = 1\) and \(V_j[r] = 1\) if and only if \(S[a_j[r]][j+1] \ne S[a_j[r-1]][j+1]\). Now the block is right-maximal if and only if \(V_j[x+1,y]\) contains at least one 1-bit. We can build a vector of prefix sums of \(V_j\) to answer this question in constant time.

Time and space complexity

We assume the column stream model, where we can stream the haplotype matrix column by column. We can thus build the arrays \(d_l\), \(a_l\) and \(a_l^{-1}\) on the fly column by column [13], and also easily build the required prefix sums of the arrays \(V_l\) from these. The time is O(nk), since each of the n columns takes O(k) time to process; the sketch below illustrates the column-by-column update.
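The following Python sketch (ours, not from the paper's implementation) maintains the arrays \(a_l\) and \(d_l\) column by column in the spirit of Durbin's update [13]. It assumes a binary alphabet and uses 0-based indices internally, so the paper's 1-based \(d_l[r]\) corresponds to d[r] + 1 below.

def pbwt_columns(S):
    # Streams a binary haplotype matrix S (a list of equal-length 0/1
    # strings) column by column, yielding (l, a, d) after each column
    # l = 1..n.  Only the data of two adjacent columns is ever in memory.
    k, n = len(S), len(S[0])
    a = list(range(k))   # colexicographic order of the (empty) prefixes
    d = [0] * k          # d[r]: first column from which rows a[r-1], a[r] agree
    for l in range(n):
        a0, a1, d0, d1 = [], [], [], []
        p = q = l + 1    # sentinel: no agreement within the first l+1 columns
        for r in range(k):
            p, q = max(p, d[r]), max(q, d[r])
            if S[a[r]][l] == '0':
                a0.append(a[r]); d0.append(p); p = 0
            else:
                a1.append(a[r]); d1.append(q); q = 0
        a, d = a0 + a1, d0 + d1
        yield l + 1, a, d

From the yielded arrays, the colexicographic ranges of the available blocks ending at column l are obtained as the LCP intervals of \(LCP_l[r] = l - d_l[r] + 1\), as described above.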
The algorithm needs to keep in memory only the data for two adjacent columns at a time, so in space O(k) we can report the colexicographic ranges of all maximal blocks ending in each column \(l \in [1,n]\). If the colexicographic range of a block at column l is [x, y], then the rows in the original haplotype matrix are \(a_l[x], a_l[x+1], \ldots , a_l[y]\). There are O(nk) blocks and O(k) rows per block, so the time to report all rows explicitly is \(O(nk^2)\). In fact, a sharper bound that can also easily be achieved is \(O(nk+z)\) where \(z \in O(nk^2)\) is the size of the output. Alternatively, we can store a complete representation of the answer taking O(nk) space by storing all the \(a_l\) arrays and the colexicographic ranges of the maximal perfect blocks for each column, from which we can readily report all rows in any maximal perfect block in constant time per row.

Empirical evaluation

Since the algorithm of "Linear-time method I: based on suffix trees" section is mostly of theoretical interest, we evaluate only the pBWT-based algorithm presented in "Linear-time method II: based on the positional BWT" section. The source code is available from https://gitlab.com/bacazaux/haploblocks. As a baseline for comparison we use the implementation of the trie-based algorithm by Cunha et al. [8], available from the same gitlab site.

The experiments were run on a machine with an Intel Xeon E5-2680 v4 2.4 GHz CPU, which has a 35 MB Intel SmartCache. The machine has 256 gigabytes of memory at a speed of 2400 MT/s. The code was compiled with g++ using the -Ofast optimization flag. Our test data consists of chromosomes 2, 6 and 22 from phase three of the 1000 Genomes Project [2], which provides whole-genome sequences of 2504 individuals from multiple populations worldwide. We preprocessed the data by extracting all biallelic SNPs from the provided VCF files (see Footnote 4) and converting them to a binary haplotype matrix using our own program vcf2bm, also available from https://gitlab.com/bacazaux/haploblocks.

Our implementation has a user-defined parameter that adjusts the minimum size of a reported maximal perfect haplotype block (K, i, j), where size is defined as the width (\(j-i+1\)) times the number of rows (|K|) in the block. Table 1 shows the running times and memory usage of our implementation on the different chromosomes and for different settings of the minimum block size parameter. The larger the minimum block size, the faster the algorithm is, because there are fewer blocks to report. In general, it takes only a few minutes to process a complete human chromosome. Locating all 323,163,970 blocks of minimum size \(10^6\) in all 22 human autosomes (non-sex chromosomes) took in total 4 h and 26 min with a memory peak of 12.8 MB (data not shown).

Table 1: Running times and memory usage of our pBWT-based implementation

Table 2 shows a comparison of our implementation to the trie-based implementation from [8]. Our implementation is about 5 times faster on all datasets, and the memory consumption is up to 93 times smaller.

Table 2: Comparison of the trie-based implementation from [8] and our pBWT-based implementation with minimum block size \(10^6\)

It is now easy to apply the method for estimating a local selection coefficient from the size of maximal perfect haplotype blocks covering a certain genomic region, as presented in [8].
This method estimates the likelihood of observing a haplotype block for a given selection coefficient s and the time t since the onset of selection, following an approach presented by Chen et al. [15]. Therefore, chromosome-wide selection scans indicating the loci of maximum selection, as shown in Fig. 3 for the complete human chromosome 2 (size parameter \(10^6\)), can now be generated in less than half an hour.

Fig. 3: Selection scan for human chromosome 2. Shown is for each position of the chromosome the largest maximum likelihood estimate derived from any maximal perfect haplotype block overlapping that locus. It is easy to spot potential regions of high selection. The centromere, located around 93 Mbp, shows no signal as sequencing coverage is low here and no SNPs could be called.

Conclusion

In this paper we presented two algorithms that are able to find all maximal perfect haplotype blocks in a haplotype matrix of size \(k \times n\) in linear time O(kn). In particular the second method, based on the positional Burrows–Wheeler Transform, also performs extremely well in practice, as it allows for a streaming implementation with an extremely low memory footprint. While an initial implementation of the method is available from https://gitlab.com/bacazaux/haploblocks, a user-friendly software tool combining the algorithm presented here with the computation of the selection coefficient suggested in [8] remains to be developed.

Source code and test data are available from https://gitlab.com/bacazaux/haploblocks.

Footnotes
1. http://www.allofus.nih.gov.
2. Biologists propose to sequence the DNA of all life on Earth, by Elizabeth Pennisi. Science News, Feb. 24, 2017. https://doi.org/10.1126/science.aal0824.
3. Note that a similar procedure has been described by Lunter [11], where also a connection to the positional Burrows–Wheeler Transform is mentioned.
4. ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/.

References
1. Buniello A, MacArthur JAL, Cerezo M, Harris LW, Hayhurst J, Malangone C, McMahon A, Morales J, Mountjoy E, Sollis E, Suveges D, Vrousgou O, Whetzel PL, Amode R, Guillen JA, Riat HS, Trevanion SJ, Hall P, Junkins H, Flicek P, Burdett T, Hindorff LA, Cunningham F, Parkinson H. The NHGRI-EBI GWAS catalog of published genome-wide association studies, targeted arrays and summary statistics 2019. Nucl Acids Res. 2018;47(D1):1005–12. https://doi.org/10.1093/nar/gky1120.
2. Auton A, Brooks LD, Durbin RM, Garrison EP, Min Kang H, Korbel JO, Marchini JL, McCarthy S, McVean GA, Abecasis GR, 1000 Genomes Project Consortium. A global reference for human genetic variation. Nature. 2015;526(7571):68–74. https://doi.org/10.1038/nature15393.
3. Sudmant PH, Rausch T, Gardner EJ, Handsaker RE, Abyzov A, Huddleston J, Zhang Y, Ye K, Jun G, Hsi-Yang Fritz M, Konkel MK, Malhotra A, Stütz AM, Shi X, Paolo Casale F, Chen J, Hormozdiari F, Dayama G, Chen K, Malig M, Chaisson MJP, Walter K, Meiers S, Kashin S, Garrison E, Auton A, Lam HYK, Jasmine MuX, Alkan C, Antaki D, Bae T, Cerveira E, Chines P, Chong Z, Clarke L, Dal E, Ding L, Emery S, Fan X, Gujral M, Kahveci F, Kidd JM, Kong Y, Lameijer E-W, McCarthy S, Flicek P, Gibbs RA, Marth G, Mason CE, Menelaou A, Muzny DM, Nelson BJ, Noor A, Parrish NF, Pendleton M, Quitadamo A, Raeder B, Schadt EE, Romanovitch M, Schlattl A, Sebra R, Shabalin AA, Untergasser A, Walker JA, Wang M, Yu F, Zhang C, Zhang J, Zheng-Bradley X, Zhou W, Zichner T, Sebat J, Batzer MA, McCarroll SA, Mills RE, Gerstein MB, Bashir A, Stegle O, Devine SE, Lee C, Eichler EE, Korbel JO, The 1000 Genomes Project Consortium.
An integrated map of structural variation in 2504 human genomes. Nature. 2015;526(7571):75–81. https://doi.org/10.1038/nature15394.
4. Gudbjartsson DF, Helgason H, Gudjonsson SA, Zink F, Oddson A, Gylfason A, Besenbacher S, Magnusson G, Halldorsson BV, Hjartarson E, Sigurdsson GT, Stacey SN, Frigge ML, Holm H, Saemundsdottir J, Helgadottir HT, Johannsdottir H, Sigfusson G, Thorgeirsson G, Sverrisson JT, Gretarsdottir S, Walters GB, Rafnar T, Thjodleifsson B, Bjornsson ES, Olafsson S, Thorarinsdottir H, Steingrimsdottir T, Gudmundsdottir TS, Theodors A, Jonasson JG, Sigurdsson A, Bjornsdottir G, Jonsson JJ, Thorarensen O, Ludvigsson P, Gudbjartsson H, Eyjolfsson GI, Sigurdardottir O, Olafsson I, Arnar DO, Magnusson OT, Kong A, Masson G, Thorsteinsdottir U, Helgason A, Sulem P, Stefansson K. Large-scale whole-genome sequencing of the Icelandic population. Nat Genet. 2015;47:435–44. https://doi.org/10.1038/ng.3247.
5. Hehir-Kwa JY, Marschall T, Kloosterman WP, Francioli LC, Baaijens JA, Dijkstra LJ, Abdellaoui A, Koval V, Thung DT, Wardenaar R, Renkens I, Coe BP, Deelen P, de Ligt J, Lameijer E-W, van Dijk F, Hormozdiari F, Consortium TGotN, Bovenberg JA, de Craen AJM, Beekman M, Hofman A, Willemsen G, Wolffenbuttel B, Platteel M, Du Y, Chen R, Cao H, Cao R, Sun Y, Cao JS, Neerincx PBT, Dijkstra M, Byelas G, Kanterakis A, Bot J, Vermaat M, Laros JFJ, den Dunnen JT, de Knijff P, Karssen LC, van Leeuwen EM, Amin N, Rivadeneira F, Estrada K, Hottenga J-J, Kattenberg VM, van Enckevort D, Mei H, Santcroos M, van Schaik BDC, Handsaker RE, McCarroll SA, Ko A, Sudmant P, Nijman IJ, Uitterlinden AG, van Duijn CM, Eichler EE, de Bakker PIW, Swertz MA, Wijmenga C, van Ommen G-JB, Slagboom PE, Boomsma DI, Schönhuth A, Ye K, Guryev V. A high-quality human reference panel reveals the complexity and distribution of genomic structural variants. Nat Commun. 2016;7:12989. https://doi.org/10.1038/ncomms12989.
6. Turnbull C, Scott RH, Thomas E, Jones L, Murugaesu N, Pretty FB, Halai D, Baple E, Craig C, Hamblin A, Henderson S, Patch C, O'Neill A, Devereau A, Smith K, Martin AR, Sosinsky A, McDonagh EM, Sultana R, Mueller M, Smedley D, Toms A, Dinh L, Fowler T, Bale M, Hubbard TJP, Rendon A, Hill S, Caulfield MJ, 100 000 Genomes Project. The 100 000 genomes project: bringing whole genome sequencing to the NHS. BMJ. 2018;361:1687. https://doi.org/10.1136/bmj.k1687.
7. Gillespie JH. Population genetics—a concise guide. Baltimore: The Johns Hopkins University Press; 1998.
8. Cunha L, Diekmann Y, Kowada LAB, Stoye J. Identifying maximal perfect haplotype blocks. In: Advances in bioinformatics and computational biology: 11th Brazilian symposium on bioinformatics, BSB 2018, Niterói, Brazil, October 30 - November 1, 2018, Proceedings; 2018. p. 26–37. https://doi.org/10.1007/978-3-030-01722-4_3.
9. Alanko J, Bannai H, Cazaux B, Peterlongo P, Stoye J. Finding all maximal perfect haplotype blocks in linear time. In: Huber, K.T., Gusfield, D. (eds.) 19th International Workshop on Algorithms in Bioinformatics (WABI 2019). LIPIcs, vol. 143:8, p. 1–9 (2019). https://doi.org/10.4230/LIPIcs.WABI.2019.8.
10. Gusfield D. Algorithms on strings, trees, and sequences: computer science and computational biology. Cambridge: Cambridge University Press; 1997.
11. Lunter G. Haplotype matching in large cohorts using the Li and Stephens model. Bioinformatics. 2019;35(5):798–806. https://doi.org/10.1093/bioinformatics/bty735.
12. Farach M. Optimal suffix tree construction with large alphabets. In: Proceedings 38th annual symposium on foundations of computer science.
New York: IEEE; 1997. p. 137–143.
13. Durbin R. Efficient haplotype matching and storage using the positional Burrows–Wheeler transform (PBWT). Bioinformatics. 2014;30(9):1266–72. https://doi.org/10.1093/bioinformatics/btu014.
14. Abouelhoda MI, Kurtz S, Ohlebusch E. Replacing suffix trees with enhanced suffix arrays. J Discret Algorithms. 2004;2(1):53–86. https://doi.org/10.1016/S1570-8667(03)00065-0.
15. Chen H, Hey J, Slatkin M. A hidden Markov model for investigating recent positive selection through haplotype structure. Theor Popul Biol. 2015;99:18–30. https://doi.org/10.1016/j.tpb.2014.11.001.

We thank the organizers of DSB 2019 (http://www.dsb2019.gitlab.io) for giving us the opportunity to present earlier work in this area and start a discussion from which the present results originated. We would also like to thank Michel T. Henrichs for providing a script to convert VCF files to haplotype matrices and for assisting with the production of Fig. 3.

The work presented in this article and the publication cost were funded from the authors' home institutions, supported by JSPS KAKENHI (Grant No. JP16H02783) and ANR Hydrogen (Grant No. ANR-14-CE23-0001).

Department of Computer Science, University of Helsinki, Helsinki, Finland: Jarno Alanko & Bastien Cazaux
Department of Informatics, Kyushu University, Fukuoka, Japan: Hideo Bannai
Inria, CNRS, Irisa, Univ. Rennes, Rennes, France: Pierre Peterlongo
Faculty of Technology and Center for Biotechnology (CeBiTec), Bielefeld University, Bielefeld, Germany: Jens Stoye

All authors contributed equally. All authors read and approved the final manuscript. Correspondence to Jens Stoye.

Alanko, J., Bannai, H., Cazaux, B. et al. Finding all maximal perfect haplotype blocks in linear time. Algorithms Mol Biol 15, 2 (2020). https://doi.org/10.1186/s13015-020-0163-6

Population genomics. Selection coefficient. Haplotype block. Positional Burrows–Wheeler Transform. Selected papers from WABI2019.
Efficient kNN Classification with Different Numbers of Nearest Neighbors

== Presented by ==

Cooper Brooke, Daniel Fagan, Maya Perelman

== Introduction ==

Unlike popular parametric approaches, which first use training observations to learn a model and subsequently predict test samples with this model, the non-parametric k-nearest neighbors (kNN) method classifies observations with a majority-rules approach. kNN classifies test samples by assigning them the majority label of their k closest training observations (neighbours). This method has become very popular due to its strong performance and easy implementation.

There are two main approaches to conducting kNN classification. The first is to use a fixed k value to classify all test samples. The second is to use a different k value for each test sample. The former, while easy to implement, has proven to be impractical in machine learning applications. Therefore, interest lies in developing an efficient way to apply a different optimal k value for each test sample. The authors of this paper present the kTree and k*Tree methods to address this research question.

== Previous Work ==

Previous work on finding an optimal fixed k value for all test samples is well-studied. Zhang et al. [1] incorporated a certainty factor measure to solve for an optimal fixed k. This resulted in the conclusion that k should be <math>\sqrt{n}</math> (where n is the number of training samples) when n > 100. The method Song et al. [2] explored involved selecting a subset of the most informative samples from neighbourhoods. Vincent and Bengio [3] took the unique approach of designing a k-local hyperplane distance to solve for k. Premachandran and Kakarala [4] selected a robust k using the consensus of multiple rounds of kNN. These fixed-k methods are valuable; however, they are impractical for data mining and machine learning applications.

Finding an efficient approach to assigning varied k values has also been previously studied. Tuning approaches such as the ones taken by Zhu et al. as well as Sahigara et al. have been popular. Zhu et al. [5] determined that optimal k values should be chosen using cross validation, while Sahigara et al. [6] proposed using Monte Carlo validation to select varied k parameters. Other learning approaches such as those taken by Zheng et al. and Góra and Wojna also show promise. Zheng et al. [7] applied a reconstruction framework to learn suitable k values. Góra and Wojna [8] proposed using rule induction and instance-based learning to learn optimal k values for each test sample. While all these methods are valid, their processes of either learning varied k values or scanning all training samples are time-consuming.

== Motivation ==

Due to the previously mentioned drawbacks of fixed-k and current varied-k kNN classification, the paper's authors sought to design a new approach to solve for different k values. The kTree and k*Tree approach seeks to calculate optimal values of k while avoiding computationally costly steps such as cross validation. A secondary motivation of this research was to ensure that the kTree method would perform better than kNN using fixed values of k, given that running costs would be similar in this instance.
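As a point of reference for what classification with a per-sample k means operationally, the following is a hedged sketch, not code from the paper: the per-sample k values are simply assumed to be given (in the paper they would be produced by kTree), and each test sample is classified by majority vote among its own k nearest neighbours.

<syntaxhighlight lang="python">
# Minimal sketch of varied-k kNN prediction: each test sample uses its own k.
# How the per-sample k values are obtained (e.g., by kTree) is outside this sketch.
import numpy as np
from collections import Counter

def varied_k_predict(X_train, y_train, X_test, k_per_sample):
    preds = []
    for x, k in zip(X_test, k_per_sample):
        dist = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
        nearest = np.argsort(dist)[:k]               # indices of the k neighbours
        preds.append(Counter(y_train[nearest]).most_common(1)[0][0])
    return np.array(preds)

rng = np.random.default_rng(1)
X_train = rng.normal(size=(100, 5))
y_train = rng.integers(0, 2, size=100)
X_test = rng.normal(size=(3, 5))
print(varied_k_predict(X_train, y_train, X_test, k_per_sample=[3, 7, 11]))
</syntaxhighlight>

The contribution of kTree is precisely to produce sensible per-sample k values in a training stage so that the test stage stays cheap.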
== Approach ==

=== kTree Classification ===

The proposed kTree method is illustrated by the following flow chart:

[[File:Approach_Figure_1.png | center | 800x800px]]

==== Reconstruction ====

The first step is to use the training samples to reconstruct themselves. The goal of this is to find the matrix of correlations between the training samples, <math>\textbf{W}</math>, such that the distance between an individual training sample and the corresponding correlation vector multiplied by the entire training set is minimized. This least-squares loss function, where <math>\mathbf{X}</math> represents the training set, can be written as:

$$\begin{aligned} \mathop{min}_{\textbf{W}} \sum_{i=1}^n ||Xw_i - x_i||^2 \end{aligned}$$

In addition, an <math>l_1</math> regularization term multiplied by a tuning parameter, <math>\rho_1</math>, is added to ensure that sparse results are generated, as the objective is to minimize the number of training samples that the test samples will eventually depend on. The least-squares loss function is then further modified to account for samples that have similar values for certain features yielding similar results. After some transformations, this second regularization term, which has tuning parameter <math>\rho_2</math>, is:

$$\begin{aligned} R(W) = Tr(\textbf{W}^T \textbf{X}^T \textbf{LXW}) \end{aligned}$$

where <math>\mathbf{L}</math> is a Laplacian matrix that indicates the relationship between features. This gives a final objective function of:

$$\begin{aligned} \mathop{min}_{\textbf{W}} \sum_{i=1}^n ||Xw_i - x_i||^2 + \rho_1||\textbf{W}||_1 + \rho_2R(\textbf{W}) \end{aligned}$$

Since this is a convex function, an iterative method can be used to optimize it and find the optimal solution <math>\mathbf{W^*}</math>. (A hedged implementation sketch of this reconstruction step is given after the References section below.)

==== Calculate ''k'' for training set ====

Each element <math>w_{ij}</math> in <math>\textbf{W*}</math> represents the correlation between the ith and jth training samples. If a value is 0, it can be concluded that the jth training sample has no effect on the ith training sample, which means that it should not be used in the prediction of the ith training sample. Consequently, all non-zero values in the <math>w_{\cdot i}</math> column vector would be useful in predicting the ith training sample, so the number of these non-zero elements is equal to the optimal ''k'' value for that sample. For example, if there was a 4x4 training set where <math>\textbf{W*}</math> had the form:

[[File:Approach_Figure_2.png | center | 400x400px]]

The optimal ''k'' value for training sample 1 would be 2, since the correlations between training sample 1 and training samples 2 and 4 are non-zero.

=== k*Tree Classification ===

The k*Tree method is an extension of kTree that further reduces the running cost of the test stage: additional information (such as the training samples associated with each node) is stored in the tree, so that only a subsample of the training samples needs to be scanned when classifying a test sample (see the running-cost comparisons in the Experiments section).

== Experiments ==

In order to assess the performance of the proposed method against existing methods, a number of experiments were performed to measure classification accuracy and run time. The experiments were run on twenty public datasets provided by the UCI Repository of Machine Learning Data, and contained a mix of data types varying in size, in dimensionality, in the number of classes, and in the imbalanced nature of the data. Ten-fold cross-validation was used to measure classification accuracy, and the following methods were compared against:

# k-Nearest Neighbor: The classical kNN approach with k set to k=1,5,10,20 and the square root of the sample size [19]; the best result was reported.
# kNN-Based Applicability Domain Approach (AD-kNN) [33]
# kNN Method Based on Sparse Learning (S-kNN) [20]
# kNN Based on Graph Sparse Reconstruction (GS-kNN) [30]
# Filtered Attribute Subspace-based Bagging with Injected Randomness (FASBIR) [62], [63]
# Landmark-based Spectral Clustering kNN (LC-kNN) [64]

The experimental results were then assessed based on classification tasks that focused on different sample sizes, and tasks that focused on different numbers of features.

'''A. Experimental Results on Different Sample Sizes'''

The running cost and (cross-validation) classification accuracy based on experiments on ten UCI datasets can be seen in Table I below.

[[File:Table_I_kNN.png | center | 800x800px]]

The following key results are noted:
* Regarding classification accuracy, the proposed methods (kTree and k*Tree) outperformed kNN, AD-kNN, FASBIR, and LC-kNN on all datasets by 1.5%-4.5%, but had no notable improvements compared to GS-kNN and S-kNN.
* Classification methods which involved learning optimal k-values (for example the proposed kTree and k*Tree methods, or S-kNN, GS-kNN, AD-kNN) outperformed the methods with predefined k-values, such as traditional kNN.
* The proposed k*Tree method had the lowest running cost of all methods. However, the k*Tree method was still outperformed in terms of classification accuracy by GS-kNN and S-kNN, but ran on average 15 000 times faster than either method.

'''B. Experimental Results on Different Feature Numbers'''

The goal of this section was to evaluate the robustness of all methods under differing numbers of features; results can be seen in Table II below. The Fisher score [65] approach was used to rank and select the most informative features in the datasets.

[[File:Table_II_kNN.png | center | 800x800px]]

From Table II, the proposed kTree and k*Tree approaches outperformed kNN, AD-kNN, FASBIR and LC-kNN when tested for varying feature numbers. The S-kNN and GS-kNN approaches remained the best in terms of classification accuracy, but were greatly outperformed in terms of running cost by k*Tree. The reason is that k*Tree only scans a subsample of the training samples for kNN classification, while S-kNN and GS-kNN scan all training samples.

== Conclusion ==

This paper introduced two novel approaches for kNN classification algorithms that can determine optimal k-values for each test sample. The proposed kTree and k*Tree methods achieve efficient classification by designing a training step that reduces the run time of the test stage. Based on the experimental results for varying sample sizes and differing feature numbers, it was observed that the proposed methods outperformed existing ones in terms of running cost while still achieving similar or better classification accuracies. Future areas of investigation could focus on the improvement of kTree and k*Tree for data with large numbers of features.

== Critiques ==

* The paper only assessed classification accuracy through cross-validation accuracy. However, it would be interesting to investigate how the proposed methods perform using different metrics, such as AUC, precision-recall curves, or in terms of holdout test set accuracy.
* The authors noted that some of the UCI datasets contained imbalanced data (such as the Climate and German data sets) while others did not. However, the nature of the class imbalance was not extreme, and the effect of imbalanced data on algorithm performance was not discussed or assessed.
Moreover, it would have been interesting to see how the proposed algorithms performed on highly imbalanced datasets in conjunction with common techniques to address imbalance (e.g. oversampling, undersampling, etc.).
* While the authors contrast their kTree and k*Tree approach with different kNN methods, the paper could contrast their results with more of the approaches discussed in the Related Work section of their paper. For example, it would be interesting to see how the kTree and k*Tree results compare to Góra and Wojna's varied optimal-k method.

== References ==

[1] C. Zhang, Y. Qin, X. Zhu, and J. Zhang, "Clustering-based missing value imputation for data preprocessing," in Proc. IEEE Int. Conf., Aug. 2006, pp. 1081–1086.

[2] Y. Song, J. Huang, D. Zhou, H. Zha, and C. L. Giles, "IKNN: Informative K-nearest neighbor pattern classification," in Knowledge Discovery in Databases. Berlin, Germany: Springer, 2007, pp. 248–264.

[3] P. Vincent and Y. Bengio, "K-local hyperplane and convex distance nearest neighbor algorithms," in Proc. NIPS, 2001, pp. 985–992.

[4] V. Premachandran and R. Kakarala, "Consensus of k-NNs for robust neighborhood selection on graph-based manifolds," in Proc. CVPR, Jun. 2013, pp. 1594–1601.

[5] X. Zhu, S. Zhang, Z. Jin, Z. Zhang, and Z. Xu, "Missing value estimation for mixed-attribute data sets," IEEE Trans. Knowl. Data Eng., vol. 23, no. 1, pp. 110–121, Jan. 2011.

[6] F. Sahigara, D. Ballabio, R. Todeschini, and V. Consonni, "Assessing the validity of QSARS for ready biodegradability of chemicals: An applicability domain perspective," Current Comput.-Aided Drug Design, vol. 10, no. 2, pp. 137–147, 2013.

[7] S. Zhang, M. Zong, K. Sun, Y. Liu, and D. Cheng, "Efficient kNN algorithm based on graph sparse reconstruction," in Proc. ADMA, 2014, pp. 356–369.

[8] X. Zhu, L. Zhang, and Z. Huang, "A sparse embedding and least variance encoding approach to hashing," IEEE Trans. Image Process., vol. 23, no. 9, pp. 3737–3750, Sep. 2014.

[19] U. Lall and A. Sharma, "A nearest neighbor bootstrap for resampling hydrologic time series," Water Resour. Res., vol. 32, no. 3, pp. 679–693, 1996.

[20] D. Cheng, S. Zhang, Z. Deng, Y. Zhu, and M. Zong, "KNN algorithm with data-driven k value," in Proc. ADMA, 2014, pp. 499–512.

[30] S. Zhang, M. Zong, K. Sun, Y. Liu, and D. Cheng, "Efficient kNN algorithm based on graph sparse reconstruction," in Proc. ADMA, 2014, pp. 356–369.

[33] F. Sahigara, D. Ballabio, R. Todeschini, and V. Consonni, "Assessing the validity of QSARS for ready biodegradability of chemicals: An applicability domain perspective," Current Comput.-Aided Drug Design, vol. 10, no. 2, pp. 137–147, 2013.

[62] Z. H. Zhou and Y. Yu, "Ensembling local learners through multimodal perturbation," IEEE Trans. Syst. Man, B, vol. 35, no. 4, pp. 725–735, Apr. 2005.

[63] Z. H. Zhou, Ensemble Methods: Foundations and Algorithms. London, U.K.: Chapman & Hall, 2012.

[64] Z. Deng, X. Zhu, D. Cheng, M. Zong, and S. Zhang, "Efficient kNN classification algorithm for big data," Neurocomputing, vol. 195, pp. 143–148, Jun. 2016.

[65] K. Tsuda, M. Kawanabe, and K.-R. Müller, "Clustering with the fisher score," in Proc. NIPS, 2002, pp. 729–736.
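== Illustrative Sketch: Reconstruction Step ==

As referenced in the Approach section, the following is a hedged sketch of the reconstruction step, not code from the paper: it approximates the sparse self-reconstruction problem column by column with an off-the-shelf Lasso. The Laplacian term <math>\rho_2 R(\textbf{W})</math> is omitted for simplicity, and excluding each sample from its own reconstruction is an assumption of this sketch. The per-sample ''k'' is then read off as the number of non-zero weights per column.

<syntaxhighlight lang="python">
# Hedged sketch: approximate min ||X w_i - x_i||^2 + rho_1 ||w_i||_1 per sample
# with scikit-learn's Lasso; columns of the design matrix are training samples.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))            # 50 training samples, 10 features
n = X.shape[0]
W = np.zeros((n, n))

for i in range(n):
    mask = np.arange(n) != i             # leave sample i out of its own fit
    lasso = Lasso(alpha=0.05, fit_intercept=False, max_iter=10000)
    lasso.fit(X.T[:, mask], X[i])        # design: features x (n-1) samples
    W[mask, i] = lasso.coef_

optimal_k = (np.abs(W) > 1e-8).sum(axis=0)   # non-zeros per column = k per sample
print(optimal_k[:10])
</syntaxhighlight>

The counts in optimal_k correspond to the per-sample ''k'' values described above; the paper's iterative solver additionally handles the Laplacian smoothness term.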
Problems in Mathematics

Category: Linear Algebra (by Yu, published 08/01/2016, last modified 11/17/2017)

Find a Formula for a Linear Transformation

If $L:\R^2 \to \R^3$ is a linear transformation whose values on two given vectors of $\R^2$ are specified, (a) find the value of $L$ on a third given vector, and (b) find the formula for $L\left( \begin{bmatrix} x \\ y \end{bmatrix}\right)$. If you think you can solve (b), then skip (a), solve (b) first, and use the result of (b) to answer (a). (Part (a) is an exam problem of Purdue University.)

Find the Rank of the Matrix $A+I$ if Eigenvalues of $A$ are $1, 2, 3, 4, 5$

Let $A$ be an $n$ by $n$ matrix with entries in the complex numbers $\C$ whose only eigenvalues are $1,2,3,4,5$, possibly with multiplicities. What is the rank of the matrix $A+I_n$, where $I_n$ is the $n$ by $n$ identity matrix? (University of California, Berkeley exam)

Stochastic Matrix (Markov Matrix) and its Eigenvalues and Eigenvectors

(a) Let \[A=\begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix}\] be a matrix such that $a_{11}+a_{12}=1$ and $a_{21}+a_{22}=1$. Namely, the sum of the entries in each row is $1$. (Such a matrix is called a (right) stochastic matrix, also termed a probability matrix, transition matrix, substitution matrix, or Markov matrix.) Prove that the matrix $A$ has an eigenvalue $1$.
(b) Find all the eigenvalues of the matrix \[B=\begin{bmatrix} 0.3 & 0.7\\ 0.6 & 0.4 \end{bmatrix}.\]
(c) For each eigenvalue of $B$, find the corresponding eigenvectors.

The Subspace of Matrices that are Diagonalized by a Fixed Matrix

Suppose that $S$ is a fixed invertible $3$ by $3$ matrix. This question is about all the matrices $A$ that are diagonalized by $S$, so that $S^{-1}AS$ is diagonal. Show that these matrices $A$ form a subspace of the space of $3$ by $3$ matrices. (MIT, Massachusetts Institute of Technology exam)

Find all Values of x such that the Given Matrix is Invertible

Let \[ A=\begin{bmatrix} 2 & 0 & 10 \\ 0 & 7+x & -3 \\ 0 & 4 & x \end{bmatrix}.\] Find all values of $x$ such that $A$ is invertible. (Stanford University linear algebra exam)

Equivalent Conditions to be a Unitary Matrix

A complex matrix is called unitary if $\overline{A}^{\trans} A=I$. The inner product $(\mathbf{x}, \mathbf{y})$ of complex vectors $\mathbf{x}$, $\mathbf{y}$ is defined by $(\mathbf{x}, \mathbf{y}):=\overline{\mathbf{x}}^{\trans} \mathbf{y}$. The length of a complex vector $\mathbf{x}$ is defined to be $||\mathbf{x}||:=\sqrt{(\mathbf{x}, \mathbf{x})}$. Let $A$ be an $n \times n$ complex matrix. Prove that the following are equivalent.
(a) The matrix $A$ is unitary.
(b) $||A \mathbf{x}||=||\mathbf{x}||$ for any $n$-dimensional complex vector $\mathbf{x}$.
(c) $(A\mathbf{x}, A\mathbf{y})=(\mathbf{x}, \mathbf{y})$ for any $n$-dimensional complex vectors $\mathbf{x}, \mathbf{y}$.

Finite Order Matrix and its Trace

Let $A$ be an $n\times n$ matrix and suppose that $A^r=I_n$ for some positive integer $r$. Then show that
(a) $|\tr(A)|\leq n$.
(b) If $|\tr(A)|=n$, then $A=\zeta I_n$ for an $r$-th root of unity $\zeta$.
(c) $\tr(A)=n$ if and only if $A=I_n$.

Solve a System of Linear Equations by Gauss-Jordan Elimination

Solve the following system of linear equations using Gauss-Jordan elimination.
\begin{align*}
6x+8y+6z+3w &= -3 \\
6x-8y+6z-3w &= 3\\
8y - 6w &= 6
\end{align*}

A Matrix is Invertible If and Only If It is Nonsingular

In this problem, we will show that the concept of non-singularity of a matrix is equivalent to the concept of invertibility. That is, we will prove that a matrix $A$ is nonsingular if and only if $A$ is invertible.
(a) Show that if $A$ is invertible, then $A$ is nonsingular.
(b) Let $A, B, C$ be $n\times n$ matrices such that $AB=C$. Prove that if either $A$ or $B$ is singular, then so is $C$.
(c) Show that if $A$ is nonsingular, then $A$ is invertible.

Properties of Nonsingular and Singular Matrices

An $n \times n$ matrix $A$ is called nonsingular if the only solution of the equation $A \mathbf{x}=\mathbf{0}$ is the zero vector $\mathbf{x}=\mathbf{0}$. Otherwise $A$ is called singular.
(a) Show that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
(b) Show that if $A$ is nonsingular, then the column vectors of $A$ are linearly independent.
(c) Show that an $n \times n$ matrix $A$ is nonsingular if and only if the equation $A\mathbf{x}=\mathbf{b}$ has a unique solution for any vector $\mathbf{b}\in \R^n$. Do not use the fact that a matrix is nonsingular if and only if the matrix is invertible.

Solving a System of Linear Equations Using Gaussian Elimination

Solve the following system of linear equations using Gaussian elimination.
\begin{align*}
x+2y+3z &= 4 \\
5x+6y+7z &= 8\\
9x+10y+11z &= 12
\end{align*}

How to Find Eigenvalues of a Specific Matrix

Find all eigenvalues of the following $n \times n$ matrix.
\[
A=\begin{bmatrix}
0 & 0 & \cdots & 0 & 1 \\
1 & 0 & \cdots & 0 & 0\\
0 & 1 & \cdots & 0 & 0\\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & 1 & 0
\end{bmatrix}
\]

If Every Trace of a Power of a Matrix is Zero, then the Matrix is Nilpotent

Let $A$ be an $n \times n$ matrix such that $\tr(A^m)=0$ for all $m \in \N$. Then prove that $A$ is a nilpotent matrix, namely that there exists a positive integer $k$ such that $A^k$ is the zero matrix.

Questions About the Trace of a Matrix

Let $A=(a_{i j})$ and $B=(b_{i j})$ be $n\times n$ real matrices for some $n \in \N$. Then answer the following questions about the trace of a matrix.
(a) Express $\tr(AB^{\trans})$ in terms of the entries of the matrices $A$ and $B$. Here $B^{\trans}$ is the transpose matrix of $B$.
(b) Show that $\tr(AA^{\trans})$ is the sum of the squares of the entries of $A$.
(c) Show that if $A$ is a nonzero symmetric matrix, then $\tr(A^2)>0$.

Linearly Dependent/Independent Vectors of Polynomials

Let $p_1(x), p_2(x), p_3(x), p_4(x)$ be (real) polynomials of degree at most $3$. Which (if any) of the following two conditions is sufficient for the conclusion that these polynomials are linearly dependent?
(a) At $1$ each of the polynomials has the value $0$. Namely $p_i(1)=0$ for $i=1,2,3,4$.
(b) At $0$ each of the polynomials has the value $1$. Namely $p_i(0)=1$ for $i=1,2,3,4$.
(University of California, Berkeley)

Possibilities of the Number of Solutions of a Homogeneous System of Linear Equations

Here is a very short true or false problem.

Common Eigenvector of Two Matrices and Determinant of Commutator

Let $A$ and $B$ be $n\times n$ matrices.
Suppose that these matrices have a common eigenvector $\mathbf{x}$. Show that $\det(AB-BA)=0$.

Transpose of a Matrix and Eigenvalues and Related Questions

Let $A$ be an $n \times n$ real matrix. Prove the following.
(a) The matrix $AA^{\trans}$ is a symmetric matrix.
(b) The set of eigenvalues of $A$ and the set of eigenvalues of $A^{\trans}$ are equal.
(c) The matrix $AA^{\trans}$ is non-negative definite. (An $n\times n$ matrix $B$ is called non-negative definite if for any $n$-dimensional vector $\mathbf{x}$, we have $\mathbf{x}^{\trans}B \mathbf{x} \geq 0$.)
(d) All the eigenvalues of $AA^{\trans}$ are non-negative.

Nilpotent Matrix and Eigenvalues of the Matrix

An $n\times n$ matrix $A$ is called nilpotent if $A^k=O$, where $O$ is the $n\times n$ zero matrix. Prove the following.
(a) The matrix $A$ is nilpotent if and only if all the eigenvalues of $A$ are zero.
(b) The matrix $A$ is nilpotent if and only if $A^n=O$.

Determinant/Trace and Eigenvalues of a Matrix

Let $A$ be an $n\times n$ matrix and let $\lambda_1, \dots, \lambda_n$ be its eigenvalues. Prove that
(1) $$\det(A)=\prod_{i=1}^n \lambda_i$$
(2) $$\tr(A)=\sum_{i=1}^n \lambda_i$$
Here $\det(A)$ is the determinant of the matrix $A$ and $\tr(A)$ is the trace of the matrix $A$. Namely, prove that (1) the determinant of $A$ is the product of its eigenvalues, and (2) the trace of $A$ is the sum of the eigenvalues.
Rahul Khatri, Michael Schmidt &

Industrial enterprises represent a significant portion of electricity consumers, with the potential of providing demand-side energy flexibility from their production processes and on-site energy assets. Methods are needed for the active and profitable participation of such enterprises in the electricity markets, especially those with variable prices, with which the energy flexibility available in their manufacturing, utility and energy systems can be assessed and quantified. This paper presents a generic model library equipped with optimal control for energy flexibility purposes. The components in the model library represent the different technical units of an industrial enterprise on the material, media and energy flow levels, together with their process constraints. The paper also presents a case study simulation of a steel-powder manufacturing plant using the model library. Its energy flexibility was assessed when the plant procured its electrical energy at fixed and at variable electricity prices. In the simulated case study, flexibility use at dynamic prices resulted in a 6% cost reduction compared to a fixed-price scenario, with battery storage and the manufacturing system making the largest contributions to flexibility.

Ever increasing energy demands, the integration of variable renewable energy resources, and less reliance on building new generation capacity from conventional power plants are resulting in an increased challenge for power systems to match supply and demand at all times during their operation. This gap between supply and demand is referred to as the flexibility gap by the authors of Papaefthymiou et al. (2018). Among the various options to address this gap, demand-side energy flexibility (DSEF) is considered one viable solution, in which electricity consumers adapt (increase, decrease or shift) their energy consumption; this is also referred to as demand response (DR) (Roesch et al. 2019). The focus of this work lies in developing models and methods that allow industrial consumers to evaluate their DSEF potential and to participate in DR in a semi-automated or fully automated manner.

The energy flexibility of industrial enterprises could play a significant role in closing the flexibility gap. In Germany, the industrial sector consumes 44% of the total electricity (Lund et al. 2015), a significant share. According to an Umweltbundesamt report from 2015 (Langrock et al. 2015), there is a technical DR potential of 6.5 GW from the industrial sector in Germany for at least one hour; however, the authors of Stede (2016) argue that under current regulatory and technical barriers, only 3.5 GW of the DR potential is estimated to be viable.

Two categories of DSEF can be distinguished according to SEDC (2016): One is explicit DSEF, where consumers receive control signals from system operators or committed schedules from external agents and directly adjust their power consumption accordingly. In return, they receive a contracted remuneration. The other is implicit DSEF, in which consumers react to price signals. Various price-based programs are being introduced to leverage implicit DSEF, such as time of use (ToU), critical peak pricing (CPP) and real-time pricing (RTP) (Albadi and El-Saadany 2008). To facilitate DSEF, new market actors such as "aggregators" are also evolving, which act as intermediaries between end-consumers and the electricity market.
Their main role is to pool distributed units and market their generation capacity or DSEF on several markets, such as the spot market with variable prices, the balancing market and others (Stede et al. 2020). According to a white paper on the Electricity Market 2.0 by the Federal Ministry for Economic Affairs and Energy (BMWi), Germany (German Federal Ministry for Economic Affairs and Energy (BMWi) 2015), "aggregators can also open up the potential for flexibilization in medium-sized and small electricity consumers to the extent that they have direct access to energy markets". It is expected that industrial enterprises will also be able to procure their electricity at variable prices in the future. This might apply not only to large and energy-intensive industries but also to small and medium-sized enterprises (SMEs).

In addition to easy access to electricity markets, another challenge for industrial enterprises is the effort involved in implementing and operating a local process control system that responds to dynamic price signals. Increased use of "digitalization" could be a key driver to tackle this challenge and to leverage energy flexibility. According to the report "SMEs Digital" from the BMWi (German Federal Ministry for Economic Affairs and Energy (BMWi) 2018), 88% of all SMEs see a connection between digitalization and corporate success, but for 51% of the companies surveyed, digitalization is not a core part of their business strategy. The increased use of digitalization could therefore also prove to be a door opener for industrial enterprises, especially SMEs, to consume their energy flexibly and to participate profitably in the electricity markets of today and tomorrow.

Against this background, key research questions are whether it is really worthwhile for industrial enterprises to exploit their energy flexibility on the basis of dynamic prices, and how the technical entry barriers can be lowered. Methods and models are therefore needed which, on the one hand, can characterize the energy flexibility, the achievable profits and the associated effort in a company-specific and operation-focused manner and, on the other hand, enable a partially or fully automated implementation.

The characterization and modeling of industrial enterprises' flexibility has been intensively investigated over the last years. Some research works (Tristán et al. 2020; Schott et al. 2019; Pierri et al. 2020; Weeber et al. 2017) present data models and characterization methodologies for the energy flexibility measures (EFMs) available on-site, while others (Seitz et al. 2019; Roesch et al. 2019; Schott et al. 2018) also discuss the overall conceptual layout for using EFMs, presenting IT-based synchronization platforms between flexible companies and external demand-response agents. When comprehensive automated data acquisition and process control are introduced as part of digitalization, the basis is also created for the use of advanced methods of data analysis, optimization and control (Scheidt et al. 2020). To also use these methods to provide automated use of energy flexibility, models are needed that can represent the energy flows themselves as well as the flexibility in the energy flows. This in turn requires modeling of the energy-relevant devices, plants and processes as well as all interdependencies. On the one hand, these models can be used in off-line simulations to assess the flexibility potential, or to assist in decisions on how to best use or increase flexibility.
On the other hand, as part of model-based controls, they can help to realize semi-automated or fully automated operations with flexibility utilization, e.g. in response to external price signals. The paper (Roesch et al. 2019) argues that industrial enterprises face particular challenges in the optimal use of flexibility due to the complex nature of their production processes and interdependencies. In addition, each industrial enterprise differs from another in its type of manufacturing, final products and associated processes. This complexity challenge also applies in particular to the modeling of energy flows and their flexibilities, as well as to the use of model-based control and optimization approaches for the automated use of flexibility.

The contribution of this work to the solution of the above-mentioned research questions and challenges is the development of a model library that is as generic as possible, in order to be able to map the energy flows and energy flexibilities of various industrial companies with reasonable effort and to use them for model-based control and optimization. For this purpose, the models presented capture the energy, material and media flows within industrial companies, take into account so-called process dependencies and boundary conditions with sufficient accuracy, and at the same time ensure a minimum complexity for mathematical optimization. To be applicable to as many companies as possible, the models are kept as generic and modular as possible. The developed model library is published as an open-source tool under the GNU General Public License v3.0 (Offenburg University 2021).

In the remainder, the concept of the generic industrial enterprise model library is presented in detail, including the generic modeling of the manufacturing systems (MFS), the technical building services (TBS) and the energy systems (ES). Then, in a case study, the model library is used to model and analyze the flexibilities of a steel-powder manufacturing plant, before the paper ends with a conclusion and outlook.

A generic industrial enterprise model library

A general industrial enterprise can be considered as composed of various industrial systems that work together to execute the intended production through core and auxiliary processes. The systems are interconnected by material, energy, media and data flows (Beier 2017). Focusing on the modeling of energy flexibility, the proposed generic model library considers an industrial enterprise to be composed of three major technical units, namely the MFS, TBS and ES, see Fig. 1. This research approach is taken from the existing research works of Beier et al. (2015) and Tristán et al. (2020), where the industrial facility is represented by these sub-units in a methodology for energy-flexible industrial systems.

Fig. 1: A generic representation of an industrial enterprise for energy flexibility modeling purposes

Each of these three units can contain different types of sub-units representing different generic key functionalities in an industrial enterprise, e.g. cooling or heating systems as part of the TBS. These sub-units can in turn be broken down to the lowest level, where concrete individual machines, devices and plants are modeled as the ultimate consumers and producers of electricity that directly affect the electric load profiles and flexibility. To model the enterprise's energy flexibility and DR potential, the control of the technical units has to be modeled in addition to the technical units themselves.
In the generic model library, these control and management functionalities are combined and modeled in an additional fourth technical unit, the energy and manufacturing control (EMC) unit. It contains the optimal control algorithms, coordinates the processes and energy consumption of the defined technical units, and provides interfaces to external signals depending on the participation mechanism. The sections below describe each of the technical units and the relevant modeling parameters and constraints, which are also the inputs for the modules of the library.

Manufacturing systems - MFS

The MFS is the central and value-adding part of every industrial enterprise, determining the main electricity consumption activities of the production process. This is the system which transforms energy, media, raw material and information into final products for the company (Beier 2017). To model this system as generically as possible for the purpose of optimal load control for DSEF, the MFS was modeled in the form of a resource-task network (RTN) (Castro et al. 2009). Table 1 shows the corresponding modeling parameters and constraints.

Table 1: Modeling parameters and constraints for the MFS in the model library

The resources include raw materials that serve as inputs for the production systems, products that are finished goods with some required quantity at the end of the planning horizon, storage facilities that serve as buffers for materials in the production systems, and production machines which consume electricity and transform materials. Tasks represent the manufacturing steps or jobs that are carried out to produce an intermediate or final product, depending on the production flow. When a task is executed, it uses a machine or a combination of machines. The operation of the machines in turn increases or decreases the quantity of raw materials and products inside the storage facilities. In addition to electrical power, production machines might also require other media and energy such as compressed air, heating and cooling, which are supplied by the TBS. When production is being carried out, it might require special operating conditions, such as temperature or air quality, which can be ensured by the heating and ventilation systems available in the TBS.

The energy flexibility available in the MFS comes from degrees of freedom in production planning, where the execution of flexible production tasks can be interrupted or shifted according to external price signals. The relevant constraints for the MFS described in Table 1 are provided to the optimizer in the EMC module to make optimal use of the available flexibility.

Technical building services - TBS

The MFS needs certain media for its manufacturing processes, which must be provided by the TBS in the requested quantities and at the right time. For this purpose, the TBS contains corresponding units for the generation and intermediate storage of these media. For simplicity, it is assumed that media cannot flow back to the TBS as they might in a real enterprise. The sub-units of the TBS that are currently modeled in the library include the compressed air system (CAS), the process cooling system (PCS), the process heating system (PHS) and a simplified version of heating, ventilation and air conditioning (HVAC). Other building utilities such as lighting or humidity control can be included in the future. The main source of energy flexibility in the TBS is the intermediate storage facilities for media, as they allow altering and shifting the load of the media generation units.
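Before detailing the individual TBS sub-units, the following minimal sketch illustrates how the MFS scheduling freedom described above could be encoded as a MILP. It is not the library's actual formulation; the parameter values, the toy price signal and the choice of the open-source PuLP package are assumptions for illustration only. A single interruptible task with a binary on/off decision per 15-minute step fills a bounded material buffer and must reach a production target by the end of the horizon:

```python
# Hedged sketch: one interruptible MFS task scheduled against a price signal.
import pulp

T = 96                                   # 24 h in 15-minute steps
price = [0.20 + 0.12 * (36 <= t < 72) for t in range(T)]   # toy price, EUR/kWh
p_el = 100.0                             # electric power of the machine in kW
rate = 2.5                               # tons produced per active step
s_max, s_final = 300.0, 150.0            # buffer capacity / production target

m = pulp.LpProblem("mfs_task", pulp.LpMinimize)
on = pulp.LpVariable.dicts("on", range(T), cat="Binary")    # task running?
s = pulp.LpVariable.dicts("s", range(T + 1), lowBound=0, upBound=s_max)

# Objective: electricity cost of the task (kW * 0.25 h * EUR/kWh)
m += pulp.lpSum(p_el * 0.25 * price[t] * on[t] for t in range(T))

m += s[0] == 0                           # empty buffer at the start
for t in range(T):
    m += s[t + 1] == s[t] + rate * on[t]     # material balance of the buffer
m += s[T] >= s_final                     # production target at the horizon end

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("active steps:", sum(int(on[t].value()) for t in range(T)),
      "| cost:", round(pulp.value(m.objective), 2), "EUR")
```

The solver naturally moves the active steps into the cheap intervals; run-time constraints such as minimum up-times for "UN-INT" tasks would add further binary constraints in the same spirit.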
In the following, we discuss the sub-units of the TBS and their components that are available in the library as modeling blocks.

Compressed air system: A CAS is widely used in most industrial enterprises to supply the compressed air supporting the manufacturing processes and equipment, including machine tools, clamping, spraying, material separation, material handling and pneumatic utilities (Javied et al. 2018). The flexibility in these systems lies in the storage of compressed air in pressurized storage tanks, which can offer DSEF by making the energy consumption of the compressors flexible. However, such systems have process constraints, for example that the air supply pressure in the air distribution circuits or storage tanks must remain within the specified operating levels. Table 2 shows the elements inside the CAS and the considered parameters and optimization constraints.

Table 2: Description of the CAS and its constraints in the model library

Process heating and cooling system: The PCS and PHS are important systems that provide heating and cooling as utilities to the MFS. For example, the operating temperatures of machines are required to stay within certain limits, or materials need heating and cooling treatment before and after production. The energy flexibility in these systems mainly comes from the available thermal storage capabilities. Further, these systems can also supply thermal energy to HVAC systems to maintain the desired operating temperature range and personal comfort depending on the climatic conditions. Table 3 shows the relevant parameters and constraints for energy flexibility modeling.

Table 3: Description of the PCS and PHS and their constraints in the model library

Heating, ventilation and air conditioning: HVAC systems are important for maintaining indoor comfort for personnel and favorable operating conditions for production machinery. For the purpose of energy flexibility, only thermal comfort has been considered in the developed model library so far, where the goal is to maintain the desired indoor temperature against the outdoor temperature using a dynamic indoor temperature prediction as in the works of Harder et al. (2020) and Hietaharju et al. (2018). Based on the desired thermal comfort, a heating or cooling demand is generated, which is met by dedicated heating or cooling devices or also by thermal storage via the PCS and PHS. Table 4 shows the parameters required for the modeling of the building HVAC system.

Table 4: Description of the HVAC system and its constraint

Energy systems - ES

The ES of the industrial enterprise in the developed model library comprises on-site electricity generation and storage. This includes electricity supply from locally installed photovoltaics (PV), combined heat and power (CHP) units, which can also supply heat energy to the thermal storage units inside the PHS, and battery energy storage (BES). Further systems such as wind turbines or diesel generators can also be added. For the purpose of modeling energy flexibility, the mentioned systems are not modeled in detail; a consideration on the power level is sufficient. The BES plays a particularly important role in terms of flexibility, because its operational constraints are not directly linked with the constraints of the MFS and TBS, and hence it provides greater flexibility to the industrial enterprise. The optimizer inside the EMC module, which also ensures the power balance with the external grid, optimally decides the power flows of all controllable ES components.
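The BES description above translates naturally into a small linear program. The following hedged sketch (parameter values, the toy price signal and the use of PuLP are again illustrative assumptions, not the library's implementation) tracks the battery's state of charge over the horizon and lets the optimizer charge in cheap and discharge in expensive intervals:

```python
# Hedged sketch: BES dispatch with state-of-charge and power-limit constraints.
import pulp

T, dt = 96, 0.25                          # 15-minute steps over 24 h
price = [0.20 + 0.12 * (36 <= t < 72) for t in range(T)]    # toy price, EUR/kWh
cap, soc0 = 200.0, 100.0                  # usable capacity / initial content, kWh
p_max, eta = 50.0, 0.95                   # power limit in kW / one-way efficiency
base_load = 120.0                         # assumed constant site load in kW

m = pulp.LpProblem("bes_dispatch", pulp.LpMinimize)
p_ch = pulp.LpVariable.dicts("p_ch", range(T), lowBound=0, upBound=p_max)
p_dis = pulp.LpVariable.dicts("p_dis", range(T), lowBound=0, upBound=p_max)
soc = pulp.LpVariable.dicts("soc", range(T + 1), lowBound=0, upBound=cap)

# Objective: grid purchase cost for the site load plus the battery action
m += pulp.lpSum((base_load + p_ch[t] - p_dis[t]) * price[t] * dt for t in range(T))

m += soc[0] == soc0
for t in range(T):
    # State-of-charge bookkeeping with charging/discharging losses
    m += soc[t + 1] == soc[t] + (eta * p_ch[t] - p_dis[t] / eta) * dt
m += soc[T] >= soc0                       # end the day no emptier than it began

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("daily cost:", round(pulp.value(m.objective), 2), "EUR")
```

With strictly positive prices and an efficiency below one, simultaneous charging and discharging is never optimal here; a binary exclusion variable would only be needed if, for example, negative prices were allowed.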
Table 5 shows the generic parameters and constraints for the components of the ES.

Table 5: Description of the ES and its constraints in the model library

Energy and manufacturing control - EMC

The EMC module in the model library serves as a central control entity equipped with model-based optimization. The main objective is to adapt the overall load demand of the industrial enterprise at the grid connection point and hence provide the available flexibility. It contains generic mixed-integer linear programming (MILP) optimization formulations for the technical units, which automatically consider the defined input parameters and constraints (Tables 1, 2, 3, 4 and 5). Figure 2 shows the structural layout of the EMC. The generic formulation of the technical units results in the load demand variables, and the power balance of the industrial enterprise reads, for each time step t in the optimization horizon T,

$$ P_{Grid,t} = P_{MFS,t}+P_{PCS,t}+P_{PHS,t}-P_{PV,t}-P_{CHP,t}-P_{BES,t} \quad\quad \forall t\in \mathbf{T} = [0,1,...,T_{f}] \tag{1} $$

Fig. 2: Optimization and control structure inside the EMC

where each of the given variables in Eq. 1 is optimally determined based on the provided process constraints. A positive grid power $P_{Grid,t}$ indicates consumption from the grid, whereas a positive electric storage power $P_{BES,t}$ indicates discharge. Negative values indicate feed-in to the grid and storage charging, respectively. The objective function of the optimization algorithm is to minimize the daily electricity procurement costs and CHP fuel costs, which is written as

$$ J = \sum_{t=0}^{t=T_{f}} \left( P_{Grid,t} \cdot \lambda^{electricity}_{t} \cdot \Delta_{t} + \sigma^{CHP,fuel}_{t} \right) \tag{2} $$

where $\lambda^{electricity}_{t}$ in €/kWh represents the electricity price at time step t, which can either be fixed, if the industrial enterprise procures its electricity at a flat rate, or be variable, based on the prices on the electricity market. $\Delta_{t}$ represents the sampling time step of the optimization horizon, which can be chosen generically as well. The input data for the price signals is retrieved by the EMC from the Market block in the model library, as shown in Fig. 3. The term $\sigma^{CHP,fuel}_{t}$ denotes the fuel costs of the CHP for time step t. In the future, for example when other on-site energy generation assets such as a diesel generator are added, their respective costs must also be accounted for in the objective function.

Fig. 3: Modeling diagram of the generic industrial enterprise model library - representation of modules and sub-modules in an object-oriented manner

The developed optimization formulations contain the boundary conditions, such as the production plan from the MFS, process and operational constraints from the TBS, and physical constraints from the ES. Based on these degrees of freedom, the optimizer decides the values of the load demand of the individual units, which affects $P_{Grid,t}$. Further, for specific technical units, a direct load forecast can also be provided as an external parameter, in which case the EMC omits the optimization formulation for those units. Currently, the optimization solves for all time steps in the optimization horizon T at once and finds a global solution. However, as later work it is intended to develop a cascaded optimization with a moving horizon approach.
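To make Eqs. (1) and (2) concrete, here is a hedged sketch in gurobipy, the Python interface of the Gurobi solver used in the case study below. It is not the library's code: the bounds, the PV and price series are illustrative placeholders, the MFS and TBS loads are collapsed into a single flexible demand, and the battery is only forced to be energy-neutral over the day instead of carrying the full state-of-charge model (see the BES sketch above).

```python
# Hedged sketch of the EMC power balance (Eq. 1) and cost objective (Eq. 2).
import gurobipy as gp
from gurobipy import GRB

T, dt = 96, 0.25
price = [0.20 + 0.12 * (36 <= t < 72) for t in range(T)]     # EUR/kWh, assumed
pv = [max(0.0, 80.0 - abs(t - 48) * 2.5) for t in range(T)]  # kW, assumed shape

m = gp.Model("emc")
m.Params.OutputFlag = 0
p_grid = m.addVars(T, lb=-1000, ub=1000, name="p_grid")  # +: purchase from grid
p_load = m.addVars(T, lb=0, ub=600, name="p_load")       # MFS+TBS, collapsed
p_bes = m.addVars(T, lb=-50, ub=50, name="p_bes")        # +: discharge

for t in range(T):
    # Eq. (1), reduced to the units modeled in this sketch:
    m.addConstr(p_grid[t] == p_load[t] - pv[t] - p_bes[t])

# The flexible load must deliver a fixed daily energy amount (assumed 3 MWh)
m.addConstr(gp.quicksum(p_load[t] * dt for t in range(T)) == 3000)
# Energy-neutral battery over the day, in lieu of full SoC tracking:
m.addConstr(gp.quicksum(p_bes[t] * dt for t in range(T)) == 0)

# Eq. (2) without the CHP fuel term, which this sketch does not model:
m.setObjective(gp.quicksum(p_grid[t] * price[t] * dt for t in range(T)), GRB.MINIMIZE)
m.optimize()
print("daily electricity cost:", round(m.ObjVal, 2), "EUR")
```

In the library, the MFS, TBS and ES terms of Eq. (1) would each be generated by their respective modules instead of the collapsed load used here.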
Such a moving-horizon scheme will help to tackle and simulate short-term disturbances in process parameters and fluctuations in price signals.

The software implementation of the generic model library is done in object-oriented Python. The development style is motivated by the structure of the open-source power system modeling tool pandapower (Thurner et al. 2018), where the different elements of power grids can be defined generically and added in a data-based structure. Likewise, in this industrial enterprise model library, each of the technical units and their sub-units is represented by a separate module as classes and sub-classes with generic inputs. Figure 3 shows the corresponding modeling diagram. Once all units are created, the control-relevant parameters of the defined technical units become available in the EMC block and are used there in the generic optimization formulations.

Case study - simulation

To apply and validate the proposed methodology, a real-world case of a steel-powder manufacturing plant layout was taken from Yu et al. (2016). As we want to showcase our methodology, we model the manufacturing process as an MFS based on tasks and resources. Further, the case study has a nitrogen plant, which we ignore in our analysis, and we add a CAS, a PHS and a CHP as additional units. For these, we assume parameters in line with the developed methodology. The basic purpose is to test the developed models on this exemplary industrial company. Figure 4 shows the technical units, their sub-units and their interdependencies via material and energy flows for the chosen case study.

Fig. 4: Overall representation of the exemplary industrial process as divided into MFS, TBS and ES. Process layout and pictures for the MFS taken from Yu et al. (2016)

The MFS is composed of manufacturing tasks starting from the melting of iron and coke up to the final product coming out of the blending machine. To simplify the modeling, the manufacturing task related to the blast furnace is not taken into account, as it is a complex process in itself as per Yu et al. (2016). The specifications of the modeled tasks in the MFS are provided in Table 6. Tasks of type "INT" can be interrupted and shifted to feasible times following their run-time constraints, while tasks of type "UN-INT" cannot be interrupted once they start; however, their start times can be adjusted. Among the listed tasks, FRF is of type "CONST", which means that it is executed at all times in the planning horizon and has a uniform electricity demand.

Table 6: Parameters for the tasks in the MFS

The parameters for the machines used by the manufacturing tasks are provided in Table 7.

Table 7: Parameters for the modeled machines in the MFS

The other resources in the modeled MFS include a raw material RM with an initial quantity of 2000 tons, which is fed into the first machine, a sprayer SPR. For each task, a dedicated storage facility is considered with a maximum capacity of 300 tons. As the represented MFS is of the continuous manufacturing type, for each task an output material product is defined, which becomes the input material for the next manufacturing task. The last task in the production line is blending, BL. Hence, the material coming out of this task represents the final product, for which a minimum required value of 150 tons was given. This is the quantity that must be produced by the end of the planning horizon, which is ensured by the optimizer.
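Taking up the object-oriented, pandapower-inspired implementation style described above, the following hedged sketch shows what a data-based creation API for such a case study could look like. All class and method names are illustrative assumptions and do not reproduce the documented interface of the published library:

```python
# Hedged sketch of a data-based, pandapower-style creation API (hypothetical).
from dataclasses import dataclass, field

@dataclass
class Enterprise:
    machines: list = field(default_factory=list)
    tasks: list = field(default_factory=list)
    storages: list = field(default_factory=list)

    def create_machine(self, name, p_el_kw):
        self.machines.append({"name": name, "p_el_kw": p_el_kw})

    def create_task(self, name, machine, kind, duration_steps):
        # kind is one of "INT", "UN-INT", "CONST" as in Table 6
        self.tasks.append({"name": name, "machine": machine,
                           "kind": kind, "duration": duration_steps})

    def create_storage(self, name, capacity_t):
        self.storages.append({"name": name, "capacity_t": capacity_t})

ent = Enterprise()
ent.create_machine("SPR", p_el_kw=250.0)          # sprayer, assumed rating
ent.create_task("spraying", "SPR", "INT", 40)
ent.create_storage("SPR_out", capacity_t=300.0)   # buffer size from the case study
print(len(ent.tasks), "task(s) registered")
```

The appeal of this style is that a central control block, such as the EMC, can later iterate over the registered tasks and storages to generate the optimization constraints automatically.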
From Fig. 4, it can be seen that the chosen case study also contains a PCS, PHS, CAS and HVAC as part of the TBS, which serve as utilities to the manufacturing operation in the MFS. The details of the components and their parameters are shown in Table 8. Table 9 lists the parameters of the sub-units of the ES, which include a BES, a CHP and PV.

Table 8: Details of the technical units inside the TBS

Table 9: Details of the ES of the chosen case study

The maximum amount of power to be exchanged with the grid was limited to 1000 kW. A simulation horizon of 24 hours with a sampling time step of 15 minutes was chosen as the optimization parameters inside the EMC, which formulates the time window of the optimization accordingly. For solving the optimization problem inside the EMC, the commercially available solver GUROBI (Gurobi Optimization LLC 2021) was used under an academic license. The hardware on which the simulation was performed was a Windows PC with four 3.2 GHz processors and 16 GB of RAM. The relative MIP optimality criterion was set to 1% in the solver, and it took under 40 seconds to solve the simulated optimization problems. For the analysis, the following pricing mechanisms were assumed:

A fixed price, where the value of $\lambda^{electricity}$ is 0.256 €/kWh, including all the taxes and levies that are applied to industrial consumers.

Variable prices in the range of 0.2 - 0.32 €/kWh, based on the variability of given wholesale electricity market prices plus the applied taxes and levies.

Results - optimal scheduling of MFS tasks and load profiles

Figures 5 and 6 show the simulation results for the load profiles of the machines in the MFS with the fixed and the variable price mechanism, respectively. The resulting load profiles are the outcome of an optimal scheduling of the manufacturing tasks, while also following the given constraints for the MFS. In the case of fixed prices, the optimizer schedules the tasks in such a way that the self-consumption of PV is increased, which reduces the overall electricity purchase costs. With the variable price mechanism, the optimizer shifts the execution of the tasks to time intervals when prices are lower.

Fig. 5: Load profile of the MFS with the fixed price

Fig. 6: Load profile of the MFS with variable prices

Results - load profiles and states of TBS systems

Figures 7 and 8 show the effect of volatile prices on the load profiles of the components and the corresponding state variables inside the TBS, respectively. The optimal operation with variable prices results in a shift of the load demands of the devices to intervals where prices are lower, compared to the fixed-price scenario. The flexibility in shifting the loads depends on the boundary conditions of both the input and the state variables. For instance, it can be observed from Fig. 7 that the operation of the binary-controlled devices, i.e. compressor K2, chiller C1 and heat pump HP-1, follows the respective run-time constraints.

Fig. 7: Load profiles of the components inside the TBS for both fixed and variable prices

Fig. 8: State variables of the components inside the TBS for both fixed and variable price mechanisms

Further, Fig. 8 shows that state variables such as the pressure of the CAS storage, the temperatures of the thermal storages in the PCS and PHS, and the indoor temperature follow the dynamic prices without violating their given boundary limits.

Results - energy system with BES, CHP and PV

Figure 9 shows the results of the optimization of the ES.
The energy content of the battery remains almost constant for fixed prices, except that it takes up the available PV power after 16:00. However, for variable prices, the optimizer schedules charging and discharging actions in direct relation to the volatile prices, and the energy content of the battery varies accordingly between the given capacity limits. Figure 9 also shows that the CHP unit operates flexibly with respect to the variable prices.

Fig. 9: Energy content, charging and discharging profiles of the BES, and CHP inputs

Results - overall load profile and flexibility assessment

Figure 10 shows the industrial enterprise's total load demand and its power exchange with the distribution grid for both fixed and variable prices. Figure 10 also shows the flexibility provided, as a comparison between fixed and variable prices. When the prices are low, the industrial enterprise offers negative flexibility (red bars) by increasing its power intake from the grid, and when prices are high, it offers positive flexibility (blue bars) by lowering its power intake from the grid.

Fig. 10: Load profile, power exchange with the grid and flexibility for both fixed and variable price mechanisms

The contribution of each system to the overall flexibility is shown in Fig. 11. For the modeled case, the greatest flexibility comes from the adjustment of the production-related tasks in the MFS and the energy storage capability of the battery. The flexibility of the CAS, the PCS and the PHS is limited because of the low rated power of the devices and the given process constraints.

Fig. 11: Breakdown of the flexibility provided by each system in response to the variable electricity prices

Results - daily costs

Figure 12 shows the daily cost comparison between the four case scenarios for which the simulations were performed for the modeled case study. The daily costs in this case are the sum of the total daily electricity purchase costs and the fuel costs for the CHP operation. The scenarios chosen for the cost comparison are:

Case 1: Fixed prices with no BES and no CHP

Case 2: Fixed prices with BES and CHP

Case 3: Variable prices with no BES and no CHP

Case 4: Variable prices with BES and CHP

Fig. 12: Daily cost comparison in €

The obtained daily costs for Case 1, Case 2, Case 3 and Case 4 were 971 €, 945 €, 968 € and 912 €, respectively. From the figure, it becomes clear that for the modeled case, the cost reduction is more pronounced when the industrial enterprise uses its flexible assets (BES and CHP) together with variable prices; the cost reduction between Case 1 and Case 4 was around 6%. The cost reduction depends on several factors, such as the variability of the dynamic prices, the electrical ratings of the loads inside the defined technical units, the capacity of the storage, and the fuel prices for the CHP (or other flexible assets such as diesel generators). Further, the operational constraints and their limits can also affect the load flexibility and hence the costs. The goal of this work was to show the methodology and the use of the developed model; a detailed cost-benefit analysis is to be carried out as part of future work.

Conclusion and outlook

This paper presented a generic modeling framework with optimal energy management and control that allows industrial enterprises to define their manufacturing systems, utility processes and energy systems at a modular level. The developed framework also includes generic optimization formulations that take care of the production constraints in the MFS, the process constraints in the TBS and the physical constraints in the ES for the defined units while providing the flexibility.
Electricity prices are the external input to the optimization algorithm's cost function and serve as the industrial enterprise's interface to electricity markets. The developed model framework was applied to a case study of a steel-powder manufacturing plant for validation and for assessing energy flexibility under variable price signals. The simulation results showed that the optimizer shifts the load profiles of the defined technical units towards lower-price intervals using the available flexibility potential. For the modeled case, the MFS and the BES provided most of the flexibility. In the case study, the proposed modeling framework fulfilled the requirements of the research task posed at the beginning: the framework enables the modeling of an exemplary industrial company with regard to its energy flexibility, with little effort, on the basis of a manageable number of company-specific parameters. The flexibility potential to be achieved by dynamic prices and the profit to be expected for the company could be estimated. The authors are convinced that this framework can be applied in the same way to a large number of other industrial companies and in this sense fulfills the claim of being "generic".

In terms of an outlook, it can be assumed that the digitalization of industrial enterprises will progress rapidly in the next few years and that this will also create new prerequisites for advanced energy management in these companies. Model libraries such as the one proposed here could then form the basis for "digital twins" of industrial enterprises' energy flows and flexibility potentials, which would in many ways facilitate semi-automated and fully automated flexibility provision and marketing. In order to prepare and research this development, an experimental twin of an industrial enterprise is being set up at Offenburg University of Applied Sciences in parallel to the model library; it provides selected subsystems of the MFS, TBS and ES in real hardware. This will then exchange data with the digital twin and its EMC software module via standard industrial communication systems and be controlled automatically by it. This will require the development of further model library features such as data transfer, communication and control for real hardware.

From the modeling perspective, the work described is still ongoing as part of a research project. A non-exhaustive list of planned future developments is:
- Adding granularity to the developed models, which includes the representation of additional process constraints and characterization parameters;
- Extending the developed optimization formulations towards rolling-horizon or Model Predictive Control (MPC) for energy management, as in the works of Habib et al. (2018), Dongol et al. (2018) and Sawant et al. (2020). This will give the model library the ability to deal with short-term process disturbances and variations in electricity market signals in the provision of flexibility;
- Extending the market block of the model library to represent new and innovative flexibility products for industrial enterprises, modeling these in the cost functions, and performing cost-benefit analyses for industrial enterprises.

For the developed case study, the production layout information has been taken from the example of Yu et al. (2016), with adaptations and assumptions, to represent the developed methodology. The weather data (outdoor ambient temperature) has been taken from the weather station located at Offenburg University of Applied Sciences.
The PV generation profile has been taken from a real roof-top installation at one of the industrial enterprises participating in the research project. The contents of the developed model library are published under the open-source GNU General Public License in a GitHub repository; see Offenburg University (2021). The icons used in Fig. 1 were downloaded from the open-source icon website https://www.flaticon.com/.

References

Albadi, MH, El-Saadany EF (2008) A summary of demand response in electricity markets. Electr Power Syst Res 78(11):1989–1996.
Beier, J (2017) Simulation approach towards energy flexible manufacturing systems. Springer International Publishing. https://doi.org/10.1007/978-3-319-46639-2.
Beier, J, Thiede S, Herrmann C (2015) Increasing energy flexibility of manufacturing systems through flexible compressed air generation. Procedia CIRP 37(December):18–23.
Castro, PM, Harjunkoski I, Grossmann IE (2009) New continuous-time scheduling formulation for continuous plants under variable electricity cost. Ind Eng Chem Res 48(14):6701–6714.
Dongol, D, Feldmann T, Schmidt M, Bollin E (2018) A model predictive control based peak shaving application of battery for a household with photovoltaic system in a rural distribution grid. Sustain Energy Grids Netw 16:1–13.
German Federal Ministry for Economic Affairs and Energy (BMWi) (2015) An electricity market for Germany's energy transition. Technical report. https://www.bmwi.de/BMWi/Redaktion/PDF/G/gruenbuch-gesamt-englisch,property=pdf,bereich=bmwi2012,sprache=de,rwb=true.pdf.
German Federal Ministry for Economic Affairs and Energy (BMWi) (2018) SMEs Digital - Strategies for the digital transformation. Technical report. https://www.bmwi.de/Redaktion/EN/Publikationen/Mittelstand/smes-digital-strategies-for-digital-transformation.html.
Gurobi Optimization LLC (2021) Gurobi Optimizer reference manual. http://www.gurobi.com. Accessed 1 June 2021.
Habib, M, Ahmed Amine L, Bollin E, Schmidt M (2018) One-day ahead predictive management of building hybrid power system improving energy cost and batteries lifetime. IET Renew Power Gener 13(3):482–490.
Harder, N, Qussous R, Weidlich A (2020) The cost of providing operational flexibility from distributed energy resources. Appl Energy 279:115784.
Hietaharju, P, Ruusunen M, Leiviskä K (2018) A dynamic model for indoor temperature prediction in buildings. Energies 11(6):1477. https://doi.org/10.3390/en11061477.
Javied, T, Kimmig F, Franke J (2018) Demand-based dimensioning of compressed air systems for energy optimization and flexibility. In: 2018 4th International Conference on Control, Automation and Robotics (ICCAR), 492–497. IEEE. https://doi.org/10.1109/ICCAR.2018.8384726.
Langrock, T, Achner S, Jungbluth C, Marambio C, Michels A (2015) Potentiale regelbarer Lasten in einem Energieversorgungssystem mit wachsendem Anteil erneuerbarer Energien. Technical report, Umweltbundesamt.
Lund, PD, Lindgren J, Mikkola J, Salpakari J (2015) Review of energy system flexibility measures to enable high levels of variable renewable electricity. Renew Sust Energ Rev 45:785–807.
Offenburg University, Institute of Energy Systems Technology, Research group Intelligent Energy Networks (2021) indOptFlex - A generic industrial enterprise model library for demand side energy flexibility - open source. https://github.com/inesIEN/indOptFlex. Accessed 30 June 2021.
Papaefthymiou, G, Haesen E, Sach T (2018) Power system flexibility tracker: Indicators to track flexibility progress towards high-RES systems. Renew Energy 127:1026–1035.
Pierri, E, Schulze C, Herrmann C, Thiede S (2020) Integrated methodology to assess the energy flexibility potential in the process industry. Procedia CIRP 90:677–682.
Roesch, M, Bauer D, Haupt L, Keller R, Bauernhansl T, Fridgen G, Reinhart G, Sauer A (2019) Harnessing the full potential of industrial demand-side flexibility: An end-to-end approach connecting machines with markets through service-oriented IT platforms. Appl Sci (Switzerland) 9(18):3796. https://doi.org/10.3390/app9183796.
Sawant, P, Bürger A, Doan MD, Felsmann C, Pfafferott J (2020) Development and experimental evaluation of grey-box models of a microscale polygeneration system for application in optimal controls. Energy Build 215:109725.
Scheidt, Fv, Medinová H, Ludwig N, Richter B, Staudt P, Weinhardt C (2020) Data analytics in the electricity sector - a quantitative and qualitative literature review. Energy AI 1:100009.
Schott, P, Ahrens R, Bauer D, Hering F, Keller R, Pullmann J, Schel D, Schimmelpfennig J, Simon P, Weber T, Abele E, Bauernhansl T, Fridgen G, Jarke M, Reinhart G (2018) Flexible IT platform for synchronizing energy demands with volatile markets. IT Inf Technol 60(3):155–164.
Schott, P, Sedlmeir J, Strobel N, Weber T, Fridgen G, Abele E (2019) A generic data model for describing flexibility in power markets. Energies 12(10):1–29.
SEDC (2016) Explicit and implicit demand-side flexibility. http://www.smartenergydemand.eu/wp-content/uploads/2016/09/SEDC-Position-paper-Explicit-and-Implicit-DR-September-2016.pdf. Accessed 1 June 2021.
Seitz, P, Abele E, Bank L, Bauernhansl T, Colangelo E, Fridgen G, Schilp J, Schott P, Sedlmeir J, Strobel N, Weber T (2019) IT-based architecture for power market oriented optimization at multiple levels in production processes. Procedia CIRP 81:618–623.
Stede, J (2016) Demand response in Germany: Technical potential, benefits and regulatory challenges. DIW Roundup 2016(96):7.
Stede, J, Arnold K, Dufter C, Holtz G, von Roon S, Richstein JC (2020) The role of aggregators in facilitating industrial demand response: Evidence from Germany. Energy Policy 147:111893.
Thurner, L, Scheidler A, Schäfer F, Menke J, Dollichon J, Meier F, Meinecke S, Braun M (2018) pandapower - an open-source Python tool for convenient modeling, analysis, and optimization of electric power systems. IEEE Trans Power Syst 33(6):6510–6521.
Tristán, A, Heuberger F, Sauer A (2020) A methodology to systematically identify and characterize energy flexibility measures in industrial systems. Energies 13(22):5887. https://doi.org/10.3390/en13225887.
Weeber, M, Lehmann C, Böhner J, Steinhilper R (2017) Augmenting energy flexibility in the factory environment. Procedia CIRP 61:434–439.
Yu, M, Lu R, Hong SH (2016) A real-time decision model for industrial load management in a smart grid. Appl Energy 183:1488–1497.

This work was funded by the German Federal Ministry for Economic Affairs and Energy (BMWi) within the ongoing project "GaIN - Gewinnbringende Partizipation der mittelständischen Industrie am Energiemarkt der Zukunft" (Fkz: 03EI6019E). Publication funding was provided by the German Federal Ministry for Economic Affairs and Energy.
Institute of Energy Systems Technology (INES), Offenburg University of Applied Sciences, Badstraße 24, 77652 Offenburg, Germany
Rahul Khatri, Michael Schmidt & Rainer Gasper

RK, MS and RG conceived the idea presented. RK worked on the software implementation, performed the formal analysis and research, wrote the initial draft, revised subsequent drafts and produced the visualizations. MS and RG arranged resources, wrote and edited review drafts, contributed to the analysis, and carried out project management, funding and acquisition activities. All authors read and approved the final manuscript.

Correspondence to Rahul Khatri.

The authors declare that they have no potential conflict of interest in relation to the study in this paper.

Khatri, R., Schmidt, M. & Gasper, R. Active participation of industrial enterprises in electricity markets - a generic modeling approach. Energy Inform 4, 20 (2021). https://doi.org/10.1186/s42162-021-00173-5

Keywords: Energy systems modeling; Demand side flexibility; Optimization and control
Three-player tournament with Markov chains

Consider the following problem, from Tijms's Understanding Probability: Three equally matched opponents decide to have a ping-pong tournament. Two people play against each other in each game. Drawing lots, it is determined who plays the first game. The winner of a game stays on and plays against the person not active in that game. The games continue until somebody has won two games in a row. What is the probability that the person not active in the first game is the ultimate winner?

Let's call the three players A, B, and C. To solve this problem using a Markov chain, I introduce the following states:

state 0: there is going to be a match between A and B, and nobody has won the previous match because there have been no previous matches. I will denote this state with the shortcut $(AB, 0)$ (this is the initial state);
state 1: there is going to be a match between A and B, and A won the previous match. Shortcut: $(AB,A)$;
state 2: there is going to be a match between A and B, and B won the previous match. Shortcut: $(AB,B)$;
state 3: there is going to be a match between A and C, and A won the previous match. Shortcut: $(AC,A)$;
state 4: there is going to be a match between A and C, and C won the previous match. Shortcut: $(AC,C)$;
state 5: there is going to be a match between B and C, and B won the previous match: $(BC,B)$;
state 6: there is going to be a match between B and C, and C won the previous match: $(BC,C)$;
state 7: A has won two matches in a row;
state 8: B has won two matches in a row;
state 9: C has won two matches in a row.

The last three states are absorbing states. By considering all the possible games, one can easily write down the nonzero entries of the transition probability matrix:
$$ p_{03},\; p_{05},\; p_{17},\; p_{15},\; p_{23},\; p_{28},\; p_{37},\; p_{36},\; p_{41},\; p_{49},\; p_{58},\; p_{54},\; p_{62},\; p_{69}, $$
which are all equal to $\frac12$. Already after 20 matrix multiplications, we have the following quite stable probabilities:
$$ p^{(20)}_{07} = p^{(20)}_{08} = 0.3571, \qquad p^{(20)}_{09} = 0.2857, $$
so the probability that the person not active in the first game wins is 0.2857.

Does the result seem right? Intuitively, I would expect the player not involved in the first game to have a lower probability of winning, because if that player loses a match there won't be any more matches. I also noticed that building the transition probability matrix is quite tedious; is there a more systematic way of doing this?

Have you come across hitting probabilities? If not, see Theorem 1.3.2 in these notes: http://www.statslab.cam.ac.uk/~james/Markov/s13.pdf. These let you set up equations relating the hitting probabilities starting from each state, which can be solved. Remember to exploit symmetry to save computation. – beelal

Thanks, I will read the notes and retry the computation with hitting probabilities. – J. D. Jun 6 '19 at 9:28
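Since the thread stops short of the systematic computation the question asks about, here is a minimal sketch (not part of the original thread) of the standard absorbing-chain calculation in NumPy. It builds the transition matrix from the fourteen nonzero entries listed above and obtains the absorption probabilities in one linear solve, $B = (I-Q)^{-1}R$, instead of by repeated matrix multiplication:

```python
import numpy as np

# States 0-6 are the transient states defined above; 7-9 are absorbing.
P = np.zeros((10, 10))
for i, j in [(0, 3), (0, 5), (1, 7), (1, 5), (2, 3), (2, 8), (3, 7),
             (3, 6), (4, 1), (4, 9), (5, 8), (5, 4), (6, 2), (6, 9)]:
    P[i, j] = 0.5          # equally matched players: each game is won w.p. 1/2
for k in (7, 8, 9):
    P[k, k] = 1.0          # absorbing states

Q, R = P[:7, :7], P[:7, 7:]            # transient / transient-to-absorbing blocks
B = np.linalg.solve(np.eye(7) - Q, R)  # absorption probabilities, B = (I-Q)^{-1} R
print(B[0])  # [0.35714286 0.35714286 0.28571429], i.e. [5/14, 5/14, 2/7]
```

Solving the hitting-probability equations by hand with the $A \leftrightarrow B$ symmetry the answer recommends gives the same exact values: writing $f_i$ for the probability that C wins from state $i$, symmetry gives $f_1=f_2$, $f_3=f_5$, $f_4=f_6$, and the equations $f_1 = \tfrac12 f_3$, $f_3 = \tfrac12 f_4$, $f_4 = \tfrac12 f_1 + \tfrac12$ yield $f_1 = 1/7$, $f_3 = 2/7$, $f_4 = 4/7$, hence $f_0 = \tfrac12(f_3+f_5) = 2/7 \approx 0.2857$, confirming the matrix-power result.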
Algebraic $K_1$ group for a $C^*$-algebra

Let $A$ be a $C^*$-algebra. One defines the topological $K_1$ group as $GL_{\infty}(A^+)/\Big(GL_{\infty}(A^+)\Big)_0$, where $A^+$ denotes $A$ with a unit adjoined (even if $A$ already had a unit; in this case $A^+$ is isomorphic to $A \oplus \mathbb{C}$, but for unital $A$ the definition turns out to be equivalent if we insert $A$ instead of $A^+$). The algebraic $K_1(A)$ is defined as $K_1(A)=GL_{\infty}(A)/[GL_{\infty}(A),GL_{\infty}(A)]$, where $[H,H]$ denotes the commutator subgroup, and it is known that this is not the same as the topological $K_1$. I read somewhere that $K_1^{top}(A)$ may be defined analogously to the algebraic $K_1$ as $K_1^{top}(A)=GL_{\infty}(A)/ \overline{[GL_{\infty}(A),GL_{\infty}(A)]}$, at least for unital $C^*$-algebras. My question is the following: How does one show that these two approaches are equivalent? asked May 3, 2016 by truebaran

One must show that $$ \overline{[GL_\infty(A),GL_\infty(A)]}= (GL_\infty(A))_0. $$ The proof that the left side is contained in the right side is the easier one: Let $a$ be in the left side. Then $a=bc$, where $b\in [GL_\infty(A),GL_\infty(A)]$ and $\|c-1\|<1$. Commutators belong to $(GL_\infty(A))_0$, because $K_1(A)$ is abelian (or because of the proof of this fact), so $b\in (GL_\infty(A))_0$. Also, $c=e^h$ for some $h\in M_\infty(A)$, and so it is connected to $1$ by the path $t\mapsto e^{th}$. Choose now $a\in (GL_\infty(A))_0$. It can be written as a finite product of exponentials $e^h$, with $h\in M_\infty(A)$, so it is enough to show that these exponentials belong to the left side. We first prove this for $h\in M_\infty(A)$ of the form $xy-yx$. In this case one has $$ (e^{x/n}e^{y/n}e^{-x/n}e^{-y/n})^{n^2}\to e^{xy-yx}. $$ The elements on the left side are products of commutators, so we get the desired result. Finally, notice that any $h\in M_\infty(A)$ is a limit of commutators. For example, assuming that $h\in A$ for simplicity, we can express the matrix $$ \begin{pmatrix} 1 & & & \\ & -\frac 1 n & &\\ && -\frac 1 n &\\ &&&\ddots \end{pmatrix} $$ as a commutator in $M_{n+1}(\mathbb C)$, then multiply by $h\otimes 1_{n+1}$ to get $\mathrm{diag}(h,-h/n,\ldots,-h/n)$ as a commutator. – Leonel Robert

Thank you for the great answer! However, I don't see how the final argument (that each element of $M_{\infty}(A)$ is a limit of commutators) works. The problem is that if we express the matrix $\mathrm{diag}(1,-\frac{1}{n},\ldots,-\frac{1}{n})$ as a commutator $AB-BA$ and then multiply by $\mathrm{diag}(h,0,\ldots,0)$, how do we know that $h(AB-BA)$ is also a commutator? (This would be fine if $\mathrm{diag}(h,0,\ldots,0)$ commuted with $A$ and $B$.) – truebaran May 5, 2016 at 2:07

I meant multiply by the diagonal matrix constant $h$. – Leonel Robert

Ok, just to be sure whether I understood everything correctly: so now everything commutes, $\mathrm{diag}(h,-\frac{h}{n},\ldots,-\frac{h}{n})$ is a commutator and $\mathrm{diag}(h,0,\ldots,0)$ is a limit of commutators. Now we apply the same argument for $h \in M_n(A)$, where the matrix $\mathrm{diag}(1,-\frac{1}{n},\ldots,-\frac{1}{n})$ is understood as a block diagonal matrix where the size of each block is $n$ and the blocks are diagonal matrices (a scalar matrix is a commutator iff its trace is $0$, so it is again a commutator). – truebaran

Yes, that's right. – Leonel Robert
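As a sanity check (not from the original thread), the key limit in the answer, $(e^{x/n}e^{y/n}e^{-x/n}e^{-y/n})^{n^2}\to e^{xy-yx}$, can be verified numerically in a finite-dimensional toy case. A minimal sketch with random $3\times 3$ real matrices standing in for elements of $M_\infty(A)$; by the Baker-Campbell-Hausdorff expansion the error decays roughly like $1/n$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
x, y = 0.5 * rng.standard_normal((2, 3, 3))   # toy stand-ins for elements of M_inf(A)

n = 500
g = expm(x / n) @ expm(y / n) @ expm(-x / n) @ expm(-y / n)  # group commutator
approx = np.linalg.matrix_power(g, n * n)                    # raised to the n^2 power
err = np.linalg.norm(approx - expm(x @ y - y @ x))
print(err)   # small, and shrinks roughly like 1/n as n grows
```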
Vanishing (non-square) matrix product to determine that a cubic passes through a collection of points

I have a collection of nine distinct points $P_{i} = [X_{i} : Y_{i} : Z_{i}]$ in the projective plane, with the index $i \in [1, 9]\cap\mathbb{Z}$. I want to show that a cubic curve passes through all nine points. A cubic curve in this context is a set of points in the projective plane that are solutions of the equation $$ a_{1}X^{3} + a_{2}X^{2}Y + a_{3}XY^{2} + a_{4}Y^{3} + a_{5}X^{2}Z + a_{6}XYZ + a_{7}Y^{2}Z + a_{8}XZ^{2} + a_{9}YZ^{2} + a_{10}Z^{3} = 0 $$ for some coefficients $a_{1}, \ldots, a_{10} \in \mathbb{R}$, not all zero (we do not assume this polynomial to be irreducible). Ignoring the obvious pairwise collinearity, there are some important collinearity relations between the points. For brevity, the notation $\{i, j, k\}$ will be used to indicate that $P_{i}, P_{j}, P_{k}$ are collinear. The collinear triples are the following: $$ \{1, 2, 3\},\quad \{4, 5, 6\},\quad \{7, 8, 9\},\quad \{1, 4, 8\},\quad \{2, 4, 7\},\\ \{1, 5, 9\},\quad \{2, 5, 8\},\quad \{3, 5, 7\},\quad \{2, 6, 9\},\quad \{3, 6, 8\}. $$ These are the only collinearity relations involving more than two points, and this is the only information we have regarding the points. I wanted to appeal to the Cayley-Bacharach Theorem, but I am struggling to show that the points arise as the intersection of two cubics. Reading around the subject is of little help at the moment, since I have no Algebraic Geometry knowledge, so all the papers I've tried to read are inaccessible to me for the time being. Instead, I have been treating this as a Linear Algebra problem (of sorts), as it is enough to show that the equation $$ \begin{pmatrix} X_{1}^{3} & X_{1}^{2} Y_{1} & X_{1} Y_{1}^{2} & Y_{1}^{3} & X_{1}^{2} Z_{1} & X_{1} Y_{1} Z_{1} & Y_{1}^{2} Z_{1} & X_{1} Z_{1}^{2} & Y_{1} Z_{1}^{2} & Z_{1}^{3} \\ X_{2}^{3} & X_{2}^{2} Y_{2} & X_{2} Y_{2}^{2} & Y_{2}^{3} & X_{2}^{2} Z_{2} & X_{2} Y_{2} Z_{2} & Y_{2}^{2} Z_{2} & X_{2} Z_{2}^{2} & Y_{2} Z_{2}^{2} & Z_{2}^{3} \\ X_{3}^{3} & X_{3}^{2} Y_{3} & X_{3} Y_{3}^{2} & Y_{3}^{3} & X_{3}^{2} Z_{3} & X_{3} Y_{3} Z_{3} & Y_{3}^{2} Z_{3} & X_{3} Z_{3}^{2} & Y_{3} Z_{3}^{2} & Z_{3}^{3} \\ X_{4}^{3} & X_{4}^{2} Y_{4} & X_{4} Y_{4}^{2} & Y_{4}^{3} & X_{4}^{2} Z_{4} & X_{4} Y_{4} Z_{4} & Y_{4}^{2} Z_{4} & X_{4} Z_{4}^{2} & Y_{4} Z_{4}^{2} & Z_{4}^{3} \\ X_{5}^{3} & X_{5}^{2} Y_{5} & X_{5} Y_{5}^{2} & Y_{5}^{3} & X_{5}^{2} Z_{5} & X_{5} Y_{5} Z_{5} & Y_{5}^{2} Z_{5} & X_{5} Z_{5}^{2} & Y_{5} Z_{5}^{2} & Z_{5}^{3} \\ X_{6}^{3} & X_{6}^{2} Y_{6} & X_{6} Y_{6}^{2} & Y_{6}^{3} & X_{6}^{2} Z_{6} & X_{6} Y_{6} Z_{6} & Y_{6}^{2} Z_{6} & X_{6} Z_{6}^{2} & Y_{6} Z_{6}^{2} & Z_{6}^{3} \\ X_{7}^{3} & X_{7}^{2} Y_{7} & X_{7} Y_{7}^{2} & Y_{7}^{3} & X_{7}^{2} Z_{7} & X_{7} Y_{7} Z_{7} & Y_{7}^{2} Z_{7} & X_{7} Z_{7}^{2} & Y_{7} Z_{7}^{2} & Z_{7}^{3} \\ X_{8}^{3} & X_{8}^{2} Y_{8} & X_{8} Y_{8}^{2} & Y_{8}^{3} & X_{8}^{2} Z_{8} & X_{8} Y_{8} Z_{8} & Y_{8}^{2} Z_{8} & X_{8} Z_{8}^{2} & Y_{8} Z_{8}^{2} & Z_{8}^{3} \\ X_{9}^{3} & X_{9}^{2} Y_{9} & X_{9} Y_{9}^{2} & Y_{9}^{3} & X_{9}^{2} Z_{9} & X_{9} Y_{9} Z_{9} & Y_{9}^{2} Z_{9} & X_{9} Z_{9}^{2} & Y_{9} Z_{9}^{2} & Z_{9}^{3} \\ \end{pmatrix}\begin{pmatrix} a_{1} \\ a_{2} \\ a_{3} \\ a_{4} \\ a_{5} \\ a_{6} \\ a_{7} \\ a_{8} \\ a_{9} \\ a_{10} \\ \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ \end{pmatrix} $$ has a solution for $a_{1}, a_{2}, \ldots, a_{10} \in \mathbb{R}$, not all zero.
For an explicit example I have used Maple to solve for the values $a_{i}$, but I cannot figure out how to show that they exist in general (given the collinearity relations). How can the existence of such a cubic be proved: using Cayley-Bacharach, the Linear Algebra approach, or some entirely new method?

You have a system of nine homogeneous linear equations in ten variables. Since $9<10$, they have a nonzero solution. – Lord Shark the Unknown

Ah, so the monomials $X^{d_{1}}Y^{d_{2}}Z^{d_{3}}$ span a ten-dimensional vector space, of which the vector of $0$'s is an element? – Bill Wallis Aug 9 '18 at 16:15
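The dimension count in the answer can also be turned into a computation for any nine points: build the $9\times 10$ matrix of cubic monomials and read a null vector off its SVD (the right-singular vector for the smallest singular value). A minimal NumPy sketch, not part of the original thread; the helper name `cubic_through` is illustrative:

```python
import numpy as np

def cubic_through(points):
    """Coefficients a_1..a_10 of a cubic passing through nine projective points.

    `points` is a (9, 3) array with rows [X, Y, Z].  The 9x10 homogeneous
    system always has a nonzero solution, and the last row of Vh from the SVD
    is a (numerical) null vector of the monomial matrix.
    """
    X, Y, Z = points.T
    M = np.column_stack([X**3, X**2 * Y, X * Y**2, Y**3, X**2 * Z,
                         X * Y * Z, Y**2 * Z, X * Z**2, Y * Z**2, Z**3])
    return np.linalg.svd(M)[2][-1]

pts = np.random.default_rng(1).standard_normal((9, 3))
a = cubic_through(pts)
X, Y, Z = pts.T
residuals = (a[0]*X**3 + a[1]*X**2*Y + a[2]*X*Y**2 + a[3]*Y**3 + a[4]*X**2*Z
             + a[5]*X*Y*Z + a[6]*Y**2*Z + a[7]*X*Z**2 + a[8]*Y*Z**2 + a[9]*Z**3)
print(np.abs(residuals).max())   # ~1e-15: all nine points lie on the cubic
```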
Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지)
Asian Australasian Association of Animal Production Societies (아세아태평양축산학회)

Asian-Australasian Journal of Animal Sciences (AJAS) aims to publish original and cutting-edge research results and reviews on animal-related aspects of the life sciences. Emphasis is given to studies involving farm animals such as cattle, buffaloes, sheep, goats, pigs, horses and poultry, but studies with other animal species can be considered for publication if the topics are related to fundamental aspects of farm animals. Studies to improve human health using animal models are also publishable. AJAS encompasses all areas of animal production and fundamental aspects of animal sciences: breeding and genetics, reproduction and physiology, nutrition, meat and milk science, biotechnology, behavior, welfare, health, and livestock farming systems. AJAS is sub-divided into 10 sections.
- Animal Breeding and Genetics: quantitative and molecular genetics, genomics, genetic evaluation, evolution of domestic animals, and bioinformatics
- Animal Reproduction and Physiology: physiology of reproduction, development, growth, lactation and exercise, and gamete biology
- Ruminant Nutrition and Forage Utilization: rumen microbiology and function, ruminant nutrition, physiology and metabolism, and forage utilization
- Swine Nutrition and Feed Technology: swine nutrition and physiology, evaluation of feeds and feed additives, and feed processing technology
- Poultry and Laboratory Animal Nutrition: nutrition and physiology of poultry and other non-ruminant animals
- Animal Products: milk and meat science, muscle biology, product composition, food safety, food security and functional foods

Genetic Diversity of Goats from Korea and China Using Microsatellite Analysis
Kim, K.S.; Yeo, J.S.; Lee, J.W.; Kim, J.W.; Choi, C.B. 461 https://doi.org/10.5713/ajas.2002.461
Nine microsatellite loci were analyzed in 84 random individuals to characterize the genetic variability of three domestic goat breeds found in Korea and China: Korean goat, Chinese goat and Saanen. Allele diversity, heterozygosity, polymorphism information content, F-statistics, indirect estimates of gene flow (Nm) and Nei's standard distances were calculated. Based on the expected mean heterozygosity, the lowest genetic diversity was exhibited in the Korean goat ($H_E$=0.381) and the highest in the Chinese goat ($H_E$=0.669). After corrections for multiple significance tests, deviations from Hardy-Weinberg equilibrium were statistically significant over all populations and loci, reflecting deficiencies of heterozygotes (global $F_{IS}$=0.053). Based on pairwise $F_{ST}$ and Nm between breeds, there was great genetic differentiation between the Korean goat and the other two breeds, indicating that these breeds have been genetically subdivided. Similarly, individual clustering based on the proportion of shared alleles showed that Korean goat individuals formed a single cluster separated from the other two goat breeds.

Genetic Studies on Production Efficiency Traits in Hariana Cattle
Dhaka, S.S.; Chaudhary, S.R.; Pander, B.L.; Yadav, A.S.; Singh, S. 466
The data on 512 Hariana cows, progeny of 20 sires, calved during the period from 1974 to 1993 and maintained at Government Livestock Farm, Hisar, were considered for the estimation of genetic parameters.
The means for first lactation milk yield (FLY), wet average (WA), first lactation peak yield (FPY), first lactation milk yield per day of first calving interval (MCI) and first lactation milk yield per day of age at second calving (MSC) were 1,141.58 kg, 4.19 kg/day, 6.24 kg/day, 2.38 kg/day and 0.601 kg/day, respectively. The effect of period of calving was significant (p<0.05) on WA, FPY and MCI, while the effect of season of calving was significant only on WA. Monsoon calvers excelled in performance for all the production efficiency traits. The effect of age at first calving (linear) was significant on all the traits except MCI. Estimates of heritability for all the traits were moderate and ranged from 0.255 to 0.333, except for WA (0.161). All the genetic and phenotypic correlations among the production efficiency traits were high and positive. It may be inferred that selection on the basis of peak yield will be more effective, as the trait is expressed early in life and has a reasonably moderate heritability estimate.

Genetic Similarity and Variation in the Cultured and Wild Crucian Carp (Carassius carassius) Estimated with Random Amplified Polymorphic DNA
Yoon, Jong-Man; Park, Hong-Yang 470
Random amplified polymorphic DNA (RAPD) analysis based on numerous polymorphic bands has been used to investigate genetic similarity and diversity among and within two populations, cultured and wild, of the crucian carp (Carassius carassius). From RAPD analysis using five primers, a total of 442 polymorphic bands were obtained in the two populations, of which 273 were found to be specific to the wild population. 169 polymorphic bands were produced in both the wild and the cultured population. According to the RAPD-based estimates, the wild population was approximately 1.5 times as diverse as the cultured one. The average number of polymorphic bands differed between the populations and was higher in the wild than in the cultured population. Comparison of banding patterns in the cultured and wild populations revealed substantial differences, supporting a previous assessment that the populations may have been subjected to a long period of geographical isolation from each other. Bandsharing values in the wild population ranged from 0.21 to 0.51, and the average bandsharing value was $0.40{\pm}0.05$ in the wild population, compared to $0.69{\pm}0.08$ in the cultured one. With reference to bandsharing values and banding patterns, the wild population was considerably more diverse than the cultured. Knowledge of the genetic diversity of crucian carp could help in formulating more effective strategies for managing this aquacultural fish species and also in evaluating the potential genetic effects induced by hatchery operations.

Rearing Black Bengal Goat under Semi-Intensive Management 1. Physiological and Reproductive Performances
Chowdhury, S.A.; Bhuiyan, M.S.A.; Faruk, S. 477
Ninety pre-pubertal (6-7 months) female and 15 pre-pubertal male Black Bengal goats were collected, on the basis of their phenotypic characteristics, from different parts of Bangladesh. Goats were reared under semi-intensive management in a permanent house. The animals were vaccinated against Peste des Petits Ruminants (PPR), drenched with anthelmintics and dipped in 0.5% Malathion solution. They were allowed to graze 6-7 h along with supplemental concentrate and green forages.
Concentrates were supplied either at 200-300 g/d (low level of feeding) or in a quantity supplying the NRC (1981) recommended nutrients (high level of feeding). Different physiological, productive and reproductive characteristics of the breed were recorded. At noon (temperature $95^{\circ}F$ and light intensity 60,480 LUX), the rectal temperature and respiration rate of adult males and females increased from 100.8 to $104.8^{\circ}F$ and from 35 to 115 breaths/min, indicating a heat stress situation. Young females attained puberty at an average age and weight of $7.2{\pm}0.18$ months and $8.89{\pm}0.33$ kg, respectively. Mean age and weight at first kidding were $13.5{\pm}0.49$ months and $15.3{\pm}0.44$ kg, respectively. It required 1.24-1.68 services per conception, with an average gestation length of 146 days. At the low level of feeding the postpartum estrus interval was $37{\pm}2.6$ days, which was reduced (p<0.05) at the high feeding level to $21{\pm}6.9$ days. The kidding interval was also reduced (p<0.05), from 192 d at the low feeding level to 177 d at the high feeding level. On average there were two kiddings/doe/year. Average litter sizes in the 1st, 2nd, 3rd and 4th parity were 1.29, 1.71, 1.87 and 2.17, respectively. Birth weights of male and female kids were 1.24 and 1.20 kg, respectively, and increased (p<0.05) with better feeding. Although kid mortality was affected (p<0.05) by the dam's weight at kidding, the birth weight of the kid, the milk yield of the dam, the parity of kidding and the season of birth, the prenatal nutrition of the dam was found to be the most important factor. Kid mortality was reduced from 35% at the low level of feeding to 6.5% at the high level of feeding of the dam during gestation. Apparently, this was due to the higher (p<0.05) average daily milk yield (334 vs. 556 g/d) and to heavier and stronger kids at birth at the high feeding level.

Effects of Meiotic Stages, Cryoprotectants, Cooling and Vitrification on the Cryopreservation of Porcine Oocytes
Huang, Wei-Tung; Holtz, Wolfgang 485
Different factors may affect the sensitivity of porcine oocytes during cryopreservation. The effects of two methods (cooling and vitrification), four cryoprotectants [glycerol (GLY), 1,2-propanediol (PROH), dimethyl sulfoxide (DMSO) or ethylene glycol (EG)] and two vitrification media [1 M sucrose (SUC)+8 M EG; 8 M EG] on the developmental capacity of porcine oocytes at the germinal vesicle (GV) stage or, after IVM, at the metaphase II (M II) stage were examined. Survival was assessed by FDA staining, and maturation and cleavage following IVF and IVC. A toxicity test for the different cryoprotectants (GLY, PROH, DMSO, EG) was conducted at room temperature before cooling. GV and M II oocytes were equilibrated stepwise in 1.5 M cryoprotectant and diluted out in sucrose. The survival rate of GV oocytes in the GLY group was significantly lower (82%, p<0.01) than that of the other groups (92 to 95%). The EG group achieved a significantly higher maturation rate (84%, p<0.05) but a lower cleavage rate (34%, p<0.01) than the DMSO group and the controls. For M II oocytes, the survival rates of all groups were 95 to 99%, and the cleavage rate of the GLY group was lower than that of the PROH group (21 vs 43%, p<0.01). After cooling to $10^{\circ}C$, the survival rates of GV oocytes in the cryoprotectant groups were 34 to 51%; however, the maturation rate of these oocytes was low (1%) and none developed after IVF. For M II oocytes, the EG group showed a significantly higher survival rate than the other cryoprotectant groups (40% vs 23-26%, p<0.05), and the cleavage rates of the PROH, DMSO and EG groups reached only 1 to 2%.
For a toxicity test of the different vitrification media, GV and M II oocytes were equilibrated stepwise in 100% 8 M EG (group 1) or in 1 M SUC+8 M EG (group 2), or equilibrated in sucrose and then in 8 M EG (SUC+8 M EG, group 3). For GV oocytes, the survival, maturation and cleavage rates of group 1 were significantly lower than those of groups 2 and 3 and the control group (p<0.05). For M II oocytes, there were no differences in survival, maturation and cleavage rates between groups. After vitrification, the survival rates of GV and M II oocytes in groups 2 and 3 were similarly low (4-9%), and none of them matured or cleaved after in vitro maturation, fertilization and culture. In conclusion, porcine GV and M II oocytes do not seem to be damaged by the variety of cryoprotectants tested, but will succumb to a temperature decrease to $10^{\circ}C$ or to the process of vitrification, regardless of the cryoprotectant used.

The Reproductive Characteristics of the Mare in Subtropical Taiwan
Ju, Jyh-Cherng; Peh, Huo-Cheng; Hsu, Jenn-Chung; Cheng, San-Pao; Chiu, Shaw-Ching; Fan, Yang-Kwang 494
The objective of this study was to document the reproductive traits of mares as influenced by the month of the year in Taiwan. Reproductive records, lactation traits, foal birth weight (FBW) and foal height (FBH) were collected from the Holi Equine Station of Taiwan, and the effects of month on these parameters were analyzed. The length of estrus (LE) was shortest in December each year. An increasing trend was recorded from January to September, with a significantly (p<0.05) longer period of $12.4{\pm}0.4$ days in September than in January and February. A gradual shortening of LE was observed from September to December ($10.1{\pm}0.6$ days, p<0.05), when the shortest period of the year was observed. Mares showed signs of estrus throughout the year, but more than 80% were found in estrus during March through October. FBW was significantly (p<0.05) affected by the breeding month of the year. The lowest foal weights were recorded in September ($36.7{\pm}0.7$ kg) and December ($36.8{\pm}0.9$ kg), which were significantly lower than those in the other months except March, August and November. A trend towards lower FBH from September to December (93.5-93.8 cm) than from January to August was observed; the greatest FBH was in June (96.2 cm). Breeding month and the onset of estrus of the mares exerted a significant effect on the incidence of agalactia during the lactation period. These analyses provide fundamental information on adaptive processes with respect to the reproductive characteristics of mares, indicating an extent of acclimation by these animals to subtropical Taiwan.

Effect of Season Influencing Semen Characteristics, Frozen-Thawed Sperm Viability and Testosterone Concentration in Duroc Boars
Cheon, Y.M.; Kim, H.K.; Yang, C.B.; Yi, Y.J.; Park, C.S. 500
This study was carried out to investigate the effects of season on semen characteristics, frozen-thawed sperm viability and testosterone concentration in Duroc boars. There were no significant differences in the semen volume and sperm concentration of Duroc boars among spring, summer, autumn and winter. However, the pH of the sperm-rich and sperm-poor fractions was higher in autumn and winter than in spring and summer. Sperm motility and the normal acrosome rate of raw semen did not differ significantly among spring, summer, autumn and winter.
However, the motility and normal acrosome rate of frozen-thawed sperm were higher in spring than in summer, autumn and winter, and serum testosterone concentrations in Duroc boars were likewise higher in spring than in summer, autumn and winter. In conclusion, in seasons when serum testosterone concentrations were higher, frozen-thawed sperm viability in Duroc boars was also higher.

Changes in Maternal Blood Glucose and Plasma Non-Esterified Fatty Acid during Pregnancy and around Parturition in Twin and Single Fetus Bearing Crossbred Goats
Khan, J.R.; Ludri, R.S. 504
The effects of fetal number (single or twin) on blood glucose and plasma NEFA during pregnancy and around parturition were studied in ten Alpine ${\times}$ Beetal crossbred goats in their first to third lactation. The animals were divided into group 1 (carrying a single fetus, n=4) and group 2 (carrying twin fetuses, n=6). Samples were drawn on day 1 after estrus and then at 14-day intervals (fortnightly) for 10 fortnights. Around parturition, samples were taken on days -20, -15, -10, -5, -4, -3, -2 and -1 prior to kidding, on day 0, and on days +1, +2, +3, +4, +5, +10, +15 and +20 post kidding. In twin-bearing goats the blood glucose concentration continued to increase from the 1st until the 4th fortnight and thereafter gradually declined from the 5th up to the 8th fortnight. In single-bearing goats the levels increased from the 2nd up to the 4th fortnight and thereafter declined from the 5th until the 9th fortnight. The difference between sampling intervals was highly significant (p<0.01) in both groups; however, the values were higher in single- than in twin-bearing goats. The plasma NEFA concentration was low in both groups up to the 4th fortnight and thereafter increased continuously up to the 9th fortnight. During the prepartum period blood glucose was higher in single- than in twin-bearing goats, and the values were at their minimum on the day of kidding in both groups. During the postpartum period the values were significantly (p<0.01) higher in twin- than in single-bearing goats. The plasma NEFA concentration was significantly (p<0.05) higher in twin- than in single-bearing goats. The blood glucose and plasma NEFA concentrations can be used as indices of nutritional status during pregnancy and around parturition in goats.

Effect of Molybdenum Induced Copper Deficiency on Peripheral Blood Cells and Bone Marrow in Buffalo Calves
Randhawa, C.S.; Randhawa, S.S.; Sood, N.K. 509
Copper deficiency was induced in eight male buffalo calves by adding molybdenum (30 ppm, wet basis) to their diet. Copper status was monitored from the liver copper concentration, and a level below 30 ppm (DM basis) was considered deficient. Haemoglobin, haematocrit, and total and differential leucocyte numbers were determined. The functions of peripheral neutrophils were assessed by in vitro phagocytosis and killing of Staphylococcus aureus. The effect of molybdenum-induced copper deficiency on bone marrow was also monitored. The mean total leucocyte count was unaffected, whereas a significant fall in neutrophil count coincided with the fall in hepatic copper level to $23.9{\pm}2.69$ ppm. The reduced blood neutrophil numbers were not accompanied by any change in the proportions of the different neutrophil precursor cells in bone marrow. It was hypothesised that buffalo calves are more tolerant of dietary molybdenum excess than cattle. It was concluded that the neutropenia in molybdenum-induced copper deficiency occurred without any effect on neutrophil synthesis and maturation.
Bone marrow studies in healthy calves revealed a higher percentage of neutrophilic myelocytes and metamyelocytes as compared to cattle.

Dry Matter Intake, Digestibility and Milk Yield by Friesian Cows Fed Two Napier Grass Varieties
Gwayumba, W.; Christensen, D.A.; McKinnon, J.J.; Yu, P. 516
The objective of this study was to compare two Napier grass varieties (Bana vs French Cameroon) and to determine whether feed intake, digestibility, average daily gain (ADG) and milk yield of lactating Friesian cows fed fresh-cut Bana Napier grass were greater than with French Cameroon Napier grass, using a completely randomized design. Results show that Bana Napier grass had similar percentages of dry matter (DM), ash and gross energy (GE) to French Cameroon. Bana grass had a higher percentage of crude protein (CP) and lower fibre fractions (acid detergent fibre (ADF), neutral detergent fibre (NDF) and lignin) compared to French Cameroon. Overall, forage quality was marginally higher in Bana Napier grass than in French Cameroon. The DM and NDF intakes expressed as a percentage of body weight (BW) were similar for both Napier grass types. Both grasses had similar digestible DM and energy. Bana had higher digestible CP but lower digestible ADF and NDF than French Cameroon. Bana Napier did not differ from French Cameroon when fed as the sole diet to lactating cows in terms of low DM intake, milk yield and loss of BW and condition. To improve the efficient utilization of both Napier grass varieties, a supplement capable of supplying 1,085-1,227 g CP/d and 17.0-18.0 Mcal ME/d is required for cows to support moderate gains of 0.22 kg/d and 15 kg of 4% fat-corrected milk/d.

Effects of Feeding Urea and Soybean Meal-Treated Rice Straw on Digestibility of Feed Nutrients and Growth Performance of Bull Calves
Ahmed, S.; Khan, M.J.; Shahjalal, M.; Islam, K.M.S. 522
The experiment was conducted for a period of 56 days with twelve Bangladeshi bull calves of average body weight $127.20{\pm}11.34$ kg. The calves were divided into 3 groups of 4 animals each. The animals were fed urea-treated rice straw designated as A) 4% urea-treated rice straw, B) 4% urea+4% soybean-treated rice straw and C) 4% urea+6% soybean-treated rice straw. In addition, all the animals were supplied 2 kg green grass, 350 g Til-oil-cake and 100 g common salt per 100 kg body weight. Straw was treated with 4% urea solution, and soybean meal at 4 and 6% was added to the treated straw, which was then kept for 48 h in double-layer polythene bags under anaerobic conditions. Urea treatment improved the crude protein (CP) content of rice straw from 2.68 to 8.70%, and it was further increased, to 10.74 and 12.12%, with the addition of 4 and 6% soybean meal. Dry matter (DM) intake (kg) was higher (p<0.05) in C (4.2), followed by B (4.1) and A (4.0). Crude protein intake was significantly higher (p<0.05) in groups B and C than in group A. Total live weight gains were 20.2, 24.8 and 25.6 kg for calves of groups A, B and C, respectively (p<0.01). The addition of soybean meal to treated rice straw did not affect the coefficients of digestibility of DM, OM, EE and NFE; however, CP and CF digestibility were significantly higher in groups B and C (p<0.05).
The values for digestible crude protein (DCP), digestible ether extract (DEE), digestible nitrogen-free extract (DNFE) and total digestible nutrients (TDN) were significantly (p<0.05) higher for diets C and B in comparison with diet A, but there was no significant difference in digestible organic matter (DOM) and digestible crude fibre (DCF) values among the groups. It may be concluded that 4% urea-treated rice straw can be fed to growing bull calves with 2 kg green grass and a small quantity of concentrate without any adverse effect on feed intake and growth. Moreover, soybean meal at 4 and 6% can be added to urea-treated rice straw at the time of treatment for rapid hydrolysis of the urea, which resulted in an improvement in nutrient digestibility and better utilization of rice straw for the growth of bull calves.

Effect of Different Seasons on Cross-Bred Cow Milk Composition and Paneer Yield in Sub-Himalayan Region
Sharma, R.B.; Kumar, Manish; Pathak, V. 528
The study was designed to evaluate seasonal influences on crossbred cow milk composition and paneer yield in the Dhauladhar mountain range of the sub-Himalayan region. Fifty samples per season were collected from a herd of $Jersey{\times}Red\;Sindhi{\times}Local$ crossbred cows during summer (April-June), the rainy season (July-September) and winter (November-February) and analyzed for fat, total solids (TS) and solids-not-fat (SNF). Paneer was prepared by curdling milk at $85{\pm}2^{\circ}C$ with 2.5 per cent citric acid solution. The overall means for fat, TS and SNF content of milk and for paneer yield were 4.528, 13.310, 8.754 and 15.218 per cent, respectively. SNF and TS content varied among seasons, being highest in winter (8.983% and 13.639%), followed by summer (8.835% and 13.403%), and lowest in the rainy season (8.444% and 12.888%). Paneer yield was lowest (14.792%) in the rainy season and highest (15.501%) in the winter season.

Changes of Serum Mineral Concentrations in Horses during Exercise
Inoue, Y.; Osawa, T.; Matsui, A.; Asai, Y.; Murakami, Y.; Matsui, T.; Yano, H. 531
We investigated the exercise-induced changes in the serum concentrations of several minerals in horses. Four well-trained Thoroughbred horses performed exercise for 5 d. The blood hemoglobin (Hb) concentration increased during exercise, recovered to the pre-exercise level immediately after cooling down and did not change again until the end of the experiment. The changes in serum zinc (Zn) and copper (Cu) concentrations were similar to those of blood Hb during the experiment. The serum magnesium (Mg), inorganic phosphorus (Pi) and iron (Fe) concentrations also increased during exercise. Though the serum Pi concentration recovered to the pre-exercise level immediately after cooling down, it decreased further before the end of the experiment. The serum Mg concentration was lower immediately after cooling down than its pre-exercise level but gradually recovered from this temporary reduction. The recovery of the serum Fe concentration was delayed compared with that of the other minerals, recovering only 2 h after cooling down. The serum calcium (Ca) concentration did not change during exercise but rapidly decreased after cooling down; as a result, it was lower immediately after cooling down than its pre-exercise level. It recovered to the pre-exercise level 2 h after cooling down.
The temporary increase in the serum concentrations of all minerals except Ca is considered to result from exercise-induced hemoconcentration, and the stable serum Ca concentration during exercise is possibly due to its strict homeostatic regulation. These results indicate that the serum concentration of each mineral responds differently to exercise in horses, which may be due to differences in metabolism among these minerals.

Evaluation of Some Aquatic Plants from Bangladesh through Mineral Composition, In Vitro Gas Production and In Situ Degradation Measurements
Khan, M.J.; Steingass, H.; Drochner, W. 537
A study was conducted to evaluate the potential nutritive value of different aquatic plants from Bangladesh: duckweed (Lemna trisulca), duckweed (Lemna perpusilla), azolla (Azolla pinnata) and water hyacinth (Eichhornia crassipes). Wide variability in protein and mineral composition, gas production, microbial protein synthesis, rumen degradable nitrogen, and in situ dry matter and crude protein degradability was recorded among species. Crude protein content ranged from 139 to 330 g/kg dry matter (DM). All species were relatively high in Ca, P and Na content and very rich in K, Fe, Mg, Mn, Cu and Zn. The rate of gas production was highest in azolla and lowest in water hyacinth; a similar trend was observed for in situ DM degradability. Crude protein degradability was highest in duckweed. Microbial protein formation at 24 h of incubation ranged from 38.6 to 47.2 mg, and in vitro rumen degradable nitrogen ranged between 31.5 and 48.4%. Based on the present findings it is concluded that these aquatic species have potential as a supplementary feed for livestock.

Effect of Partial Replacement of Green Grass by Urea Treated Rice Straw in Winter on Milk Production of Crossbred Lactating Cows
Sanh, M.V.; Wiktorsson, H.; Ly, L.V. 543
Fresh elephant grass was replaced by urea-treated rice straw (UTRS) to evaluate the effects on milk production of crossbred lactating cows. A total of 16 crossbred F1 cows (Holstein Friesian ${\times}$ Vietnamese Local Yellow), with a body weight of about 400 kg and lactation numbers from three to five, were used in the experiment. The experimental cows were blocked according to the milk yield of the previous eight weeks and divided into 4 homogeneous groups. The experiment was conducted as a Latin square design with 4 treatments and 4 periods. Each period was 4 weeks, with 2 weeks of feed adaptation and 2 weeks of data collection. The ratio of concentrate to roughage in the ration was 50:50. All cows were given constant amounts of elephant grass dry matter (DM), with ratios of 100% grass without UTRS (control treatment 100G), or 75% grass (75G), 50% grass (50G) and 25% grass (25G) with ad libitum UTRS. Daily total DM intake on 100G, 75G, 50G and 25G was 12.04, 12.31, 12.32 and 11.85 kg, and daily ME intake was 121.6, 121.5, 119.4 and 114.3 MJ, respectively. Daily CP intake was similar for all treatments (1.85-1.91 kg). There was a difference (p<0.05) in daily milk yield between 25G and both 100G and 75G (11.7 vs. 12.6 and 12.5 kg, respectively). Milk protein concentration was similar for all treatments, while a tendency towards increased milk fat concentration with increasing UTRS ratio was observed. The cows gained 4-5 kg body weight per month and showed first oestrus 3-4 months after calving. Overall feed conversion for milk production was not affected by the ratio of UTRS in the ration.
It is concluded that replacement of green grass by UTRS at a 50:50 ratio for crossbred lactating cows is as good as feeding 100% green grass in terms of milk yield, body weight gain and feed conversion. UTRS can thus replace green grass at a 1:1 ratio in daily rations for crossbred dairy cows in winter, to cope with the shortage of green grass.

Influence of Dietary Addition of Dried Wormwood (Artemisia sp.) on the Performance, Carcass Characteristics and Fatty Acid Composition of Muscle Tissues of Hanwoo Heifers
Kim, Y.M.; Kim, J.H.; Kim, S.C.; Ha, H.M.; Ko, Y.D.; Kim, C.-H. 549
An experiment was conducted to examine the performance and carcass characteristics of Hanwoo (Korean native beef cattle) heifers and the fatty acid composition of their muscle tissues when the animals were fed diets containing four levels of dried wormwood (Artemisia sp.). The animals were given a basal diet consisting of rice straw and concentrate mixed at a 3:7 ratio (on a DM basis). The treatments were arranged as a completely randomized design with two feeding periods. Heifers were allotted to one of four dietary treatments, designed to progressively substitute dried wormwood for 0, 3, 5 and 10% of the rice straw in the basal diet. There was no difference between the treatment groups in body weight gain throughout the entire period. Feed conversion rate was improved (p<0.05) only by the 3% dried wormwood treatment compared with the basal treatment. Carcass weight, carcass yield and backfat thickness were not altered by wormwood inclusion in any treatment group. The 5% dried wormwood inclusion significantly increased (p<0.05) the loin-eye area over the other treatments. The higher levels (5 and 10%) of dried wormwood inclusion resulted in higher (p<0.05) water holding capacity (WHC) in the loin than the lower levels (0 and 3%). The redness ($a^*$) and yellowness ($b^*$) values of meat color were significantly lower (p<0.05) in the top round muscle of heifers fed the diet containing 3% dried wormwood. Progressively increased intake of dried wormwood led to a linear increase in unsaturated fatty acid content and a linear decrease in saturated fatty acid content in the muscle tissues of Hanwoo heifers. It is concluded that feeding diets in which dried wormwood is substituted for an equal weight of rice straw at the 5% level would be anticipated to provide better quality roughage for beef heifer production and economic benefits for beef cattle producers.

The Effects of Chinese and Argentine Soybeans on Nutrient Digestibility and Organ Morphology in Landrace and Chinese Min Pigs
Qin, G.X.; Xu, L.M.; Jiang, H.L.; van der Poel, A.F.B.; Bosch, M.W.; Verstegen, M.W.A. 555
Twenty Landrace and twenty Min piglets, with an average initial body weight of 22.4 kg, were randomly divided into 5 groups of 4 animals each within each breed. The piglets were housed in individual concrete pens. Each group was fed one of 5 diets, containing either 20% raw Argentine soybeans, 20% processed Argentine soybeans ($118^{\circ}C$ for 7.5 min), 20% raw Chinese soybeans, 20% processed Chinese soybeans ($118^{\circ}C$ for 7.5 min) or no soybean products (control diet). Faecal samples were collected on days 6, 7 and 8 of the treatment period. The digestibilities of dietary nutrients were determined with AIA (acid-insoluble ash) as a marker.
After 17 days of treatment, three piglets from each group were killed. Tissue samples of the small and large intestine for light and electron microscopy were taken immediately after opening of the abdomen; the weight or size of the relevant organs was then measured. The results show that the digestibilities of dry matter (DM), crude protein (CP) and fat were higher in Min piglets than in Landrace piglets (p<0.05). The diets containing processed soybeans had a significantly higher CP digestibility than the control diet and the diets containing raw soybeans (p<0.05). Landrace piglets had heavier and longer small intestines, heavier kidneys and a lighter spleen than Min piglets (p<0.05). The pancreas of the animals fed the diets containing processed soybeans was heavier than that of the animals fed the control diet (p<0.05) and the diets containing raw soybeans; however, the differences between the raw and processed soybean diets were not significant. A significant interaction (p<0.05) between diet and pig breed was observed for the weight of the small intestine: the Landrace piglets increased the weight of their small intestine when fed the diets containing soybeans. Light and scanning electron micrographs showed that the villi of the small intestinal epithelium of animals (especially Landrace piglets) fed the diets containing raw Chinese soybeans were seriously damaged. Transmission electron micrographs showed many vesicles located between the small intestinal microvilli of these piglets. The histological examination also indicated that the proportion of goblet cells in the villi and crypts of the piglets consuming the control diet was significantly lower (p<0.01 and p<0.02, respectively) than that of the animals consuming the diets containing raw or processed soybeans.

Evaluation of Sorghum (Sorghum bicolor) as Replacement for Maize in the Diet of Growing Rabbits (Oryctolagus cuniculus)
Muriu, J.I.; Njoka-Njiru, E.N.; Tuitoek, J.K.; Nanua, J.N. 565
Thirty-six young New Zealand White rabbits were used in a randomised complete block (RCB) design with a $3{\times}2$ factorial treatment arrangement to study the suitability of sorghum as a substitute for maize in the diet of growing rabbits in Kenya. Six different diets were formulated to contain 35% of one of three types of grain (maize, white sorghum or brown sorghum) and one of two levels of crude protein (CP; 16 or 18.5%), and were fed to growing rabbits for a period of six weeks. The tannin content of the grains was 0.05, 0.52 and 5.6% catechin equivalents for maize, white sorghum and brown sorghum, respectively. Weaning weight at 35 days of age was used as the blocking criterion at the beginning of the experiment. The results for feed intake, weight gain, feed conversion efficiency and feed digestibility, as well as the blood parameters, indicated that white sorghum was not significantly different from maize. Animals fed diets containing brown sorghum had a lower average daily gain (ADG) and a poorer feed conversion efficiency (FCE) (p<0.01) in comparison with those fed diets containing maize or white sorghum. The 18.5% CP level gave a better FCE (p<0.05) than the 16% CP level; however, increasing the level of CP did not improve the utilisation of any of the grains. It was concluded that white sorghum can effectively substitute for maize in the diet of growing rabbits. On the other hand, the use of brown sorghum in the diets of growing rabbits may compromise their growth rate.
This may be due to the high concentration of tannins in the brown sorghum.

Effects of Active Immunization against Somatostatin or its Analogues on Milk Protein Synthesis of Rat Mammary Gland Cells Kim, J.Y.;Cho, K.K.;Chung, M.I.;Kim, J.D.;Woo, J.H.;Yun, C.H.;Choi, Y.J. 570
The effects of active immunization against native 14-mer somatostatin (SRIF, somatotropin release-inhibiting factor) and its two 14-mer somatostatin analogues on milk production in rat mammary cells were studied. Native SRIF, Tyr11-somatostatin (Tyr11-SRIF), and D-Trp8, D-Cys14-somatostatin (Trp8Cys14-SRIF) were conjugated to bovine serum albumin (BSA) for immunogen preparation. Twenty-four female Sprague-Dawley rats were divided into four groups and immunized against saline (Control), SRIF, Tyr11-SRIF, and Trp8Cys14-SRIF at five weeks of age. Booster immunizations were performed at 7, 9, and 11 weeks of age. SRIF-immunized rats were mated at 10 weeks of age. The blood and mammary glands were collected at day 15 of pregnancy and of lactation. To measure the amount of milk protein synthesis in the mammary gland, mammary cells isolated from the pregnant and the lactating rats were cultured in the presence of $^3H$-lysine. No significant differences in growth performance, concentration of growth hormone in the circulation, and the amount of milk protein synthesis were observed among the groups. Induced levels of serum anti-SRIF antibody in the SRIF and Tyr11-SRIF groups, but not in the Trp8Cys14-SRIF group, were significantly higher than that of the control group during the pregnancy and lactation periods. The results suggest that active immunization against native 14-mer SRIF and Tyr11-SRIF was able to induce anti-SRIF antibodies, but did not affect milk protein synthesis.

Cloning of cDNA Encoding PAS-4 Glycoprotein, an Integral Glycoprotein of Bovine Mammary Epithelial Cell Membrane Hwangbo, Sik;Lee, Soo-Won;Kanno, Chouemon 576
Bovine PAS-4 is an integral membrane glycoprotein expressed in mammary epithelial cells. Complementary DNA (cDNA) cloning of PAS-4 was performed by reverse-transcriptase polymerase chain reaction (RT-PCR) with oligonucleotide probes based on its amino-terminal and internal tryptic peptides. The cloned PAS-4 cDNA was 1,852 nucleotides (nt) long and its open reading frame (ORF) was 1,413 bases long. The deduced amino acid sequence indicated that PAS-4 consisted of 471 amino acid residues with a molecular weight of 52,796, bearing 8 potential N-glycosylation sites and 9 cysteine residues. A partial bovine CD36 cDNA from liver was also sequenced, and the homology between the two nucleotide sequences was 94%. Most of the identical amino acid residues were in the luminal/extracellular domains. Contrary to PAS-4, bovine liver CD36 displays 6 potential N-glycosylation sites, which were located, except for those at positions 101 and 171, at the same positions as in PAS-4 cDNA. Cysteine residues of PAS-4 and CD36 were the same in position and number. Northern blot analysis showed that PAS-4 was widely expressed, although its mRNA steady-state levels vary considerably among the analyzed cell types. PAS-4 possessed hydrophobic amino acid segments near the amino- and carboxyl-termini. The two short cytoplasmic tails of the amino- and carboxyl-terminal ends consisted of 5-7 and 8-11 amino acid residues, respectively.

Effects of High Level of Sucrose on the Moisture Content, Water Activity, Protein Denaturation and Sensory Properties in Chinese-Style Pork Jerky Chen, W.S.;Liu, D.C.;Chen, M.T.
585
The effects of a high level of sucrose on the moisture content, water activity, protein denaturation and sensory properties in Chinese-style pork jerky were investigated. Pork jerky with different levels (0, 12, 15, 18 and 21%) of sucrose was prepared. Fifteen frozen boneless pork legs from different animals were used in this trial. Sucrose is a non-reducing disaccharide and would not undergo non-enzymatic browning. Some studies pointed out that sucrose might be hydrolyzed during freezing, dehydration and storage into glucose and fructose, and cause non-enzymatic browning in meat products. The results showed that the moisture content and water activity of pork jerky decreased with an increase of the level of sucrose. At the same time, shear value increased due to the reduction of moisture content and water activity by osmotic dehydration. However, a higher level of sucrose had a significantly negative effect on the protein solubility and extractability of myosin heavy chain of pork jerky due to non-enzymatic browning. From the results of sensory panel tests, the pork jerky with 21% sucrose seemed to be more acceptable to the panelists in hardness, sweetness and overall acceptability.

Evaluation of Ultrasound for Prediction of Carcass Meat Yield and Meat Quality in Korean Native Cattle (Hanwoo) Song, Y.H.;Kim, S.J.;Lee, S.K. 591
Three hundred thirty-five progeny testing steers of Korean beef cattle were evaluated ultrasonically for back fat thickness (BFT), longissimus muscle area (LMA) and intramuscular fat (IF) before slaughter. Class measurements associated with the Korean yield grade and quality grade were also obtained. Residual standard deviations between ultrasonic estimates and carcass measurements of BFT and LMA were 1.49 mm and $0.96 cm^2$, respectively. The linear correlation coefficients (p<0.01) between ultrasonic estimates and carcass measurements of BFT, LMA and IF were 0.75, 0.57 and 0.67, respectively. Results for improving predictions of yield grade by four methods (the Korean yield grade index equation, fat depth alone, regression and decision tree methods) were 75.4%, 79.6%, 64.3% and 81.4%, respectively. We conclude that the decision tree method can easily predict yield grade and is also useful for increasing the prediction accuracy rate.

Treatment of Microencapsulated ${\beta}$-Galactosidase with Ozone: Effect on Enzyme and Microorganism Kwak, H.S.;Lee, J.B.;Ahn, J. 596
The present study was designed to examine the effect of ozone treatment of microencapsulated ${\beta}$-galactosidase on inactivation of the enzyme and sterilization of microorganisms. The efficiency was highest (78.4%) when the ratio of polyglycerol monostearate (PGMS) was 15:1. Activities of lactase remaining outside the capsule were affected by ozone treatment. With the increase of ozone concentration and duration of ozone treatment, the activity was reduced significantly. In the sensory aspect, with 2% microcapsule addition, no significant difference in sweetness was found compared with a market milk during 12 d of storage. The above result indicated that an additional washing process of lactase was not necessary to inactivate the residual enzyme. In a subsequent study, the vegetative cells of microorganisms were completely killed by treatment with 10 ppm ozone for 10 min. The present study provides evidence that ozone treatment can be used as an inactivation and sterilization process. In addition, these results suggest that acceptable milk products containing lactase microcapsules made by PGMS can be prepared with ozone treatment.
Studies on Lao-Chao Culture Filtrate for a Flavoring Agent in a Yogurt-Like Product Liu, Yi-Chung;Chen, Ming-Ju;Lin, Chin-Wen 602
Lao-chao is a traditional Chinese fermented rice product with a sweet and fruity flavor, containing high levels of glucose, a little alcohol and milk-clotting characteristics. In order to optimize commercial production of lao-chao, Rhizopus javanicus and Saccharomyces cerevisiae were selected as the mold and yeast starter, respectively. A commercial mixed starter (chiu-yao) was used as control. Fermentation of the experimental combination revealed a sharp drop in pH (to 4.5) on the fourth day, remaining constant thereafter. The content of reducing sugars gradually decreased throughout the entire fermentation period. Of the free amino acids, higher quantities of alanine, leucine, proline, glutamic acid, glutamine and $NH_3$ were noted. For sugars, glucose revealed the highest concentration, while organic acid levels, including those for oxalic, lactic, citric and pyroglutamic acid, increased throughout the fermentation period. Twenty-one compounds were identified by gas chromatography from aroma concentrates of the lao-chao culture filtrate, prepared using the headspace method. For the flavor components, higher quantities of ethanol, fusel oil and ester were determined in both culture filtrates. In regard to the evaluation of the yogurt-like product, there were significant differences in alcoholic smell, texture and curd firmness.

Prevalence of Fumonisin Contamination in Corn and Corn-based Feeds in Taiwan Cheng, Yeong-Hsiang;Wu, Jih-Fang;Lee, Der-Nan;Yang, Che-Ming J. 610
The purpose of this study was to investigate the prevalence of fumonisin contamination in corn and corn-based feeds in Taiwan. A total of 233 samples were collected from 8 feed mill factories located in four different regions of Taiwan. The presence of fumonisin $B_1$ ($FB_1$) and $B_2$ ($FB_2$) was determined by thin layer chromatography, while the total fumonisin content was determined using immuno-affinity column cleanup and fluorometer quantitation. Our results showed that 55 samples of swine feeds had the highest percentage of incidence of $FB_1$ and $FB_2$ (41.8% and 41.8%, respectively), followed by 66 samples of duck feeds (40.9% and 37.8%). However, the percentage of incidence of $FB_1$ and $FB_2$ was much lower in 43 samples of broiler feeds (23.2% and 13.9%) and 69 samples of corn (17.3% and 10.1%). Corn and duck feeds were found to have significantly higher mean levels of total fumonisins ($5.4{\pm}1.5$ and $5.8{\pm}0.6$ ppm, respectively) than swine feeds ($2.9{\pm}0.4$ ppm) and broiler feeds ($3.0{\pm}0.5$ ppm). Comparing fumonisin distribution across different regions, the highest percentage of $FB_1$ incidence (39.2%) was found in the eastern region of Taiwan, and the total fumonisin level ($4.5{\pm}0.7$ ppm) was significantly higher than in other regions. However, the highest percentage of $FB_2$ incidence (32.0%) was found in the central region of Taiwan. Trimonthly analysis of the data showed that both a high percentage of $FB_1$ and $FB_2$ incidence (39.3% and 37.7%) and a high total concentration of fumonisins ($5.7{\pm}0.4$ ppm) were found in the period of Jan. to Mar.; the incidence and concentration were significantly higher than in other trimonthly periods. These results indicate that fumonisin B mycotoxins are both widespread and persistent in feed-grade corn and corn-based feeds in Taiwan.
Computation of the Łojasiewicz exponent for a germ of a smooth function in two variables. Volume 240 / 2018. Ha Huy Vui. Studia Mathematica 240 (2018), 161-176. MSC: 58K55, 14P05, 32C99. DOI: 10.4064/sm8676-4-2017. Published online: 1 September 2017.
Let $f:(\mathbb {R}^2,0)\rightarrow (\mathbb {R},0)$ be a germ of a smooth function. We give a sufficient condition for the Łojasiewicz inequality to hold for $f$, i.e. there exist a neighbourhood $\varOmega $ of the origin and constants $c, \alpha > 0$ such that $$ |f(x)|\geq c\operatorname {dist}(x, f^{-1}(0))^{\alpha } $$ for all $x\in \varOmega .$ Then, under this condition, we compute the Łojasiewicz exponent of $f.$ As a by-product we obtain a formula for the Łojasiewicz exponent of a germ of an analytic function, which is different from that of T. C. Kuo [Comment. Math. Helv. 49 (1974), 201–213].
Ha Huy Vui, Institute of Mathematics, VAST, 18 Hoang Quoc Viet, Cau Giay District; Thang Long Institute of Mathematics and Applied Sciences, Nghiem Xuan Yem Road, Hoang Mai District.
Active Calculus Matthew Boelkins
Section 7.3 Euler's method

Motivating Questions
What is Euler's method and how can we use it to approximate the solution to an initial value problem?
How accurate is Euler's method?

In Section 7.2, we saw how a slope field can be used to sketch solutions to a differential equation. In particular, the slope field is a plot of a large collection of tangent lines to a large number of solutions of the differential equation, and we sketch a single solution by simply following these tangent lines. With a little more thought, we can use this same idea to approximate numerically the solutions of a differential equation.

Preview Activity 7.3.1. Consider the initial value problem \begin{equation*} \frac{dy}{dt} = \frac12 (y + 1), \ y(0) = 0\text{.} \end{equation*}
Use the differential equation to find the slope of the tangent line to the solution \(y(t)\) at \(t=0\text{.}\) Then use the given initial value to find the equation of the tangent line at \(t=0\text{.}\)
Sketch the tangent line on the axes provided in Figure 7.3.1 on the interval \(0\leq t\leq 2\) and use it to approximate \(y(2)\text{,}\) the value of the solution at \(t=2\text{.}\) Figure 7.3.1. Grid for plotting the tangent line.
Assuming that your approximation for \(y(2)\) is the actual value of \(y(2)\text{,}\) use the differential equation to find the slope of the tangent line to \(y(t)\) at \(t=2\text{.}\) Then, write the equation of the tangent line at \(t=2\text{.}\)
Add a sketch of this tangent line on the interval \(2\leq t\leq 4\) to your plot in Figure 7.3.1; use this new tangent line to approximate \(y(4)\text{,}\) the value of the solution at \(t=4\text{.}\)
Repeat the same step to find an approximation for \(y(6)\text{.}\)

Subsection 7.3.1 Euler's Method
Preview Activity 7.3.1 demonstrates an algorithm known as Euler's Method, which generates a numerical approximation to the solution of an initial value problem.
In this algorithm, we will approximate the solution by taking horizontal steps of a fixed size that we denote by \(\Delta t\text{.}\) Before explaining the algorithm in detail, let's remember how we compute the slope of a line: the slope is the ratio of the vertical change to the horizontal change, as shown in Figure 7.3.2. In other words, \(m = \frac{\Delta y}{\Delta t}\text{.}\) Solving for \(\Delta y\text{,}\) we see that the vertical change is the product of the slope and the horizontal change, or \begin{equation*} \Delta y = m\Delta t\text{.} \end{equation*} Figure 7.3.2. The role of slope in Euler's Method. Now, suppose that we would like to solve the initial value problem \begin{equation*} \frac{dy}{dt} = t - y, \ y(0) = 1\text{.} \end{equation*} There is an algorithm by which we can find an algebraic formula for the solution to this initial value problem, and we can check that this solution is \(y(t) = t -1 + 2e^{-t}\text{.}\) But we are instead interested in generating an approximate solution by creating a sequence of points \((t_i, y_i)\text{,}\) where \(y_i\approx y(t_i)\text{.}\) For this first example, we choose \(\Delta t = 0.2\text{.}\) Since we know that \(y(0) = 1\text{,}\) we will take the initial point to be \((t_0,y_0) = (0,1)\) and move horizontally by \(\Delta t = 0.2\) to the point \((t_1,y_1)\text{.}\) Thus, \(t_1=t_0+\Delta t = 0.2\text{.}\) Now, the differential equation tells us that the slope of the tangent line at this point is \begin{equation*} m=\frac{dy}{dt}\bigg\vert_{(0,1)} = 0-1 = -1\text{,} \end{equation*} so to move along the tangent line by taking a horizontal step of size \(\Delta t=0.2\text{,}\) we must also move vertically by \begin{equation*} \Delta y = m\Delta t = -1\cdot 0.2 = -0.2\text{.} \end{equation*} We then have the approximation \(y(0.2) \approx y_1= y_0 + \Delta y = 1 - 0.2 = 0.8\text{.}\) At this point, we have executed one step of Euler's method, as seen graphically in Figure 7.3.3. Figure 7.3.3. One step of Euler's method. Now we repeat this process: at \((t_1,y_1) = (0.2,0.8)\text{,}\) the differential equation tells us that the slope is \begin{equation*} m=\frac{dy}{dt}\bigg\vert_{(0.2,0.8)} = 0.2-0.8 = -0.6\text{.} \end{equation*} If we move forward horizontally by \(\Delta t\) to \(t_2=t_1+\Delta t = 0.4\text{,}\) we must move vertically by \begin{equation*} \Delta y = -0.6\cdot0.2 = -0.12\text{.} \end{equation*} We consequently arrive at \(y_2=y_1+\Delta y = 0.8-0.12 = 0.68\text{,}\) which gives \(y(0.4)\approx 0.68\text{.}\) Now we have completed the second step of Euler's method, as shown in Figure 7.3.4. Figure 7.3.4. Two steps of Euler's method. If we continue in this way, we may generate the points \((t_i, y_i)\) shown in Figure 7.3.5. Because we can find a formula for the actual solution \(y(t)\) to this differential equation, we can graph \(y(t)\) and compare it to the points generated by Euler's method, as shown in Figure 7.3.6. Figure 7.3.5. The points and piecewise linear approximate solution generated by Euler's method. Figure 7.3.6. The approximate solution compared to the exact solution (shown in blue).
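The stepping rule above is easy to automate. The following short Python sketch is our own illustration and not part of the original text; it simply iterates the update \(\Delta y = m\Delta t\) for the example \(dy/dt = t - y\text{,}\) \(y(0) = 1\text{,}\) and reproduces the points plotted in Figure 7.3.5.

```python
# Euler's method for dy/dt = f(t, y): repeatedly step y by m * dt,
# where m is the slope supplied by the differential equation.
def euler(f, t0, y0, dt, n_steps):
    t, y = t0, y0
    points = [(t, y)]
    for _ in range(n_steps):
        m = f(t, y)        # slope of the tangent line at (t, y)
        y = y + m * dt     # vertical change: Delta y = m * Delta t
        t = t + dt         # horizontal step of size Delta t
        points.append((t, y))
    return points

# The worked example from the text: dy/dt = t - y, y(0) = 1, dt = 0.2.
for t, y in euler(lambda t, y: t - y, 0.0, 1.0, 0.2, 6):
    print(f"t = {t:.1f}   y = {y:.4f}")
```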
Because we need to generate a large number of points \((t_i,y_i)\text{,}\) it is convenient to organize the implementation of Euler's method in a table as shown. We begin with the given initial data.

\(t_i\) \(y_i\) \(dy/dt\) \(\Delta y\)
\(0.0000\) \(1.0000\)

From here, we compute the slope of the tangent line \(m=dy/dt\) using the formula for \(dy/dt\) from the differential equation, and then we find \(\Delta y\text{,}\) the change in \(y\text{,}\) using the rule \(\Delta y = m\Delta t\text{.}\)

\(t_i\) \(y_i\) \(dy/dt\) \(\Delta y\)
\(0.0000\) \(1.0000\) \(-1.0000\) \(-0.2000\)

Next, we increase \(t_i\) by \(\Delta t\) and \(y_i\) by \(\Delta y\) to get

\(t_i\) \(y_i\) \(dy/dt\) \(\Delta y\)
\(0.0000\) \(1.0000\) \(-1.0000\) \(-0.2000\)
\(0.2000\) \(0.8000\)

We continue the process for however many steps we decide, eventually generating a table like Table 7.3.7.

Table 7.3.7. Euler's method for 6 steps with \(\Delta t = 0.2\text{.}\)
\(t_i\) \(y_i\) \(dy/dt\) \(\Delta y\)
\(0.0000\) \(1.0000\) \(-1.0000\) \(-0.2000\)
\(0.2000\) \(0.8000\) \(-0.6000\) \(-0.1200\)
\(0.4000\) \(0.6800\) \(-0.2800\) \(-0.0560\)
\(0.6000\) \(0.6240\) \(-0.0240\) \(-0.0048\)
\(0.8000\) \(0.6192\) \(0.1808\) \(0.0362\)
\(1.0000\) \(0.6554\) \(0.3446\) \(0.0689\)
\(1.2000\) \(0.7243\)

Activity 7.3.2. Consider the initial value problem \begin{equation*} \frac{dy}{dt} = 2t-1, \ y(0) = 0\text{.} \end{equation*}
Use Euler's method with \(\Delta t = 0.2\) to approximate the solution at \(t_i = 0.2, 0.4, 0.6, 0.8\text{,}\) and \(1.0\text{.}\) Record your work in the following table, and sketch the points \((t_i, y_i)\) on the axes provided.

Table 7.3.8. Table for recording results of Euler's method.
\(t_i\) \(y_i\) \(dy/dt\) \(\Delta y\)
\(0.0000\) \(0.0000\)
\(0.2000\)
\(0.4000\)
\(0.6000\)
\(0.8000\)
\(1.0000\)

Figure 7.3.9. Grid for plotting points generated by Euler's method.
Find the exact solution to the original initial value problem and use this function to find the error in your approximation at each one of the points \(t_i\text{.}\)
Explain why the value \(y_5\) generated by Euler's method for this initial value problem produces the same value as a left Riemann sum for the definite integral \(\int_0^1 (2t-1)~dt\text{.}\)
How would your computations differ if the initial value was \(y(0) = 1\text{?}\) What does this mean about different solutions to this differential equation?

Activity 7.3.3. Consider the differential equation \(\frac{dy}{dt} = 6y-y^2\text{.}\)
Sketch the slope field for this differential equation on the axes provided in Figure 7.3.10. Figure 7.3.10. Grid for plotting the slope field of the given differential equation.
Identify any equilibrium solutions and determine whether they are stable or unstable.
What is the long-term behavior of the solution that satisfies the initial value \(y(0) = 1\text{?}\)
Using the initial value \(y(0) = 1\text{,}\) use Euler's method with \(\Delta t = 0.2\) to approximate the solution at \(t_i = 0.2, 0.4, 0.6, 0.8\text{,}\) and \(1.0\text{.}\) Record your results in Table 7.3.11 and sketch the corresponding points \((t_i, y_i)\) on the axes provided in Figure 7.3.12. Note the different horizontal scale on the axes in Figure 7.3.12 compared to Figure 7.3.10.

Table 7.3.11. Table for recording results of Euler's method with \(\Delta t = 0.2\text{.}\)
\(t_i\) \(y_i\)
\(0.0\) \(1.0000\)
\(0.2\)
\(0.4\)
\(0.6\)
\(0.8\)
\(1.0\)

Figure 7.3.12. Axes for plotting the results of Euler's method.
What happens if we apply Euler's method to approximate the solution with \(y(0) = 6\text{?}\)

Subsection 7.3.2 The error in Euler's method
Since we are approximating the solutions to an initial value problem using tangent lines, we should expect that the error in the approximation will be smaller when the step size is smaller. Consider the initial value problem \begin{equation*} \frac{dy}{dt} = y, \ y(0) = 1\text{,} \end{equation*} whose solution we can easily find.
The question posed by this initial value problem is "what function do we know that is the same as its own derivative and has value 1 when \(t=0\text{?}\)" It is not hard to see that the solution is \(y(t) = e^t\text{.}\) We now apply Euler's method to approximate \(y(1) = e\) using several values of \(\Delta t\text{.}\) These approximations will be denoted by \(E_{\Delta t}\text{,}\) and we'll use them to see how accurate Euler's Method is. To begin, we apply Euler's method with a step size of \(\Delta t = 0.2\text{.}\) In that case, we find that \(y(1) \approx E_{0.2} = 2.4883\text{.}\) The error is therefore \begin{equation*} y(1) - E_{0.2} = e - 2.4883 \approx 0.2300\text{.} \end{equation*} Repeatedly halving \(\Delta t\) gives the following results, expressed in both tabular and graphical form.

Table 7.3.13. Errors that correspond to different \(\Delta t\) values.
\(\Delta t\) \(E_{\Delta t}\) Error
\(0.200\) \(2.4883\) \(0.2300\)
\(0.100\) \(2.5937\) \(0.1246\)
\(0.050\) \(2.6533\) \(0.0650\)
\(0.025\) \(2.6851\) \(0.0332\)

Figure 7.3.14. A plot of the error as a function of \(\Delta t\text{.}\)
Notice, both numerically and graphically, that the error is roughly halved when \(\Delta t\) is halved. This example illustrates the following general principle. If Euler's method is used to approximate the solution to an initial value problem at a point \(\overline{t}\text{,}\) then the error is proportional to \(\Delta t\text{.}\) That is, \begin{equation*} y(\overline{t}) - E_{\Delta t} \approx K\Delta t \end{equation*} for some constant of proportionality \(K\text{.}\)

Subsection 7.3.3 Summary
Euler's method is an algorithm for approximating the solution to an initial value problem by following the tangent lines while we take horizontal steps across the \(t\)-axis.
If we wish to approximate \(y(\overline{t})\) for some fixed \(\overline{t}\) by taking horizontal steps of size \(\Delta t\text{,}\) then the error in our approximation is proportional to \(\Delta t\text{.}\)

Exercises 7.3.4 Exercises
1. A few steps of Euler's method. Consider the differential equation \(y'=-x-y\text{.}\) Use Euler's method with \(\Delta x=0.1\) to estimate \(y\) when \(x=1.4\) for the solution curve satisfying \(y(1)=1\): Euler's approximation gives \(y(1.4)\approx\) Use Euler's method with \(\Delta x=0.1\) to estimate \(y\) when \(x=2.4\) for the solution curve satisfying \(y(1) = 0\): Euler's approximation gives \(y(2.4)\approx\)
2. Using Euler's method for a solution of \(y'=-2y\). Consider the solution of the differential equation \(y' = -2 y\) passing through \(y(0) = 1\text{.}\) A. Sketch the slope field for this differential equation, and sketch the solution passing through the point (0,1). B. Use Euler's method with step size \(\Delta x=0.2\) to estimate the solution at \(x=0.2,0.4,\ldots,1\text{,}\) using these to fill in the following table. (Be sure not to round your answers at each step!) \(x =\) 0 0.2 0.4 0.6 0.8 1.0 \(y\approx\) 1 C. Plot your estimated solution on your slope field. Compare the solution and the slope field. Is the estimated solution an over or under estimate for the actual solution? D. Check that \(y = e^{-2 x}\) is a solution to \(y' = -2 y\) with \(y(0) = 1\text{.}\)
3. Using Euler's method with different time steps. Use Euler's method to solve \begin{equation*} \frac{dB}{dt}= 0.05 B \end{equation*} with initial value \(B=800\) when \(t=0\text{.}\) A. \(\Delta t = 1\) and 1 step: \(B(1) \approx\) B. \(\Delta t = 0.5\) and 2 steps: \(B(1) \approx\) C. \(\Delta t = 0.25\) and 4 steps: \(B(1) \approx\) D. Suppose \(B\) is the balance in a bank account earning interest.
Be sure that you can explain why the result of your calculation in part (a) is equivalent to compounding the interest once a year instead of continuously. Then interpret the result of your calculations in parts (b) and (c) in terms of compound interest.
4. Newton's Law of Cooling says that the rate at which an object, such as a cup of coffee, cools is proportional to the difference between the object's temperature and room temperature. If \(T(t)\) is the object's temperature and \(T_r\) is room temperature, this law is expressed as \begin{equation*} \frac{dT}{dt} = -k(T-T_r)\text{,} \end{equation*} where \(k\) is a constant of proportionality. In this problem, temperature is measured in degrees Fahrenheit and time in minutes. Two calculus students, Alice and Bob, enter a 70\(^\circ\) classroom at the same time. Each has a cup of coffee that is 100\(^\circ\text{.}\) The differential equation for Alice has a constant of proportionality \(k=0.5\text{,}\) while the constant of proportionality for Bob is \(k=0.1\text{.}\) What is the initial rate of change for Alice's coffee? What is the initial rate of change for Bob's coffee? What feature of Alice's and Bob's cups of coffee could explain this difference? As the heating unit turns on and off in the room, the temperature in the room is \begin{equation*} T_r=70+10\sin t\text{.} \end{equation*} Implement Euler's method with a step size of \(\Delta t = 0.1\) to approximate the temperature of Alice's coffee over the time interval \(0\leq t\leq 50\text{.}\) This will most easily be performed using a spreadsheet such as Excel. Graph the temperature of her coffee and room temperature over this interval. In the same way, implement Euler's method to approximate the temperature of Bob's coffee over the same time interval. Graph the temperature of his coffee and room temperature over the interval. Explain the similarities and differences that you see in the behavior of Alice's and Bob's cups of coffee.
5. We have seen that the error in approximating the solution to an initial value problem is proportional to \(\Delta t\text{.}\) That is, if \(E_{\Delta t}\) is the Euler's method approximation to the solution to an initial value problem at \(\overline{t}\text{,}\) then \begin{equation*} y(\overline{t})-E_{\Delta t} \approx K\Delta t\text{.} \end{equation*} In this problem, we will see how to use this fact to improve our estimates, using an idea called accelerated convergence. We will create a new approximation by assuming the error is exactly proportional to \(\Delta t\text{,}\) according to the formula \begin{equation*} y(\overline{t})-E_{\Delta t} =K\Delta t\text{.} \end{equation*} Using our earlier results from the initial value problem \(dy/dt = y\) and \(y(0)=1\) with \(\Delta t = 0.2\) and \(\Delta t = 0.1\text{,}\) we have \begin{align*} y(1) - 2.4883 &= 0.2K\\ y(1) - 2.5937 &= 0.1K\text{.} \end{align*} This is a system of two linear equations in the unknowns \(y(1)\) and \(K\text{.}\) Solve this system to find a new approximation for \(y(1)\text{.}\) (You may remember that the exact value is \(y(1) = e = 2.71828\ldots\text{.}\)) Use the other data, \(E_{0.05} = 2.6533\) and \(E_{0.025} = 2.6851\text{,}\) to do similar work as in (a) to obtain another approximation. Which gives the better approximation? Why do you think this is?
Let's now study the initial value problem \begin{equation*} \frac{dy}{dt} = t-y, \ y(0) = 0\text{.} \end{equation*} Approximate \(y(0.3)\) by applying Euler's method to find approximations \(E_{0.1}\) and \(E_{0.05}\text{.}\) Now use the idea of accelerated convergence to obtain a better approximation. (For the sake of comparison, you want to note that the actual value is \(y(0.3) = 0.0408\text{.}\))
6. In this problem, we'll modify Euler's method to obtain better approximations to solutions of initial value problems. This method is called the Improved Euler's method. In Euler's method, we walk across an interval of width \(\Delta t\) using the slope obtained from the differential equation at the left endpoint of the interval. Of course, the slope of the solution will most likely change over this interval. We can improve our approximation by trying to incorporate the change in the slope over the interval. Let's again consider the initial value problem \(dy/dt = y\) and \(y(0) = 1\text{,}\) which we will approximate using steps of width \(\Delta t = 0.2\text{.}\) Our first interval is therefore \(0\leq t \leq 0.2\text{.}\) At \(t=0\text{,}\) the differential equation tells us that the slope is 1, and the approximation we obtain from Euler's method is that \(y(0.2)\approx y_1= 1+ 1(0.2)= 1.2\text{.}\) This gives us some idea for how the slope has changed over the interval \(0\leq t\leq 0.2\text{.}\) We know the slope at \(t=0\) is 1, while the slope at \(t=0.2\) is 1.2, trusting in the Euler's method approximation. We will therefore refine our estimate of the initial slope to be the average of these two slopes; that is, we will estimate the slope to be \((1+1.2)/2 = 1.1\text{.}\) This gives the new approximation \(y(0.2) \approx y_1 = 1 + 1.1(0.2) = 1.22\text{.}\) The first few steps look like what is found in Table 7.3.15.

Table 7.3.15. The first several steps of the improved Euler's method
\(t_i\) \(y_i\) Slope at \((t_{i+1},y_{i+1})\) Average slope
\(0.0\) \(1.0000\) \(1.2000\) \(1.1000\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)

Continue with this method to obtain an approximation for \(y(1) = e\text{.}\) Repeat this method with \(\Delta t = 0.1\) to obtain a better approximation for \(y(1)\text{.}\) We saw that the error in Euler's method is proportional to \(\Delta t\text{.}\) Using your results from parts (a) and (b), what power of \(\Delta t\) appears to be proportional to the error in the Improved Euler's Method?

"Euler" is pronounced "Oy-ler." Among other things, Euler is the mathematician credited with the famous number \(e\text{;}\) if you incorrectly pronounce his name "You-ler," you fail to appreciate his genius and legacy.
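For readers who want to experiment with the Improved Euler's method described in the last exercise, the sketch below is our own illustrative Python implementation (this scheme is often called Heun's method); it is not part of the original exercise, and the function names are our own choices.

```python
# Improved Euler (Heun's method): take an ordinary Euler step as a
# predictor, then advance using the average of the slopes at both ends.
def improved_euler(f, t0, y0, dt, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        m_left = f(t, y)              # slope at the left endpoint
        y_pred = y + m_left * dt      # ordinary Euler prediction
        m_right = f(t + dt, y_pred)   # slope at the predicted point
        y = y + 0.5 * (m_left + m_right) * dt
        t = t + dt
    return y

# dy/dt = y, y(0) = 1: approximate y(1) = e with Delta t = 0.2.
print(improved_euler(lambda t, y: y, 0.0, 1.0, 0.2, 5))  # about 2.7027
```

Comparing the result to \(e \approx 2.71828\) and repeating with smaller steps suggests an error roughly proportional to \((\Delta t)^2\text{,}\) which is the point of the last part of the exercise.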
Metrics for network power based on Castells' Network Theory of Power: a case study on Brazilian elections
Letícia Verona (ORCID: orcid.org/0000-0002-8190-4540)1, Jonice Oliveira1, José Vitor da Cunha Hisse1 & Maria Luiza Machado Campos1
The identification of power groups that act within the political structure is a tool for citizens and also a challenging research topic, as it involves an unstable and complex system of interest forces. This paper presents metrics based on sociologist Manuel Castells' Network Theory of Power that reflect key factors for evaluating power: the imbalance between relationships; the ability to program the rules and protocols of the network; and the ability to serve as a switcher between two or more networks. A case study was developed using a network built on data from Brazilian elections about electoral donations since 2002. The application of the metrics enabled us to highlight some of the corporate and party interests dominant in the Brazilian context. The proposed metrics reflect the main contribution of this work: an approximation between sociological theory and topological analysis of a network. The use of domain knowledge combined with bottom-up strategies can leverage the comprehension of power and influence in political networks.
Social power is an important concept to understand the actions taken within a particular context. The analysis of how much power one actor has over another is crucial to uncover the interests to which these actions are subordinate. When the agents are members of the government, they have the potential to affect millions of people and shape society. The identification of power groups that act within the political structure is a tool for citizens to face the debate with a critical and informed position. As the world moves toward a radical concentration of power in the hands of a few oligarchs, the study of its mechanisms is fundamental for activists and social movements to elaborate counter-power initiatives. The political context also offers a challenging and complex scenario for research: the balance in relations is always changing; new political agents thrive; old politicians retire; pressure from the citizens can remove a politician from duty; corporations try to manipulate laws or market regulations in their benefit; elections can renew or change completely those who have power positions. Power is frequently used interchangeably with influence and authority. It is necessary to remember classical sociological definitions to perceive the subtle differences between them: in [1] power is defined as potential influence and authority as a temporary power related to a specific role. Social influence and, more specifically, its propagation through networks, have gathered a lot of attention in computer science over the last years [2–6], leveraged by the amount of data available online. Social influence analysis aims at qualitatively and quantitatively measuring the influence of one person on others. The main problems in the area are (i) how to quantify the influence of each user and (ii) how to identify the so-called power elite, the key decision makers in a network. There are several definitions of influence and methods for calculating influence scores for different empirical purposes [7]. SNA (Social Network Analysis) is a field of research founded on sociology, physics, biology, mathematics, and computer science [8].
The construction of the network is an arbitrary abstraction guided by scientists who decide what a node is and how the connections between nodes are made, for example using followers in social media applications [9–11] or co-authorship of academic papers [12–14]. From the connections, the most important nodes are determined by metrics like PageRank [15], betweenness centrality [16], among many others. In most of the studies, the network structure (or topology) reveals the influence dynamics in a bottom-up approach. However, there are many possibilities still open, and the present study aims to add to the discussion on the evaluation of power and influence in social networks by applying a top-down approach. The research question is: how can we measure power in a political network using concepts from sociology, more specifically from Manuel Castells' theory? Using concepts from Castells' A Network Theory of Power [17], we developed a set of metrics for evaluating power in a political network. The choice of Castells was motivated by the gap between his works and the SNA literature: despite being a major specialist in theorizing our contemporary world as a network society, he is not mentioned as a key player by SNA researchers. Manuel Castells is perhaps the most influential sociologist in the analysis and understanding of contemporary society as a networked society. His trilogy The Information Age: Economy, Society and Culture [18] is a milestone in terms of theorizing the concept of a social network, how this abstraction fits the contemporary world, and the economic and individual implications of this concept. In spite of this, the interface of his work with the area of Social Network Analysis is small, as detailed in [19]. The author highlighted that the silence is reciprocal: Castells neither refers to classic literature from SNA nor is he cited by it as the important network theorist that he is. The gap has a methodological explanation. SNA scientists focused on formal mathematical modeling, computational methods, diagrams and matrices to study networks, thus emphasizing micro-structures and patterns. On the other hand, Castells was concerned about macro-structures that explain the connected society and did not address individual social ties. Castells considers network theory to be a unifying language between the natural, human and social sciences, in consonance with Barabási [8, 20] and other social networking scientists [6]. But his emphasis is on ontology rather than methodology, and instead of focusing on formation mechanics and structural dynamics, he turned to the sociological aspects of the network. While he believes that the macroeconomic and social structure conditions the connections between individuals, social network analysis scientists analyze the connections and try to get the macro-structure from them. The analysis of political processes has become an important part of network analysis, and the political-economic analysis performed by Castells opens up the thematic linkages and the discussion of the power of global business, the state and interest groups within political networks [19]. A key aspect of power, according to Castells' formulation [17], is the policy-making power held by elected politicians. Thus, using Castells' concepts, we incorporate and combine previous knowledge of the domain (in our case, political networks) and topology information.
It is an attempt to address the challenge of capturing the rich process resulting from an interplay between agents' behaviour and their dynamic interactions within a political and economic network [21]. To evaluate our strategy, we built a political network that included Brazilian politicians, political parties, and their main campaign donors. We applied the metrics to the network and compared our findings with a list of the most influential politicians in Brazil, published every year by a nonpartisan organization. This article is a substantially extended and revised version of [22]. The metrics are redesigned to better encompass concepts from Castells' theory. In this article, we compare our findings with related work on influence and power based on topological properties of the network. We consider a rebuilt version of the network, expanding the data to all elections since 2002, including local governments and defeated candidates. The analysis was conducted over time, revealing the political network dynamics and changes. The results show valuable insights for a critical perspective of the electoral system in Brazil, which can also be applied to other scenarios. Concerning the structure of this article, in Section 2, we review classic sociological concepts for power analysis and summarize Manuel Castells' theory. Still in the same section, we discuss some works that measure influence or find the top influential users in social network research. In Section 3, we present the set of metrics and rules for network construction. In Section 4, we describe the process of gathering, cleaning and analyzing data based on the metrics. In Section 5, we show the results of the analysis of the Brazilian political scenario. Finally, in Section 6, we highlight the limitations of our work and point out potential future work.
Background and related work
The power of an individual is broadly discussed in many fields of knowledge, including economics, philosophy, sociology, and psychology. Since our goal is to study power in political and economic social networks, this section focuses on authors who discuss the social aspect of power. Based on the studied authors, we can summarize power as the chance to impose one's will on someone else within a social context [1, 17, 23–26]. Although shortly defined, power is difficult to measure. Richard Emerson, in his classical Network-Dependence Theory [27], highlighted that it is common for one person to dominate another while being subservient to a third one. Thus, it is questionable to think about a generalized power. To say that a person has power is incomplete unless it is also clarified over whom. Thus, power should be a property of a relationship, not an absolute property of an agent. Still according to Emerson, it is a relational property that depends on resource exchange and possession, and it can be measured by the imbalance when one side is more dependent on the relationship than the other. Actors with multiple potential negotiation partners are perceived as powerful in their network. Relationships of power are deeply unstable and changeable. This leads to a constant clash between multiple forces in a network, and this directs us to the ideas of Castells, discussed briefly in the next section.
Manuel Castells' Network Theory of Power Castells' theory of network society is essentially about the impact of informationalism on the economy, covering a wide range of issues from the conditions of the informational economy, globalization, industrial organization, changes in work and employment, and the emerging space of flows [18]. It is a political economy-oriented macro-analysis of the tensional relationship between the networks of informational economy and historically-rooted identities. Two key concepts in his framework are network and power and it is important to review the development of these ideas to contextualise our work. His first statements about network logic appear in The Informational City [28]. He pointed out that new information technologies provided the basis for a change with major impact: spatially based relations of production were about to be substituted by flows of information and power in a much more flexible and connected system of production. This connected society, a progressive capitalist society, allows capital to become stronger by making networks, while the working mass becomes weaker with increasing individualism. Later, in the concluding section of the first volume of the trilogy The Information Age: Economy, Society and Culture, the network concept was further elaborated and presented as a corollary: "a network is a set of interconnected nodes. A node is the point at which a curve intersects itself. What a node is, concretely speaking, depends on the kind of concrete networks of which we speak" [18]. He mentions concrete networks as examples of his statements: stock exchange markets; political elites in political networks; broadcasting systems; computer-aided communications and social network service providers in the global network of media. In the second volume of the trilogy, he applied the idea of network to the analysis of the state, following the axiom that if networks have become the most important form of social organization, this must also apply to the state. The network state emanates from the complex networks of power, being manifested in a multilevel and multisector decision-making system based on negotiations [29]. In his book Communication Power, Castells continues the analysis of the network society from a power perspective. He argues that global social networks make use of global digital communication as a fundamental source of power and counter-power in contemporary society. Power is associated with coercion, domination, violence or potential violence, and asymmetry in the relations. Finally, in his article A Network Theory of Power [17], he details sources and mechanisms of power. He states that the possession and exchange of valuable resources are a power determinant and also emphasizes two fundamental abilities required to gain power in a network: switching and programming. For Castells, power follows the logic of network construction. "In a networked world, the ability to exercise control over others lies in two basic mechanisms: (a) the ability to program or reprogram networks in terms of their goals and protocols; and (b) the ability to connect and secure cooperation between different networks by combining resources while isolating others through strategic competition" [17]. These skills are named as programming and switching, respectively. To understand these concepts, we have to detail what Castells calls network protocols: the standards of communication determine the rules to be accepted once in the network. 
Once the rules are set, they become compelling for all nodes in the network. The programmers are key actors who decide what the network goals are and what protocols members must follow. The process differs from network to network. Politicians are holders of programming power intrinsically based on their function in society: they define laws, apply military force, and implement social welfare programs, but they depend on winning the competition for access to political office, and to accomplish this, they must employ huge amounts of money in electoral campaigns. They must articulate the diversity of interests of campaign donors to maximize their autonomy, but at the same time, raise funds to increase their chances of seizing political power. Once in power, they are the programmers of political processes and policy making. Even then, the clash of forces continues: the judiciary exercises networking power by gate-keeping access to political positions and regulating procedures; political decision-making relies on media to communicate with the public and get support; media owners are not passive transmitters of political instructions: they distribute biased political programs according to their specific interests as media organizations. This interface between political networks and media networks is an example of switching power. It is the control over the connecting points between various strategic networks. When switchers become aligned in oligarchic domination, the dynamism and initiative of multiple sources of social structure suffocate. This is why the government should not control the media and, reciprocally, media owners should not become political leaders. This can be extended to the relationships between religious leaders and media owners, and between politicians and rural corporations. Switching functions, and therefore switchers, play a central role in understanding power making [17]. We can summarize the concepts from Castells' theory that we use in our metrics in three aspects: nodes are connected by the exchange of resources that are valuable for the network; when a node has the capability to change network rules, it is a powerful node; the most crucial ability in a networked society is the ability to bridge two or more different networks. By reducing the gap between network topology studies and Manuel Castells' theory, we approximate bottom-up and top-down approaches. We claim that SNA can benefit from this combination of topological features and domain knowledge. In the next section, we review metrics used in the SNA literature for influence and power. The gap explained in the beginning of this section is well illustrated when we move the discussion from macro theory to patterns and formulas.
Influence analysis background from social network analysis research
A directed network $N$ can be defined using its node set and the weighted adjacency matrix. Formally, $N = (V, W)$, where $V$ is the node set and $W = [Weight_{ij}]_{n \times n}$ is the weighted adjacency matrix. It is common [2–4] to use $Weight(edge_{ab})/OutWeight(a)$, or some proportional variation, to compute the authority of node $a$ over node $b$, where $OutWeight(a)$ is the sum of edge weights with source on node $a$. PageRank uses the transition probabilities between nodes in a Markov process and gets the authority rank from the stationary distribution. In [30] the focus is on the relationship between PageRank and social influence. The argument is that the authority (or PageRank) of a node is a collection of its influence on the network.
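To make the formalism above concrete, the following is a small self-contained sketch (our own illustration with toy data, not drawn from the cited works) that row-normalizes a weighted adjacency matrix into the transition probabilities of the Markov process and extracts a PageRank-style authority score by power iteration:

```python
import numpy as np

# Toy weighted adjacency matrix W, where W[a][b] is the volume of
# resources flowing from node a to node b (arbitrary example values).
W = np.array([[0.0, 3.0, 1.0],
              [2.0, 0.0, 0.0],
              [0.0, 4.0, 0.0]])

# Authority of a over b as Weight(edge_ab) / OutWeight(a):
# each row of W divided by that node's weighted outdegree.
P = W / W.sum(axis=1, keepdims=True)

# PageRank-style rank: stationary distribution of the Markov chain
# with damping factor d, computed here by simple power iteration.
d, n = 0.85, W.shape[0]
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - d) / n + d * rank @ P
print(rank / rank.sum())
```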
As stated in [31], it is still not well understood what the evaluation models for social influence studies are, and the research field is still in its preliminary stages. The authors' goal in [32] is to identify the top-k influential (or powerful) nodes in a society as a whole. They propose a method for including nodes in the network and gathering data from various data sets, such as top foundations and corporations' executive boards, all in Denmark. They applied a clustering algorithm to the resulting network based on a weighting scheme to find out the top-k powerful nodes. To validate their approach, they studied the relationship between this cluster and topological metrics like centrality and PageRank. With the same goal, [33] used a model based on a structural diversity assumption: a node is more likely to be influenced if impacted by nodes from different groups of neighbours. Our work differs from all the studies previously described because they do not use domain knowledge and they do not use the sociologists' theories. All their findings are based on topological features, in a strict bottom-up approach. We propose to use top-level theory to identify which topological features should be considered in the calculations and to use domain knowledge explicitly. As an example, similar to our proposed strategy, the authors in [34] use well-known social network metrics, like centrality and PageRank, to determine the roles each criminal has in a money laundering network. The link between topological metrics and top-level roles was only possible through the cooperation of investigators with field knowledge of laundering methods, an approximation between domain knowledge and bottom-up metrics. This mixed strategy generated excellent results with real-world datasets and is quite similar to the path we follow in this study.
Metrics definition
We propose an evaluation of power specific to political networks that is an enhancement of what was proposed in [22]. Degree has generally been extended to correspond to the sum of edge weights when analyzing weighted networks, labeled node strength, or $Wdegree$. The modeled network is built from resource exchanges in the real world. The amount of money (or other resource traded) is indicated by the edge weights, and each edge is directed, following the same direction as the resource flow. Thus, a directed network is constructed. The political influence of node $a$ over node $b$ is defined in Eq. (1), where $Wedge_{ab}$ is the volume of resources transferred from $a$ to $b$, and $Windegree_b$ is the weighted indegree of node $b$, or the total amount of resources $b$ received from its exchange partners. It expresses how important partner $a$ is to $b$. The bigger the number, the bigger the influence of $a$ on $b$'s decisions. $$ {politicalinfluence}_{ab} = \frac{{Wedge}_{ab}}{{Windegree}_{b}} $$ Symmetrically, bargain indicates how important partner $b$ is to $a$. If node $a$ has few exchange alternatives, $b$ has negotiation benefits, as defined in Eq. (2), where $Woutdegree_a$ is the weighted outdegree of node $a$, or the total amount of distributed resources. $$ {bargain}_{ba} = \frac{{Wedge}_{ab}}{{Woutdegree}_{a}} $$ Finally, the imbalance in this relation is what defines the power associated with the edge.
More potential partners mean more power, and more dependence means less power. The power acquired by the exchanges from a to b is defined in Eq. (3).

$$ {poweredge}_{ab} = \left({politicalinfluence}_{ab} - {bargain}_{ba}\right) \times {Wedge}_{ab} $$

The closer to zero, the more balanced the relation. Positive values mean the node that sends the resource, a, is more powerful, and negative values indicate that the destination node, b, is more powerful. If the inverse relation (poweredge_ba) is needed, the result should be multiplied by -1. This calculation reflects all the relations in the exchange network, since the results change for all the surrounding relations if a single edge is changed. The power of node a is the sum of each power relation with all M nodes in the network, as stated in Eq. (4).

$$ {power}_{a} = \sum\limits_{n=1}^{M} {poweredge}_{an} $$

To encompass the programming and switching abilities, we propose that in an exchange network switching power can be denoted in two ways: (i) a node gathers and redistributes resources to other nodes within the same network, and (ii) a node exchanges with various sectors of society. The first is evaluated by the relation between the distributed amount (Woutdegree) and the total amount exchanged by the node (Wdegree). For the second, we need to show how different the exchange partners of a node are and then use top-level domain knowledge, if any. Consider that each node has an attribute indicating to which religious or economic sector it belongs. We then add a tuning parameter δ for diversity, meaning the number of different sectors a node has exchanged with, in this way modeling Castells' switching concept. For programming (the node's capacity to define rules and protocols in the network), we use a tuning parameter Ω_a. So, finally, Eq. (4) can be adjusted and Power defined in Eq. (5).

$$ {Power}_{a} = \delta_{a} \times \Omega_{a} \times \sum_{n=1}^{M} {poweredge}_{an} $$
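As a minimal sketch of Eqs. (1)–(5), consider the toy network below. The nodes, weights, δ and Ω values are all hypothetical and chosen only for illustration; none of these numbers come from the paper's dataset.

```python
# Toy illustration of Eqs. (1)-(5). Edge (a, b, w): node a transferred w to node b.
from collections import defaultdict

edges = [("donorX", "candA", 100.0), ("donorX", "candB", 50.0),
         ("donorY", "candA", 20.0)]                  # hypothetical exchanges

w_out, w_in = defaultdict(float), defaultdict(float)
for a, b, w in edges:
    w_out[a] += w
    w_in[b] += w

def poweredge(a, b, w):
    """Eq. (3): (politicalinfluence_ab - bargain_ba) * Wedge_ab."""
    political_influence = w / w_in[b]    # Eq. (1): importance of a to b
    bargain = w / w_out[a]               # Eq. (2): importance of b to a
    return (political_influence - bargain) * w

power = defaultdict(float)               # Eq. (4): sum of poweredges per node
for a, b, w in edges:
    pe = poweredge(a, b, w)
    power[a] += pe
    power[b] -= pe                       # inverse relation is multiplied by -1

# Eq. (5): scale by diversity (delta) and programming (Omega) factors.
# Both are illustrative placeholders, not values from Table 1.
delta = {"donorX": 2, "donorY": 1, "candA": 2, "candB": 1}
omega = defaultdict(lambda: 1)
Power = {v: delta[v] * omega[v] * power[v] for v in power}
print(Power)
```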
Gathering and cleaning data from Brazilian elections

Brazil's multi-party system allows the creation of a new party with 1 million signatures. As a result, as of December 2017, there were almost 40 political parties in Brazil. However, we can distinguish three major parties that have remained in power over the last four decades: PT (center-left ideology), PSDB (center-right ideology) and PMDB/MDB (right ideology). Brazil is facing a critical moment: important politicians, from state governors to senators, are being prosecuted, facing corruption charges, and eventually going to jail. The investigations show that many campaign donations are in fact bribes or payments for some illegal benefit (Footnote 1). Donations from corporations and persons to a party or directly to a candidate were legal until 2014. In the 2016 elections, regulation was stricter: corporate donations were forbidden and personal donations were limited to ten thousand Brazilian Reais, except for candidates using their own money. To evaluate these metrics, we built a network based on open data of the Brazilian government extracted from the Supreme Electoral Court [35]. Brazilian law guarantees that all Brazilian citizens have access to any public information unless it is explicitly classified (Law n° 12,527, 12/18/2011). Candidates for public legislative or executive positions must deliver a report of their expenses and income during electoral campaigns, but this raw data is difficult to visualize and analyze.

The goal of this case study is to make explicit the power relations that can be inferred from campaign donations. The database contains information on campaign donations since the 2002 elections. The data is unstructured, in flat (.csv) files. The network has nodes that represent candidates, political parties and donors. These nodes are connected by edges that represent financial transactions (campaign donations), and the weight of an edge is the amount involved in the transaction. Since 2014, resources can be transferred between parties and candidates explicitly. In these cases, two edges were created: one for the original exchange and another for the transfer of resources. As we want to analyze network dynamics over time, each edge and node was created with a timestamp. For the switching δ factor, we use the economic activity information present in the dataset: for each different activity among the related nodes, we add one to δ. For example, a politician with donors from the construction, retail and food industries would have δ equal to three. For the programming Ω factor, each political role was assigned an arbitrary value indicating the importance and decision power of the position, as shown in Table 1. The programming factor is temporary: when a candidate wins an election, the programming power is valid during the years of the term (usually 4 years, except for senators, who serve 8 years).

Table 1 Ω factor table

Data cleaning removed duplicated nodes and grouped all the legal committees of the same political party into a single node. Data cleaning was particularly challenging, and many inconsistencies were detected, for example, candidates and donors without proper identification. The network considers only donations over ten thousand Brazilian Reais. The result is a network with 103,345 nodes and 128,288 edges, distributed over the last fifteen years. We used Gephi to generate visualizations and metrics of the network, and MySQL combined with Python scripts to perform data cleaning and metrics calculation. For the presentation of our results, we go from general to more detailed information, showing the different aspects our metrics can encompass. First, we show general information on the evolution of Brazilian electoral campaign financing from 2002 to 2016, and how this political and economic network evolved in time using a few SNA metrics. We also detail the profile of big campaign donors and the economic sectors they represent. Thereafter, we evaluate our metrics by comparing the top 100 Congressmen (Senators and Federal Deputies) with a well-known list published every year by a non-partisan organisation, with good results. After this validation, we present the top donors and political parties according to our power metric and analyze the relation between power and the amounts donated and received. Finally, we choose two famous Brazilian politicians and go into the details of their poweredges, indicating each power relation they have with campaign donors.

General overview through time

We would like to share some general insights from our data, possible only after the cleaning process and the analysis of elections data along the years. There has been a significant growth in the amount of money donated to political parties, especially after 2008. This growth can be glimpsed in the network: Table 2 shows the evolution of SNA metrics over the years.

Table 2 Network evolution

The evolution of the network through time is also shown in Figs. 1 and 2.
Figure 1 shows the evolution of the network for local election years and Fig. 2 shows the general election years. Both figures were created using the Yifan Hu force-directed layout algorithm [36].

Network evolution - local elections
Network evolution - general elections

The financial numbers are big and increasing: considering donations since 2002, each of the top 50 Brazilian politicians received over 20 million Brazilian Reais (6 million dollars) in direct donations, not counting donations received through their political parties. Many of them reappear every four years, renewing their power positions. The amount of money each one received rises dramatically after 2008, as seen in Fig. 3, even accounting for inflation (Footnote 2). Three major political parties (PMDB, PSDB and PT) are highlighted on the chart. New regulations forced the numbers down in 2016. As examples of that growth: Marconi Perillo, the current governor of the state of Goiás, ran in the 2002, 2006, 2010 and 2014 elections and received around 9, 13, 29 and 25 million Brazilian Reais (2.7, 3.9, 8.8 and 7.5 million dollars), respectively. Carlos Alberto Richa, another state governor, ran in the 2002, 2008, 2010 and 2014 elections and received around 2, 6, 23 and 25 million Brazilian Reais (0.6, 1.8, 6.9 and 7.5 million dollars), respectively. This pattern repeats many times among the top politicians. The profile of the main donors also changed over time. We selected the top 30 donors each year and grouped them by economic activity. General elections, when senators, deputies, governors and the president are elected, occurred in 2002, 2006, 2010 and 2014. Local elections, when mayors and city councilmen are elected, occurred in 2004, 2008, 2012 and 2016. There were some differences in the behaviour of the top donors in these two kinds of elections. The results in Figs. 4 and 5 show that in general elections there was a constant rise in the total amount donated by these heavy donors and also a dramatic rise of influence from the food industry. On the other hand, local elections showed an explosion of the amount donated in 2012, simultaneously with a concentration of political parties as distributors of the money. We do not see corporate investment as dominant in these years as it is in general election years. Figures 6 and 7 illustrate these findings. One possible interpretation is that general elections are more attractive to corporations. Another aspect is that political parties use local elections to consolidate their power where they are already strong.

Big contributors total amount - general elections
Big contributors by sector - general elections
Big contributors total amount - local elections
Big contributors by sector - local elections

Power metrics results

To evaluate the metrics, we consider a list with the top 100 politicians from the National Congress (Senators and Federal Deputies) according to our power metric and compare it with the list generated by DIAP (acronym for Intersindical Parliamentary Assessors Department, in Portuguese) for the years 2015, 2016 and 2017. We identified that 54 of our top politicians were present on DIAP's 2015 list, 58 on DIAP's 2016 list and 55 on DIAP's 2017 list [37]. In [38] the authors ranked Brazilian Congressmen using topological metrics and also used DIAP's list as a target list. Our metric doubles their result: they identified only 27 percent of DIAP's list.
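A minimal sketch of this validation step, assuming both rankings are plain lists of names (the names below are placeholders, not real politicians):

```python
# Overlap between our top-100 power ranking and DIAP's yearly list.
def overlap(our_top100, diap_list):
    """Return the number and fraction of DIAP's list found in our top 100."""
    hits = len(set(our_top100) & set(diap_list))
    return hits, hits / len(diap_list)

our_top100 = ["pol%d" % i for i in range(100)]        # hypothetical names
diap_2015 = ["pol%d" % i for i in range(40, 140)]     # hypothetical names
print(overlap(our_top100, diap_2015))                  # -> (60, 0.6)
```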
The complete table comparing our findings with DIAP's list can be found at https://github.com/LinkedOpenDataUFRJ/JISA_Article_PowerAnalisys. The power analysis allowed us to extract the most powerful corporate and personal donors in the Brazilian scenario. Tables 3 and 4 show the top influences Brazilian politicians are subjected to and the amounts of money involved in the donations (Footnote 3). We can see the strength of the food industry as well as the presence of big construction corporations. As can be observed, the power metric does not have a direct relation to the amount donated. As an example, we can highlight that Distribuidora Coimbra and MRM Construtora Ltda donated much less than Construtora OAS Ltda. but have a greater power measure. Although the names in Table 4 may be unknown to many people, they highlight hidden influences sometimes bigger than corporate ones. Most of the persons on the list are owners of big companies in Brazil or billionaire politicians. We can also extract the relation between the powerful donors and the political parties, as shown in Tables 5 and 6.

Table 5 Distribution of donations from top 20 corporate donors considering power metrics among political parties
Table 6 Distribution of donations from top 10 personal donors considering power metrics

We analyzed power distribution among political parties. Table 7 shows the top ten parties in the Brazilian political scenario. The four major parties lead the list, but we can also see local representations (PMDB-RJ and PMDB-RN) among the top powerful groups.

Table 7 Top 10 political parties considering power metrics and amount received from donations

Another aspect to be noticed is the amount of money that flows between parties that are considered adversaries, especially from big parties to smaller ones, as we can see in Table 8.

Table 8 Money donated from one political party to another

As stated in Section 2, the analysis of power is incomplete unless we detail who is the subject of this power. We noticed some famous and supposedly powerful politicians who were not top-ranked in our list. We chose two of them to go into their poweredge details. They are quite similar: both are from the same state (RJ), both are federal deputies who held important positions (head of Congress), and both received around the same amount in campaign donations. Our purpose is to detect whether they have similar power structures. (i) Eduardo Cunha was a federal congressman from Rio de Janeiro, head of Congress for five years and the main leader of the impeachment process of president Dilma Rousseff that occurred in 2016. On 19 October 2016, Cunha was arrested by the Brazilian Federal Police, accused of hiding approximately 40 million dollars in secret bank accounts and of trying to tamper with investigations against him. His poweredges can be found in Table 9. (ii) Rodrigo Maia is also a federal congressman from Rio de Janeiro and was nominated head of Congress in Brazil after Eduardo Cunha was arrested. His poweredges can be found in Table 10.

Table 9 Eduardo Cunha's poweredges
Table 10 Rodrigo Maia's poweredges

Considering that a positive poweredge means that the candidate has more power, and a negative one means the donor is the one who rules the relationship, we can see that Eduardo Cunha is totally subject to his party (PMDB-RJ) and, with less force, to two big construction industry players, Camargo Correa S.A. and Construtora Norberto Odebrecht S.A., both deeply involved in recent bribe scandals in Brazil.
Rodrigo Maia's poweredges show much more balanced relationships, with smaller values on the negative side. This means he is more independent from his campaign donors and from his party and, therefore, more powerful, as the calculations below demonstrate. According to the proposed Eq. (5), the general power for Mr. Cunha is calculated using: (a) the sum of his poweredges: -1,072,756; (b) his δ factor: 9, the count of different economic sectors he traded with; and (c) his Ω factor: 4, obtained from Table 1 for Federal Deputy, giving the following result: Power_EduardoCunha = 9 × 4 × (−1,072,756) = −38,619,216. Similarly, the power for Mr. Maia is calculated using: (a) the sum of his poweredges: 650,232; (b) his δ factor: 8; and (c) his Ω factor: 4, as he is also a Federal Deputy, giving the following result: Power_RodrigoMaia = 8 × 4 × 650,232 = 20,807,424. Complete tables with poweredges, all the data, source code and network files are available at https://github.com/LinkedOpenDataUFRJ/JISA_Article_PowerAnalisys and are free to use under the GNU license.

This article presented metrics for power analysis on political and economic networks based on sociologist Manuel Castells' Network Theory of Power. Such an environment, where power is the result of a constant clash of forces, is hard to analyze and to quantify. This challenge was reinforced by the specific scenario we chose to evaluate our proposal: Brazilian politics. With almost 6000 cities and 23 states, Brazil is facing a critical moment in its democracy. Identifying whose interests are behind each political group or individual politician is key to keeping citizens better informed. Our main contributions can be summarized as follows: (i) we developed quantitative metrics for power analysis on political networks using domain knowledge combined with classical SNA metrics; (ii) in order to evaluate our metrics, we applied them to a Brazilian economic and political network based on campaign donations in Brazil since 2002, and then compared our list of the top 100 powerful congressmen with a well-known list published every year, with good results; (iii) we performed a detailed analysis of some aspects of Brazilian politics that revealed the top influencing corporations and persons in this scenario, as well as the flow of money from one political party to another; and finally (iv) we provided a detailed analysis of power relations in the Brazilian scenario from 2002 to 2016, with the publication of a clean and fine-grained dataset for future research. Some limitations can be listed, though: data about activity sector was not complete in our dataset and had to be extracted manually from alternative sources. This limited our computation of switching power, which was done only for donors above 1 million Brazilian Reais. The metrics gave a glimpse of the dynamics of power involved in Brazilian elections, but the rules for campaign donation are changing fast, and money laundering and bribery networks are hard to capture with official data alone. Some future enhancements are needed and planned: we intend to redesign the power metric to show relative values inside the network, instead of big absolute values. We will also integrate information about company owners to reveal hidden connections behind donations and politicians. We plan to add connections in the network between candidates and their individual campaign committees; in the present study, donations to these committees were considered as donations to the political party instead of to the individual candidate.
Finally, we intend to capture domain information from alternative datasets to evaluate whether politicians or donors are religious leaders, owners of communication corporations or financial operators, because, according to Castells, these networks should wield more weight in the evaluation of switching power.

Footnote 1: Journalistic information on bribe scandals in Brazil can be found at http://www.bbc.com/news/world-latin-america-35810578, https://theintercept.com/2017/08/03/brazils-corrupt-congress-protects-its-bribe-drenched-president-finalizing-elites-two-year-plot/ and https://www.theguardian.com/world/2017/jun/01/brazil-operation-car-wash-is-this-the-biggest-corruption-scandal-in-history

Footnote 2: Historical inflation rates in Brazil: 1999: 8.94%, 2000: 5.97%, 2001: 7.67%, 2002: 12.53%, 2003: 9.30%, 2004: 7.60%, 2005: 5.69%, 2006: 3.14%, 2007: 4.46%, 2008: 5.90%, 2009: 4.31%, 2010: 5.91%, 2011: 6.50%, 2012: 5.84%, 2013: 5.91%, 2014: 6.41%, 2015: 10.67%, 2016: 6.29%, 2017: 2.95%. As an example, 2 million Brazilian Reais in 1998 is equivalent to 6.5 million in 2016.

Amount donated to political parties. Evolution over the years

Footnote 3: Exchange rate: 1 USD = R$ 3.30, the official dollar exchange rate for 12/31/2017.

Table 3 Top 20 corporate donors considering power metrics
Table 4 Top 10 personal donors considering power metrics

French JRP, Raven B, Cartwright D. The bases of social power. Class Organ Theory. 1959;7:251–60.
Li J, Peng W, Li T, Sun T, Li Q, Xu J. Social network user influence sense-making and dynamics prediction. Expert Syst Appl. 2014;41(11):5115–24.
Muchnik L, Aral S, Taylor SJ. Social influence bias: A randomized experiment. Science. 2013;341(6146):647–51.
Kempe D, Kleinberg JM, Tardos É. Maximizing the spread of influence through a social network. Theory Comput. 2015;11(4):105–47.
Lawyer G. Understanding the influence of all nodes in a network. Sci Rep. 2015;5:8665.
Easley D, Kleinberg J. Networks, Crowds, and Markets: Reasoning About a Highly Connected World. Cambridge: Cambridge University Press; 2010.
Sun J, Tang J. A Survey of Models and Algorithms for Social Influence Analysis. In: Aggarwal C, editor. Social Network Data Analytics. Boston: Springer; 2011.
Barabási A-L. Linked: The New Science of Networks. New York: AAPT; 2003.
Java A, Kolari P, Finin T, Oates T. Modeling the spread of influence on the blogosphere. In: Proceedings of the 15th International World Wide Web Conference; 2006. p. 22–26.
Hajian B, White T. Modelling influence in a social network: Metrics and evaluation. In: 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust (PASSAT) and Social Computing (SocialCom). Boston: IEEE; 2011. p. 497–500.
Ye S, Wu SF. Measuring message propagation and social influence on twitter.com. SocInfo. 2010;10:216–31.
Van Raan A. The influence of international collaboration on the impact of research results: Some simple mathematical considerations concerning the role of self-citations. Scientometrics. 1998;42(3):423–8.
Glänzel W, Schubert A. Analysing scientific networks through co-authorship. Handb Quant Sci Technol Res. 2004;11:257–79.
Börner K, Dall'Asta L, Ke W, Vespignani A. Studying the emerging global brain: Analyzing and visualizing the impact of co-authorship teams. Complexity. 2005;10(4):57–67.
Page L, Brin S, Motwani R, Winograd T. The PageRank citation ranking: Bringing order to the web. Technical report. Stanford: Stanford InfoLab; 1999.
Freeman L. A set of measures of centrality based upon betweenness. Sociometry. 1977;40:35–41.
Castells M. A network theory of power. Int J Commun. 2011;5:773–87.
Castells M. The Information Age. Economy, Society and Culture. Vol I: The Rise of the Network Society. Oxford: Blackwell; 1996.
Anttiroiko A-V. Castells' network concept and its connections to social, economic and political network analyses. J Soc Struct. 2015;16:1–18.
Barabási A-L. The architecture of complexity. IEEE Control Syst. 2007;27(4):33–42.
Schweitzer F, Fagiolo G, Sornette D, Vega-Redondo F, Vespignani A, White DR. Economic networks: The new challenges. Science. 2009;325(5939):422–5.
Verona L, Campos MLM, Oliveira J. Métricas para análise de poder em redes sociais e sua aplicação nas doações de campanha para o senado federal brasileiro. In: Proceedings of XXXVII Congresso da Sociedade Brasileira de Computação; 2-6 July 2017; São Paulo; 2017. p. 544–54.
Emerson RM. Social exchange theory. Annu Rev Sociol. 1976;2:335–62.
Hindess B. Discourses of Power from Hobbes to Foucault. New Jersey: Wiley; 1996.
Willer D, Markovsky B, Patton T. Power structures: Derivations and applications of elementary theory. Sociol Theor Prog. 1989;3:313–53.
Willer D. Network Exchange Theory. Connecticut: Greenwood Publishing Group; 1999.
Emerson RM. Power-dependence relations. Am Sociol Rev. 1962;27:31–41.
Castells M. The Informational City: Information Technology, Economic Restructuring, and the Urban-regional Process. Oxford: Blackwell; 1989.
Castells M. The Information Age. Economy, Society and Culture. Vol II: Power of Identity. Oxford: Blackwell; 1997.
Liu Q, Xiang B, Yuan NJ, Chen E, Xiong H, Zheng Y, Yang Y. An influence propagation view of PageRank. ACM Trans Knowl Discov Data (TKDD). 2017;11(3):30.
Peng S, Yang A, Cao L, Yu S, Xie D. Social influence modeling using information theory in mobile social networks. Inf Sci. 2017;379:146–59.
Larsen AG, Ellersgaard CH. Identifying power elites—k-cores in heterogeneous affiliation networks. Soc Networks. 2017;50:55–69.
Xu W, Liang W, Lin X, Yu JX. Finding top-k influential users in social networks under the structural diversity model. Inf Sci. 2016;355:110–26.
Dreżewski R, Sepielak J, Wojciech F. The application of social network analysis algorithms in a system supporting money laundering detection. Inf Sci. 2015;295:18–32.
Brazilian electoral data repository. http://www.tse.jus.br/hotSites/pesquisas-eleitorais/index.html. Accessed 20 Aug 2017.
Hu Y. Efficient, high-quality force-directed graph drawing. Math J. 2005;10(1):37–71.
A research project about the most influential congressmen in Brazil. http://www.diap.org.br. Accessed 20 Nov 2017.
Bursztyn VS, Nunes MG, Figueiredo DR. How congressmen connect: analyzing voting and donation networks in the Brazilian congress. In: Proceedings of XXXVI Congresso da Sociedade Brasileira de Computação; São Paulo; 2016.

The authors would like to thank CNPq and FAPERJ for partially supporting this work. We would like to thank the organisers of BraSNAM 2017, Giseli Lopes and Rodrigo Pereira dos Santos, for their excellent work on the conference. We also thank the anonymous reviewers for their comments and suggestions.
Programa de Pós-Graduação em Informática (PPGI), Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil

Letícia Verona, Jonice Oliveira, José Vitor da Cunha Hisse & Maria Luiza Machado Campos

All authors read and approved the final manuscript. Correspondence to Letícia Verona.

Not applicable. All data used in the study was already publicly available. All data and code used in this article can be found at https://github.com/LinkedOpenDataUFRJ/JISA_Article_PowerAnalisys and can be used freely under the GNU license.

† This article is a substantially extended and revised version of Verona (2017), which appeared in the proceedings of the VI Brazilian Workshop on Social Network Analysis and Mining.

Verona, L., Oliveira, J., da Cunha Hisse, J. et al. Metrics for network power based on Castells' Network Theory of Power: a case study on Brazilian elections. J Internet Serv Appl 9, 23 (2018). https://doi.org/10.1186/s13174-018-0092-5

Keywords: Government open data, Power metrics, Social Network Analysis and Mining
kawasaki algorithm ising model

The Ising model (1925) is one of the most studied models in statistical physics; it is certainly the most thoroughly researched model in the whole field. It was first proposed as a model to explain the origin of magnetism arising from bulk materials containing many interacting magnetic dipoles and/or spins, and it is a simple model to study phase transitions. Some applications: magnetism (the original application), the liquid-gas transition, and binary alloys (which can be generalized to multiple components). So-called spins sit on the sites of a lattice; a spin S can take the value +1 or -1, and each neighbouring pair of aligned spins lowers the energy of the system by an amount J > 0. The Ising Hamiltonian can be written as

$$ \mathcal{H} = -J \sum_{\langle i j \rangle} S_{i} S_{j}. $$

The spins $S_{i}$ can take values $\pm 1$, $\langle i j \rangle$ implies nearest-neighbor interaction only, and $J>0$ is the strength of the exchange interaction. The system undergoes a 2nd order phase transition at the critical temperature $T_{c}$. The 2D square lattice was initially considered. (See app_style diffusion for an Ising model which performs Kawasaki dynamics, meaning the spins on two neighboring sites are swapped.)

Abstract: On directed Barabási-Albert networks with two and seven neighbours selected by each added site, the Ising model does not seem to show a spontaneous magnetisation. Instead, the decay time for flipping of the magnetisation follows an Arrhenius law for Metropolis and Glauber algorithms, but for Wolff cluster flipping the magnetisation decays exponentially with time. On these networks the magnetisation behaviour of the Ising model, with the Glauber, HeatBath, Metropolis, Wolff or Swendsen-Wang algorithm competing against Kawasaki dynamics, is studied by Monte Carlo simulations. We show that the model exhibits the phenomenon of self-organisation (= stationary equilibrium) when Kawasaki dynamics is not dominant in its competition with the other algorithms.

Keywords: Monte Carlo simulation, Ising, networks, competing.

Sumour and Shabat [1, 2] investigated Ising models on directed Barabási-Albert networks [3] with the usual Glauber dynamics. No spontaneous magnetisation was found, in contrast to the case of undirected Barabási-Albert networks [4, 5, 6], where a spontaneous magnetisation was found below a critical temperature which increases logarithmically with system size. They found a freezing-in of the magnetisation, with a decay time that follows an Arrhenius law at least in low dimensions. They also compared different spin-flip algorithms, including cluster flips [9], for Ising models on directed Barabási-Albert networks. More recently, Lima and Stauffer [7] simulated the effects of directedness; competing dynamics of this kind had been studied before for the 2D Ising model with nearest-neighbour spin exchange dynamics.

Now we study the self-organisation phenomenon in the Ising model on the directed Barabási-Albert networks, with a million spins and with each new site added to the network selecting m = 2 or 7 already existing sites as neighbours influencing it. The competition is simulated by two competing dynamics: with probability p, the contact with the heat bath is taken into account by the single spin-flip Glauber kinetics, affecting the order parameter (= magnetisation), and with probability q = 1 − p the flux of energy into the system is simulated by a process of the Kawasaki type [8], where we exchange nearest-neighbour spins, which keeps the order parameter constant; in this way energy is pumped into the system from an outside source. For the local algorithms we use the corresponding traditional flip probabilities; the well-known rate $w_i(\sigma) = \min[1, \exp(-\Delta E_i J/k_B T)]$, where $\Delta E_i$ is the energy change related to the given spin flip, is consistent with the fact that if on a directed lattice a spin $S_j$ influences spin $S_i$, then $S_i$ in turn does not influence $S_j$, and there may be no well-defined total energy.

There are different ways to implement the Kawasaki algorithm. The first is the Kawasaki dynamics at zero temperature (in the usual sense), already mentioned above, where there is an exchange of nearest-neighbour spins. We also study the same process of competition described above but with Kawasaki dynamics at the same temperature as the other algorithms.

In Fig.1b, where p = 0.8, the HeatBath algorithm is predominant, and the fluctuations occur only near two well-defined values of the magnetisation. In Fig.2a we show p = 0.2, which is similar to p = 0.8 (Fig.2b). In Fig.3 we have competing Wolff and Kawasaki dynamics, and in Fig.4 we observe the sizes of the clusters. In Fig.5 we observe that the magnetisation behavior for Kawasaki dynamics at a temperature different from zero, competing with the HeatBath algorithm, is different from Kawasaki dynamics at zero temperature (Fig.1) and insensitive to the value of the competition probability p; at zero temperature and at the temperature of the other algorithms, the Kawasaki dynamics give different results for HeatBath. The Metropolis results are independent of competition: in Fig.6, for the Metropolis algorithm with kBT/J = 1.0 and 1.7 and p = 1.0, we confirmed the same behavior. In Fig.7, for the single-cluster Wolff algorithm and Kawasaki dynamics competing at the same temperature, the magnetisation behavior is as in Fig.3 for Kawasaki dynamics at zero temperature; the same similarity occurs with the sizes of clusters: Fig.8 looks like Fig.4 despite the Kawasaki dynamics being different. For Swendsen-Wang cluster flips, for both p = 0.2 and p = 0.8, the magnetisation decays exponentially with time. For m = 7, kBT/J = 1.7 and p = 0.2, that is, for a predominance of Kawasaki dynamics, no spontaneous magnetisation appears; after an exponential decay of the magnetisation, a big fluctuation occurs to a lower value of this magnetisation. The big energy flux through the Kawasaki dynamics explains the behavior of the magnetisation, which falls faster towards a stationary equilibrium when Kawasaki dynamics is predominant.
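As a concrete illustration of the two kinds of moves discussed above, here is a minimal sketch (assumed, not the code behind the simulations described here) of Metropolis single-spin flips competing with Kawasaki nearest-neighbour exchanges on a 2D periodic square lattice; the lattice size, temperature and mixing probability p are arbitrary choices.

```python
# Metropolis single-spin flips competing with Kawasaki spin exchanges for the
# Ising model H = -J * sum_<ij> S_i S_j on an L x L periodic square lattice.
import numpy as np

rng = np.random.default_rng(0)
L, J, T = 16, 1.0, 1.7                        # units with kB = 1, so kB*T/J = 1.7
spins = rng.choice([-1, 1], size=(L, L))

def nbr_sum(s, i, j):
    """Sum of the four nearest neighbours with periodic boundaries."""
    return (s[(i + 1) % L, j] + s[(i - 1) % L, j] +
            s[i, (j + 1) % L] + s[i, (j - 1) % L])

def metropolis_step(s):
    """Single spin flip accepted with the rate min[1, exp(-dE/kB T)]."""
    i, j = rng.integers(L, size=2)
    dE = 2.0 * J * s[i, j] * nbr_sum(s, i, j)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i, j] = -s[i, j]

def kawasaki_step(s):
    """Exchange two nearest-neighbour spins; conserves the magnetisation."""
    i, j = rng.integers(L, size=2)
    di, dj = ((0, 1), (1, 0))[rng.integers(2)]
    k, m = (i + di) % L, (j + dj) % L
    if s[i, j] == s[k, m]:
        return                                # swapping equal spins does nothing
    # Energy change of the swap; the shared bond is handled by subtracting
    # each partner from the other's neighbour sum.
    dE = 2.0 * J * (s[i, j] * (nbr_sum(s, i, j) - s[k, m]) +
                    s[k, m] * (nbr_sum(s, k, m) - s[i, j]))
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i, j], s[k, m] = s[k, m], s[i, j]

p = 0.8                                       # heat-bath contact with probability p,
for _ in range(100_000):                      # Kawasaki exchange with probability 1-p
    if rng.random() < p:
        metropolis_step(spins)
    else:
        kawasaki_step(spins)
print("magnetisation per spin:", spins.mean())
```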
In conclusion, we have presented a very simple nonequilibrium model on directed Barabási-Albert networks. I am grateful for discussions during the development of this work and also for the revision, and I acknowledge the Brazilian agency FAPEPI (Teresina-Piauí-Brasil) for its financial support. Departamento de Física, Universidade Federal do Piauí.

M.A. Sumour and M.M. Shabat, Int. J. Mod. Phys. C 16.
M.A. Sumour, M.M. Shabat and D. Stauffer, Islamic University Magazine (cond-mat/0504460 at www.arXiv.org).
F.W.S. Lima and D. Stauffer, cond-mat/0505477, to appear.
A. Aleksiejuk, J.A. Hołyst and D. Stauffer, Physica A 310, 269 (2002).
Physica A 303, 166 (2002).
J.S. Wang and R.H. Swendsen, Physica A 167, 565 (1990).
B.C.S. Grandi and W. Figueiredo, Phys. Rev. E.
Volume 16 Supplement 1: Proceedings of the Indian Genetics Congress 2015: complementary and alternative medicine

In vitro and in vivo α-amylase and α-glucosidase inhibiting activities of the protein extracts from two varieties of bitter gourd (Momordica charantia L.)

Sundar Poovitha and Madasamy Parani (corresponding author)

BMC Complementary and Alternative Medicine 2016, 16(Suppl 1):185

α-amylase and α-glucosidase digest carbohydrates and increase the postprandial glucose level in diabetic patients. Inhibiting the activity of these two enzymes can control postprandial hyperglycemia and reduce the risk of developing diabetes. Bitter gourd or balsam pear is one of the important medicinal plants used for controlling postprandial hyperglycemia in diabetes patients. However, there is limited information available on the presence of α-amylase and α-glucosidase inhibiting compounds in this plant. In the current study, the protein extracts from the fruits of M. charantia var. charantia (MCC) and M. charantia var. muricata (MCM) were tested for α-amylase and α-glucosidase inhibiting activities in vitro, and for glucose lowering activity after oral administration in vivo. The protein extracts from both MCC and MCM inhibited the activity of α-amylase and α-glucosidase through competitive inhibition, on par with Acarbose, as indicated by the in vitro percentage of inhibition (66 to 69 %) and IC50 (0.26 to 0.29 mg/ml). Both protein extracts significantly reduced peak blood glucose and area under the curve in Streptozotocin-induced diabetic rats that were orally challenged with starch and sucrose. Protein extracts from the fruits of the two varieties of bitter gourd inhibited α-amylase and α-glucosidase in vitro and lowered the blood glucose level in vivo on par with Acarbose when orally administered to Streptozotocin-induced diabetic rats. Further studies on the mechanism of action and on methods of safe and biologically active delivery will help to develop an anti-diabetic oral protein drug from these plants.

Keywords: α-amylase, α-glucosidase, Momordica charantia, Competitive inhibition, Peak blood glucose

In diabetes mellitus, the homeostasis of carbohydrate and lipid metabolism is altered due to defects in insulin production or action. It is a major non-communicable metabolic disease involving huge healthcare costs and a high mortality rate. The number of adults with diabetes was estimated at 387 million, and diabetes alone caused 4.9 million deaths in the year 2014 [1]. Postprandial hyperglycemia (PPHG) is a condition in which the blood glucose level remains high after consuming a meal, and it is an important factor to be considered in the management of diabetes mellitus and diabetes-related secondary complications such as diabetic retinopathy, diabetic neuropathy, and cardiovascular diseases [2]. Glycosidic α-D-(1,4) linkages in carbohydrates are cleaved by α-amylase to produce oligosaccharides, which are further cleaved to the monosaccharide glucose by α-glucosidase [3]. Therefore, inhibitors of these enzymes can delay the increase in blood glucose level in people who consume carbohydrate-rich food, and keep PPHG under control [4]. Acarbose, Miglitol, and Voglibose are the enzyme inhibitors that are currently used for controlling PPHG. Acarbose inhibits both α-amylase and α-glucosidase, but Miglitol and Voglibose inhibit only α-glucosidase. Though effective in controlling PPHG, these inhibitors are not desirable for long-term treatment due to their gastrointestinal side effects [5, 6].
Given that about 80 % of diabetic people live in low and middle income countries [7], these drugs are also expensive. Therefore, several groups have made efforts to find α-amylase and α-glucosidase inhibitors from plants, bacteria, marine algae, and fungi [8–11]. The majority of them have studied crude extracts (organic or aqueous), and some have studied pure compounds as well [12, 13]. Most of the plant extracts and pure compounds were effective against either α-amylase or α-glucosidase, with a few exceptions being effective against both enzymes [14, 15]. The presence of antidiabetic activity in Momordica charantia (bitter gourd or balsam pear) was identified as early as 1963 [16]. Extracts from fruit pulp, seeds, leaves and whole plants of M. charantia were shown to have hypoglycemic effects [17]. Methanol extracts from the fruits and seeds of M. charantia exhibited α-glucosidase inhibiting activity [18–20]. Fasting and postprandial blood glucose levels in diabetes patients were reduced after the oral intake of the aqueous extract from M. charantia fruit pulp [21]. Clinical trials using an insulin-like protein from M. charantia fruit pulp showed hypoglycemic activity in diabetes patients [22]. In vivo hypoglycemic, insulin-mimetic, and insulin secretagogue activities were also reported for the protein extracts from M. charantia [23, 24]. However, there was no direct evidence to show that the protein extracts from M. charantia have α-amylase and α-glucosidase inhibiting activities. Therefore, the current study was undertaken to evaluate the protein extracts from the fruits of two varieties of M. charantia for α-amylase and α-glucosidase inhibiting activities in vitro and glucose lowering activity in vivo, using Acarbose as reference.

The fruits of M. charantia var. charantia (MCC) and M. charantia var. muricata (MCM) were bought from the local market in Chengalpet, Tamil Nadu, India. They were taxonomically identified by a botanist and verified by DNA barcoding. Porcine α-amylase and yeast α-glucosidase were bought from Sigma Aldrich, and Acarbose from Bayer AG (Germany). Proteins were extracted from the fruit pulp of the two varieties of M. charantia as described before [24], with minor modifications. Fresh pulp was ground with ice-cold acid-ethanol, filtered through a muslin cloth, and centrifuged at 8000 × g for 10 min. The pH of the supernatant was adjusted to 3.0 using ammonia solution. Four volumes of acetone were added, mixed gently, and incubated at 4 °C for 24 h. The mixture was centrifuged at 6000 × g for 10 min. The pellet was washed with 80 % acetone, air dried, and dissolved in 10 mM Tris–HCl, pH 8.0.

α-amylase inhibition assay

Stock solutions of protein extracts and Acarbose were prepared in water. Inhibition of porcine α-amylase activity was determined using dinitrosalicylic acid as described before [25]. Protein extract or Acarbose (100 μl of 2 to 20 mg/ml) was added to 100 μl of α-amylase (1 U/ml) and 200 μl of sodium phosphate buffer (20 mM, pH 6.9) to obtain final concentrations of 0.5 to 5.0 mg/ml. The samples were pre-incubated at 25 °C for 10 min, and 200 μl of 1 % starch prepared in 20 mM sodium phosphate buffer (pH 6.9) was added. The reaction mixtures were incubated at 25 °C for 10 min. The reactions were stopped by incubating the mixture in a boiling water bath for 5 min after adding 1 ml of dinitrosalicylic acid.
The reaction mixtures were cooled to room temperature, diluted 1:5 with water, and the absorbance was measured in a spectrophotometer (Amersham Biosciences, USA) at 540 nm. The percentage of inhibition of enzyme activity was calculated as

$$ \%\ \mathrm{Inhibition} = \frac{A_{540}^{\mathrm{Control}} - A_{540}^{\mathrm{Treatment}}}{A_{540}^{\mathrm{Control}}} \times 100 $$

wherein $A_{540}^{\mathrm{Control}}$ is the absorbance at 540 nm of the control sample without protein extract and $A_{540}^{\mathrm{Treatment}}$ is the absorbance at 540 nm of the treatment with protein extract.

α-glucosidase inhibition assay

Inhibition of α-glucosidase activity was determined using yeast α-glucosidase and p-nitrophenyl-α-D-glucopyranoside (pNPG) as described before [26]. Protein extract or Acarbose (100 μl of 2 to 20 mg/ml) was added to 50 μl of α-glucosidase (1 U/ml) prepared in 0.1 M phosphate buffer (pH 6.9) and 250 μl of 0.1 M phosphate buffer to obtain final concentrations of 0.5 to 5.0 mg/ml. The mixture was pre-incubated at 37 °C for 20 min. After pre-incubation, 10 μl of 10 mM pNPG prepared in 0.1 M phosphate buffer (pH 6.9) was added, and the mixture was incubated at 37 °C for 30 min. The reactions were stopped by adding 650 μl of 1 M sodium carbonate, and the absorbance was measured in a spectrophotometer (Amersham Biosciences, USA) at 405 nm. The percentage of inhibition of enzyme activity was calculated as above, using the absorbance at 405 nm.

Analysis of proteolytic activity

Proteolytic activity of the plant extracts was tested against α-amylase (2 U/ml), α-glucosidase (0.05 U/ml) and a mixture of six unrelated proteins (β-lactalbumin, lysozyme, soybean trypsin inhibitor, ovalbumin, bovine serum albumin, and phosphorylase-b; 5 μg). These samples were treated with 20 μg of the protein extracts from MCC and MCM for 10 min at 25 °C (α-amylase and the mixture of six proteins) or 37 °C (α-glucosidase). Treatment with Proteinase K (55 °C for 1 h) and heat denatured protein extracts were used as positive and negative controls, respectively. The reactions were stopped by heating the samples at 100 °C for 5 min after adding protein loading dye to a final concentration of 1X. The samples were analyzed by 12 % SDS-PAGE.

Mode of inhibition assay

The mode of inhibition of α-amylase and α-glucosidase by the protein extracts was determined as described before [27]. For α-amylase, the enzyme solution (1 U/ml) was pre-incubated with protein extracts (10 mg/ml), Acarbose (10 mg/ml) or phosphate buffer (pH 6.9) at 25 °C for 10 min. The reactions were started by adding 5 to 25 mg/ml starch, continued at 25 °C for 10 min, and stopped by adding 0.5 ml of dinitrosalicylic acid followed by boiling for 5 min. For α-glucosidase, the enzyme solution (1 U/ml) was pre-incubated with protein extracts (10 mg/ml), Acarbose (10 mg/ml) or phosphate buffer (pH 6.9) at 25 °C for 10 min. The reactions were started by adding 50 to 250 mg/ml pNPG, continued at 25 °C for 10 min, and stopped by adding 0.05 ml of 1 M sodium carbonate. The release of reducing sugars was quantified using maltose and para-nitrophenol standard curves. A double reciprocal plot (1/v versus 1/[S]), where v is the reaction velocity and [S] is the substrate concentration, was drawn, and the mode of inhibition was determined by analysing the Lineweaver-Burk plot using Michaelis-Menten kinetics.

Induction of diabetes in Wistar rats

Forty male Wistar rats (3 months old) were purchased from King's Institute, Chennai, and kept under a 12:12 h light and dark cycle at 25 ± 2 °C.
The diabetic animals were fed a high fat diet and water ad libitum throughout the treatment period of 30 days. The experimental protocols were conducted in accordance with internationally accepted principles for laboratory animal use and were approved by the Institutional Animal Ethical Committee (087/835/IAEC-2014). Diabetes was induced in 18 h fasted animals by intraperitoneal injection of 110 mg/kg nicotinamide followed by 45 mg/kg Streptozotocin (freshly dissolved in 0.1 M citrate buffer, pH 4.5). Tail bleeds were performed 7 days after injecting Streptozotocin, and animals with a blood glucose concentration above 250 mg/dL were considered diabetic.

Oral starch and sucrose tolerance tests

Oral starch and sucrose tolerance tests were performed as described before [28]. Twenty fasted diabetic and non-diabetic rats were divided into four groups of five each, and orally treated with 10 mg/kg body weight of the protein from MCC or MCM, Acarbose (positive control) or distilled water (negative control). Ten minutes after the treatment, the blood glucose level was estimated (0 min), and the rats were orally administered 3.0 g/kg starch or 4.0 g/kg sucrose. The blood glucose level (BG) was estimated 30, 60 and 120 min after the administration. Peak blood glucose (PBG) was determined by observing the blood glucose level during the above mentioned time intervals, and the area under the curve (AUC) was calculated by the trapezoidal rule using the formula given below:

$$ \mathrm{AUC}\ (\mathrm{mg/dL \cdot h}) = \frac{(BG_{0} + BG_{30}) \times 0.5}{2} + \frac{(BG_{30} + BG_{60}) \times 0.5}{2} + \frac{(BG_{60} + BG_{120}) \times 1.0}{2} $$

wherein BG0 is the blood glucose level before the oral administration of starch or sucrose, and BG30, BG60, and BG120 are the blood glucose levels 30, 60 and 120 min after the administration.

All experiments were performed in triplicate. Means, standard errors, and standard deviations were calculated from replicates within the experiments. Statistical analysis was done by one-way analysis of variance (ANOVA). Statistical significance was accepted at P < 0.05. IC50 was calculated using GraphPad Prism software.

α-amylase inhibition assay

Fruit pulp of M. charantia var. charantia (MCC) and M. charantia var. muricata (MCM) yielded 0.07 and 0.025 % protein on a wet weight basis, respectively. Inhibition of α-amylase activity by the protein extracts from MCC, MCM and Acarbose was found to be dose dependent from 0.5 to 2.5 mg/ml (Fig. 1). A maximum of 66.5, 67 and 68 % inhibition of α-amylase activity was observed at 2.5 mg/ml for the protein extracts from MCC, MCM and Acarbose, respectively. Heat denatured protein extracts from MCC and MCM showed only a maximum of 11 % α-amylase inhibition activity at this concentration. The IC50 was 0.267 ± 0.024, 0.261 ± 0.019, and 0.258 ± 0.017 mg/ml for the protein extracts from MCC, MCM, and Acarbose, respectively (Table 1). The percentage inhibition of α-amylase activity and the IC50 of the protein extracts from MCC and MCM differed highly significantly (P < 0.05) from the heat denatured protein extracts but not from Acarbose.

Percentage inhibition of α-amylase (a) and α-glucosidase (b) enzyme activity at different concentrations of Acarbose, protein extracts from MCC and MCM, and heat denatured protein extracts from MCC and MCM
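The two quantitative recipes used above can be sketched in a few lines of Python. The readings below are hypothetical, not the study's data, and the IC50 interpolation is only an assumption (the paper used GraphPad Prism):

```python
# Percent inhibition, an IC50 estimate by interpolation, and the trapezoidal AUC
# over the 0, 30, 60 and 120 min sampling points (hours 0, 0.5, 1.0, 2.0).
import numpy as np

def percent_inhibition(a_control, a_treatment):
    """% Inhibition = [(A_control - A_treatment) / A_control] x 100."""
    return (a_control - a_treatment) / a_control * 100.0

conc = np.array([0.5, 1.0, 1.5, 2.0, 2.5])              # mg/ml, final
a_treat = np.array([0.42, 0.31, 0.24, 0.19, 0.17])      # hypothetical A540
inh = percent_inhibition(0.50, a_treat)                 # control A540 = 0.50
ic50 = np.exp(np.interp(50.0, inh, np.log(conc)))       # conc. at 50 % inhibition

t_h = np.array([0.0, 0.5, 1.0, 2.0])                    # sampling times in hours
bg = np.array([260.0, 340.0, 310.0, 280.0])             # hypothetical BG, mg/dL
auc = ((bg[0] + bg[1]) * 0.5 / 2 + (bg[1] + bg[2]) * 0.5 / 2
       + (bg[2] + bg[3]) * 1.0 / 2)                     # the formula above
assert np.isclose(auc, np.trapz(bg, t_h))               # same as trapezoidal rule
print(inh, ic50, auc)
```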
Table 1 IC50 values (mg/ml) of Acarbose and the protein extracts from MCC and MCM for α-amylase and α-glucosidase inhibition

Sample     α-amylase        α-glucosidase
MCC        0.267 ± 0.024    0.298 ± 0.034
MCM        0.261 ± 0.019    0.292 ± 0.022
Acarbose   0.258 ± 0.017    0.280 ± 0.019

α-glucosidase inhibition assay

The ability of the protein extracts from MCC and MCM to inhibit α-glucosidase activity was determined at concentrations between 0.5 and 5.0 mg/ml. Protein extracts from both MCC and MCM showed α-glucosidase inhibition activity in a dose-dependent manner from 0.5 to 2.5 mg/ml (Fig. 1). A maximum of 68.8, 69.2 and 70 % inhibition of α-glucosidase activity was observed at 2.5 mg/ml for the protein extracts from MCC, MCM and Acarbose, respectively. Heat denatured protein extracts from MCC and MCM showed only a maximum of 10 % α-glucosidase inhibition activity at this concentration. The IC50 was 0.298 ± 0.034, 0.292 ± 0.022, and 0.28 ± 0.019 mg/ml for the protein extracts from MCC, MCM, and Acarbose, respectively (Table 1). The percentage inhibition of α-glucosidase activity and the IC50 of the protein extracts from MCC and MCM differed highly significantly (P < 0.05) from the heat denatured protein extracts but not from Acarbose. Treatment with the protein extracts from MCC and MCM did not degrade α-amylase, α-glucosidase or the mixture of six unrelated proteins. The same result was observed when the protein extracts were heat denatured before the treatment. Treatment with Proteinase K showed complete degradation of the proteins (Fig. 2).

SDS-PAGE analysis of α-amylase, α-glucosidase and the protein mixture after treatment with the protein extracts from MCC and MCM. The figure shows the protein marker (M), un-treated α-amylase/α-glucosidase/protein mixture (1), and α-amylase/α-glucosidase/protein mixture treated with protein extracts (2), heat denatured protein extracts (3) or Proteinase K (4)
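The mode-of-inhibition result reported next rests on reading Km and Vmax off the Lineweaver-Burk plot. A minimal sketch with hypothetical rate data follows; competitive inhibition shows up as an increased Km at an unchanged Vmax:

```python
# Lineweaver-Burk analysis: fit 1/v = (Km/Vmax)(1/[S]) + 1/Vmax by least
# squares and read off Km and Vmax. Substrate and rate values are hypothetical.
import numpy as np

S = np.array([5., 10., 15., 20., 25.])           # substrate concentration, mg/ml
v_ctrl = np.array([1.2, 1.9, 2.3, 2.6, 2.8])     # rates without inhibitor
v_inh = np.array([0.8, 1.4, 1.8, 2.1, 2.3])      # rates with protein extract

def km_vmax(S, v):
    slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
    vmax = 1.0 / intercept
    km = slope * vmax
    return km, vmax

print(km_vmax(S, v_ctrl))    # competitive inhibition: Km rises,
print(km_vmax(S, v_inh))     # while Vmax stays (approximately) constant
```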
DC: Diabetic control, Acar: Acarbose Effect of Acarbose and the protein extracts from MCC and MCM on peak blood glucose (PBG) and area under the curve (AUC) after starch (3 g/kg) and sucrose (4 g/kg) loading in diabetic and non- diabetic rats. Values are the mean ± SEM (n = 5), at P < 0.05 vs. control PBG (mg/dL) % Reduction of PBG AUC (mg/dL) % Reduction of AUC Diabetic control 367.5 ± 0.87 343 ± 0.92 Diabetic + Acarbose Diabetic + MCC Diabetic + MCM Undesirable side effects and high cost of the currently available synthetic α-glucosidase inhibitors drive the need to explore the natural sources for new inhibitors. Being a global lifestyle disorder that affects millions of people with diverse genetic backgrounds, the search for alternate inhibitors is also desirable from the pharmacogenetics point of view. Crude extracts and purified compounds from M. charantia were reported to show anti-diabetic activities, which includes α-glucosidase inhibitory activity of the methanol and aqueous extracts [18–20]. However, α-amylase inhibiting activity was not reported for any extract or pure compound from this plant, and its protein extract was not studied for enzyme inhibitors against either α-amylase or α-glucosidase. Prospective inhibitors for controlling PPHG should have the ability to inhibit both α-amylase and α-glucosidase with higher percentage of inhibition and lower IC50. Therefore, protein extracts from M. charantia var. charantia (MCC) and M. charantia var. muricata (MCM) were studied for their ability to inhibit both α-amylase and α-glucosidase enzymes in vitro. A few studies have reported that α-amylase and α-glucosidase inhibitory activity of Acarbose may range between 55 and 82 % depending on the experimental conditions [14, 15, 29]. In our study, 10 mg/ml Acarbose showed 68 and 70 % inhibition of α-amylase and α-glucosidase activity, respectively. Under the same experimental conditions, the protein extracts from both MCC and MCM showed enzyme inhibition and IC50 on par with Acarbose. To our knowledge, this is the first report of a protein extract from the same plant showing both α-amylase and α-glucosidase inhibitor activities. It remains to be studied if these two activities are contributed by a single protein or different proteins. Near complete abolition of the enzyme inhibitory activities was observed when the protein extracts from MCC and MCM were heat denatured before the treatment. This indicated that the inhibitory compounds present in the extracts may be protein in nature, though inhibition due to other thermo-labile compounds cannot be ruled out. The protein extracts from MCC and MCM did not show proteolytic activity against α-amylase, α-glucosidase or the mixture of six unrelated proteins, which indicated the absence of proteolytic activity that is specific against these enzymes or non-specific against proteins. Therefore, the protein extracts do not contain any compound, which inhibits the activity by cleaving the enzymes. The Lineweaver-Burk plot showed that the protein extracts inhibit the α-amylase and α-glucosidase enzymes by competitive binding. Enzyme inhibitory activity of the protein observed in vitro is often not maintained in vivo, especially in case of oral administration of protein extract which is confronted by proteolytic enzymes and harsh pH conditions. However, orally administered proteins and peptides from plants have shown significant decrease in blood glucose level indicating biologically functional activity in vivo [30, 31]. 
We have tested the protein extracts from MCC and MCM in vivo in Streptozotocin-induced diabetic rats. Peak blood glucose (PBG) and area under the curve (AUC) were significantly reduced in the diabetic rats that were orally administered with the protein extracts after starch or sucrose loading. The protein extracts were able to lower the blood glucose level faster than Acarbose in starch-fed diabetic rats. These results demonstrated the glucose lowering effect of the protein extracts in vivo, possibly due to the α-amylase and α-glucosidase inhibiting activities observed in vitro. However, further experiments will be needed to confirm the same or to find other possible mechanisms. Earlier studies in M. charantia have shown insulin-mimetic and insulin secretagogue activities in the protein extract [24], and anti-oxidant activity in the aqueous extract [32]. Therefore, the protein extract from M. charantia may work against diabetes through multiple mechanisms to be useful for the holistic management of diabetes mellitus. M. charantia is a traditional medicinal plant that is popularly used for the management of diabetes in complementary and alternative medicine, and several scientific lines of evidences were reported in favour of the same. The present study established that the protein extracts from two varieties of M. charantia do have α-amylase and α-glucosidase inhibiting activities in vitro and glucose lowering activity in vivo. Further research is needed to develop anti-diabetic oral protein drug from this natural source. ANOVA, analysis of variance; AUC, area under the curve; BG, blood glucose; IC50, half maximal inhibitory concentration; MCC, Momordica charantia var. charantia; MCM, Momordica charantia var. muricata; PBG, peak blood glucose; pNPG, p-nitrophenyl-α-D-glucopyranoside; PPHG, postprandial hyperglycemia; SDS-PAGE, sodium dodecyl sulfate- polyacrylamide gel electrophoresis This project was supported by DST-INSPIRE program of Department of Science and Technology, Government of India (Dy.No.100/IFD/10685/2010-2011). The publication charges for this article was funded by DST-INSPIRE grant 100/IFD/10685/2010-2011. This article has been published as part of BMC Complementary and Alternative Medicine Volume 16 Supplement 1, 2016: Proceedings of the Indian Genetics Congress 2015: Complementary and Alternative Medicine. The full contents of the supplement are available online at http://bmccomplementalternmed.biomedcentral.com/articles/supplements/volume-16-supplement-1. MP and SP designed the study and prepared the manuscript. SP performed the study and data analysis. Both the authors reviewed the manuscript. Both authors read and approved the final manuscript. Genomics Laboratory, Department of Genetic Engineering, SRM University, Kattankulathur, Chennai, 603203, India International Diabetes Federation, IDF Diabetes Atlas: http://www.idf.org/diabetesatlas/. Accessed 15 June 2015. Gin H, Rigalleau V. Post-prandial hyperglycemia. Post-prandial hyperglycemia and diabetes. Diabetes Metab. 2000;4:265–72.Google Scholar Lordan S, Smyth TJ, Soler-Vila A, Stanton C, Ross RP. The α-amylase and α-glucosidase inhibitory effects of Irish seaweed extracts. Food Chem. 2013;141:2170–6.View ArticlePubMedGoogle Scholar Lebovitz HE. α-glucosidase inhibitors. Endocrinol Metab Clin North Am. 1997;26:539–51.View ArticlePubMedGoogle Scholar Van de Laar FA. α-glucosidase inhibitors in the early treatment of type 2 diabetes. Vasc. Health Risk Manag. 
Etxeberria U, de la Garza AL, Campión J, Martínez JA, Milagro FI. Antidiabetic effects of natural plant extracts via inhibition of carbohydrate hydrolysis enzymes with emphasis on pancreatic α-amylase. Expert Opin Ther Targets. 2012;16:269–97.
Fatmawati S, Shimizu K, Kondo R. Ganoderol B: a potent α-glucosidase inhibitor isolated from the fruiting body of Ganoderma lucidum. Phytomedicine. 2011;18:1053–5.
Konishi K, Watanabe N, Saito M, Nakajima N, Sakaki T, Katayama T, Enomoto T. Isolation of a new phlorotannin, a potent inhibitor of carbohydrate-hydrolyzing enzymes, from the brown alga Sargassum patens. J Agr Food Chem. 2012;60:5565–70.
Orhan N, Aslan M, Süküroğlu M, Orhan D. In vivo and in vitro antidiabetic effect of Cistus laurifolius L. and detection of major phenolic compounds by UPLC-TOF-MS analysis. J Ethnopharmacol. 2013;146:859–65.
Panwar H, Calderwood D, Grant IR, Grover S, Green BD. Lactobacillus strains isolated from infant faeces possess potent inhibitory activity against intestinal α- and β-glucosidases suggesting anti-diabetic potential. Eur J Nutr. 2014;53:1465–74.
Ali RB, Atangwho IJ, Kuar N, Ahmad M, Mahmud R, Asmawi MZ. In vitro and in vivo effects of standardized extract and fractions of Phaleria macrocarpa fruits pericarp on lead carbohydrate digesting enzymes. BMC Complement Altern Med. 2013;20:13–39.
Kim KT, Rioux LE, Turgeon SL. α-amylase and α-glucosidase inhibition is differentially modulated by fucoidan obtained from Fucus vesiculosus and Ascophyllum nodosum. Phytochem. 2014;98:27–33.
Mohamed EAH, Siddiqui MJA, Ang LF, Sadikun A, Chan SH, Tan SC, et al. Potent α-glucosidase and α-amylase inhibitory activities of standardized 50% ethanolic extracts and sinensetin from Orthosiphon stamineus Benth as anti-diabetic mechanism. BMC Complement Altern Med. 2012;12:176–82.
Perez-Gutierrez RM, Damian-Guzman M. Meliacinolin: a potent α-glucosidase and α-amylase inhibitor isolated from Azadirachta indica leaves and in vivo antidiabetic property in streptozotocin-nicotinamide-induced type 2 diabetes in mice. Biol Pharm Bull. 2012;35:1516–24.
Chatterjee KP. On the presence of an antidiabetic principle in Momordica charantia. Indian J Physiol Pharmacol. 1963;7:240–4.
Kar A, Choudhary BK, Bandyopadhyay NG. Comparative evaluation of hypoglycaemic activity of some Indian medicinal plants in alloxan diabetic rats. J Ethnopharmacol. 2003;84:105–8.
Matsuura H, Asakawa C, Kurimoto M, Mizutani J. α-glucosidase inhibitor from the seeds of balsam pear (Momordica charantia) and the fruit bodies of Grifola frondosa. Biosci Biotech Biochem. 2002;66:1576–8.
Uebanso T, Arai H, Taketani Y, Fukaya M, Yamamoto H, Mizuno A, et al. Extracts of Momordica charantia suppress postprandial hyperglycemia in rats. J Nutr Sci Vitaminol. 2007;53:482–8.
Nhiem NX, Kiem PV, Minh CV, Ban NK, Cuong NX, Tung NH, et al. α-glucosidase inhibition properties of cucurbitane-type triterpene glycosides from the fruits of Momordica charantia. Chem Pharm Bull. 2010;58:720–4.
Ahmad N, Hassan MR, Halder H, Bennoor KS. Effect of Momordica charantia (Karolla) extracts on fasting and postprandial serum glucose levels in NIDDM patients. Bangladesh Med Res Counc Bull. 1999;25:11–3.
Baldwa VS, Bhandari CM, Pangaria A, Goyal RK. Clinical trial in patients with diabetes mellitus of an insulin-like compound obtained from plant source. Upsala J Med Sci. 1977;82:39–41.
Khanna P, Jain SC, Panagariya A, Dixit VP. Hypoglycemic activity of polypeptide-p from a plant source. J Nat Prod. 1981;44:648–55.
Yibchok-anun S, Adisakwattana S, Yao CY, Sangvanich P, Roengsumran S, Hsu WH. Slow acting protein extract from fruit pulp of Momordica charantia with insulin secretagogue and insulinomimetic activities. Biol Pharm Bull. 2006;29:1126–31.
Kwon Y, Apostolidis E, Shetty K. Inhibitory potential of wine and tea against α-amylase and α-glucosidase for management of hyperglycemia linked to type 2 diabetes. J Food Biochem. 2006;32:15–31.
Kim YM, Wang MH, Rhee HI. A novel α-glucosidase inhibitor from pine bark. Carbohydr Res. 2004;339:715–7.
Ali H, Houghton PJ, Soumyanath A. α-amylase inhibitory activity of some Malaysian plants used to treat diabetes; with particular reference to Phyllanthus amarus. J Ethnopharmacol. 2006;107:449–55.
Subramanian R, Asmawi MZ, Sadikun A. In vitro α-glucosidase and α-amylase enzyme inhibitory effects of Andrographis paniculata extract and andrographolide. Acta Biochim Pol. 2008;55:391–8.
Olubomehin OO, Abo KA, Ajaiyeoba EO. α-amylase inhibitory activity of two Anthocleista species and in vivo rat model anti-diabetic activities of Anthocleista djalonensis extracts and fractions. J Ethnopharmacol. 2013;146:811–4.
Joshi BN, Munot H, Hardikar M, Kulkarni A. Orally active hypoglycemic protein from Costus igneus N.E. Br. Biochem Biophys Res Commun. 2013;436:278–82.
Zhang H, Wang J, Liu Y, Sun B. Peptides derived from oats improve insulin sensitivity and lower blood glucose in streptozotocin-induced diabetic mice. J Biomed Sci. 2015;4:1–7.
Hamissou M, Smith AC, Carter Jr RE, Triplett II JK. Antioxidative properties of bitter gourd (Momordica charantia) and zucchini (Cucurbita pepo). Emir J Food Agric. 2013;25:641–7.
WALLABY Pilot Survey: Public release of HI kinematic models for more than 100 galaxies from phase 1 of ASKAP pilot observations
N. Deg, K. Spekkens, T. Westmeier, T. N. Reynolds, P. Venkataraman, S. Goliath, A. X. Shen, R. Halloran, A. Bosma, B. Catinella, W. J. G. de Blok, H. Dénes, E. M. DiTeodoro, A. Elagali, B.-Q. For, C. Howlett, G. I. G. Józsa, P. Kamphuis, D. Kleiner, B. Koribalski, K. Lee-Waddell, F. Lelli, X. Lin, C. Murugeshan, S. Oh, J. Rhee, T. C. Scott, L. Staveley-Smith, J. M. van der Hulst, L. Verdes-Montenegro, J. Wang, O. I. Wong
Journal: Publications of the Astronomical Society of Australia / Volume 39 / 2022. Published online by Cambridge University Press: 15 November 2022, e059.
We present the Widefield ASKAP L-band Legacy All-sky Blind surveY (WALLABY) Pilot Phase I Hi kinematic models. This first data release consists of Hi observations of three fields in the direction of the Hydra and Norma clusters, and the NGC 4636 galaxy group. In this paper, we describe how we generate and publicly release flat-disk tilted-ring kinematic models for 109/592 unique Hi detections in these fields. The modelling method adopted here, which we call the WALLABY Kinematic Analysis Proto-Pipeline (WKAPP) and for which the corresponding scripts are also publicly available, consists of combining results from the homogeneous application of the FAT and 3DBarolo algorithms to the subset of 209 detections with sufficient resolution and $S/N$ in order to generate optimised model parameters and uncertainties. The 109 models presented here tend to be gas-rich detections resolved by at least 3-4 synthesised beams across their major axes, but there is no obvious environmental bias in the modelling. The data release described here is the first step towards the derivation of similar products for thousands of spatially resolved WALLABY detections via a dedicated kinematic pipeline. Such a large, publicly available and homogeneously analysed dataset will be a powerful legacy product that will enable a wide range of scientific studies.

WALLABY pilot survey: Public release of H i data for almost 600 galaxies from phase 1 of ASKAP pilot observations
T. Westmeier, N. Deg, K. Spekkens, T. N. Reynolds, A. X. Shen, S. Gaudet, S. Goliath, M. T. Huynh, P. Venkataraman, X. Lin, T. O'Beirne, B. Catinella, L. Cortese, H. Dénes, A. Elagali, B.-Q. For, G. I. G. Józsa, C. Howlett, J. M. van der Hulst, R. J. Jurek, P. Kamphuis, V. A. Kilborn, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, C. Murugeshan, J. Rhee, P. Serra, L. Shao, L. Staveley-Smith, J. Wang, O. I. Wong, M. A. Zwaan, J. R. Allison, C. S. Anderson, Lewis Ball, D. C.-J. Bock, D. Brodrick, J. D. Bunton, F. R. Cooray, N. Gupta, D. B. Hayman, E. K. Mahony, V. A. Moss, A. Ng, S. E. Pearce, W. Raja, D. N. Roxby, M. A. Voronkov, K. A. Warhurst, H. M. Courtois, K. Said
We present WALLABY pilot data release 1, the first public release of H i pilot survey data from the Wide-field ASKAP L-band Legacy All-sky Blind Survey (WALLABY) on the Australian Square Kilometre Array Pathfinder. Phase 1 of the WALLABY pilot survey targeted three $60\,\mathrm{deg}^{2}$ regions on the sky in the direction of the Hydra and Norma galaxy clusters and the NGC 4636 galaxy group, covering the redshift range of $z \lesssim 0.08$.
The source catalogue, images and spectra of nearly 600 extragalactic H i detections and kinematic models for 109 spatially resolved galaxies are available. As the pilot survey targeted regions containing nearby group and cluster environments, the median redshift of the sample of $z \approx 0.014$ is relatively low compared to the full WALLABY survey. The median galaxy H i mass is $2.3 \times 10^{9}\,{\rm M}_{\odot}$. The target noise level of $1.6\,\mathrm{mJy}$ per 30′′ beam and $18.5\,\mathrm{kHz}$ channel translates into a $5\sigma$ H i mass sensitivity for point sources of about $5.2 \times 10^{8} \, (D_{\rm L} / \mathrm{100\,Mpc})^{2} \, {\rm M}_{\odot}$ across 50 spectral channels (${\approx} 200\,\mathrm{km\,s}^{-1}$) and a $5\sigma$ H i column density sensitivity of about $8.6 \times 10^{19} \, (1 + z)^{4}\,\mathrm{cm}^{-2}$ across 5 channels (${\approx} 20\,\mathrm{km\,s}^{-1}$) for emission filling the 30′′ beam. As expected for a pilot survey, several technical issues and artefacts still affect the data quality. Most notably, there are systematic flux errors of up to several tens of percent caused by uncertainties about the exact size and shape of each of the primary beams, as well as the presence of sidelobes due to the finite deconvolution threshold. In addition, artefacts such as residual continuum emission and bandpass ripples have affected some of the data. The pilot survey has been highly successful in uncovering such technical problems, most of which are expected to be addressed and rectified before the start of the full WALLABY survey.

The Dawes Review 9: The role of cold gas stripping on the star formation quenching of satellite galaxies
L. Cortese, B. Catinella, R. Smith
Published online by Cambridge University Press: 11 August 2021, e035
One of the key open questions in extragalactic astronomy is what stops star formation in galaxies. While it is clear that the cold gas reservoir, which fuels the formation of new stars, must be affected first, how this happens and which physical mechanisms dominate is still a matter of debate. At least for satellite galaxies, it is generally accepted that internal processes alone cannot be responsible for fully quenching their star formation, and that environment should play an important, if not dominant, role. In nearby clusters, we see examples of cold gas being removed from the star-forming discs of galaxies moving through the intracluster medium, but whether active stripping is widespread and/or necessary to halt star formation in satellites, or quenching is just a consequence of the inability of these galaxies to replenish their cold gas reservoirs, remains unclear. In this work, we review the current status of environmental studies of cold gas in star-forming satellites in the local Universe from an observational perspective, focusing on the evidence for a physical link between cold gas stripping and quenching of the star formation. We find that stripping of cold gas is ubiquitous in satellite galaxies in both group and cluster environments. While hydrodynamical mechanisms such as ram pressure are important, the emerging picture across the full range of dark matter halos and stellar masses is a complex one, where different physical mechanisms may act simultaneously and cannot always be easily separated. Most importantly, we show that stripping does not always lead to full quenching, as only a fraction of the cold gas reservoir might be affected at the first pericentre passage.
We argue that this is a key point to reconcile apparent tensions between statistical and detailed analyses of satellite galaxies, as well as disagreements between various estimates of quenching timescales. We conclude by highlighting several outstanding questions where we expect to see substantial progress in the coming decades, thanks to the advent of the Square Kilometre Array and its precursors, as well as the next-generation optical and millimetre facilities.

Arecibo Survey of HI Emission from Disk Galaxies at Redshift z ~ 0.2
B. Catinella, M. P. Haynes, J. P. Gardner, A. J. Connolly, R. Giovanelli
Journal: Proceedings of the International Astronomical Union / Volume 3 / Issue S244 / June 2007. Print publication: June 2007.
We present results from a targeted survey undertaken with the 305 m Arecibo radiotelescope to detect HI-line emission from disk galaxies at redshift z > 0.16. Among other applications, this dataset will be used to study the evolution of disk scaling relations at intermediate redshifts. Compared to optical velocity widths, HI measurements sample a larger fraction of the disks, where the rotation curves are typically flat, and are not affected by slit smearing and misalignment or by aperture effects. Thus, in contrast to studies based on optical spectroscopy, this dataset allows for a direct comparison with the local Tully-Fisher relation that is technique independent.

The Arecibo Galaxy Environment Survey – Potential for finding Dark Galaxies and Results so far
R. F. Minchin, R. Auld, L. Cortese, J. I. Davies, E. Momjian, R. Taylor, B. Catinella, P. Henning, S. Linder, E. Muller, K. O'Neil, J. Rosenberg, S. Sabatini, S. E. Schneider, M. Stage, W. van Driel, the AGES team
Published online by Cambridge University Press: 01 June 2007, E1
Reproduction errors have occurred in figures 1-4 of this paper, published in these proceedings, pages 112-119. The complete corrected paper is reproduced here for clarity. Cambridge University Press apologise to the authors and readers for these errors.

AGES Observations of Abell 1367
L. Cortese, R. F. Minchin, R. R. Auld, J. I. Davies, B. Catinella, E. Momjian, J. L. Rosenberg, R. Taylor, G. Gavazzi, K. O'Neil
We present 21 cm observations of 5×1 square degrees centered on the local Abell cluster 1367, obtained as part of the Arecibo Galaxy Environment Survey. This represents the first HI-selected sample covering the core and the outskirts of a local cluster of galaxies. Combining the HI data with SDSS optical imaging, we show that HI-selected samples follow scaling relations similar to the ones usually observed in optically selected samples. The most striking difference between HI- and optically selected samples resides in their large-scale distribution: while optical and X-ray observations trace the cluster potential very well, at radio wavelengths there is almost no evidence of the cluster's presence.

The Arecibo Galaxy Environment Survey – Potential for finding Dark Galaxies and Results so far
The Arecibo Galaxy Environment Survey is a blind neutral hydrogen survey using the ALFA multibeam receiver at Arecibo Observatory to reach unprecedented sensitivities in a number of selected fields in the local Universe. When completed, the survey will cover 200 square degrees out to a distance of at least 270 Mpc. If a population of gas-rich dark galaxies exists, then this survey is in a prime position to uncover that population.
So far, 20 square degrees have been covered in the regions of Abell 1367, the Virgo Cluster, the NGC 7332/9 galaxy pair and the isolated galaxy NGC 1156. Over 200 sources have been found, including a number with no obvious optical counterparts. We discuss here the potential of AGES for uncovering more such objects and the characteristics of the dark sources identified to date.

The ALFA Zone of Avoidance Survey: Results from the Precursor Observations
C. M. Springob, P. A. Henning, B. Catinella, F. Day, R. Minchin, E. Momjian, B. Koribalski, K. L. Masters, E. Muller, C. Pantoja, M. Putman, J. L. Rosenberg, S. Schneider, L. Staveley-Smith
The Arecibo L-band Feed Array Zone of Avoidance Survey (ALFA ZOA) will map 1350–1800 deg² at low Galactic latitude, providing HI spectra for galaxies in regions of the sky where our knowledge of local large-scale structure remains incomplete, owing to obscuration from dust and high stellar confusion near the Galactic plane. Because of these effects, a substantial fraction of the galaxies detected in the survey will have no optical or infrared counterparts. However, near-infrared follow-up observations of ALFA ZOA sources found in regions of lowest obscuration could reveal whether some of these sources could be objects in which little or no star formation has taken place ("dark galaxies"). We present here the results of ALFA ZOA precursor observations of two patches of sky totaling 140 deg² (near l = 40° and l = 192°). We have measured HI parameters for detections from these observations and cross-correlated them with the NASA/IPAC Extragalactic Database (NED). A significant fraction of the objects have never been detected at any wavelength. For those galaxies that have been previously detected, a significant fraction have no previously known redshift and no previous HI detection.

The Arecibo Galaxy Environments Survey – Description of the Survey and Early Results
R. F. Minchin, R. Auld, J. I. Davies, B. Catinella, L. Cortese, S. Linder, E. Momjian, E. Muller, K. O'Neil, J. Rosenberg, S. Sabatini, S. E. Schneider, M. Stage, W. van Driel, the AGES team
Journal: Proceedings of the International Astronomical Union / Volume 2 / Issue S235 / August 2006. Published online by Cambridge University Press: 01 August 2006, pp. 227-229.
The Arecibo Galaxy Environments Survey (AGES) is a 2000-hour neutral hydrogen (H I) survey using the new Arecibo L-band Feed Array (ALFA) multibeam instrument at Arecibo Observatory. It will cover 200 square degrees of sky, sampling a range of environments from the Local Void through to the Virgo Cluster with higher sensitivity, spatial resolution and velocity resolution than previous neutral hydrogen surveys.

AGES Observations of Abell 1367 and its Outskirts
L. Cortese, R. F. Minchin, R. R. Auld, J. I. Davies, B. Catinella, E. Momjian, J. L. Rosenberg, K. O'Neil, The AGES Team
Published online by Cambridge University Press: 01 August 2006, p. 196
The Arecibo Galaxy Environment Survey (AGES, Auld et al. 2006) will map ~200 square degrees over the next few years using the ALFA feed array at the 305-m Arecibo Telescope. AGES is specifically designed to investigate various galactic environments, from local voids to interacting groups and clusters of galaxies. AGES will map 20 square degrees in the Coma-Abell 1367 supercluster, including the Abell cluster 1367 and its outskirts (up to ~2 virial radii).
In Spring 2006 we nearly completed the observations of 5 square degrees in the range 11:34 < RA < 11:54, 19:20 < Dec < 20:20, covering all of the cluster core and part of its infalling region, reaching a 5σ detection limit of M(HI) ~ 4×10⁸ M⊙ (assuming a velocity width of ~200 km s⁻¹) at the distance of Abell 1367 (~92 Mpc). An HI-selected sample has been extracted from the data cube, yielding a catalogue of fluxes, recessional velocities, positions and velocity widths. We present a preliminary analysis of the properties of the HI sources and report the discovery of diffuse HI features within interacting groups at the periphery of Abell 1367.

Evolution of the Mass-to-light Ratio of Galaxies to z ~ 0.25
We present the first results of a targeted survey carried out with the 305 m Arecibo radiotelescope to detect HI-line emission from disk galaxies at redshift z > 0.16. We are using this sample to study the evolution of the zero point of the Tully-Fisher relation (TFR) for galaxies at intermediate redshifts. Compared to optical widths, HI measurements sample a larger fraction of the disks, where the rotation curves are typically flat, and are not affected by slit smearing and misalignment or by aperture effects. Thus, in contrast to studies based on optical spectroscopy, this dataset allows for a direct comparison with the local TFR that is technique independent.
Clinical Phytoscience: International Journal of Phytomedicine and Phytotherapy
Ethanol extract of Nigella sativa has antioxidant and ameliorative effect against nickel chloride-induced hepato-renal injury in rats
Original contribution
Kazeem Akinyinka Akinwumi1, Afusat Jagun Jubril2, Oreoluwa Oluwafunke Olaniyan1 & Yusuf Yusuf Umar1
Clinical Phytoscience volume 6, Article number: 64 (2020)

Nickel exposure causes hepato-renal toxicity via oxidative stress, and medicinal plants with antioxidant properties are being explored as treatment options. In this study, the effect of ethanol extract of Nigella sativa (ENS) on nickel chloride (NiCl2)-induced hepato-renal damage was evaluated by monitoring biochemical and oxidative stress markers. Additionally, the antioxidant capacity and phytochemical constituents of ENS were quantified using HPLC and GC-MS. NiCl2 significantly increased (p < 0.05) aspartate aminotransferase, creatinine, sodium ion, chloride ion and malondialdehyde levels, while antioxidant enzymes were decreased in the organs, except for kidney glutathione-S-transferase, when compared to the control. However, ENS exerted an inhibitory effect against NiCl2 toxicity in both organs by reversing the biomarkers towards control levels. ENS has a high antioxidant capacity and is rich in antioxidants, including gallic acid, quercetin, eucalyptol and levomenthol, that may have accounted for the improvement of hepato-renal health in co-exposed rats. Our results suggest that the amelioration of nickel chloride-induced hepato-renal pathology by ethanol extract of Nigella sativa is related to its antioxidant properties. Therefore, Nigella sativa could be valuable in the management of nickel-induced toxicity.

Nickel has several industrial applications, including the manufacturing of stainless steel, batteries, utensils, cosmetics and electronic products [1]. It is also used as a catalyst and in pigments in the food industries. Fossil fuel burning and other industrial activities release a large amount of nickel into the atmosphere every year [1, 2]. Occupational exposure to nickel at the workplace is common, while non-occupational exposure occurs in people who live in the vicinity of nickel-related industries. This is particularly common in developing countries where regulations concerning effluent treatment are not strictly enforced [1,2,3]. Moreover, the general public is chronically exposed to nickel leached from nickel-containing metal pipes, kitchen utensils and food processing equipment, as well as from cosmetics, tobacco and jewellery [4,5,6]. Several deleterious health hazards have been linked with exposure to this metal, including allergic contact dermatitis, eczema, respiratory infections, asthma, bronchitis, dizziness, nausea, headache, diarrhoea, reproductive damage, neurological defects, diabetes, fever, heart attack, insomnia, itching and haemorrhages [7,8,9]. Moreover, epidemiological studies have found an increased occurrence of cancers of the breast, respiratory and gastrointestinal tracts in workers within nickel-related industries [1]. The pathophysiological mechanisms of nickel toxicity are complex, but one of the main mechanisms lies in its ability to alter bio-metal homeostasis and induce oxidative stress. Nickel-induced reactive oxygen species (ROS) enhance lipid peroxidation and modulate antioxidant enzyme activities [10].
The ROS produced by nickel ions also oxidize DNA, resulting in the elevation of 8-hydroxy-2′-deoxyguanosine in workers in nickel-related industries [11]. Also, Ni-induced ROS production has been linked with apoptosis and inflammatory signalling, including the JNK, Nrf2/HO-1 and TLR4/p38/CREB pathways [12,13,14]. Like other heavy metal toxicities, nickel toxicity is treated with chelating agents and synthetic antioxidants. Chelation treatment is, however, limited by redistribution of lethal metals, loss of essential metals, headache, nausea, hypertension, hepatotoxicity and nephrotoxicity [15]. Similarly, synthetic antioxidants are limited by negative health effects and restrictions [16]. Due to these limitations, medicinal plants with antioxidant properties are now being screened for natural antioxidants and other phytochemicals that could be valuable in the remediation or management of nickel toxicities [16]. The usefulness of Nigella sativa (N. sativa) was therefore explored in the present study. N. sativa is an annual herb in the Ranunculaceae family that is commonly grown in the Middle East, Middle Asia and North Africa [17]. The seed is a common food additive in Middle Eastern and South Western cuisines and has been used since antiquity in traditional medicine for treating illness and improving physical performance [18,19,20]. It has been reported in traditional pharmacopoeias against a wide range of ailments, including bronchitis, skin diseases, rheumatism, diarrhoea, headache, fever, diabetes, hypertension, obesity, amenorrhea, dysmenorrhea, asthma and fatigue [21,22,23,24]. The seed and oil are also used for the treatment of disorders of the nervous, cardiovascular, respiratory, digestive, excretory and immune systems [25]. Other pharmacological activities that have been reported in the seed include analgesic, immunomodulatory, anti-histaminic and anti-leukotriene effects [17, 26,27,28]. Furthermore, N. sativa possesses a wide range of biological and pharmacological activities against bacteria, fungi, viruses and parasitic microbes [29,30,31]. It is also used in the management of psoriasis and cancers [32]. Recently, N. sativa was reported to exert protection against several toxicants and to ease the side effects of drugs, including bromobenzene, tartrazine, cisplatin, thioacetamide and tramadol [33,34,35,36,37,38]. Potent antioxidants and anticancer agents in the plant, including thymoquinone, are well documented [39]. The healing power and therapeutic benefits of N. sativa have been attributed to the powerful antioxidants present in it [34, 39]. Based on the reported antioxidant properties of the plant, we investigated the possible ameliorative effect of ethanol extract of Nigella sativa (ENS) against nickel chloride-induced hepato-renal damage in a Wistar rat model. Additionally, we evaluated the extract for the presence of phenolics and other phytochemicals using high-performance liquid chromatography and gas chromatography-mass spectrometry.

Methods

Chemicals

Nickel chloride, 2-thiobarbituric acid, reduced glutathione and 5,5′-dithiobis(2-nitrobenzoic acid) were purchased from Sigma Chemical Co., St. Louis, MO, while ethanol and hydrogen peroxide were obtained from Merck KGaA, Darmstadt, Germany. 1-Chloro-2,4-dinitrobenzene was obtained from J.T. Baker Inc., Phillipsburg, NJ, USA.
Alanine aminotransferase (ALT), aspartate aminotransferase (AST), total protein, total bilirubin, creatinine, urea, sodium ion (Na+), potassium ion (K+) and chloride ion (Cl−) Cobas diagnostic kits were obtained from Roche Diagnostics GmbH, Mannheim, Germany. All other chemicals used were of analytical grade.

Preparation of ethanol extract of Nigella sativa seeds

N. sativa seeds were obtained from Al Khatim, Medina, Saudi Arabia. The seeds were milled and successively extracted for two days each with n-hexane, ethyl acetate and 70% ethanol. Briefly, 200 g of the milled Nigella sativa seeds were macerated in 1000 ml of n-hexane with constant shaking for forty-eight hours. The n-hexane extract was separated, and the residue was subsequently soaked in 1000 ml of ethyl acetate for another forty-eight hours. Finally, the residue remaining after the ethyl acetate extraction was soaked in 1000 ml of 70% ethanol for 48 h, and the filtrate from the ethanol extract was concentrated in a rotary evaporator at 40 °C and then dried to constant weight in an oven at 40 °C.

Animals and treatment

Sixteen female Wistar rats, with an average age of 7 weeks and an average body weight of 85 g, were obtained from Rawlab Farm Nigeria Enterprise, Ibadan, and housed four per cage with wood-shaving bedding in polypropylene cages under standard conditions at the Bells University of Technology animal husbandry facility, in conformity with the guide for the care and use of laboratory animals [40]. The rats were housed at room temperature and exposed to natural daily light-dark cycles. They were provided with water and standard rat chow ad libitum and were allowed to acclimate for two weeks before the commencement of treatment. The animals were assigned to four groups of four rats each and treated with deionized water, 2 mg/kg NiCl2, 50 mg/kg body weight ENS, or 2 mg/kg NiCl2 + 50 mg/kg body weight ENS. Animals were assigned to groups based on their body weight to ensure even distribution and eliminate variation in initial mean body weights among the groups. The dose of nickel chloride was selected based on its environmental relevance and toxicity in a previous study [41], while the dose of ENS was selected based on our preliminary study and its reported ameliorative effect [42]. The NiCl2 was injected intraperitoneally once every Friday, while ENS was administered daily by gavage throughout the 35-day study period. Animals were treated between 5.00 and 6.00 pm during the experiment. After the treatment schedule, the final weight of each animal was obtained before blood samples were collected from the retro-orbital plexus of each rat after a 12 h fast. The livers and kidneys were subsequently removed from test and control animals after they were euthanized by cervical dislocation; before euthanasia, the rats were anaesthetized with diethyl ether. The tissues were rinsed in ice-cold 1.15% KCl, blotted on filter paper and weighed. The organ-body weight ratio was subsequently calculated for each animal, as sketched below.

Serum chemistry and oxidative stress assessment

Blood samples collected into heparinized tubes were allowed to stand for two hours before centrifugation at 3000 g for 15 min. Total protein, ALT, AST, total bilirubin, creatinine, urea, Na+, K+ and Cl− were determined in the resulting plasma using Cobas diagnostic kits and a Cobas C311 chemistry autoanalyzer (Roche Diagnostics GmbH, Mannheim, Germany).
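The organ-body weight ratio mentioned above is simple arithmetic; a minimal sketch follows (the function name and sample values are illustrative only, not taken from the study's raw data):

```python
def relative_organ_weight(organ_g: float, terminal_body_g: float) -> float:
    """Relative organ weight (%) = organ weight / terminal body weight * 100."""
    return organ_g / terminal_body_g * 100.0

# Illustrative values only:
print(relative_organ_weight(0.78, 125.0))  # e.g. a kidney, ~0.62 %
```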
One part of the liver and the left kidney of each rat were homogenized in cold 50 mM Tris-HCl buffer in a Potter-Elvehjem homogeniser and centrifuged at 10000 g for 20 min at 4 °C. The supernatants obtained were used for the determination of oxidative stress parameters. Lipid peroxidation was evaluated by TBARS formation, measured as malondialdehyde (MDA), following the method of Esterbauer and Cheeseman [43]. Catalase (CAT) activity was measured by monitoring the decomposition of H2O2, as described by Aebi [44]. Glutathione-S-transferase (GST) activity was determined according to the method of Habig et al. [45], based on the conjugation of 1-chloro-2,4-dinitrobenzene to reduced glutathione at 340 nm. These absorbance-based calculations are sketched below. The other part of the liver and the right kidney of each animal were fixed in 4% phosphate-buffered formalin, dehydrated in graded ethanol and embedded in paraffin. Tissues were cut into 5 μm sections, mounted on clean slides and stained with haematoxylin-eosin dye. Pathological changes in both organs were examined by a trained pathologist who was blinded to the treatments.
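The three spectrophotometric assays described above reduce to Beer-Lambert arithmetic on the recorded absorbances. A hedged sketch follows: the extinction coefficients are the values conventionally used with these methods (1.56 × 10^5 M^-1 cm^-1 for the MDA-TBA adduct at 532 nm, 9.6 mM^-1 cm^-1 for the GSH-CDNB conjugate at 340 nm), not numbers stated in this paper, and catalase is expressed as the first-order rate constant of H2O2 decay, as in the Aebi method.

```python
import math

def mda_nmol_per_ml(a532: float, path_cm: float = 1.0) -> float:
    # MDA-TBA adduct, conventional epsilon = 1.56e5 M^-1 cm^-1 at 532 nm
    molar = a532 / (1.56e5 * path_cm)   # concentration in mol/L
    return molar * 1e6                  # -> nmol per mL

def gst_umol_min_ml(delta_a340_per_min: float, path_cm: float = 1.0) -> float:
    # GSH-CDNB conjugate, conventional epsilon = 9.6 mM^-1 cm^-1 at 340 nm
    return delta_a340_per_min / (9.6 * path_cm)  # umol/min per mL of sample

def catalase_k(a240_start: float, a240_end: float, dt_s: float) -> float:
    # First-order rate constant for H2O2 decomposition (Aebi-style assay)
    return (1.0 / dt_s) * math.log(a240_start / a240_end)

# Activities are normally normalized to the protein content of each homogenate.
```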
Phytochemical analysis

Phytochemical screening of ENS was performed following the standard methods described by Sofowora [46], Trease and Evans [47] and Harborne [48]. The concentrations of caffeic acid, gallic acid, p-coumaric acid, apigenin, quercetin and amentoflavone in ENS were determined using high-performance liquid chromatography. Briefly, 1.0 mg of ENS was dissolved in 1.0 ml of absolute methanol, and the resulting solution was filtered through a 0.45 μm Millipore membrane filter. HPLC analysis of ENS was done using an Agilent Technologies 1100 series instrument. The extract was separated on a Zorbax Eclipse XDB C8 column (150 × 4.6 mm, 5 μm) with the temperature set at 40 °C and a flow rate of 0.5 mL/min. Elution was performed with a mobile phase of acetonitrile and 0.2% acetic acid. The injection volume was 10 μL, and detection of the compounds was monitored at 257 nm. Compound identification was achieved by comparing the retention times and spectra of the compounds in the sample with those of reference standards.

Gas chromatography-mass spectrometry (GC-MS) analysis

Volatile compounds in ENS were detected by dissolving 1.0 mg of the sample in 1.0 ml of methanol; the solution was filtered with a micro-syringe, and a 1 μl aliquot was injected into a QP2010SE Shimadzu gas chromatograph-mass spectrometer (Shimadzu Co., Kyoto, Japan) equipped with a mass selective detector operating in 70 eV electron impact mode and a capillary column (30 m × 0.25 mm, film thickness 0.25 μm). The column oven temperature was programmed from 60 to 290 °C at a rate of 4 °C/min; the initial and final temperatures were maintained for 3 and 10 min, respectively. The mass scanning range was 45–700 m/z, while the ion source and interface temperatures were set at 230 and 250 °C, respectively. The carrier gas was helium at a flow rate of 3.22 mL/min and a pressure of 144.4 kPa. Compounds in ENS were identified by matching their mass spectra with those in the NIST computer data bank and by comparing the fragmentation patterns with those reported in the literature.

Diphenyl-1-picrylhydrazyl (DPPH) scavenging activity

The DPPH scavenging activity of ENS was evaluated based on the reduction of DPPH to diphenylpicrylhydrazine by antioxidants, as previously described by Dildar et al. [49]. Briefly, 3 mg of ENS was dissolved in 3 ml of methanol, and 100 μL of each concentration of ENS (10–500 μg ml−1) was mixed with 3 ml of DPPH solution (12 mg DPPH in 50 mL of methanol). The absorbance was read at 517 nm after 30 min using an SM7504UV spectrophotometer (Uniscope, UK). The DPPH scavenging activity was determined as:

$$ \mathrm{DPPH\ scavenging\ activity}\ (\%) = \frac{A_{\mathrm{b}} - A_{\mathrm{e}}}{A_{\mathrm{b}}} \times 100 $$

where $A_{\mathrm{b}}$ is the absorbance of DPPH in solution without the extract and $A_{\mathrm{e}}$ is the absorbance of DPPH in the presence of the extract. The concentration of ENS that scavenged 50% of the DPPH radicals (IC50) was calculated from the plot of percentage DPPH scavenging activity against sample concentration and is expressed in μg/mL of extract.

Total antioxidant capacity

Total antioxidant capacity was determined based on the reduction of Mo(VI) to Mo(V) at acidic pH, as described by Prieto et al. [50], with slight modification. Briefly, 1.0 ml of 1.0 mg/ml ENS was mixed with 1.0 ml of a reagent solution containing 4.0 mM (NH4)2MoO4, 28.0 mM Na3PO4 and 0.6 M H2SO4. The mixture was incubated for 90 min at 95 °C in a shaking water bath, thereafter allowed to cool to room temperature, and the absorbance was read at 695 nm against a blank containing ethanol instead of the extract. The total antioxidant capacity was calculated from an ascorbic acid standard curve and expressed as μg equivalents of ascorbic acid per ml.

Statistical analysis

Data obtained from all the test and control rats were analyzed using SPSS version 17 (SPSS Inc., Chicago, IL) and presented as mean ± SEM. One-way analysis of variance (ANOVA) and Duncan's multiple range test were used to determine intergroup differences and for mean separation, respectively. The level of significance was set at p < 0.05. A computational sketch of these in vitro antioxidant and statistical calculations follows.
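The sketch below works through the DPPH percentage, the IC50 read off the scavenging curve, the ascorbic acid standard curve for total antioxidant capacity, and the one-way ANOVA. All absorbance readings are made-up placeholders; linear interpolation and a least-squares line are the usual choices, but the paper does not state its exact curve-fitting procedure, and Duncan's multiple range test has no standard SciPy implementation, so only the ANOVA step is shown.

```python
import numpy as np
from scipy import stats

# --- DPPH scavenging (%) at 517 nm (placeholder readings) ---
a_blank = 0.82                                   # DPPH solution without extract
conc = np.array([10, 50, 100, 250, 500])         # extract, ug/mL
a_ext = np.array([0.74, 0.60, 0.45, 0.28, 0.15])
scav = (a_blank - a_ext) / a_blank * 100

# IC50 by linear interpolation on the scavenging curve
ic50 = np.interp(50.0, scav, conc)

# --- Total antioxidant capacity from an ascorbic acid standard curve ---
std_c = np.array([25, 50, 100, 200])             # ug/mL ascorbic acid
std_a = np.array([0.11, 0.21, 0.43, 0.85])       # A695 (placeholders)
slope, intercept, *_ = stats.linregress(std_a, std_c)
tac_aae = slope * 0.80 + intercept               # sample A695 = 0.80 (placeholder)

# --- One-way ANOVA across the four treatment groups (placeholder data) ---
rng = np.random.default_rng(0)
groups = [rng.normal(m, 1.0, 4) for m in (10, 14, 10, 11)]
f_stat, p_val = stats.f_oneway(*groups)
# Post-hoc mean separation in the paper used Duncan's multiple range test
# (available in, e.g., R's agricolae package).

print(f"IC50 ~ {ic50:.0f} ug/mL, TAC ~ {tac_aae:.0f} ug/mL AAE, ANOVA p = {p_val:.3f}")
```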
Results

The organ weights and relative organ weight ratios for the liver and kidney of test and control animals are shown in Table 1. There was no significant difference in liver weight, kidney weight or relative liver weight among the treatment groups. However, relative kidney weight was significantly (p < 0.05) reduced, by 13.4%, in the NiCl2-treated group when compared with the control group. Administration of ENS, either alone or in combination with NiCl2, resulted in relative kidney weights similar to the control.

Table 1 Organ weight and relative organ weight in test and control rats

The results obtained for the liver biomarkers following concurrent exposure to NiCl2 and ENS are presented in Table 2. AST activity was significantly (p < 0.05) elevated, by 111.3%, in animals exposed to NiCl2 when compared to the control. In contrast, ENS reduced this elevation to 73.6%. AST in the group given ENS alone was not significantly different from the control. Alanine aminotransferase (ALT) was increased by 44.4% in the NiCl2-treated rats when compared to the control; the increase was reduced to 20% in the group exposed simultaneously to ENS and NiCl2. ALT activities were similar in the group exposed to ENS alone and the control. Total protein (TP) and total bilirubin (TB) were not significantly different across the groups, although both parameters were elevated in the NiCl2-treated group.

Table 2 Liver function markers in the serum of test and control animals

The effect of ENS on NiCl2-induced alteration of kidney health markers is presented in Table 3. Urea concentration was increased by 22.5% in the NiCl2-treated animals when compared to the control, while creatinine was markedly (p < 0.05) increased, by 74.1%, compared to the control value. Both markers were, however, reversed towards the control value in the group treated with ENS and NiCl2; the reversal was more pronounced (p < 0.05) for creatinine. Na+ and Cl− were significantly (p < 0.05) elevated in rats administered NiCl2, by 15.4% and 23.2% respectively. In contrast, simultaneous administration of NiCl2 and ENS reduced the rises in Na+ and Cl− concentration to 10% and 2% respectively; the reduction was significant (p < 0.05) only for Cl−. Concentrations of creatinine, urea, Na+ and Cl− in the group given ENS alone were not significantly different from the control. Similarly, plasma K+ was not significantly different across the groups, although a decrease in K+ was observed in the NiCl2-treated group.

Table 3 Kidney function markers and electrolytes in the serum of test and control animals

The results obtained for lipid peroxidation and antioxidant enzymes in test and control animals are presented in Table 4. Malondialdehyde (MDA) level was increased by 41.5% in the liver of NiCl2-treated rats when compared with the control, while kidney MDA of treated rats was significantly (p < 0.05) increased, by 42.3%, compared with the control. Simultaneous administration of ENS and NiCl2 reduced the rises in liver and kidney MDA to 34% and 3.3% respectively; the reduction was significant (p < 0.05) only in the kidney. MDA levels in both organs of the group treated with ENS alone were not significantly different from the control. Catalase activity was significantly (p < 0.05) decreased, by 69.3%, in the liver of rats administered NiCl2 when compared to the control, while a 40.1% decrease in kidney catalase activity was observed in the same animals. In the co-treatment group given ENS and NiCl2, the decrease in liver catalase activity was limited (p < 0.05) to 40.1%, and the decrease in kidney catalase activity to 8.71%. Catalase activities in both organs of rats treated with ENS alone were not significantly different from the control. Liver GST was significantly (p < 0.05) decreased, by 23.1%, in the animals treated with NiCl2 when compared to the control; co-treatment with ENS limited the decrease in liver GST activity to 15.9%. In contrast to liver GST, NiCl2 significantly (p < 0.05) increased kidney GST activity, by 113.7%, when compared to the control, and ENS markedly (p < 0.05) reduced this NiCl2-induced rise to 14.7%. Liver and kidney GST activities were similar in the control group and the group exposed to ENS alone.

Table 4 Lipid peroxidation and antioxidant enzyme activities in the liver and kidney of test and control animals

The results of the histopathological evaluation of test and control animals are presented in Figs. 1 and 2. Control rats and those fed ENS alone had normal and intact hepatic histological architecture (A and B), while NiCl2-induced alteration of hepatic architecture was evident as hepatocellular necrosis and atrophy of hepatic cords with Kupffer cell hyperplasia (C). However, simultaneous exposure to ENS and NiCl2 resulted in only mild atrophy of hepatic cords (D). The renal histopathology presented in Fig. 2
revealed normal architecture in the groups given water and ENS (A and B), while treatment with NiCl2 led to tubular epithelial degeneration and necrosis (C), as well as nephrosis with casts (D) and congestion of glomerular capillaries (E). ENS prevented these lesions, as evident from the normal renal architecture observed in the co-treated group (F).

Fig. 1 Ameliorative effect of ENS on NiCl2-induced alteration of hepatic histo-architecture in rats. Normal liver histology in the groups treated with water (a) and 50 mg/kg ENS alone (b). Hepatocellular necrosis (black arrow) and atrophy of hepatic cords (arrowhead) with Kupffer cell hyperplasia (blue arrow) in the group that received NiCl2 (c). Mild atrophy of hepatic cords (arrowhead) in the group that received a combination of ENS and NiCl2 (d).

Fig. 2 Protective effect of ENS on NiCl2-induced alteration of renal histo-architecture in rats. Normal renal histology in the groups treated with water (a) and 50 mg/kg ENS alone (b). Tubular epithelial degeneration and necrosis (c), nephrosis with casts (d) and congestion of glomerular capillaries (e) in the group injected (i.p.) with NiCl2. Normal renal histology in the group that received a combination of ENS and NiCl2 (f).

The results of the phytochemical screening of ENS, presented in Table 5, revealed that it is very rich in phenolic compounds, alkaloids, flavonoids, saponins and reducing sugars. Deoxy sugars, tannins and terpenoids were also detected in ENS. However, anthraquinones were not detected in ENS in the current study. The results of the DPPH scavenging and total antioxidant capacity assays, presented in Table 6, showed that ENS has a DPPH scavenging IC50 of 145 ± 0.40 μg/mL, while the total antioxidant capacity was 186.90 ± 0.40 μg/ml AAE.

Table 5 Qualitative phytochemical analysis of the ethanol extract of Nigella sativa

Table 6 In vitro antioxidant activities of the ethanol extract of Nigella sativa

The HPLC chromatograms and the quantities of some phenolic compounds present in ENS are shown in Fig. 3 and Table 7, respectively. The concentrations of the compounds follow the order quercetin > amentoflavone > apigenin > gallic acid > caffeic acid > p-coumaric acid. The GC-MS analysis of ENS, presented in Table 8, revealed the presence of 33 bioactive compounds. The major phytochemicals detected under our experimental conditions included 9Z,12Z-octadecadienoic acid methyl ester (27.8%), cyclohexanol, 5-methyl-2-(1-methylethyl)-, (1.alpha.,2.beta.,5.alpha.)-(±)- (22%), 9E-octadecenoic acid methyl ester (15.60%), eucalyptol (10%) and hexadecanoic acid methyl ester (10%). Other beneficial phytochemicals detected in ENS were levomenthol (2.19%), cis-4-methoxythujane (1.70%), 9Z-hexadecenoic acid methyl ester (1.46%), butylated hydroxytoluene (0.5%) and p-cymene (0.2%).

Fig. 3 Chromatograms of flavonoid standards (a) and the crude ethanol extract of Nigella sativa (b)

Table 7 Concentrations of some phenolic antioxidants detected in the ethanol extract of Nigella sativa

Table 8 Compounds detected in the ethanol extract of Nigella sativa by GC-MS

Discussion

The liver is a target of Ni toxicity. Clinical and experimental data indicate that Ni toxicity is characterized by alteration of the liver function profile [13, 51]. As evident in this study, the elevated ALT and AST observed in Ni-treated animals could be due to the induction of hepatocellular injury. Both enzymes are localized in hepatocytes but are rapidly released into the bloodstream upon hepatic membrane and hepatocellular damage.
Changes in the histo-architecture of the liver, including hepatocellular necrosis, atrophy of hepatic cords and Kupffer cell hyperplasia, observed in Ni-treated rats confirmed the toxicity of NiCl2 to liver cells. Similar lesions have been observed in mouse and fish models of nickel toxicity [7, 52]. In contrast, ENS acted antagonistically with NiCl2 to decrease plasma transaminase levels and to ameliorate the Ni-induced hepatic lesions, as evident from the only mild atrophy observed in the animals co-exposed to NiCl2 and ENS. Creatinine and urea are by-products of cellular metabolism that are mainly excreted through the kidneys. Impairment of kidney function renders the kidney unable to excrete both compounds efficiently, resulting in their elevation in the plasma. Therefore, the elevation of both products in the group treated with NiCl2 could be due to the nephrotoxic effect of NiCl2 on renal cells. Earlier work has shown that NiCl2 is a nephrotoxin, and a similar elevation of creatinine and urea was observed in mice [53]. Renal impairment often results in perturbation of electrolyte balance. NiCl2-induced perturbation of electrolyte balance in the present study was characterized by increased plasma Na+ and Cl− with a concomitant decrease in K+. The cellular mechanism by which nickel affects electrolyte balance is not completely understood, but it may be secondary to its effects on mitochondria, including depletion of ATP, which may impair active transport of ions [54, 55]. Also, NiCl2 may alter tubular permeability or interfere with transporters including Na+/K+-ATPase. For instance, Liapi et al. [56] and Maiti et al. [57] reported that NiCl2 inhibits Na+/K+-ATPase in erythrocytes and in the brains of rats and fish. Inhibition of Na+/K+-ATPase would decrease sodium reabsorption and could, in turn, elevate Cl−, since Cl− is co-transported with Na+. The evidence for NiCl2-induced histological damage to renal cells was confirmed by the presence of tubular epithelial degeneration and necrosis, as well as nephrosis with casts and congestion of glomerular capillaries, in treated rats. However, ENS protected the kidney against NiCl2-induced renal toxicity, as evident from the reversal of kidney function parameters and electrolyte imbalance towards the control, and from the apparently normal renal histo-architecture in the animals co-exposed to NiCl2 and ENS. This suggests that ENS exerted a protective role against NiCl2 treatment in the kidney. Although the mechanism by which Ni ions cause tissue damage has not been fully elucidated, accumulating evidence suggests that ROS generation plays a significant role in nickel toxicity [58]. The result of the present study adds to the body of evidence in favour of an association between nickel exposure and oxidative stress. Increased MDA in the NiCl2-treated animals is indicative of lipid peroxidation. Ni ions initiate free radical generation through redox cycling that occurs when Ni2+ binds to proteins [58]. In addition, nickel may facilitate bioaccumulation of iron, which may partake in the Fenton reaction and enhance the production of radicals, including hydroxyl radicals, that promote membrane lipid peroxidation and oxidative stress [2]. Lipid peroxidation may lead to perturbation of membrane function, loss of enzyme activity and alteration of the antioxidant defence system. This may be responsible for the alteration of the endogenous antioxidants observed in this study, which was characterized by inhibition of liver and kidney catalase as well as liver GST, whereas renal GST activity was enhanced.
A similar result was obtained by Misra et al. [59]. Catalase plays an important role in maintaining physiological levels of hydrogen peroxide and eliminating peroxides generated during lipid peroxidation. Inhibition of catalase in the present study could be due to consumption of the enzyme in eliminating H2O2 or to deactivation of the enzyme by free radicals; this could enhance ROS-induced oxidative damage in both organs. The differential response of GST in the liver and kidney suggests that the effect of NiCl2 on GST activity is complex and tissue-specific. GSTs are a multifunctional family of enzymes that protect cells against oxidative damage and are involved in the detoxification of several electrophilic xenobiotics, including environmental toxicants and products of oxidative membrane lipid peroxidation [60, 61]. Therefore, the decreased liver GST activity observed in the present study could be due to inhibition of the enzyme by ROS, which may reflect inefficient detoxification and protection against Ni-induced oxidative stress. The kidney is more sensitive to nickel toxicity because Ni ions accumulate and are retained more in the kidney [2]. Therefore, the increase in kidney GST could be an adaptive response against enhanced oxidative stress induced by Ni2+. Alternatively, it could be suggestive of the onset of kidney disease: certain isoforms of GST are elevated at the early stage and during the progression of kidney diseases [62, 63]. The increase in renal GST observed in this study may also be due to NiCl2-induced damage to renal tubules, since increased renal GST-α and GST-π activities are associated with proximal and distal tubular damage in experimental drug and heavy metal models [64, 65]. Interestingly, ENS relieved the liver and kidney of NiCl2-induced oxidative stress, as manifested by the attenuation of lipid peroxidation and the reversal of both catalase and GST activities in the liver and kidney of the animals co-exposed to ENS and NiCl2. The high antioxidant capacity and free radical scavenging activity of ENS may be responsible for the fortification of antioxidant defences against nickel chloride-induced oxidative stress, and for the observed reduction of lipid peroxidation as well as the improvement in catalase and GST activities in both organs. Phytochemical screening of ENS showed that it contains beneficial phytochemicals, including polyphenols and flavonoids. Polyphenols and flavonoids such as gallic acid, quercetin, amentoflavone, apigenin, caffeic acid and p-coumaric acid detected in ENS could account in part for its strong antioxidant capacity and for the inhibition of nickel-induced oxidative stress and hepato-renal damage in this study. Some of these compounds have been demonstrated to modify oxidative stress-related pathways altered by environmental toxicants, including nickel [66, 67]. Also, quercetin forms complexes with nickel and accelerates its removal from the body [68]. Some of the volatile compounds detected in ENS by GC-MS have been reported to have valuable biological effects and may have contributed to the amelioration of the Ni-induced oxidative hepato-renal injury observed in the current study. For instance, the oxygenated terpene eucalyptol exerted hepatoprotection against TCDD and dexamethasone by inhibiting inflammation and oxidative stress [69, 70]. Similarly, levomenthol exhibited gastroprotective action against ethanol-induced ulcers through its anti-inflammatory, anti-apoptotic and antioxidant effects [71].
The fatty acid methyl esters that are abundant in ENS, namely 9,12-octadecadienoic acid (Z,Z)-methyl ester, 9-octadecenoic acid (E)-methyl ester and hexadecanoic acid methyl ester, are potent free radical scavengers, antioxidants and anticancer agents [72,73,74]. The antioxidant properties of other compounds, including p-cymene, have also been documented [75]. Therefore, synergism between the different antioxidants present in ENS could be responsible for the high antioxidant capacity and the amelioration of nickel chloride-induced hepato-renal damage in this study.

Conclusion

Collectively, our data support the involvement of oxidative stress in nickel chloride-induced hepato-renal damage. Ethanol extract of Nigella sativa ameliorated the oxidative damage in the liver and protected the kidney by evoking an antioxidant enzyme response and attenuating lipid peroxidation. This suggests that the ameliorative effect of ethanol extract of Nigella sativa may be related to its antioxidant properties.

All data obtained during this study are included in this published article.

Abbreviations

ENS: Ethanol extract of Nigella sativa; HPLC: High-performance liquid chromatography; GC-MS: Gas chromatography-mass spectrometry; TCDD: 2,3,7,8-tetrachlorodibenzo-p-dioxin; DPPH: Diphenyl-1-picrylhydrazyl; AAE: Ascorbic acid equivalent; AST: Aspartate transaminase; ALT: Alanine transaminase; TP: Total protein; TB: Total bilirubin; MDA: Malondialdehyde; GST: Glutathione-S-transferase; NIST: National Institute of Standards and Technology; LWT: Liver weight; RLWT: Relative liver weight; KWT: Kidney weight; RKWT: Relative kidney weight; SPSS: Statistical Package for the Social Sciences

References

1. IARC. A review of human carcinogens, part C: arsenic, metals, fibres and dust. In: IARC monographs on the evaluation of carcinogenic risks to humans, vol. 100C. Lyon: World Health Organization; 2012. p. 169–211.
2. Cempel M, Janicka K. Distribution of nickel, zinc, and copper in rat organs after oral administration of nickel (II) chloride. Biol Trace Elem Res. 2002;90(1-3):215–60.
3. Agency for Toxic Substances and Disease Registry. Toxicological profile for nickel (update). Atlanta, GA: Public Health Service, U.S. Department of Health and Human Services; 1997.
4. Kamerud KL, Hobbie KA, Anderson KA. Stainless steel leaches nickel and chromium into foods during cooking. J Agric Food Chem. 2013;61(39):9495–501.
5. Talio MC, Marta OL, Fernánde P. Determination of nickel in cigarette smoke by molecular fluorescence. Microchem J. 2011;99(2):486–91.
6. Torres F, Graças M, Melo M, Tosti A. Management of contact dermatitis due to nickel allergy: an update. Clin Cosmet Investig Dermatol. 2009;2:39–48.
7. Liu G, Sun L, Pan A, Zhu M, Zi L, Wang Z, Liu X, Ye XW, et al. Nickel exposure is associated with the prevalence of type 2 diabetes in Chinese adults. Int J Epidemiol. 2015;44:240–8.
8. Forgacs Z, Massányi P, Lukac N, Somosy Z. Reproductive toxicology of nickel – review. J Environ Sci Health A Tox Hazard Subst Environ Eng. 2012;47(9):1249–60.
9. Das KK, Das SN, Dhundas SA. Nickel, its adverse health effects and oxidative stress. Indian J Med Res. 2008;128:412–25.
10. Dahmen-Ben MI, Bellassoued K, Athmouni K, Naifar M, Chtourou H, Ayadi H, Makni-Ayadi F, Sayadi S, El Feki A, Dhouib A. Protective effect of Dunaliella sp., lipid extract rich in polyunsaturated fatty acids, on the hepatic and renal toxicity induced by nickel in rats. Toxicol Mech Methods. 2016;26(3):221–30.
11. Cheng Z, Cheng N, Shi D, Ren X, Gan T, Bai Y, et al. The relationship between Nkx2.1 and DNA oxidative damage repair in nickel smelting workers: Jinchang cohort study. Int J Environ Res Public Health. 2019. https://doi.org/10.3390/ijerph16010120.
12. Liu CM, Ma JQ, Liu SS, Feng ZJ, Wang AM. Puerarin protects mouse liver against nickel-induced oxidative stress and inflammation associated with the TLR4/p38/CREB pathway. Chem Biol Interact. 2016;243:29–34.
13. Liu CM, Zheng GH, Ming QL, Chao C, Sun JM. Sesamin protects mouse liver against nickel-induced oxidative DNA damage via the PI3K-Akt pathway. J Agric Food Chem. 2013;61(5):1146–54.
14. Zheng GH, Liu CM, Sun JM, Cheng C. Nickel-induced oxidative stress and apoptosis in Carassius auratus liver by JNK pathway. Aquat Toxicol. 2014. https://doi.org/10.1016/j.aquatox.2013.12.015.
15. Flora SJ, Pachauri V. Chelation in metal intoxication. Int J Environ Res Public Health. 2010;7:2745–88.
16. Elangovan P, Pari L. Ameliorating effects of troxerutin on nickel-induced oxidative stress in rats. Redox Rep. 2013;18(6):224–32.
17. Ahmad A, Husain A, Mujeeb M, Khan SA, Najmi AK, Siddique NA. A review on the therapeutic potential of Nigella sativa: a miracle herb. Asian Pac J Trop Biomed. 2013;3(5):337–52.
18. Nakasugi T, Murakawa T, Shibuya K, Morimoto M. Deodorizing substance in black cumin (Nigella sativa L.) seed oil. J Oleo Sci. 2017;66(8):877–82.
19. Majdalawieh AF, Fayyad MW. Recent advances on the anti-cancer properties of Nigella sativa, a widely used food additive. J Ayurveda Integr Med. 2016;7:173–80.
20. Zohary D, Hopf M. Domestication of plants in the old world. 3rd ed. New York: Oxford University Press; 2000. p. 206.
21. Forouzanfar F, Bazzaz BSF, Hosseinzadeh H. Black cumin (Nigella sativa) and its constituent (thymoquinone): a review on antimicrobial effects. Iran J Basic Med Sci. 2014;17(12):929–38.
22. Yarnell E, Abascal K. Nigella sativa: holy herb of the Middle East. Altern Complement Ther. 2001;17:99–105.
23. Namazi N, Larijani B, Ayati MH, Abdollahi M. The effects of Nigella sativa L. on obesity: a systematic review and meta-analysis. J Ethnopharmacol. 2018;219(12):173–81.
24. Koshak A, Koshak E, Heinrich M. Medicinal benefits of Nigella sativa in bronchial asthma: a literature review. Saudi Pharm J. 2017;25(8):1130–6.
25. Tavakkoli A, Mahdian V, Razavi BM, Hosseinzadeh H. Review on clinical trials of black seed (Nigella sativa) and its active constituent, thymoquinone. J Pharmacopuncture. 2017;20(3):179–93.
26. Seghatoleslam M, Alipour F, Shafieian R, Hassanzadeh Z, Edalatmanesh MA, Sadeghnia HR, et al. The effects of Nigella sativa on neural damage after pentylenetetrazole induced seizures in rats. J Tradit Complement Med. 2016;6:262–8.
27. Perveen T, Haider S, Zuberi NA, Saleem S, Sadaf S, Batool Z. Increased 5-HT levels following repeated administration of Nigella sativa L. (black seed) oil produce antidepressant effects in rats. Sci Pharm. 2014;82(1):161–70.
28. Kanter M, Coskun O, Uysal H. The antioxidative and antihistaminic effect of Nigella sativa and its major constituent, thymoquinone, on ethanol-induced gastric mucosal damage. Arch Toxicol. 2006;80(4):217–24.
29. Oyero OG, Toyama M, Mitsuhiro N, Onifade AA, Hidaka A, Okamoto M. Selective inhibition of hepatitis C virus replication by alpha-zam, a Nigella sativa seed formulation. Afr J Tradit Complement Altern Med. 2016;13(6):144–8.
30. Shokri H. A review on the inhibitory potential of Nigella sativa against pathogenic and toxigenic fungi. Avicenna J Phytomed. 2016;6(1):21–33.
Ramadan MF. Nutritional value and applications of Nigella sativa essential oil: a mini-review. J Essent Oil Res. 2015;27:271–5.
Dwarampudi LP, Palaniswamy D, Nithyanantham M, Raghu PS. Antipsoriatic activity and cytotoxicity of ethanolic extract of Nigella sativa seeds. Pharmacogn Mag. 2012;8(32):268–72.
Al-Seeni MN, El Rabey HA, Al-Hamed AM, Zamazami MA. Nigella sativa oil protects against tartrazine toxicity in male rats. Toxicol Rep. 2018;5:146–55.
Farooqui Z, Ahmed F, Rizwan S, Shahid F, Khan AA, Khan F. Protective effect of Nigella sativa oil on cisplatin-induced nephrotoxicity and oxidative damage in rat kidney. Biomed Pharmacother. 2017;85:7–15.
Ahmad A, Al-Abbasi FA, Sadath S, Ali SS, Abuzinadah MF, Alhadrami HA, et al. Ameliorative effect of camel's milk and Nigella sativa oil against thioacetamide-induced hepatorenal damage in rats. Pharmacogn Mag. 2018;14:27–35.
Hamed MA, El-Rigal NS, Ali SA. Effects of black seed oil on the resolution of hepato-renal toxicity induced by bromobenzene in rats. Eur Rev Med Pharmacol Sci. 2013;17(5):569–81.
Abdel-Zaher AO, Abdel-Rahman MS, Elwasei FM. Protective effect of Nigella sativa oil against tramadol-induced tolerance and dependence in mice: role of nitric oxide and oxidative stress. Neurotoxicology. 2011;32(6):725–33.
Hassan AS, Ahmed JH, Al-Haroon SS. A study of the effect of Nigella sativa (black seeds) in isoniazid (INH)-induced hepatotoxicity in rabbits. Indian J Pharmacol. 2012;44(6):678–82.
Bourgou S, Pichette A, Marzouk B, Legault J. Bioactivities of black cumin essential oil and its main terpenes from Tunisia. South Afr J Bot. 2010;76:210–6.
National Institutes of Health (NIH). Guide for the care and use of laboratory animals. NIH Publication No. 85–23; 1985.
Gathwan KH, Al-Karkhi IHT, EA JAL-M. Hepatic toxicity of nickel chloride in mice. Res Chem Intermed. 2013;39:2537–42.
El-Sayed WM. Upregulation of chemoprotective enzymes and glutathione by Nigella sativa (black seed) and thymoquinone in CCl4-intoxicated rats. Int J Toxicol. 2011;30(6):707–14.
Esterbauer H, Cheeseman KH. Determination of aldehydic lipid peroxidation products: malonaldehyde and 4-hydroxynonenal. Methods Enzymol. 1990;186:407–21.
Aebi HE. Catalase. In: Bergmeyer HU, editor. Methods of enzymatic analysis. Weinheim: Verlag Chemie; 2011. p. 273.
Habig WH, Pabst MJ, Jakoby WB. Glutathione S-transferases: the first enzymatic step in mercapturic acid formation. J Biol Chem. 1974;249:7130–9.
Sofowora A. Medicinal plants and traditional medicine in Africa. New York: Wiley; 1993. p. 97–145.
Trease GE, Evans WC. Pharmacognosy. 11th ed. London: Bailliere Tindall; 1989. p. 60–75.
Harborne JB. Phytochemical methods: a guide to modern techniques of plant analysis. London: Chapman and Hall; 1973. p. 279.
Dildar A, Muhammad MK, Ramsha S. Comparative analysis of phenolics, flavonoids, and antioxidant and antibacterial potential of methanolic, hexanic and aqueous extracts from Adiantum caudatum leaves. Antioxidants. 2015;4:394–409.
Prieto P, Pineda M, Aguilar M. Spectrophotometric quantitation of antioxidant capacity through the formation of a phosphomolybdenum complex: specific application to the determination of vitamin E. Anal Biochem. 1999;269:337–41.
El-Shafei HM. Assessment of liver function among nickel-plating workers in Egypt. East Mediterr Health J. 2011;6:490–4.
Latif A, Ali M, Iqbal F. Histopathological responses of liver and kidney of a freshwater cyprinid, Labeo rohita, to nickel sulphate. Pak J Zool. 2014;46(1):37–44.
Kadi IE, Dah Douh F. Vitamin C pretreatment protects from nickel-induced acute nephrotoxicity in mice. Arh Hig Rada Toksikol. 2016;67:210–5.
He M, Lu Y, Xu S, Mao ZL, Duan W, et al. MiRNA-210 modulates a nickel-induced cellular energy metabolism shift by repressing the iron-sulfur cluster assembly proteins ISCU1/2 in Neuro-2a cells. Cell Death Dis. 2014. https://doi.org/10.1038/cddis.2014.60.
Sousa CA, Soares HM, Soares EV. Nickel oxide nanoparticles trigger caspase- and mitochondria-dependent apoptosis in the yeast Saccharomyces cerevisiae. Chem Res Toxicol. 2019;32:245.
Liapi C, Zarros A, Theocharis S, Voumvourakis K, Anifantaki F, Gkrouzman E, Mellios Z, Skandali N, Al-Humadi H, Tsakiris S. Short-term exposure to nickel alters the adult rat brain antioxidant status and the activities of crucial membrane-bound enzymes: neuroprotection by L-cysteine. Biol Trace Elem Res. 2011;143(3):1673–81.
Maiti AK, Saha NC, Paul G, Dhara K. Mitochondrial respiratory chain inhibition and Na+/K+-ATPase dysfunction are determinant factors modulating the toxicity of nickel in the brain of Indian catfish Clarias batrachus L. Interdiscip Toxicol. 2018;1(2):306–15.
Das KK, Reddy RC, Bagoji IB, Das S, Bagali S, Mullur L, Khodnapur JP, Biradar MS. The primary concept of nickel toxicity – an overview. J Basic Clin Physiol Pharmacol. 2019;30(2):141–52.
Misra M, Rodriguez RE, Kasprzak KS. Nickel induced lipid peroxidation in the rat: correlation with nickel effect on antioxidant defence systems. Toxicology. 1990;64(1):1–17.
Allocati N, Masulli M, Di Ilio C, Federici L. Glutathione transferases: substrates, inhibitors and pro-drugs in cancer and neurodegenerative diseases. Oncogenesis. 2018. https://doi.org/10.1038/s41389-017-0025-3.
Chang YC, Liu FP, Ma X, Li M-M, Li R, Li C-W, et al. Glutathione S-transferase A1 – a sensitive marker of alcoholic injury on primary hepatocytes. Hum Exp Toxicol. 2017;36(4):386–94.
Bieniaś B, Sikora P. Potential novel biomarkers of obstructive nephropathy in children with hydronephrosis. Dis Markers. 2018. https://doi.org/10.1155/2018/1015726.
Tesauro M, Nisticò S, Noce A, Tarantino A, Marrone G, Costa A, et al. The possible role of glutathione-S-transferase activity in diabetic nephropathy. Int J Immunopathol Pharmacol. 2015;28(1):129–33.
McMahon GM, Waikar SS. Biomarkers in nephrology. Am J Kidney Dis. 2013;62(1):165–78.
Wright LS, Kornguth SE, Oberley TD, Siegel FL. Effects of lead on glutathione-S-transferase expression in rat kidney: a dose-response study. Toxicol Sci. 2015;46:254–9.
An X, Zhou A, Yang Y, Yue W, Xin R, Tian C, Wu Y. Protective effects of gallic acid against NiSO4-induced toxicity through down-regulation of the Ras/ERK signalling pathway in BEAS-2B cells. Med Sci Monit. 2016;22:3446–54.
Liu Y, Guo M. Studies on transition metal-quercetin complexes using electrospray ionization tandem mass spectrometry. Molecules. 2015;20(5):8583–94.
de Castilho TS, Matias TB, Nicolini KP, Nicolini J. Study of interaction between metal ions and quercetin. Food Sci Human Wellness. 2018;7:215–9.
Ciftci O, Ozdemir I, Tanyildizi S, Yildiz S, Oguzturk H. Antioxidative effects of curcumin, β-myrcene and 1,8-cineole against 2,3,7,8-tetrachlorodibenzo-p-dioxin-induced oxidative stress in rats liver. Toxicol Ind Health. 2011;27(5):447–53.
Santos FA. 1,8-Cineole protects against liver failure in an in-vivo murine model of endotoxemic shock. J Pharm Pharmacol. 2001;53(4):505–11.
Rozza AL, Meira FF, Souza AR, Pellizzon CH. The gastroprotective effect of menthol: involvement of anti-apoptotic, antioxidant and anti-inflammatory activities. PLoS One. 2014. https://doi.org/10.1371/journal.pone.0086686.
Pinto MEA, Araújo SG, Morais MI, Nívea PSÁ, Caroline ML, Rosa CA, et al. Antifungal and antioxidant activity of fatty acid methyl esters from vegetable oils. An Acad Bras Cienc. 2017;89(3):1671–81.
Dailey OD, Wang X, Chen F, Guohui H. Anticancer activity of branched-chain derivatives of oleic acid. Anticancer Res. 2011;31:3165–70.
Yu FR, Lian XZ, Guo HY, McGuire PM, Li RD, Wang R, et al. Isolation and characterization of methyl esters and derivatives from Euphorbia kansui (Euphorbiaceae) and their inhibitory effects on the human SGC-7901 cells. J Pharm Pharm Sci. 2005;8:528–35.
de Oliveira TM, Fonseca de Carvalho RB, Fernandes da Costa IH, Lopes de Oliveira GA, de Souza AA, de Lima SG, et al. Evaluation of p-cymene, a natural antioxidant. Pharm Biol. 2015;53:423–4.

Voucher specimen
A voucher specimen of this material has been identified by and deposited in the University of Ibadan herbarium, but a code has not been allocated to it; therefore the voucher code is not applicable.

Funding
This research did not receive a grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information
Department of Chemical and Food Sciences, Bells University of Technology, Ota, Nigeria: Kazeem Akinyinka Akinwumi, Oreoluwa Oluwafunke Olaniyan and Yusuf Yusuf Umar.
Department of Veterinary Pathology, University of Ibadan, Ibadan, Nigeria: Afusat Jagun Jubril.

Authors' contributions
KA: conception and design of the work, laboratory analysis, data acquisition, statistical analysis and manuscript writing. JA: conception and design of the work, laboratory analysis, data acquisition and manuscript writing. OO: laboratory analysis and data acquisition. UY: laboratory analysis and data acquisition. The author(s) read and approved the final manuscript.

Correspondence to Afusat Jagun Jubril.

Ethics approval
All experiments were carried out according to recommendations of the ethical conditions approved by the Department of Chemical and Food Sciences, Bells University of Technology, Ota, Nigeria, in conformity with standard ethics for handling and care of experimental animals [40].

Competing interests
Authors state no conflict of interests.

Citation: Akinwumi, K.A., Jubril, A.J., Olaniyan, O.O. et al. Ethanol extract of Nigella sativa has antioxidant and ameliorative effect against nickel chloride-induced hepato-renal injury in rats. Clin Phytosci 6, 64 (2020). https://doi.org/10.1186/s40816-020-00205-9

Keywords: Liver and kidney
A thermodynamic study of the two-dimensional pressure-driven channel flow

Weinan E (1) and Jianchun Wang (2)
(1) Department of Mathematics and Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544
(2) The Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544, United States

Discrete & Continuous Dynamical Systems - A, August 2016, 36(8): 4349-4366. doi: 10.3934/dcds.2016.36.4349
Received May 2015; Revised November 2015; Published March 2016

Abstract: The instability of the two-dimensional Poiseuille flow in a long channel and the subsequent transition are studied using a thermodynamic approach. The idea is to view the transition process as an initial value problem with the initial condition being Poiseuille flow plus noise, which constitutes our ensemble. Using the mean energy of the velocity fluctuation and the skin friction coefficient as the macrostate variables, we analyze the transition process triggered by initial noises of different amplitudes. A first-order transition is observed at the critical Reynolds number $Re_* \sim 5772$ in the limit of zero noise. An action function, which relates the mean energy to the noise amplitude, is defined and computed. The action function depends only on the Reynolds number, and represents the cost for the noise to trigger a transition from the laminar flow. The correlation function of the spatial structure is also analyzed.

Keywords: action, subcritical transition, phase transition, Poiseuille flow, statistical mechanics.
Mathematics Subject Classification: Primary: 65P40, 65Z05; Secondary: 65C0.

Citation: Weinan E, Jianchun Wang. A thermodynamic study of the two-dimensional pressure-driven channel flow. Discrete & Continuous Dynamical Systems - A, 2016, 36 (8): 4349-4366. doi: 10.3934/dcds.2016.36.4349
Canine Genetics and Epidemiology

DLA class II risk haplotypes for autoimmune diseases in the bearded collie offer insight to autoimmunity signatures across dog breeds

Liza C. Gershony (1,2), Janelle M. Belanger (1), Andrea D. Short (3), Myly Le (1), Marjo K. Hytönen (4,5), Hannes Lohi (4,5), Thomas R. Famula (1), Lorna J. Kennedy (3) and Anita M. Oberbauer (1, corresponding author)

Canine Genetics and Epidemiology 2019 6:2. © The Author(s). 2019.
Received: 10 December 2018. Accepted: 24 January 2019.

Abstract

Background
Primary hypoadrenocorticism (Addison's disease, AD) and symmetrical lupoid onychodystrophy (SLO) are two clinical conditions with an autoimmune etiology that occur in multiple dog breeds. In man, autoimmunity is associated with polymorphisms in immune-related genes that result in a reduced threshold for, or defective regulation of, T cell activation. The major histocompatibility complex (MHC) class II genes encode molecules that participate in these functions, and polymorphisms within these genes have been associated with autoimmune conditions in dogs and humans. Bearded collies have a relatively high prevalence of autoimmune diseases, particularly AD and SLO. Our study assessed the relationship between particular MHC (dog leukocyte antigen, DLA) class II haplotypes and the two autoimmune diseases most common in this breed. Moreover, five unrelated breeds at increased risk for AD were studied for comparative purposes and analyzed in the context of extant literature.

Results
A single DLA class II three-locus haplotype, determined by sequence-based typing, was associated with increased risk for AD (DLA-DRB1*009:01/DQA1*001:01/DQB1*008:02) in bearded collies. Comparative analysis with the five additional breeds showed limited allele sharing, with DQA1*001:01 and DQB1*002:01 being the only alleles observed in all breeds. A distinct three-locus risk haplotype (DLA-DRB1*001:01/DQA1*001:01/DQB1*002:01) was associated with AD in the West Highland white terrier and Leonberger. Two different risk haplotypes were associated with increased risk for SLO in the bearded collie (DLA-DRB1*018:01/DQA1*001:01/DQB1*002:01 and DLA-DRB1*018:01/DQA1*001:01/DQB1*008:02).

Conclusions
Two-locus DQ haplotypes composed of DLA-DQA1*001:01 in association with DLA-DQB1*002:01 or DLA-DQB1*008:02 make up the four risk haplotypes identified in the present study and are also found in other risk haplotypes previously associated with diabetes mellitus and hypothyroidism across different dog breeds. Our findings build upon previously published data to suggest that this two-locus (DQ) model serves as a good indicator for susceptibility to multiple organ-specific autoimmune diseases in the canine population. However, it is also clear that additional loci are necessary for actual disease expression. Investigation of affected and unaffected dogs carrying these predisposing DQ haplotype signatures may allow for the identification of those additional genetic components that determine autoimmune disease expression and organ specificity.

Keywords: Dog; Addison's disease

Plain English summary
Primary hypoadrenocorticism, also known as Addison's disease (AD), and symmetrical lupoid onychodystrophy (SLO) are two autoimmune conditions that occur in multiple dog breeds. Disease expression depends on a combination of genetic and environmental factors, and discovery of genetic loci involved in disease susceptibility may help understand and predict risk for disease.
Autoimmunity in humans is associated with altered immune-related genes that result in defective regulation of the immune system. The strongest associations for many human autoimmune diseases involve the major histocompatibility complex (MHC) class II genes. Genetic variants within these genes have also been associated with autoimmune conditions in dogs. Bearded collies have a relatively high prevalence of autoimmune diseases, and our study assessed the relationship between particular MHC (dog leukocyte antigen, DLA) class II haplotypes and the two autoimmune diseases most common in this breed, AD and SLO. We also studied five unrelated dog breeds at high risk for AD to determine if there were haplotypes common across affected dogs in these breeds and bearded collies with AD. A single DLA class II three-locus haplotype was associated with increased risk for AD in bearded collies with a different three-locus risk haplotype associated with AD in the West Highland white terrier and Leonberger. Two separate three-locus risk haplotypes were associated with increased risk for SLO in the bearded collie. Importantly, two-locus DQ haplotypes composed of DLA-DQA1*001:01 in association with DLA-DQB1*002:01 or DLA-DQB1*008:02 were common across the breeds and autoimmune conditions, and made up the four risk haplotypes identified in the present study. In the published literature, these same two-locus haplotypes are also found in risk haplotypes associated with diabetes mellitus and hypothyroidism across different dog breeds, suggesting that this two-locus model serves as a good indicator for susceptibility to multiple organ-specific autoimmune diseases in the dog population. However, many dogs carrying these haplotypes never develop clinical autoimmune disease, making it clear that additional genes are necessary for actual disease expression. Investigation of affected and unaffected dogs carrying these predisposing DQ haplotype signatures may allow for the identification of those additional genetic components that determine autoimmune disease expression and organ specificity. Primary hypoadrenocorticism (Addison's disease, AD) is a life-threatening clinical condition in dogs characterized by inadequate secretion of adrenocortical hormones by the adrenal glands [1–5]. In both humans and dogs, this endocrine disorder is caused by gradual immune-mediated destruction of the adrenal cortex [1, 4–7]. The presence of autoantibodies against adrenal antigens detected in both human [8, 9] and canine patients [2] with naturally occurring AD provide further evidence of the immune-mediated etiopathogenesis of AD. AD has been reported in many purebred and mixed breed dogs (OMIA 000519–9615) [1, 10, 11] with disease prevalence ranging from 0.06 to 0.4% in the overall dog population [12–15]. However, within certain breeds, prevalence of AD can be as high as 9% [7, 10, 16, 17]. Reported breeds at increased risk for developing AD include bearded collies, Portuguese water dogs (PWD), standard poodles, West Highland white terriers (WHWT), Leonbergers, Wheaten terriers and Nova Scotia duck tolling retrievers [1, 7, 10, 14, 16–20]. Symmetrical lupoid onychodystrophy (SLO) is another condition of autoimmune etiology that affects multiple dog breeds, such as the bearded collie, Gordon setter, English setter, giant schnauzer, Labrador retriever, Welsh corgi, boxer, and German shorthair pointer with variable prevalence (OMIA 001989–9615) [11, 21–25]. 
SLO is a clinical syndrome characterized by sloughing claws and secondary bacterial infection that was first described in 1992 [24, 26, 27]. Since then, most research has focused on disease diagnosis and treatment [22, 25]. While little research exists on the cause of SLO, Wilbe et al. [24] and Dahlgren et al. [28] have reported an association with particular major histocompatibility complex (MHC; or dog leukocyte antigen, DLA) class II alleles in bearded collies, giant schnauzers, and English and Gordon setters [24, 28]. DLA class II haplotypes were also shown to be more prevalent among dogs of multiple breeds with AD [3]. Bearded collies, in particular, have a relatively high prevalence of autoimmune diseases. Health reports produced by the Bearded Collie Foundation for Health indicate that 11.2% of registered dogs have one or more autoimmune diseases [29]. Among these, more than half were diagnosed with AD or SLO. Autoimmune diseases such as AD and SLO are considered complex disorders that likely result from the combination of a predisposing genetic background and environmental factors [30]. The environmental triggers necessary or sufficient for autoimmune disease expression remain unclear, but discovery of the genetic loci involved in disease susceptibility may prove helpful in understanding and predicting risk for disease [31]. In humans, it is hypothesized that various polymorphisms in immune-related genes contribute to a reduced threshold for autoreactive lymphocyte activation and/or to defective regulation of autoreactive T cell responses [30]. Likely due to its role in recognition of self versus non-self, variation within MHC class II alleles has been implicated in multiple human autoimmune disorders including Addison's disease, type 1 diabetes mellitus and inflammatory bowel disease [30, 32–34]. Similar connections have been made for dogs, and review of the published literature shows that several autoimmune conditions are associated with DLA class II risk haplotypes (as shown in Table 1) [3, 24, 28, 35–45].

Table 1. Published three-locus DLA class II haplotypes associated with increased risk for some immune-related diseases in dogs (columns: DLA-DRB1, DLA-DQA1, DLA-DQB1 alleles; disease; breed(s); reference). Individual haplotype rows were flattened in extraction and are not recoverable here; sources cited in the table are Massey et al. 2013 [3], Hughes et al. 2010 [38], Wilbe et al. 2010 [24], Ziener et al. 2015 [43], Dahlgren et al. 2016 [28], Kennedy et al. 2006 [39, 40], Dyggve et al. 2011 [36], Greer et al. 2010 [37] and Shiel et al. 2014 [45], covering AD, SLO, hypothyroidism, diabetes mellitus, anal furunculosis, necrotizing meningoencephalitis and meningoencephalitis in breeds including the cocker spaniel, springer spaniel, Labrador retriever, WHWT, NSDTR, bearded collie, Gordon setter, giant schnauzer, English setter and Doberman pinscher. AD, Addison's disease; SLO, symmetrical lupoid onychodystrophy; NSDTR, Nova Scotia duck tolling retriever; WHWT, West Highland white terrier. (a) Current nomenclature is 001:07.

Given the relatively high prevalence of autoimmune diseases in bearded collies, the present study assessed the relationship between particular MHC class II haplotypes and the two autoimmune conditions commonly observed in this breed. As a corollary, those haplotypes were evaluated in five other dog breeds at high risk for AD to determine if there were common DLA haplotype signatures associated with the disease across multiple breeds. This is the largest DLA study of AD and SLO in bearded collies to date.
Results

Two hundred and thirty-six bearded collies were haplotyped for the three polymorphic DLA class II genes: 125 healthy dogs (57 males, 68 females), 61 AD (22 males and 39 females) and 50 SLO (26 males, 24 females). Quality sequences of all three genes could not be obtained for three male control dogs, which were removed from further analysis. Within the study population, six DLA-DRB1 alleles were identified, three of which were uncommon and only observed in healthy control dogs (allele frequency < 3%; Additional file 1: Table S1). Four DLA-DQA1 alleles were identified, two of which were only seen in a small number of controls (allele frequency < 3%). Similarly, seven DLA-DQB1 alleles were identified, three of which were less frequent (two found in control dogs only, and one in both AD and control dogs). Among the nine three-locus haplotypes identified in the bearded collie study population (coded 1 through 9 for ease of presentation), four were infrequent in the breed (Table 2).

Table 2. Frequency, code, and odds ratio (OR) with 95% confidence interval (CI) of each three-locus haplotype observed in healthy, Addisonian (AD) and symmetrical lupoid onychodystrophy (SLO) bearded collies (columns: haplotype code; DLA-DRB1/DQA1/DQB1; frequency, 2n = 244; OR (95% CI); p-value†). Most cell values were not preserved in extraction; the one legible row shows haplotype 009:01/001:01/008:02 with OR = 2.36 (1.29–4.34). N/A, not enough data points to calculate; †Fisher's exact p-value. Text in bold indicates the haplotype frequency significantly differed between cases and controls.

AD in bearded collies

Three DLA-DRB1 alleles (DLA-DRB1*009:01, DLA-DRB1*015:01 and DLA-DRB1*018:01) were frequent among bearded collie AD cases; DLA-DRB1*009:01 was the only allele associated with higher risk for AD (OR = 2.36, p = 0.0058; Additional file 2: Table S2). A single three-locus haplotype (haplotype 1), containing DLA-DRB1*009:01, was overrepresented among the AD dogs (Table 2). Increased risk for AD was observed in dogs that carried haplotype 1, although only two AD cases were homozygous for this haplotype. The majority of AD cases were heterozygous for haplotype 1 (n = 21), with half of those (n = 11) also carrying haplotype 5. Bearded collies carrying the heterozygous genotype 1/5 were at higher risk for AD (Table 3). No differences in DLA homozygosity were noted when comparing AD dogs to controls (Table 4).

Table 3. DLA class II genotypes significantly associated with Addison's disease (AD) and symmetrical lupoid onychodystrophy (SLO) in 233 bearded collies, with corresponding odds ratios (OR) and 95% confidence intervals (CI), using the DLA haplotype codes reported in Table 2. Legible fragments give the row labels "all genotypes containing haplotype 1" (OR = 2.6, 95% CI 1.31–5.19), "genotype 1/5" and "genotypes containing haplotypes 4 and/or 5", with a further surviving value of OR = 12.0 (1.58–91.33). †Fisher's exact p-value, significant at p < 0.05.

Table 4. Homozygosity across DLA class II genes in 233 bearded collies (61 AD, 50 SLO and 122 controls). Different letters in the same column indicate statistical difference (p < 0.05) in DLA homozygosity across health status; n = number of homozygous dogs. AD, Addison's disease; SLO, symmetrical lupoid onychodystrophy. Only two cell values survive extraction: 42.6%a and 90.0%b.

A subset of bearded collies (n = 219) that excluded dogs carrying the less frequent haplotypes (i.e., haplotypes 6–9; n = 14) was subjected to pairwise genotypic comparisons using logistic regression. A comparison between healthy and AD dogs showed that genotypes 4/4, 4/5 and 5/5 were associated with the lowest probability of AD, and genotypes that included haplotype 1 were associated with the highest probability (Fig. 1a). No sex differences were observed in the analysis (p > 0.05).
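For readers wishing to check figures of this form, an odds ratio with a Woolf 95% confidence interval (the "OR = 2.36 (1.29–4.34)" style reported above) can be reproduced from a 2 x 2 table of carrier counts as in the short Python sketch below. The counts used here are hypothetical, not the study's data:

# Hypothetical counts: a/b = cases/controls carrying a haplotype,
# c/d = cases/controls not carrying it.
import math

a, b = 40, 50
c, d = 21, 72

or_est = (a * d) / (b * c)                      # cross-product odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)    # Woolf's standard error of ln(OR)
ci_low = math.exp(math.log(or_est) - 1.96 * se_log_or)
ci_high = math.exp(math.log(or_est) + 1.96 * se_log_or)
print(f"OR = {or_est:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")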
Fig. 1 Probability of (a) Addison's disease and (b) symmetrical lupoid onychodystrophy associated with the most common genotypic combinations of three-locus haplotypes observed in bearded collies (n = 219). Dots indicate the estimated probability of disease and lines represent the 95% confidence interval. DLA haplotype codes are as reported in Table 2. (Figure not reproduced in this extraction.)

A second data set containing a total of 75 United Kingdom bearded collies (29 AD, 46 controls) included in a previously published study [3] was analyzed for comparison. Common haplotypes in the bearded collie breed (haplotypes 1–5) were seen in similar proportions in both data sets, and analyzing the merged data further corroborated the association between haplotype 1 and AD (OR = 2.87; 95% CI = 1.76–4.66; p = 0.00002; Additional file 3: Table S3). Six three-locus haplotypes observed in the published data (allele frequency < 3%) were not seen in the new dataset. Conversely, haplotype 6 from the new dataset had not been observed in the published data. The combined dataset showed haplotype 4 underrepresented in AD dogs. To account for possible location influences, a geographical analysis of haplotype frequency in the combined dataset, which consisted predominantly of dogs from North America and Europe, showed differences in haplotype frequency between North American and European bearded collies (Additional file 4: Table S4). Haplotype 3 was more prevalent among North American control and AD dogs, whereas haplotype 4 was more prevalent among European controls and haplotype 2 was more prevalent in North American AD dogs. Nevertheless, haplotype 1 remained associated with increased risk for AD in both regions (Additional file 5: Table S5). No significant differences were seen in haplotype frequency between the regions for SLO dogs. The combined dataset also included dogs from Australia (7 controls and 1 SLO) and New Zealand (2 controls, 2 AD and 1 SLO); however, the low sample numbers precluded meaningful statistical comparisons for those regions.

AD in other breeds

Seventeen DLA-DRB1 alleles were identified among all the breeds assessed, including the bearded collies (Additional file 6: Table S6). Seven of those were shared by three or more breeds whereas five were seen in only one of the breeds. The DLA-DRB1*008:02 and -DRB1*011:01 alleles were only seen in the PWD, DLA-DRB1*016:01 in the Leonberger, DLA-DRB1*017:01 in the WHWT and DLA-DRB1*084:01 in the Labradoodle. Among the seven DQA1 alleles, four were shared by three or more breeds. DLA-DQB1 was diverse across breeds, with 16 different alleles noted, only five of which were seen in more than two breeds. Despite the considerable diversity of alleles seen among the three polymorphic DLA class II genes across the six studied breeds at increased risk for AD, DLA-DQA1*001:01 and DQB1*002:01 were the only alleles shared by all six breeds. Individual DLA class II alleles were significantly associated with AD in the WHWT (DLA-DQA1*001:01 and DLA-DQB1*002:01; Additional file 7: Table S7) and Leonbergers (DLA-DRB1*001:01; Additional file 8: Table S8). These alleles are part of a three-locus haplotype that was significantly associated with AD status in WHWTs and Leonbergers (haplotype 14; Table 5). No alleles showed association with AD in the PWDs (Additional file 9: Table S9), standard poodles (Additional file 10: Table S10) and Labradoodles (Additional file 11: Table S11).
Twenty-nine three-locus haplotypes were identified across the six studied breeds, five of which were shared by more than three breeds (Table 5; haplotypes 3, 9, 12, 13 and 14). Haplotype 14 was the only one associated with risk for AD in our studied breeds. We confirm its association with AD in the WHWT, as previously reported [3], and identify it as the first risk haplotype associated with AD in the Leonberger. No association was seen for the three other breeds that carried haplotype 14 (Labradoodle, PWD and standard poodle).

Table 5. Frequency, code and odds ratio (OR) for DLA class II three-locus haplotypes observed in the six studied breeds at increased risk for Addison's disease (AD): bearded collie (61 cases, 122 controls), West Highland white terrier (WHWT; 43 cases, 166 controls), standard poodle (30 cases, 55 controls), Portuguese water dog (PWD; 17 cases, 76 controls), Labradoodle (12 cases, 9 controls) and Leonberger (11 cases, 14 controls). Columns give the DRB1, DQA1 and DQB1 alleles, frequency (2n) and OR with 95% CI per breed; the cell values were not preserved in extraction. CI, confidence interval; N/A, not enough data points to calculate; †Fisher's exact p-value, significant at p < 0.05. N.B. DLA-DQB1*013–017 is shorthand for DQB1*013:03 and 017:01 appearing on the same haplotype. Bolded values were statistically significant at α = 0.05. Haplotype codes 1–9 are as used in the text; additional codes were added as needed.

SLO in bearded collies

Whereas DLA-DRB1*018:01 and -DQA1*001:01 were common alleles in the bearded collie, they were present at a much higher frequency among SLO dogs compared to healthy controls, with OR = 9.38 (p = 7.40 × 10−10) and OR = 7.22 (p = 4.19 × 10−7), respectively, for SLO disease risk (Additional file 12: Table S12). The majority of SLO dogs (49/50) carried a DLA-DQB1*002:01 and/or DLA-DQB1*008:02 allele, although an increased risk for disease was observed only for DLA-DQB1*002:01 (OR = 2.08, p = 0.0029). However, both DQB1 alleles were associated with DLA-DRB1*018:01, and the three-locus haplotypes containing these combinations (haplotypes 4 and 5) were significantly overrepresented among SLO dogs and associated with increased risk for disease (Table 2). While risk for disease was slightly higher for the heterozygous genotype 4/5, dogs homozygous for each of the risk haplotypes showed similar risk for disease (Table 3). Despite this, SLO dogs were significantly more homozygous in their DLA-DRB1 (OR = 8.71; 95% CI = 3.24–23.43; p = 1.85 × 10−6) and -DQA1 (OR = 6.26; 95% CI = 2.11–18.56; p = 0.0003) genes than controls (Table 4). Homozygosity across all three loci was also significantly greater among SLO dogs compared to controls (OR = 2.52; 95% CI = 1.26–5.06; p = 0.0104). Pairwise genotypic comparisons using logistic regression revealed two major genotype groupings, with genotypes 4/4, 4/5 and 5/5 significantly associated with a higher probability of SLO when compared to genotypes 1/4, 2/3, 1/2, 3/3, 1/3 and 1/1 (Fig. 1b); the inverse of what was seen for AD. Genotypes composed of haplotypes 1 and/or 2 were associated with a reduced probability of having SLO. No sex differences were observed in the analysis (p > 0.05). When geographical regions were considered in the analysis, haplotype 4 remained a risk haplotype for SLO in Europe, but not in North America (Additional file 13: Table S13). Conversely, haplotype 5 remained a risk for SLO in North America, but not in Europe.

Discussion

Some of the strongest genetic associations with human autoimmune diseases, such as AD and type 1 diabetes mellitus, involve MHC class II genes [46].
The present study identified a single DLA class II risk haplotype for AD in the bearded collie, consistent with a previous report in a smaller number of dogs from the United Kingdom [3], although our data did not support the researchers' observation of haplotype 4 being protective for AD. While highly prevalent among control dogs, this haplotype was also observed at similar proportions in AD bearded collies from our dataset. However, when data from the previously published paper was merged with the newly acquired data, a combined analysis showed that haplotype 4 became slightly underrepresented among cases. This may be due to differences in the geographic origin of the samples. Haplotype 4 was more prevalent among controls of European origin and a risk haplotype for SLO in European bearded collies, but not among those from North America. Conversely, haplotype 5 remained a risk for SLO in dogs sampled from North America, but not those from Europe. This finding could indicate actual geographical differences in susceptibility to disease, although it is more likely that they reflect the significantly reduced sample sizes for SLO cases when the data was split by geographical region, thus reducing our power to detect true associations. As for AD, regional differences in haplotype frequency did not affect our initial observation, and haplotype 1 remained significantly associated with AD in both Europe and North America. Surprisingly, the highest risk for AD was seen when haplotype 1 was combined with the SLO risk haplotype 5. The sole difference between these two haplotypes is the DLA-DRB1 allele: DLA-DRB1*009:01 and DLA-DRB1*018:01, respectively. The non-polymorphic DLA-DRA and polymorphic -DRB1 genes encode two chains (alpha and beta, respectively) of the same MHC class II molecule, whereas the DLA-DQA1 and -DQB1 genes encode chains that combine to form a different MHC class II molecule [47]. MHC molecules actively participate in the positive and negative selection of developing T cells in the thymus. The recognition of self-peptides associated with MHC molecules results in selection of T cells that bind MHC with intermediate-to-low avidity; binding that is too strong represents potential for autoreactivity and results in deletion of such T cells. Moreover, intermediate avidity, where binding is strong but falls below the threshold for deletion, results in generation of regulatory T cells capable of suppressing self-antigen presentation in the peripheral lymph nodes, thus preventing autoreactivity [46]. While the exact mechanism through which MHC genes contribute to autoimmune disease development is not fully understood, disease-predisposing MHC molecules appear to confer risk by allowing autoreactive T cells to escape central tolerance, whereas protective MHC molecules confer resistance to autoimmunity by promoting negative selection and generation of regulatory T cells [46, 47]. The two DLA-DRB1 alleles mentioned above produce molecules that differ by four amino acids in the hypervariable regions of the gene, which encode the peptide-binding region of an MHC molecule. Amino acid changes in the peptide-binding region of MHC molecules can affect the repertoire of peptides they are capable of presenting to T cells [46], which is why MHC heterozygosity is generally associated with increased fitness due to the ability to detect a larger number of pathogenic antigens compared to a homozygous individual [46, 48]. 
However, carrying two autoimmune disease-predisposing MHC haplotypes may actually increase the number and types of autoreactive T cells that escape central tolerance thus increasing the risk of developing autoimmunity. In fact, humans who are heterozygous for the MHC class II risk haplotypes, HLA-DR3 and HLA-DR4, are at much higher risk for autoimmune type 1 diabetes and AD than those with either of the homozygous haplotypes [32, 33, 49]. This may also be the case for bearded collies with AD, where most dogs were heterozygous for the risk haplotype and increased risk was seen in association with the heterozygous 1 5 genotype. As expected, considerable diversity of alleles was seen across the six dog breeds at increased risk for AD, but allele sharing was fairly limited. Five of the 29 three-locus haplotypes identified were shared by more than three of the studied breeds (haplotypes 3, 9, 12, 13 and 14), four of which had been previously associated with AD in the cocker spaniel, springer spaniel, WHWT, Labrador retriever, standard poodle and Nova Scotia duck tolling retriever [3, 38]. However, our study only confirmed association of haplotype 14 with AD in the WHWT and also identified it as the first risk haplotype associated with AD in the Leonberger. Haplotype 9, previously associated with AD in the standard poodle [3], was not confirmed as a risk haplotype for AD in our analysis. Although our dataset had more AD standard poodles, our control group is smaller than the previous study, which may have impaired our ability to detect the association. The absence of an association may also be due to differences between studied populations: our dataset consisted of standard poodles from the United States and United Kingdom, whereas previously published data consisted of dogs almost exclusively from the United Kingdom. Two risk haplotypes were identified for SLO in our bearded collies, consistent with previous findings [24] though that study evaluated only Scandinavian bearded collies using many fewer dogs. Furthermore, in our population, almost half of the SLO dogs were homozygous for one of these two risk haplotypes (i.e. genotypes 4 4 or 5 5). Interestingly, although the highest risk was seen with the heterozygous genotype 4 5, dogs homozygous for either of the risk haplotypes appear to be at similar risk for disease. This may be explained by the high similarity between haplotypes 4 and 5, which differ only in their DLA-DQB1 allele. DLA-DQB1*002:01 (in haplotype 4) differs from DLA-DQB1*008:02 (in haplotype 5) by three neighboring amino acids, only one of which falls within a hypervariable region in exon 2, as defined by Kennedy et al. in 1999. Since the alleles have similar hypervariable regions, they likely have similar antigen binding and functional properties [50], which could explain why genotypes 4 4, 4 5, and 5 5 all offered similar risk for SLO. Three three-locus haplotypes were associated with reduced risk for expressing SLO: haplotypes 1, 2 and 3. Despite its association with increased risk for AD, haplotype 1 was associated with a lower risk of expressing SLO. In fact, as highlighted above, the sole difference between the SLO risk haplotype 5 and the AD risk haplotype 1 is the DRB1 allele, which may suggest a role for this gene in determining the target tissue for autoimmune disease in the bearded collie. 
However, strong linkage disequilibrium (LD; non-random association between alleles) in the region may actually be responsible for this observation, and it is possible that tissue-specific genetic determinants exist in nearby genes that are in strong LD with the DLA-DRB1 gene resulting in the observed association. Perhaps most interesting, however, is that many of the risk haplotypes associated to date with organ-specific autoimmune diseases in dogs carry one of two DQ haplotypes: DLA-DQA1*001:01/DQB1*002:01 (DQ1) or DLA-DQA1*001:01/DQB1*008:02 (DQ2). The DQ1 haplotype is part of haplotype 14, for instance, which has not only been associated with AD in WHWT and Leonbergers, but also with anal furunculosis in the German shepherd [42], hypothyroidism in the Gordon setter [43] and SLO in the giant schnauzer and English setter [24]. Furthermore, the DQ1 combination is found in two different three-locus haplotypes that have been associated with higher risk for hypothyroidism (lymphocytic thyroiditis) in Doberman pinschers, giant schnauzers (DLA-DRB1*012:01/DQA1*001:01/DQB1*002:01) and English setters (DLA-DRB1*001:07/DQA1*001:01/DQB1*002:01) [40, 43, 44]. Moreover, the four three-locus DLA class II risk haplotypes associated with SLO in different dog breeds [24, 28, 43] carry a DQ1 or DQ2 haplotype. These DQ haplotypes were also observed in two SLO-affected dogs in our database that represent breeds in which SLO is uncommon: one great dane, heterozygous for DQ1, and one Belgian tervuren, homozygous for DQ2 (unpublished finding). DQ2 is found in haplotype 1, the risk haplotype for AD in the bearded collie and diabetes mellitus in multiple dog breeds [39]. It is also a part of haplotype 5, which is associated with risk for SLO in the bearded collie and Gordon setter [24, 28, 43], and a three-locus haplotype associated with meningoencephalitis in the greyhound (DLA-DRB1*018:02/DQA1*001:01/DQB1*008:02) [45]. These observations indicate that DQ haplotypes may constitute signatures of autoimmune predisposition across multiple dog breeds, which perhaps combine with breed-specific genetic components to determine the tissue-specificity of autoimmune disease expression. Whereas an association between MHC class II haplotypes, particularly DQ haplotypes, and autoimmune disease clearly exists in both dogs and humans, other MHC or non-MHC genetic components likely play a role in autoimmune disease development, which would explain why many dogs carrying the risk haplotype for a disease, such as AD, SLO or anal furunculosis, fail to develop the condition. Although our study points to a strong influence of particular DQ haplotypes in different autoimmune diseases across multiple dog breeds, the fact that a risk haplotype can be associated with a disease in one dog and not another suggests that other loci are necessary to determine disease expression and target organ specificity. In fact, studies in mice have shown that, while MHC genes play a major role in autoimmunity, they are still only a part of the genetic components required for disease development. Studies using nonobese diabetic (NOD) mice, which carry a particular MHC haplotype and are prone to developing diabetes, show that the presence of the diabetes-prone MHC haplotype in a different mouse strain fails to cause diabetes [51]. 
In contrast, when the diabetes-prone MHC of NOD mice is replaced by a different haplotype, autoimmunity develops but in a different target organ [52], thereby demonstrating tissue-specific susceptibility dependent upon factors additional to the MHC. Therefore, autoimmune disease expression is likely dependent upon the existence of non-MHC genetic determinants cooperating with autoimmune disease-predisposing MHC class II haplotypes. Our study results complement published data and show that three DLA class II risk haplotypes associated with autoimmune diseases are highly prevalent in the bearded collie population, which may account for the incidence of autoimmune disorders in the breed. Moreover, the two-locus DQ haplotypes making up these three risk haplotypes are found in other risk haplotypes associated with diabetes mellitus and hypothyroidism across different dog breeds. Multiple studies have clearly demonstrated that DLA class II genes play a role in canine autoimmunity, although associations have been deemed breed- and disease-specific. Our work, in combination with the published literature, revealed common DLA-DQ haplotypes associated with different autoimmune diseases across multiple dog breeds. While additional loci are clearly necessary for actual disease expression, the DQ two-locus model may be a good indicator for susceptibility to certain organ-specific autoimmune diseases in dogs, and future studies on carriers of these risk DQ haplotypes may prove fruitful in identifying the additional genetic components involved in canine autoimmunity. These markers may then be used to make informed breeding decisions with the purpose of reducing the incidence of each disease in the canine population. Furthermore, genetic markers of canine autoimmune diseases may provide insights into their human counterparts.

Methods

Blood or buccal swab samples were obtained from 236 bearded collies (125 healthy, 61 AD and 50 SLO) in North America, Europe, Australia and New Zealand that were healthy or affected by one of two autoimmune diseases (AD or SLO). Addisonian cases consisted of dogs diagnosed by a veterinarian on the basis of clinical signs, demonstrated electrolyte imbalance (sodium to potassium ratio < 27:1) and confirmed with an ACTH stimulation test. Serum cortisol levels < 2.0 μg/dL (55 nmol/L) before and after ACTH administration were considered diagnostic of AD. Dogs with a history of corticosteroid use that may have interfered with the ACTH stimulation test were excluded, as were dogs presenting with atypical AD, characterized by low levels of serum cortisol before and after ACTH administration but a normal electrolyte ratio [19]. SLO cases consisted of dogs diagnosed by a veterinarian through biopsy and/or clinical findings such as pain, abnormal growth of nails, or nails bleeding, splitting or falling off. Control dogs were nine years or older, with no history of autoimmune disease (suspected or diagnosed), and considered healthy based on medical history. Samples were subjected to DNA extraction as previously described [53] and quantified using a Nanodrop® spectrophotometer. DNA samples were stored at −20 °C until processing. The same procedure was performed for five other dog breeds at increased risk for developing AD: standard poodles (11 cases, 10 controls), WHWT (10 cases, 10 controls), Labradoodles (9 cases, 9 controls), PWD (15 cases, 16 controls) and Leonbergers (11 cases, 13 controls).
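As a compact restatement of the case-inclusion rule described above (our paraphrase in code, with unit conventions assumed from the text), a dog was counted as a typical Addisonian case only when both the electrolyte criterion and the ACTH-response criterion were met:

# Toy encoding of the AD inclusion criteria; thresholds are those given in the text.
def is_typical_addisonian(na_mmol_l, k_mmol_l, cortisol_pre_ug_dl, cortisol_post_ug_dl):
    electrolyte_imbalance = (na_mmol_l / k_mmol_l) < 27.0    # Na:K ratio < 27:1
    flat_acth_response = (cortisol_pre_ug_dl < 2.0           # < 2.0 ug/dL (55 nmol/L)
                          and cortisol_post_ug_dl < 2.0)     # before and after ACTH
    # Dogs with a flat ACTH response but a normal Na:K ratio (atypical AD) were excluded.
    return electrolyte_imbalance and flat_acth_response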
To complement the AD data, DLA haplotypes from a previously studied population [3] were included in the analyses for the bearded collie (29 cases, 46 controls from the UK for a total of 90 cases and 168 controls), standard poodle (19 cases, 45 controls from the UK and North America for a total of 30 cases and 55 controls), WHWT (33 cases, 156 controls from the UK for a total of 43 cases and 166 controls), Labradoodle (3 cases from the UK for a total of 12 cases and 9 controls), and PWD (2 cases, 60 controls from the USA and UK for a total of 17 cases and 76 controls). The z-ratio test for independent proportions (http://vassarstats.net/propdiff_ind.html) was used to assess differences between geographical regions in the combined dataset [54].

DLA haplotyping

Sequence-based typing was used to determine DLA haplotypes of all dogs. Amplification and sequencing of exon 2 for each of the three polymorphic MHC class II genes, DLA-DRB1, -DQA1 and -DQB1, were performed using flanking primers as previously described [55–57] and two new primer sets developed for DLA-DRB1 and DQB1 (Additional file 14: Table S14). Sequences from the newly designed primers provided greater coverage upstream and downstream of exon 2, improving sequence quality obtained for the entire exon, and matched bearded collie DQB1 (n = 30) and DRB1 (n = 18) sequences obtained using published primers. A touchdown polymerase chain reaction (PCR) protocol was initially used for all three genes, which consisted of 14 touchdown cycles with annealing temperatures starting at 62 °C for DLA-DRB1, 57 °C for DLA-DQA1 and 73 °C for DLA-DQB1, reducing by 0.5 °C each cycle, and 24 additional cycles with annealing temperatures of 55 °C for DLA-DRB1, 50 °C for DLA-DQA1 and 66 °C for DLA-DQB1. The same touchdown PCR protocol was used with the newly designed DLA-DQB1 primer set, but a standard PCR protocol (95 °C denaturation, 65 °C annealing, 72 °C extension, 30 cycles) was used with the new set of DLA-DRB1 primers. Promega GoTaq® Flexi DNA Polymerase (Promega, WI, USA) was used for all PCRs in a 25 μL reaction. Size and concentration of amplicons were verified by running 5 μL of the PCR product on a 1% agarose gel. PCR products were then purified using Exosap-IT™ (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's recommendations, and sequenced by capillary electrophoresis on an Applied Biosystems 3730 (UC Davis DNA Sequencing facility). Nucleotide sequences for each DLA gene were analyzed and alleles assigned using SBTengine v.3.2 (GenDX, Netherlands). Ambiguous sequences were observed in all dog breeds. In these cases, haplotypes were predicted based on the three-locus haplotypes identified within each of the studied populations. If alleles could not be determined for one or more of the DLA class II genes, the individual was removed from analysis. Allele frequencies were calculated for each of the three polymorphic DLA genes. Three-locus DLA-DRB1/DQA1/DQB1 haplotypes were determined based on individuals that were homozygous at all three loci (n = 63), followed by individuals homozygous at two of the loci (n = 87). Odds ratio (OR) estimates were also calculated based on the number of cases and healthy controls carrying a particular haplotype compared to the number of individuals not carrying the haplotype. A 2 × 2 contingency table was used to calculate the ORs and two-tailed Fisher's exact values for the different haplotypes (http://vassarstats.net/odds2x2.html).
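As a concrete illustration of these two tests (ours, not the authors' code, and using hypothetical counts rather than the study data), the two-tailed Fisher's exact test and the z-ratio test for independent proportions can be reproduced in Python as follows:

# Hypothetical 2 x 2 haplotype-by-status table: rows = carrier / non-carrier,
# columns = cases / controls. Counts are invented for illustration.
from scipy.stats import fisher_exact, norm

table = [[40, 50],
         [21, 72]]
odds_ratio, p_value = fisher_exact(table)   # two-sided by default

def z_ratio_test(x1, n1, x2, n2):
    # Two-proportion z-test, as implemented on the VassarStats page cited above.
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))           # z statistic and two-tailed p-value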
The same approach was used to determine statistical differences in the homozygosity of the DLA genes between cases and controls [58]. Statistical significance was considered at p < 0.05.

Logistic regression was used to model the risk of disease as a function of the observed DLA class II three-locus genotypes. Defining the probability of disease as $p_{ij}$ for a dog carrying DLA haplotypes i and j, the logit of this probability is $\theta_{ij} = \log[p_{ij}/(1 - p_{ij})]$. The logit was modeled as a linear function of the genotype:

$$ \theta_{ij} = b_0 + \mathrm{add}_i + \mathrm{add}_j + \mathrm{dom}_{ij} $$

where $b_0$ is an unknown constant common to all dogs, $\mathrm{add}_i$ and $\mathrm{add}_j$ are the additive contributions of the i-th and j-th haplotypes (i, j = 1, …, 5) to the risk of disease, and $\mathrm{dom}_{ij}$ is the dominance contribution of the haplotype pair (i, j). Estimation of the unknown additive and dominance effects, together with prediction of the risk of disease, was implemented with the Bayesian statistical package Stan [59] executed from the open-source language R [60]. The combination of rare haplotypes and a disease of low prevalence, as seen in the data, increased the risk of empty subclasses; a hierarchical Bayesian model with weakly informative prior distributions for the unknown effects stabilized the estimation process [61]. Specifically, the intercept $b_0$ was given a Cauchy(0, 10) prior, the additive effects $\mathrm{add}_i$ and $\mathrm{add}_j$ were given $N(0, \sigma_a^2)$ priors, the dominance effects $\mathrm{dom}_{ij}$ were given $N(0, \sigma_d^2)$ priors, and the standard deviations $\sigma_a$ and $\sigma_d$ were restricted to the positive values of a Cauchy(0, 2.5), as recommended for weakly informative priors [62]. The simulation was run across four chains, each drawing 15,000 total samples: a burn-in of 5,000 samples was followed by thinning to every 20th sample, so each chain contributed 500 samples and the four chains together yielded 2,000 samples for more stable estimates. Convergence was assessed through trace plots of all unknowns and by verifying that the Gelman-Rubin statistic was below 1.05 [63].
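The model just described can be sketched as follows. This is an illustrative reconstruction from the description above, not the authors' published code: the data layout, variable names and index coding (haplotype indices hap1/hap2 and a genotype index geno for the dominance term) are assumptions.

```r
# Hedged sketch of the hierarchical Bayesian logistic model
# logit(p_ij) = b0 + add_i + add_j + dom_ij, fitted with Stan via rstan.
library(rstan)

model_code <- "
data {
  int<lower=1> N;                 // dogs
  int<lower=1> H;                 // distinct haplotypes
  int<lower=1> G;                 // distinct unordered haplotype pairs
  int<lower=1, upper=H> hap1[N];  // first haplotype of each dog
  int<lower=1, upper=H> hap2[N];  // second haplotype of each dog
  int<lower=1, upper=G> geno[N];  // genotype index for the dominance term
  int<lower=0, upper=1> y[N];     // 1 = case, 0 = control
}
parameters {
  real b0;
  vector[H] add;                  // additive haplotype effects
  vector[G] dom;                  // dominance (genotype) effects
  real<lower=0> sigma_a;
  real<lower=0> sigma_d;
}
model {
  b0 ~ cauchy(0, 10);
  sigma_a ~ cauchy(0, 2.5);       // half-Cauchy via the lower = 0 constraint
  sigma_d ~ cauchy(0, 2.5);
  add ~ normal(0, sigma_a);
  dom ~ normal(0, sigma_d);
  for (n in 1:N)
    y[n] ~ bernoulli_logit(b0 + add[hap1[n]] + add[hap2[n]] + dom[geno[n]]);
}
"

# Four chains of 15,000 draws each; in rstan, iter includes warmup, so
# (15000 - 5000) / 20 = 500 retained draws per chain, 2,000 in total.
# fit <- stan(model_code = model_code, data = stan_data,
#             chains = 4, iter = 15000, warmup = 5000, thin = 20)
# print(fit)  # Rhat (Gelman-Rubin statistic) should be below 1.05
```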
We gratefully acknowledge the infrastructure support of the Department of Animal Science, College of Agricultural and Environmental Sciences, and the California Agricultural Experiment Station of the University of California, Davis. The authors would also like to thank the Bearded Collie Foundation for Health, Dr. Elsa Sell and Dr. Linda Aronson for their invaluable help in bearded collie sample recruitment; Jo Tucker (Canine Immune Mediated Disease Awareness, CIMDA) for being instrumental in encouraging sample collection; the technical staff supporting this study, Maria Palij, Jenifer Hallock, Sini Karjalainen, Reetta Hänninen, Kaisu Hiltunen and Steven Quarmby; Ezinne Ibe and Simon Rothwell at the UK DNA Archive for Companion Animals; and all the owners and clinicians who have actively participated in our research projects, as well as every dog that contributed to the study. We would also like to thank IDEXX Laboratories (Harrogate, UK), Nationwide Laboratories (Poulton-le-Fylde, UK), Angela Pedder, Colleen Stead, the Bearded Collie Breed Club (UK Northern Branch), Lucy Davison and Betty Aughey.

This study was funded in part by BeaCon (Bearded Collie Foundation for Health), the American Kennel Club Canine Health Foundation (AMO; 02187-MOU and #1236-A), the Academy of Finland, the Helsinki Institute of Life Science, the Jane and Aatos Erkko Foundation, and the European Commission FP7 project number 201167, Euradrenal. All data generated or analyzed during this study are included in this published article and its supplementary information files (Supplemental Tables).

LCG and AMO designed the study. All authors collected samples. LCG and JMB genotyped the samples with assistance from ML and AS. LCG and TRF conducted the statistical analyses. LCG, JMB and AMO interpreted the findings and drafted the manuscript. LJK and HL contributed to data interpretation, and LJK provided study input and significantly edited the manuscript. All authors read and approved the final manuscript.

All applicable international, national and/or institutional guidelines for the care and use of animals were followed. All procedures performed were in accordance with the ethical standards of the University of California, Davis (IACUC #20402) and the University of Helsinki, Finland (permit ESAVI/6054/04.10.03/2012). All UK samples consisted of residual blood remaining after diagnostic testing and were collected in accordance with the guidelines of the Royal College of Veterinary Surgeons, UK, and the Veterinary Surgeons Act 1966; for this reason, ethical committee approval was not required. All samples were collected with informed, written owner consent.

Lorna Kennedy is Managing Editor of Canine Genetics and Epidemiology. The other authors declare no competing interests.

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Additional file 1: Table S1. Allele frequency for the three polymorphic DLA class II genes in 233 bearded collies. (DOCX 15 kb)
Additional file 2: Table S2. Allele frequency and odds ratio (OR) for Addison's disease (AD; n = 61) vs controls (n = 122) in bearded collies. Bolded values were statistically significant at α = 0.05. (DOCX 19 kb)
Additional file 3: Table S3. Frequency, code, and odds ratio (OR) with 95% confidence interval (CI) of each three-locus haplotype observed in healthy, Addisonian (AD) and symmetrical lupoid onychodystrophy (SLO) bearded collies in the combined dataset. Text in bold indicates that the haplotype frequency significantly differed between cases and controls. Haplotype codes are as used in Table 5; additional codes added as needed. (DOCX 16 kb)
Additional file 4: Table S4. Frequency of DLA three-locus haplotypes in European (79 controls, 43 AD and 29 SLO) and North American (80 controls, 45 AD and 19 SLO) bearded collies. Bolded values indicate statistical differences in haplotype frequency according to geographical region as determined by the z-ratio test for independent proportions. Haplotype codes are as used in Table 5; additional codes added as needed. (DOCX 16 kb)
Additional file 5: Table S5. Frequency, code, and odds ratio (OR) with 95% confidence interval (CI) of each three-locus haplotype observed in healthy and Addisonian (AD) bearded collies by geographical region. Text in bold indicates that the haplotype frequency significantly differed between cases and controls. Haplotype codes are as used in Table 5; additional codes added as needed. (DOCX 17 kb)
Additional file 6: Table S6. Frequency of DLA class II alleles segregating in six dog breeds at higher risk for developing Addison's disease (AD). (DOCX 23 kb)
Additional file 7: Table S7. Allele frequency and odds ratio (OR) for Addison's disease (AD; n = 10) vs controls (n = 10) in West Highland white terriers. Bolded values were statistically significant at α = 0.05. (DOCX 19 kb)
Additional file 8: Table S8. Allele frequency and odds ratio (OR) for Addison's disease (AD; n = 11) vs controls (n = 13) in Leonbergers. Bolded values were statistically significant at α = 0.05. (DOCX 16 kb)
Additional file 9: Table S9. Allele frequency and odds ratio (OR) for Addison's disease (AD; n = 17) vs controls (n = 76) in Portuguese water dogs. (DOCX 18 kb)
Additional file 10: Table S10. Allele frequency and odds ratio (OR) for Addison's disease (AD; n = 30) vs controls (n = 55) in standard poodles. (DOCX 19 kb)
Additional file 11: Table S11. Allele frequency and odds ratio (OR) for Addison's disease (AD; n = 12) vs controls (n = 9) in Labradoodles. (DOCX 18 kb)
Additional file 12: Table S12. Allele frequency and odds ratio (OR) for symmetrical lupoid onychodystrophy (SLO; n = 50) vs controls (n = 122) in bearded collies. Bolded values were statistically significant at α = 0.05. (DOCX 19 kb)
Additional file 13: Table S13. Frequency, code, and odds ratio (OR) with 95% confidence interval (CI) of each three-locus haplotype observed in healthy and symmetrical lupoid onychodystrophy (SLO) bearded collies by geographical region. Text in bold indicates that the haplotype frequency significantly differed between cases and controls. Haplotype codes are as used in Table 5; additional codes added as needed. (DOCX 16 kb)
Additional file 14: Table S14. Primer sequences used for DLA class II haplotyping. (DOCX 13 kb)

Department of Animal Science, University of California, One Shields Ave, Davis, CA 95616, USA
Brazilian National Council for Scientific and Technological Development (CNPq) fellow, Brasília, Brazil
Centre for Integrated Genomic Medical Research, University of Manchester, Manchester, UK
Research Programs Unit, Molecular Neurology, and Department of Veterinary Biosciences, University of Helsinki, Helsinki, Finland
Folkhälsan Institute of Genetics, Helsinki, Finland

References
1. Boag AM, Catchpole B. A review of the genetics of hypoadrenocorticism. Top Companion Anim Med. 2014;29(4):96–101.
2. Boag AM, Christie MR, McLaughlin KA, Syme HM, Graham P, Catchpole B. Autoantibodies against cytochrome P450 side-chain cleavage enzyme in dogs (Canis lupus familiaris) affected with hypoadrenocorticism (Addison's disease). PLoS One. 2015;10(11):e0143458.
3. Massey J, Boag A, Short AD, Scholey RA, Henthorn PS, Littman MP, Husebye E, Catchpole B, Pedersen N, Mellersh CS, et al. MHC class II association study in eight breeds of dog with hypoadrenocorticism. Immunogenetics. 2013;65(4):291–7.
4. Short AD, Boag A, Catchpole B, Kennedy LJ, Massey J, Rothwell S, Husebye E, Ollier B. A candidate gene analysis of canine hypoadrenocorticism in 3 dog breeds. J Hered. 2013;104(6):807–20.
5. Charmandari E, Nicolaides NC, Chrousos GP. Adrenal insufficiency. Lancet. 2014;383(9935):2152–67.
6. Mitchell AL, Pearce SHS. Autoimmune Addison disease: pathophysiology and genetic complexity. Nat Rev Endocrinol. 2012;8(5):306–16.
7. Oberbauer AM, Benemann KS, Belanger JM, Wagner DR, Ward JH, Famula TR. Inheritance of hypoadrenocorticism in bearded collies. Am J Vet Res. 2002;63(5):643–7.
8. Soderbergh A, Winqvist O, Norheim I, Rorsman F, Husebye ES, Dolva O, Karlsson FA, Kampe O. Adrenal autoantibodies and organ-specific autoimmunity in patients with Addison's disease. Clin Endocrinol. 1996;45(4):453–60.
9. Falorni A, Nikoshkov A, Laureti S, Grenback E, Hulting AL, Casucci G, Santeusanio F, Brunetti P, Luthman H, Lernmark A. High diagnostic accuracy for idiopathic Addison's disease with a sensitive radiobinding assay for autoantibodies against recombinant human 21-hydroxylase. J Clin Endocrinol Metab. 1995;80(9):2752–5.
10. Oberbauer A, Bell J, Belanger J, Famula T. Genetic evaluation of Addison's disease in the Portuguese water dog. BMC Vet Res. 2006;2(1):15.
11. Online Mendelian Inheritance in Animals, OMIA. Sydney School of Veterinary Science. https://omia.org/home/. Accessed 05 Dec 2018.
12. Decôme M, Blais M-C. Prevalence and clinical features of hypoadrenocorticism in Great Pyrenees dogs in a referred population: 11 cases. Can Vet J. 2017;58(10):1093–9.
13. Bellumori TP, Famula TR, Bannasch DL, Belanger JM, Oberbauer AM. Prevalence of inherited disorders among mixed-breed and purebred dogs: 27,254 cases (1995-2010). J Am Vet Med Assoc. 2013;242(11):1549–55.
14. Hanson JM, Tengvall K, Bonnett BN, Hedhammar A. Naturally occurring adrenocortical insufficiency: an epidemiological study based on a Swedish-insured dog population of 525,028 dogs. J Vet Intern Med. 2016;30(1):76–84.
15. Wiles BM, Llewellyn-Zaidi AM, Evans KM, O'Neill DG, Lewis TW. Large-scale survey to estimate the prevalence of disorders for 192 Kennel Club registered breeds. Canine Genet Epidemiol. 2017;4:8.
16. Famula TR, Belanger JM, Oberbauer AM. Heritability and complex segregation analysis of hypoadrenocorticism in the standard poodle. J Small Anim Pract. 2003;44(1):8–12.
17. Hughes AM, Nelson RW, Famula TR, Bannasch DL. Clinical features and heritability of hypoadrenocorticism in Nova Scotia duck tolling retrievers: 25 cases (1994-2006). J Am Vet Med Assoc. 2007;231(3):407–12.
18. Burkitt JM. Chapter 76 - Hypoadrenocorticism. In: Silverstein DC, Hopper K, editors. Small Animal Critical Care Medicine. Saint Louis: W.B. Saunders; 2009. p. 321–4.
19. Klein SC, Peterson ME. Canine hypoadrenocorticism: part I. Can Vet J. 2010;51.
20. Van Lanen K, Sande A. Canine hypoadrenocorticism: pathogenesis, diagnosis, and treatment. Top Companion Anim Med. 2014;29(4):88–95.
21. Auxilia ST, Hill PB, Thoday KL. Canine symmetrical lupoid onychodystrophy: a retrospective study with particular reference to management. J Small Anim Pract. 2001;42(2):82–7.
22. Mueller RS, Rosychuk RA, Jonas LD. A retrospective study regarding the treatment of lupoid onychodystrophy in 30 dogs and literature review. J Am Anim Hosp Assoc. 2003;39:139–50.
23. Scott DW, Rousselle S, Miller WH. Symmetrical lupoid onychodystrophy in dogs: a retrospective analysis of 18 cases (1989-1993). J Am Anim Hosp Assoc. 1995;31(3):194–201.
24. Wilbe M, Ziener ML, Aronsson A, Harlos C, Sundberg K, Norberg E, Andersson L, Lindblad-Toh K, Hedhammar Å, Andersson G. DLA class II alleles are associated with risk for canine symmetrical lupoid onychodystrophy (SLO). PLoS One. 2010;5(8):e12332.
25. Ziener ML, Nødtvedt A. A treatment study of canine symmetrical onychomadesis (symmetrical lupoid onychodystrophy) comparing fish oil and cyclosporine supplementation in addition to a diet rich in omega-3 fatty acids. Acta Vet Scand. 2014;56(1):66.
26. Ziener ML, Bettenay SV, Mueller RS. Symmetrical onychomadesis in Norwegian Gordon and English setters. Vet Dermatol. 2008;19(2):88–94.
27. Scott DW, Miller WH. Disorders of the claw and claw bed in dogs. Compendium on Continuing Education for the Practicing Veterinarian. 1992;14(11):12.
28. Dahlgren S, Ziener ML, Lingaas F. A genome-wide association study identifies a region strongly associated with symmetrical onychomadesis on chromosome 12 in dogs. Anim Genet. 2016;47(6):708–16.
29. The Bearded Collie Foundation for Health. BeaCon Open Health Registry Report. 2017. https://www.beaconforhealth.org/YR_17_OHR_Report.pdf. Accessed 16 June 2018.
30. Rosenblum MD, Remedios KA, Abbas AK. Mechanisms of human autoimmunity. J Clin Invest. 2015;125(6):2228–33.
31. Rioux JD, Abbas AK. Paths to understanding the genetic basis of autoimmune disease. Nature. 2005;435(7042):584–9.
32. Myhre AG, Undlien DE, Løvås K, Uhlving S, Nedrebø BG, Fougner KJ, Trovik T, Sørheim JI, Husebye ES. Autoimmune adrenocortical failure in Norway: autoantibodies and human leukocyte antigen class II associations related to clinical features. J Clin Endocrinol Metab. 2002;87(2):618–23.
33. Noble JA, Valdes AM. Genetics of the HLA region in the prediction of type 1 diabetes. Curr Diab Rep. 2011;11(6):533–42.
34. Stokkers PCF, Reitsma PH, Tytgat GNJ, van Deventer SJH. HLA-DR and -DQ phenotypes in inflammatory bowel disease: a meta-analysis. Gut. 1999;45(3):395.
35. Catchpole B, Adams JP, Holder AL, Short AD, Ollier WER, Kennedy LJ. Genetics of canine diabetes mellitus: are the diabetes susceptibility genes identified in humans involved in breed susceptibility to diabetes mellitus in dogs? Vet J. 2013;195(2):139–47.
36. Dyggve H, Kennedy LJ, Meri S, Spillmann T, Lohi H, Speeti M. Association of Doberman hepatitis to canine major histocompatibility complex II. Tissue Antigens. 2011;77(1):30–5.
37. Greer KA, Wong AK, Liu H, Famula TR, Pedersen NC, Ruhe A, Wallace M, Neff MW. Necrotizing meningoencephalitis of pug dogs associates with dog leukocyte antigen class II and resembles acute variant forms of multiple sclerosis. Tissue Antigens. 2010;76(2):110–8.
38. Hughes AM, Jokinen P, Bannasch DL, Lohi H, Oberbauer AM. Association of a dog leukocyte antigen class II haplotype with hypoadrenocorticism in Nova Scotia duck tolling retrievers. Tissue Antigens. 2010;75(6):684–90.
39. Kennedy LJ, Davison LJ, Barnes A, Short AD, Fretwell N, Jones CA, Lee AC, Ollier WER, Catchpole B. Identification of susceptibility and protective major histocompatibility complex haplotypes in canine diabetes mellitus. Tissue Antigens. 2006;68(6):467–76.
40. Kennedy LJ, Huson HJ, Leonard J, Angles JM, Fox LE, Wojciechowski JW, Yuncker C, Happ GM. Association of hypothyroid disease in Doberman pinscher dogs with a rare major histocompatibility complex DLA class II haplotype. Tissue Antigens. 2006;67(1):53–6.
41. Kennedy LJ, Ollier WER, Marti E, Wagner JL, Storb RF. Canine immunogenetics. In: Ostrander EA, Ruvinsky A, editors. The Genetics of the Dog. 2nd ed. London: CAB International; 2012. p. 91–126.
42. Kennedy LJ, O'Neill T, House A, Barnes A, Kyostila K, Innes J, Fretwell N, Day MJ, Catchpole B, Lohi H, et al. Risk of anal furunculosis in German shepherd dogs is associated with the major histocompatibility complex. Tissue Antigens. 2008;71(1):51–6.
43. Ziener ML, Dahlgren S, Thoresen SI, Lingaas F. Genetics and epidemiology of hypothyroidism and symmetrical onychomadesis in the Gordon setter and the English setter. Canine Genet Epidemiol. 2015;2(1):12.
44. Wilbe M, Sundberg K, Hansen IR, Strandberg E, Nachreiner RF, Hedhammar A, Kennedy LJ, Andersson G, Bjornerfeldt S. Increased genetic risk or protection for canine autoimmune lymphocytic thyroiditis in Giant schnauzers depends on DLA class II genotype. Tissue Antigens. 2010;75:712–9.
45. Shiel RE, Kennedy LJ, Nolan CM, Mooney CT, Callanan JJ. Major histocompatibility complex class II alleles and haplotypes associated with non-suppurative meningoencephalitis in greyhounds. Tissue Antigens. 2014;84(3):271–6.
46. Tsai S, Santamaria P. MHC class II polymorphisms, autoreactive T-cells, and autoimmunity. Front Immunol. 2013;4:321.
47. Holzer U, Nepom GT. Major histocompatibility complex and autoimmune disease. In: Burt RK, Marmont AM, editors. Stem Cell Therapy for Autoimmune Disease. Georgetown: Landes Bioscience; 2004. p. 155–62.
48. Piertney SB, Oliver MK. The evolutionary ecology of the major histocompatibility complex. Heredity. 2005;96:7.
49. Skanes VM, Barnard J, Farid N, Marshall WH, Murphy L, Rideout D, Taylor R, Xidos G, Larsen B. Class III alleles and high-risk MHC haplotypes in type I diabetes mellitus, Graves' disease and Hashimoto's thyroiditis. Mol Biol Med. 1986;3(2):143–57.
50. Kennedy LJ, Altet L, Angles JM, Barnes A, Carter SD, Francino O, Gerlach JA, Happ GM, Ollier WE, Polvi A, et al. Nomenclature for factors of the dog major histocompatibility system (DLA), 1998: first report of the ISAG DLA Nomenclature Committee. Tissue Antigens. 1999;54(3):312–21.
51. Wicker LS, Todd JA, Peterson LB. Genetic control of autoimmune diabetes in the NOD mouse. Annu Rev Immunol. 1995;13:179–200.
52. Braley-Mullen H, Sharp GC, Medling B, Tang H. Spontaneous autoimmune thyroiditis in NOD.H-2h4 mice. J Autoimmun. 1999;12(3):157–65.
53. Rincon G, Tengvall K, Belanger JM, Lagoutte L, Medrano JF, Andre C, Thomas A, Lawley CT, Hansen MS, Lindblad-Toh K, et al. Comparison of buccal and blood-derived canine DNA, either native or whole genome amplified, for array-based genome-wide association studies. BMC Res Notes. 2011;4:226.
54. Davies-Mostert HT, Mills MGL, Macdonald DW. Hard boundaries influence African wild dogs' diet and prey selection. J Appl Ecol. 2013;50(6):1358–66.
55. Bexfield NH, Watson PJ, Aguirre-Hernandez J, Sargan DR, Tiley L, Heeney JL, Kennedy LJ. DLA class II alleles and haplotypes are associated with risk for and protection from chronic hepatitis in the English springer spaniel. PLoS One. 2012;7(8):e42584.
56. Kennedy LJ, Barnes A, Happ GM, Quinnell RJ, Courtenay O, Carter SD, Ollier WER, Thomson W. Evidence for extensive DLA polymorphism in different dog populations. Tissue Antigens. 2002;60(1):43–52.
57. Wagner JL, Burnett RC, Works JD, Storb R. Molecular analysis of DLA-DRBB1 polymorphism. Tissue Antigens. 1996;48(5):554–61.
58. Kim HY. Statistical notes for clinical researchers: chi-squared test and Fisher's exact test. Restor Dent Endod. 2017;42(2):152–5.
59. Carpenter B, Gelman A, Hoffman MD, Lee D, Goodrich B, Betancourt M, Brubaker M, Guo J, Li P, Riddell A. Stan: a probabilistic programming language. J Stat Softw. 2017;76(1):1–32.
60. R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2017.
61. Gelman A, Carlin J, Stern H, Dunson D, Vehtari A, Rubin D. Bayesian data analysis. 3rd ed. Chapman & Hall/CRC Texts in Statistical Science. Boca Raton: Chapman and Hall/CRC; 2013.
62. Gelman A. Prior distributions for variance parameters in hierarchical models (comment on article by Browne and Draper). Bayesian Anal. 2006;1(3):515–34.
63. Gelman A, Rubin DB. Inference from iterative simulation using multiple sequences. Stat Sci. 1992;7(4):457–72.